id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2302.01250
|
Identifying regions of importance in wall-bounded turbulence through
explainable deep learning
|
Despite its great scientific and technological importance, wall-bounded
turbulence is an unresolved problem in classical physics that requires new
perspectives to be tackled. One of the key strategies has been to study
interactions among the energy-containing coherent structures in the flow. Such
interactions are explored in this study for the first time using an explainable
deep-learning method. The instantaneous velocity field obtained from a
turbulent channel flow simulation is used to predict the velocity field in time
through a U-net architecture. Based on the predicted flow, we assess the
importance of each structure for this prediction using the game-theoretic
algorithm of SHapley Additive exPlanations (SHAP). This work provides results
in agreement with previous observations in the literature and extends them by
revealing that the most important structures in the flow are not necessarily
the ones with the highest contribution to the Reynolds shear stress. We also
apply the method to an experimental database, where we can identify completely
new structures based on their importance score. This framework has the
potential to shed light on numerous fundamental phenomena of wall-bounded
turbulence, including novel strategies for flow control.
|
Andres Cremades, Sergio Hoyas, Rahul Deshpande, Pedro Quintero, Martin Lellep, Will Junghoon Lee, Jason Monty, Nicholas Hutchins, Moritz Linkmann, Ivan Marusic, Ricardo Vinuesa
|
2023-02-02T17:34:33Z
|
http://arxiv.org/abs/2302.01250v4
|
# Explaining wall-bounded turbulence through deep learning
###### Abstract
Despite its great scientific and technological importance, wall-bounded turbulence is an unresolved problem that requires new perspectives to be tackled. One of the key strategies has been to study interactions among the coherent structures in the flow. Such interactions are explored in this study for the first time using an explainable deep-learning method. The instantaneous velocity field in a turbulent channel is used to predict the velocity field in time through a convolutional neural network. Based on the predicted flow, we assess the importance of each structure for this prediction using the game-theoretic algorithm of SHapley Additive exPlanations (SHAP). This work provides results in agreement with previous observations in the literature and extends them by quantifying the importance of the Reynolds-stress structures, finding a connection between these structures and the dynamics of the flow. The process, based on deep-learning explainability, has the potential to shed light on numerous fundamental phenomena of wall-bounded turbulence, including the objective definition of new types of flow structures.
keywords: Turbulence, Deep learning, Machine learning, Shapley values, Explainability, Coherent structures
Footnote †: journal: Journal of Computational Physics
## Introduction
Approximately 140 years ago, Osborne Reynolds published the first and most influential scientific article on turbulent flows [1]. One of the main conclusions of this study was the fact that the Navier-Stokes equations, which describe the behavior of any flow, can only be solved analytically for elementary flow configurations. For nearly a century, the study of turbulence has relied on experimental measurements [2; 3; 4] and theoretical considerations [5]. Almost all flows of practical interest are turbulent, except those relevant to lubrication [6]. In fact, one of the most crucial challenges nowadays, namely the current climate emergency, is closely connected with turbulence, and a better understanding of the dynamics of turbulent flows is necessary to reduce greenhouse-gas and pollutant emissions. Approximately 30% of the energy consumption worldwide is used for transportation [7], which, due to the increase in drag caused by turbulent flow, is a problem very closely connected with wall-bounded turbulence. Furthermore, turbulence is critical in combustion processes [8; 9] and aerodynamics [10; 11]. It is also essential in energy generation [12; 13] and urban pollution [14; 15], to name just a few examples. Indeed, some estimations indicate that 15% of the energy consumed worldwide is spent near the boundaries of vehicles and is therefore related to turbulent effects [16].
The main challenge is the fact that turbulence is a multi-scale phenomenon in both time and space. The energy is mainly transferred from the largest to the smallest scales of the flow, where it is dissipated [5], although there is also an energy path in the opposite direction [17]. There are several orders of magnitude between these scales for any flow in engineering. In the presence of a wall this energy cascade is even more complicated due to the energy and momentum transfer from the wall to the outer flow [18]. This multi-scale behavior implies that integrating numerically the Navier-Stokes equations requires extremely fine computational meshes leading to a prohibitive computational effort for practical applications.
In the 1980s, supercomputers became powerful enough to integrate these equations numerically in some canonical geometries. Kim et al. [19] simulated the simplest complete example of a wall-bounded flow, _i.e.,_ a turbulent channel. They performed a direct numerical simulation (DNS), where all the spatial and temporal scales of the flow are resolved. Note that in DNS there are no additional hypotheses beyond the fact that the flow is governed by the Navier-Stokes equations. This numerical technique provides a complete flow characterization, and almost any imaginable quantity can be computed. Thus, DNS can provide a large amount of high-quality data, and simulations in the Petabyte scale are becoming progressively more common [20]. This enables fully characterizing the kinematics of wall-bounded turbulent flows. However, describing the dynamics of these flows is still an open challenge. It
is then essential to develop novel methods to solve the questions posed 140 years ago.
One of the most successful ideas for studying turbulent flows focuses on the relationship among the different scales and coherent structures of the flow [18; 21]. Note that different definitions of coherent structure have been proposed in the literature. The first examples of coherent structure are the streamwise streaks [3] and the Reynolds-stress quadrants [22], which were first observed experimentally. The latter, also called intense Reynolds-stress events or Q events, are the object of our work. Coherent Q structures are flow regions associated with momentum transfer and turbulent-kinetic-energy (TKE) production. Two particular Q events defined below, ejections and sweeps, are the main contributors to the exchange of streamwise momentum. This process is the main energy source for all the structures present in turbulent flows [22; 23]. Q events are also responsible for the generation of turbulent drag. Even with extensive studies on the contribution of the various coherent structures to the dynamics of turbulent flows, a clear understanding of their actual role still needs to be provided [18].
This study proposes a new technique for the analysis of wall-bounded turbulence. We have developed a novel methodology based on explainable artificial intelligence (XAI) to gain a more profound knowledge of the flow physics and to evaluate the contributions of the Q events to flow-field prediction. The methodology employs deep convolutional neural networks (CNNs) [24] and the Shapley additive explanation (SHAP) values [25; 26; 27]. CNNs can effectively extract the spatial information in the flow data [28], both in two and three dimensions. The SHAP algorithm is a game-theoretic method that calculates the importance of each input feature for the CNN prediction. SHAP has been shown to correctly identify key aspects of the near-wall cycle that sustains turbulence close to onset [29]. Thus, the main novelty of this work is the explainability of fully-developed turbulence through artificial intelligence. We calculate the relative importance of each Q event for the CNN prediction through SHAP. In doing so, we identify, in a purely data-driven manner (without any hypothesis about the physics of the flow), relevant physical processes governing the dynamics of wall-bounded turbulence.
To accomplish this objective, we will first show how CNNs can predict the evolution of turbulent channel flow, as documented in our earlier work [30]. We start with a database of 4,900 instantaneous realizations of the channel flow; 60% of the fields are used for training and validation, while the rest are utilized for testing. For every field, the domain is segmented into Q events (see Methods section), and each one of these structures is considered an input feature to the SHAP algorithm. SHAP ranks the importance of each structure for predicting the following flow field, as shown schematically in Figure 1. This workflow consists of three main stages: prediction of the flow through a CNN, determination of the structure evolution (advance a time step in the simulation), and quantification of the importance of each coherent turbulent structure using SHAP values (and SHAP values per unit of volume) comparing the predicted solution with the simulated flow field in the next time step. By analyzing the characteristics of the highest-ranked structures, we can shed light on the dynamics of wall-bounded turbulence, with direct implications on the questions described above. We find coherent structures representing ejections, where fluid volumes with low streamwise velocity move
from the near-wall towards the outer region; and sweeps, where fluid volumes with high streamwise velocity move from the outer region towards the wall. Note that these are most important for the prediction of the flow. Our study confirms the results obtained by other authors [18, 31], introducing the usage of XAI to analyze turbulent flows and finding a causal connection between sweeps/ejections and the dynamics of the flow.
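To make this workflow concrete, the minimal Python sketch below mimics the three stages on synthetic data. The function names and the leave-one-structure-out scoring are simplifications introduced here for illustration (the actual study uses the trained CNN and the kernel-SHAP algorithm described in the Methods section), but the logic is the same: predict the next field, compare it with the reference, and score each structure by how much its removal degrades the prediction.

```python
# Illustrative sketch only: toy fields, a placeholder "CNN", and a
# leave-one-out proxy for the SHAP importance of each coherent structure.
import numpy as np

def cnn_predict(u_now):
    """Stand-in for the trained CNN mapping the field at t_i to t_{i+1}."""
    return u_now  # placeholder: persistence "prediction"

def structure_score(mask, u_now, u_next):
    """Error change when one structure is removed (set to zero fluctuations)."""
    masked = np.where(mask, 0.0, u_now)                      # field without the structure
    err_with = np.abs(cnn_predict(u_now) - u_next).mean()    # error keeping the structure
    err_without = np.abs(cnn_predict(masked) - u_next).mean()
    return err_with - err_without                            # negative if the structure helps

rng = np.random.default_rng(0)
u_now, u_next = rng.standard_normal((2, 16, 16, 16))              # toy velocity-fluctuation fields
structures = [rng.random((16, 16, 16)) > 0.95 for _ in range(3)]  # toy Q-event masks

ranking = sorted(range(len(structures)),
                 key=lambda i: structure_score(structures[i], u_now, u_next))
print("structures ranked from most to least important:", ranking)
```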
Figure 1: **Conceptual map of the workflow employed in this study.** (Top-left) Instantaneous Reynolds-stress (Q) events identified in a turbulent channel. Four different kinds of structures exist based on the quadrant analysis [32]: outward interactions (purple), ejections (blue), inward interactions (green) and sweeps (yellow). (Top-right) Total contribution, \(\Phi_{e}/\Phi_{T}\), (left column) and total contribution per unit volume, \(\Phi_{e}^{v}/\Phi_{T}^{v}\), (right column) of each event type to the CNN prediction. Their definition and implications are discussed in the Results section. (Bottom) Workflow comprising three steps: 1) a CNN is used to predict the next instantaneous flow field (time \(t_{i+1}\)) based on the current one (\(t_{i}\)); 2) the structures evolve, so some may dissipate in the next field (yellow), others may be convected (rest of colors), and some may even merge into larger ones (not shown); 3) calculation of the contribution of each structure (gray shade) to the prediction of the next field. The error on the prediction of the flow field of the CNN in \(t_{i}\) with respect to the simulated flow in \(t_{i+1}\) is used to determine the importance of every single structure. In this way, it is possible to rank the various structures in terms of their relative importance to predict the next instantaneous field. The workflow is performed on the full three-dimensional data but shown on a vertical slice of the turbulent channel here for simplicity.
## Results
The geometry of a turbulent channel flow comprises two parallel planes at a distance of \(2h\). A pressure gradient drives the flow along the channel. The spatial coordinates are \(x\), \(y\), and \(z\), in the streamwise, wall-normal, and spanwise directions, respectively. The length and width of the channel are \(L_{x}=2\pi h\), and \(L_{z}=\pi h\), with streamwise and spanwise periodicity. This computational box is large enough to recover all the statistical information of the flow [32; 33].
The velocity vector is \({\bf U}(x,y,z,t)=(U,V,W)\), where \(t\) denotes time. As the flow is fully developed, its statistical information only depends on \(y\)[6]. Statistically-averaged quantities in \(x\), \(z\), and \(t\) are denoted by an overbar, whereas fluctuating quantities are denoted by lowercase letters, _i.e._, \(U=\overline{U}+u\). Primes are reserved for root-mean-squared (rms) quantities: \(u^{\prime}=\sqrt{\overline{u^{2}}}\), which constitute a measure of the standard deviation from the mean flow.
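As a small illustration of this decomposition, the snippet below computes the mean and rms profiles from a synthetic field stored as \(U(x,y,z,t)\); averages are taken over \(x\), \(z\) and \(t\), so the statistics depend on \(y\) only (the array shape is arbitrary).

```python
# Reynolds decomposition of a toy field U[x, y, z, t] into mean and fluctuations.
import numpy as np

rng = np.random.default_rng(1)
U = 1.0 + 0.1 * rng.standard_normal((32, 17, 16, 8))   # synthetic U(x, y, z, t)

U_mean = U.mean(axis=(0, 2, 3))                         # \overline{U}(y)
u = U - U_mean[None, :, None, None]                     # fluctuation u = U - \overline{U}
u_rms = np.sqrt((u ** 2).mean(axis=(0, 2, 3)))          # u'(y) = sqrt(\overline{u^2})
print(U_mean.shape, u_rms.shape)                        # both profiles depend on y only: (17,)
```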
The simulation was carried out at a friction Reynolds number \(Re_{\tau}=u_{\tau}h/\nu=125\). Note that \(\nu\) is the fluid kinematic viscosity and \(u_{\tau}=\sqrt{\tau_{w}/\rho}\) is the friction velocity (\(\tau_{w}\) is the wall-shear stress and \(\rho\) the fluid density) [6], while \(Re_{\tau}\) is the main control parameter. The value of \(Re_{\tau}\) attainable in numerical simulations has been increasing steadily in the last 35 years due to the advances in computational power and numerical methods [19; 34; 35; 36; 37; 38; 39]. Quantities nondimensionalized with the viscous scales \(u_{\tau}\) and \(\nu\) are denoted with a '+' superscript. Finally, as the channel is statistically symmetric, the upper half-channel statistics are projected symmetrically onto the coordinates of the lower half.
The Q events are coherent regions of instantaneous high Reynolds stress, defined by:
\[\left|u(x,y,z)\,v(x,y,z)\right|>H\,u^{\prime}(y)\,v^{\prime}(y), \tag{1}\]
where \(H\) is the hyperbolic-hole size, which sets the threshold above which an event is considered intense.
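A hedged sketch of how such regions can be extracted in practice is given below: the fluctuation fields are thresholded with the criterion above, each point is assigned to a quadrant, and connected points are grouped into individual structures. The value of \(H\) and the use of scipy's connected-component labelling are illustrative assumptions, not necessarily the exact implementation of this work.

```python
# Sketch of Q-event detection and quadrant classification on toy fluctuation fields.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
u = rng.standard_normal((32, 17, 16))            # u(x, y, z) fluctuations
v = 0.5 * rng.standard_normal((32, 17, 16))      # v(x, y, z) fluctuations

u_rms = np.sqrt((u ** 2).mean(axis=(0, 2)))      # u'(y)
v_rms = np.sqrt((v ** 2).mean(axis=(0, 2)))      # v'(y)

H = 1.75                                          # hyperbolic-hole size (assumed value)
intense = np.abs(u * v) > H * (u_rms * v_rms)[None, :, None]

# Quadrant classification: 1 outward, 2 ejection, 3 inward, 4 sweep
quadrant = np.where(u > 0, np.where(v > 0, 1, 4), np.where(v > 0, 2, 3))

labels, n = ndimage.label(intense)                # individual coherent structures
print(f"{n} Q events detected")
print("points per quadrant inside Q events:", np.bincount(quadrant[intense], minlength=5)[1:])
```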
The first stage of the workflow is the prediction of the flow field with the CNN. As shown in Figure 2, the simulated and predicted flow fields are nearly identical. The larger differences appear in a few grid points, as the CNN slightly displaces some flow patterns without affecting the global prediction of the velocity field. The mean error between the simulation and the prediction averaged over the whole test database is \(0.47u_{\tau}\), and the maximum error is \(2.33u_{\tau}\). These correspond to \(3.5\%\) and \(15\%\) of the bulk velocity (\(U_{b}=13.42u_{\tau}\)), respectively. Part of these deviations is due to the fact that the CNN uses a mesh coarser than that of the simulation. Note that the employed resolution is sufficient for this study [29] and the reported errors are acceptable in the context of coherent-structure analysis.
_Explainability of the CNN predictions_
The CNN is used for calculating the contribution, based on the SHAP values [25; 40; 41], of the different turbulent structures to the prediction. A deeper explanation of how the SHAP values are used for assessing the importance of each Q event is discussed in the Methods section. The total importance of each typology of Q event, \(\Phi_{e}\), is calculated by summing up the value of every structure belonging to this class of event, \(\phi_{i}^{e}\), where \(i\) indicates a single coherent structure. The total contribution of all events, _i.e._
Figure 2: **Comparison of ground truth and prediction of a representative vertical slice at a single instance in time.** (Top) reference velocity field, (middle) predicted velocity field and (bottom) norm of the difference between the two previous fields, with a mean-squared error of \(0.47u_{\tau}\) averaged over the whole database. The subscripts \(s\) and \(p\) correspond to the fields in the reference simulation and the prediction, respectively.
the summation of all the SHAP values, is denoted by \(\Phi_{T}\). Particularizing to a single class \(e\) (ejections, sweeps, outward or inward interactions), we can define the percentual importance of this class, \(\Phi_{e}/\Phi_{T}\). For this particular application, \(\phi_{i}^{e}\) is always negative. The larger its absolute value, the more critical the structure is to reconstruct the field. In the context of this study, we will use the term _importance_ to refer to this impact. The quantification of \(\phi_{i}^{e}\) for every structure can be used to evaluate its contribution to the turbulent flow. Moreover, this magnitude may be evaluated per unit volume, _i.e._ a SHAP density can be calculated. In this case, a different distribution of relative importance is obtained, identifying highly-important localized structures. The percentual importance per unit of volume is defined as:
\[\Phi_{e}^{v}=\sum_{i=1}^{I_{e}}\left(\frac{\phi_{i}}{V_{i}^{+}}\right)^{e}, \hskip 28.452756pt\Phi_{T}^{v}=\sum_{e=1}^{4}\Phi_{e}^{v}, \tag{2}\]
where \(I_{e}\) is the number of structures of type \(e\) and \(\left(\phi_{i}/V_{i}^{+}\right)^{e}\) is the SHAP value per unit of volume of the structures type \(e\). To avoid spurious results, we filtered out all volumes lower than \(V^{+}=2.7\times 10^{4}\)[31], which corresponds to 0.035% of the total volume of the channel.
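A minimal sketch of this aggregation, including the volume filter, is shown below with toy numbers; the SHAP values, volumes and event labels are placeholders.

```python
# Equation (2) on toy data: total SHAP and SHAP per unit volume for each event type.
import numpy as np

shap = np.array([-3.0, -1.2, -0.4, -0.8, -0.1])   # phi_i for each structure
volume = np.array([5e5, 8e4, 3e4, 6e4, 1e4])       # V_i^+ for each structure
event = np.array([2, 2, 4, 4, 1])                  # 1 outward, 2 ejection, 3 inward, 4 sweep

keep = volume >= 2.7e4                             # volume filter used in the text
shap, volume, event = shap[keep], volume[keep], event[keep]

Phi = {e: shap[event == e].sum() for e in (1, 2, 3, 4)}                           # Phi_e
Phi_v = {e: (shap[event == e] / volume[event == e]).sum() for e in (1, 2, 3, 4)}  # Phi_e^v
Phi_T, Phi_T_v = sum(Phi.values()), sum(Phi_v.values())

for e in (1, 2, 3, 4):
    print(e, Phi[e] / Phi_T, Phi_v[e] / Phi_T_v)   # percentual importance of each class
```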
In Figure 3 we can see an example of the flow and the relative importance of each structure. A wide variety of structures are present in the flow (top row, subfigures A). Turbulence is transported in self-contained bursts composed of sweep/ejection pairs, which generate streaks as a result [18; 31; 42]. This idea, which was previously proposed by Wallace et al. [23] and Lu and Willmarth [22], was further analysed by Lozano-Duran et al. [31] using probability density functions of the intense Reynolds-stress structures. Using \(\phi_{i}^{e}\), we can quantitatively measure the importance of every single structure.
Complementing this, we can quantify the importance of every Q class. The total SHAP is presented in the top-right bar plot of Figure 1. In absolute terms, ejections are the most important events, as they represent 81.8% of the total SHAP score. They are followed by sweeps, with 16.9%. Inward and outward interactions account for the rest, as expected [18, 43]. To put these percentages in perspective, approximately 60% of the total number of structures in a turbulent channel are either sweeps or ejections, but only 25% of them are attached to the wall [18]. The SHAP analysis associates the total number of sweeps and ejections with 98.7% of the total SHAP or importance on the prediction, reinforcing the idea that the momentum transport relies on the self-contained bursts or ejection/sweep pairs. Moreover, the summation of the wall-attached sweeps and ejections corresponds to a volume of approximately 6.1% of the total, and a contribution of around 30% of the Reynolds-shear-stress profile. These numbers are similar, although lower than the ones reported by Lozano-Duran et al. [31], 8% and 60% respectively, a fact that may be explained by the Reynolds number considered here, which is an order of magnitude lower than those in Ref. [31]. Note that, while in the work by Lozano-Duran et al. [31] the metric used to assess the importance of the various Q events is their respective contribution to the Reynolds-shear-stress profile, in this study we consider the SHAP value instead. Interestingly, based on the SHAP metric the importance of wall-attached sweeps and ejections is 96.8% despite their low combined volume, a fact that suggests that it may be a robust and objective metric to evaluate the importance of various coherent structures. As shown in Figure 1, ejections are the largest structures. 
The size of ejections can also be appreciated in the slices A2) and B2) of Figure 3. Using the SHAP density (2), the contribution of each type of structure is modified, and while the ejections are the most influential structures per unit of volume (64.5%), the relative contribution of the sweeps increases (31.9%). Due to their small volume, the inward and outward interactions have a larger impact than in absolute terms, with approximately 3% of the total SHAP score. However, this is still small compared with ejections and sweeps.
Furthermore, two different families of structures are observed [44]: wall attached, in which the lowest point is located at \(y^{+}<20\) (Figure 3, A2), and wall detached, in which the lowest point is located at \(y^{+}\geq 20\) (Figure 3, small structures in A1). As stated by Lozano-Duran et al. [31] and Jimenez [18], the most important structures are the large ejections attached to the wall, as they transport most of the Reynolds stress. To further analyze this situation, the SHAP value of the structures has been represented as a function of their volume in Figure 4 (left). Wall-attached ejections are confirmed as the most important structures, and sweeps have an undoubtedly smaller value. This asymmetry between sweeps and ejections has been known since the work of Nakagawa and Nezu [45] and has also been discussed by many authors, see Ref. [18]. Lozano-Duran et al. [31] estimated that the Reynolds stress associated with the sweeps is weaker than that of the ejections. In Figure 4 it is shown that the wall-attached ejections are also the most influential structures per unit volume, a conclusion in agreement with the work of Jimenez [42]. Note that wall-attached structures are associated with the energy production while the wall-detached ones are related to dissipation, and the work by Jimenez [42] focused on the former. However, this figure evidences the presence of important wall-detached ejections per unit of volume. This can be visualized in Figure 3 A1), where the most influential structures per volume are shown with solid colors, and in Figure 4 (right). Additionally, note the presence of high-importance-per-volume inward interactions. These structures are of reduced volume and transport a low Reynolds-stress magnitude. It is important to note that the relevance of these small structures was not identified by the traditional methodologies focused on the contribution to the Reynolds stress [18; 31]. Finally, note that the large ejections located along the streamwise direction are not the most important per unit of volume, with the moderate-size wall-attached ejections, small-size ejections and some inward interactions being the most relevant, see Figure 3 C1).
Figure 3: **Instantaneous visualization of the turbulent structures.** This Figure shows (views A) the type of turbulent structure, (views B) the SHAP values, and (views C) the SHAP values divided by the volume of the corresponding structures. The three-dimensional perspective is presented in images A3, B3, and C3. The side view of the turbulent channel (left) highlights the more influential structures (views A2 and B2). The most important structures per unit of volume are highlighted in views A1 and C1. Note that the highest SHAP values are obtained for large wall-attached ejections, while the smaller-size wall-attached ejections and small wall-detached structures exhibit the highest influence per unit of volume. The dashed line marks \(y^{+}=20\), which was used in previous studies [31] to separate wall-attached and wall-detached structures. The visualization is presented for half of the channel in all the subfigures.
As mentioned above, in this work we use the SHAP score to assess the importance of the various Q events in the flow, as opposed to calculating their contribution to the total Reynolds-shear-stress profile \(\overline{uv}_{\text{tot}}\) as previously done in the literature [46]. Here we study the differences between both methods by computing the total Reynolds stress associated with each structure \(\overline{uv}_{e}\), defined as:
\[\overline{uv}_{e}=\int_{e}u(x,y,z)v(x,y,z)\text{d}V, \tag{3}\]
where the integration is done for every structure \(e\) and \(V\) is the volume of
Figure 4: **SHAP values (left) and SHAP values per unit of volume (right) of the structures for different turbulent events as a function of their volume.** The SHAP values determine the importance of the various turbulent structures, _i.e._ the most relevant structures exhibit a higher modulus of the SHAP value. High-volume ejections are the most important structures for the predictions, while the higher value per volume is obtained for smaller wall-attached ejections and small-size structures. Wall-detached structures, mainly ejections, exhibit a high importance per volume. These structures are often associated with a low Reynolds stress and therefore their importance is typically not identified by the methods based on contribution to the Reynolds-stress profile.
the structure. Without taking into account the volume, see Figure 5 (left), there exists a clear correlation between \(\overline{uv}_{e}/\overline{uv}_{\rm tot}\) and the SHAP values. The larger a structure is, the more Reynolds stress the structure carries and the larger its SHAP is. However, when scaling the SHAP and \(\overline{uv}_{e}/\overline{uv}_{\rm tot}\) distributions by the volume, the results are very different as shown in Figure 5 (right): the correlation between these two quantities becomes much weaker. Most of the structures are located in region A, which represents a band with increasing module of the SHAP value per volume for increasing \(\overline{uv}_{e}/\overline{uv}_{\rm tot}\) per unit volume. However, some structures, mainly wall-attached sweeps, are located outside this band. In addition, the most important structures (located in region C) are not obtained for the maximum Reynolds stress per volume (found in region B). Note that the structures with the maximum specific shear stress are of low volume, while the large structures are associated with a medium specific shear stress. According to the relation between the specific shear stress and the volumetric importance of each structure, the SHAP value provides additional information compared to the Reynolds stress. Moreover, the SHAP score can detect relatively small structures with the highest impact on the momentum transport.
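For illustration, Equation (3) can be approximated on a labelled field by summing \(uv\) over the grid points of each structure; the snippet below assumes uniform cells and a placeholder segmentation.

```python
# Toy version of equation (3): Reynolds-stress contribution of each labelled structure.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
u = rng.standard_normal((32, 17, 16))
v = 0.5 * rng.standard_normal((32, 17, 16))
labels, n = ndimage.label(np.abs(u * v) > 1.0)     # placeholder segmentation into structures

cell_volume = 1.0                                   # uniform grid spacing assumed
uv = u * v
uv_e = ndimage.sum(uv * cell_volume, labels, index=np.arange(1, n + 1))
print(uv_e / uv_e.sum())                            # fraction of the structures' total carried by each one
```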
To further confirm these results, the relative importance of every structure has been quantified by calculating the normalized kernel-density estimation (KDE) of the SHAP values and the SHAP values per unit volume, shown in Figure 6. Again, the ejection/sweep pair is the most influential structure. In addition, ejections are more important than sweeps in absolute terms. When taking into account the structure volume to analyze the SHAP value, the ejections are still the most important Q events, but the importance of all the smaller structures increases. This is the case of some inward
Figure 5: **SHAP values as a function of the fractional contribution to the total Reynolds shear stress (left) and same quantities scaled with the structure volume (right).** The left figure shows a clear relationship between the SHAP values and the contribution to the Reynolds stress; this correlation is connected with the structure size. The right panel shows some differences, since the highest SHAP values are obtained for structures which do not have the highest fractional contribution to the total Reynolds shear stress. In this panel we highlight different regions: region A (yellow) is a band containing most of the structures, region B (purple) contains the structures with the highest fractional contribution to the total Reynolds shear stress and region C (green) contains the structures with the highest SHAP per volume. The size of the markers in both figures is directly proportional to the volume of the coherent structures.
interactions, which exhibit a bi-modal distribution with higher presence in the low-importance-per-volume region. In the case of the outward interactions, a normal distribution centered at the valley of the inward interactions is observed. Even if the global impact of the inward and outward interactions is low (see Figure 1), the regions of the domain in which they are located have a relatively higher importance per unit volume. These nuanced relations among structures suggest that using novel methods to study wall-bounded turbulence, such as the SHAP framework proposed here, may help to shed light on these complex phenomena. Additionally, the plateau of the curves is obtained for a value of the normalized density of approximately 1.5 for sweeps, for ejections and for the combined outward and inward interactions.
Figure 6: **Normalized kernel-density estimation (KDE) of the SHAP values (top) and the SHAP values per unit volume (middle) for the different coherent structures. (Bottom) Distributions when both inward and outward interactions are considered together.** KDE represents a smooth normalized point-centered block histogram of the various events. The present results are based on using a Gaussian kernel function for smoothing the estimation.
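A minimal example of such a Gaussian kernel-density estimate, applied to toy SHAP values for a single event type, is shown below; only the smoothing procedure is illustrated, the data are synthetic.

```python
# Gaussian KDE of toy (negative) SHAP values, normalized as in Figure 6.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
shap_ejections = -np.abs(rng.normal(loc=1.0, scale=0.5, size=500))   # toy SHAP values

kde = gaussian_kde(shap_ejections)                 # Gaussian smoothing kernel
grid = np.linspace(shap_ejections.min(), 0.0, 200)
density = kde(grid)
density /= density.max()                           # normalized density
print("mode of the distribution:", grid[np.argmax(density)])
```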
Regarding the shape of the dominant structures, Figure 7 shows the SHAP value of ejections and sweeps as a function of the aspect ratio of the structures. Although the importance per unit volume of the outward and inward interactions is not negligible, they are not analyzed due to their comparatively smaller SHAP values. The aspect ratios are measured on the bounding box of each structure, where \(\Delta x\), \(\Delta y\), and \(\Delta z\) are the streamwise, wall-normal, and spanwise dimensions of the box, respectively. As mentioned above and discussed by Lozano-Duran et al. [31] and Jimenez [42], the most important structures are large wall-attached ejections. These structures are expected to exhibit [31] an aspect ratio of \(\Delta x\approx 3\Delta y\) and \(\Delta z\approx\Delta y\). Although these results were obtained for a higher Reynolds number, the present methodology yields similar results (\(3\Delta y<\Delta x<6\Delta y\)) for the wall-attached structures. Note that these structures are elongated in the streamwise direction [47]. In addition, the spanwise length is similar to their wall-normal size, \(\Delta z\approx\Delta y\). Furthermore, the Q events of high volume and importance, which cross the centerline, exhibit a relationship between the spanwise and wall-normal directions closer to \(\Delta z\approx 1.5\Delta y\), as previously reported in Ref. [44]. Note that the isotropic structures (\(\Delta x\approx\Delta y\)) are negligible in terms of their importance for the flow prediction due to their small size. These structures are decoupled from the shear and form the local Kolmogorov inertial range [18]. Apart from the previous analysis, the most significant structures per unit volume exhibit ratios in the streamwise and wall-normal directions similar to those of the wall-attached structures (\(3\Delta y<\Delta x<6\Delta y\)) and exhibit lower aspect ratios in the spanwise and wall-normal directions (\(0.3\Delta y<\Delta z<0.7\Delta y\)). This suggests that the structures with higher influence per unit volume are slender, spanning only a few grid points in \(z\). This geometric characteristic is mainly observed for small-size structures, as shown in Figure 3. Therefore, a correct understanding of these structures is crucial in the analysis of wall-bounded turbulence.
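The aspect-ratio measurement itself is straightforward once the structures are labelled: the bounding box of each structure gives \(\Delta x\), \(\Delta y\) and \(\Delta z\). A toy sketch, with a placeholder segmentation and uniform grid spacing assumed, is shown below.

```python
# Bounding-box aspect ratios of labelled structures in a toy binary field.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
field = rng.random((32, 17, 16)) > 0.97
labels, n = ndimage.label(field)

for sl in ndimage.find_objects(labels):
    dx, dy, dz = (s.stop - s.start for s in sl)    # bounding-box extents in grid points
    print(dx / dy, dz / dy)                        # streamwise and spanwise aspect ratios
```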
Finally, the points not belonging to any structure are irrelevant for the
Figure 7: **SHAP values (top) and SHAP values per unit volume (bottom) of the ejections (left) and the sweeps (right), where we show the aspect ratio of the boxes circumscribing the Reynolds-stress structures.** Note that the most important structures are located below the \(\Delta z^{+}/\Delta y^{+}=\Delta x^{+}/\Delta y^{+}\) diagonal. Consequently, these structures exhibit values of \(\Delta z^{+}\) lower than those of \(\Delta x^{+}\), _i.e._ they are elongated in the streamwise direction.
energy transfer [31; 43]. To show this, every point not belonging to any Q event has been merged into a single background structure. The absolute SHAP of this structure is negligible compared with that of the Q events, even though it is by far the largest structure in the flow.
Note that these results, obtained in a purely data-driven manner, are in excellent agreement with those by Jimenez [18] and Lozano-Duran et al. [31], a fact that confirms the potential of the present framework to discover relevant trends in turbulence data. Firstly, the wall-attached structures have been demonstrated to carry most of the tangential Reynolds stress. These wall-attached structures are mainly ejections and sweeps, with outward and inward interactions being negligible for turbulence prediction. In addition, the large ejections that extend along the whole channel concentrate most of the tangential Reynolds stress. With respect to the aspect ratio of the main structures, they are in the range \(3<\Delta x/\Delta y\leq 6\) and \(\Delta z/\Delta y\approx 1\).
The explainable-CNN methodology also provides information about the importance per unit volume of the coherent structures. From this point of view, the low-volume wall-attached ejections are the most influential ones, followed by low-volume wall-detached ejections and some outlier low-volume inward interactions. The aspect ratio in the \(z\) direction of the most important structures is lower than 1, indicating that they are slender structures, spanning only a few grid points in \(z\).
## Discussion and conclusions
This article uses XAI to quantify the importance of intense Reynolds-stress structures. Ejections and sweeps are shown to be the most relevant
structures for fully-developed turbulence in a channel. A similar conclusion was reported by Lozano-Duran et al. [31] by considering the contribution of the Q events to the Reynolds shear-stress profile as the metric indicating structure importance. The SHAP value considered here relies on the contribution of the structures to the prediction of the following flow field, a fact that leads to a more objective assessment of structure importance. This framework has been used, for the first time, to quantify the relative importance of each intense Reynolds-stress structure in the flow for the turbulent dynamics. The structures were extracted from the simulation of a turbulent channel and used to segment the computational domain. Then, their importance to the flow was calculated by measuring their contribution to the CNN prediction.
Ejections and sweeps are highly important in generating turbulent self-contained bursts [18; 42] and are associated with turbulence production. The higher-volume structures are the most influential when assessing the global prediction of the next instantaneous field. The larger structures correspond to wall-attached ejections extending throughout the channel, transporting a substantial fraction of the total Reynolds stress [32]. These ejections exhibit the largest modulus of the SHAP value, meaning that their presence is essential for the correct prediction by the CNN. However, different trends are obtained when analyzing structure influence relative to the volume. In this case, the most influential structures per unit volume are smaller wall-attached ejections. Nevertheless, in the previous works of Jimenez [18], Lozano-Duran and Jimenez [32], the authors focused only on the wall-attached structures (the most important in terms of shear stress and absolute SHAP), but this methodology evidences the local importance
of small-size wall-detached structures. These structures are mainly ejections, although some inward interactions have shown local importance. These results support the idea of using SHAP values for analyzing turbulent structures, thus enabling the extraction of deeper knowledge on the turbulent flow.
Regarding the shape of the structures, the most influential structures per unit volume exhibit larger aspect ratios in the streamwise direction than in the spanwise direction. In addition, the wall-normal length is 3-6 times smaller than the streamwise length for the structures with the highest SHAP and SHAP per unit volume. Furthermore, the structures with higher specific importance are essentially contained in the \(xy\) plane, with a lower aspect ratio in the \(z\) direction.
The framework presented here has made it possible, in a purely data-driven manner, to confirm and expand some of the basic knowledge of wall-bounded turbulence available in the literature [18, 31]. Here we can obtain an objective quantification of the importance of various types of coherent flow structures, finding a causal connection between sweeps/ejections and the flow dynamics. Future work will aim at objectively identifying other types of coherent structures, shedding light on the fundamental phenomena of wall-bounded turbulence. In terms of turbulence modeling, a similar approach may be taken to first quantify and subsequently understand the significance of coherent structures and of dynamical processes such as vortex stretching and strain amplification in data-driven subgrid-scale representations, for instance, when based on invariants of the velocity-gradient tensor.
Furthermore, the present methodology may help to gain tremendous insight into the basic mechanisms of wall-bounded turbulence. As indicated
above, turbulent flows are ubiquitous in a wide range of problems of great industrial and environmental interest, such as combustion, aerodynamics, energy generation, transportation and the current climate emergency. Obtaining detailed knowledge of the building blocks of turbulence will be instrumental in controlling these flows, leading to great gains in all these important applications. Note however that, in order to use the SHAP framework presented here, it is important to obtain a detailed representation of the coherent structures in the flow. One approach is to perform DNS, which is progressively enabling detailed simulations of complex flows, such as turbulent wings, where coherent structures can be identified [48]. However, the very high computational cost of DNS [49] precludes the application of this method to full-scale problems, at least at the moment. Nevertheless, the rapid development of computational facilities, particularly in the context of graphics-processing-unit (GPU)-accelerated architectures, may enable very detailed simulations at very high Reynolds numbers in the coming years. Furthermore, experimental work in fluid mechanics is progressively benefiting more from machine learning [50], and it might be possible to obtain high-fidelity flow representations at much higher Reynolds numbers, thus enabling the usage of SHAP frameworks in more practical flow cases. In any case, flow control by deep reinforcement learning (DRL) is already leading to impressive drag-reduction rates in turbulent flows [51], and being able to leverage DRL to control the most important flow structures identified via SHAP may constitute a novel paradigm in terms of flow control, increasing the potential for reducing energy consumption in transportation.
## Methods
### Numerical simulations and flow case under study
The CNN was trained using 4,900 instantaneous velocity fields obtained through DNS. The simulations are calculated in a box with periodic boundaries confined between two parallel plates and driven by an imposed pressure gradient. The employed code is LISO [52], which has been used to run some of the largest simulations of wall-bounded turbulence [20]. The convergence of the turbulence statistics was assessed based on the criterion of linear total shear [53]. The data obtained with LISO has been extensively validated against experimental and other numerical studies [36, 54] and is broadly used [55, 56, 57].
### Deep-neural-network architecture and prediction
A convolutional neural network (CNN) is used for predicting the velocity field, as discussed by Schmekel et al. [30]. Note that CNNs and other computer-vision architectures have been successfully used in the context of turbulent-flow predictions [58, 59, 60, 28, 61]. The convolution operation is described by Equation (4), where \(f\) is the input three-dimensional (3D) tensor, \(h\) the filter, \(G\) the output, and \(m\), \(n\) and \(p\) the indices of the output tensor:
\[G(m,n,p)=(f*h)(m,n,p)=\sum_{i}\sum_{j}\sum_{k}h(i,j,k)f(m-i,n-j,p-k). \tag{4}\]
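For illustration, Equation (4) can be evaluated directly with an off-the-shelf routine; the snippet below convolves a random 3D tensor with a single filter of the size used in the network (in the CNN, of course, the filters are learned).

```python
# Direct evaluation of equation (4): 3D convolution of an input tensor with one filter.
import numpy as np
from scipy.signal import convolve

rng = np.random.default_rng(6)
f = rng.standard_normal((16, 16, 16))    # input 3D tensor
h = rng.standard_normal((5, 3, 5))       # one filter of the size used in the network

G = convolve(f, h, mode="same")          # G(m,n,p) = sum_{i,j,k} h(i,j,k) f(m-i, n-j, p-k)
print(G.shape)
```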
The network employed in this work consists of 4 layers of 3D CNN blocks, which contain plain convolutional and residual blocks [62]. The architecture, similar to the one used by Schmekel et al. [30], is shown in Figure 8. Each convolution comprises a total of 16 filters with size \(5\times 3\times 5\), where the
number of layers and filters of the model were selected to obtain adequate accuracy with an optimum computational cost. The network uses a total of 65,135 parameters (where 99.8% of them are trainable). The network input consists of 3D volumes with \(67\times 64\times 64\) grid points corresponding to downsampled instantaneous flow fields, which originally contain \(201\times 96\times 192\) grid points. Here, 60% of the flow fields are used for the training-and-validation process (out of which 80% are used for training and 20% for validation). Rectified-linear-unit (ReLU) activation functions are used for the hidden layers due to their low computational cost and the reduction of the vanishing-gradients problem [63]. Nevertheless, a sigmoid function is used for the output layer to obtain a smooth normalized output. For this process, an RMSprop optimizer is used [64]. The remaining 40% of fields are reserved for testing and explainability analysis, and are not seen by the network during training. The training process is concluded when the mean-square-error-based loss function is lower than \(10^{-4}\), corresponding to \(1.5\times 10^{4}\) epochs, where all training data is used once in a single epoch.
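A rough Keras sketch in the spirit of this description is given below. It is not the authors' exact model: the layer wiring, padding, residual connections and output shape are simplified, and the input is assumed to carry the three velocity components as channels.

```python
# Simplified sketch of a 3D CNN with 16 filters of size 5x3x5, ReLU activations,
# one residual connection and a sigmoid output, compiled with RMSprop and an MSE loss.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(67, 64, 64, 3))      # downsampled velocity field (assumed channels)
x = layers.Conv3D(16, (5, 3, 5), padding="same", activation="relu")(inputs)
skip = x
x = layers.Conv3D(16, (5, 3, 5), padding="same", activation="relu")(x)
x = layers.Add()([x, skip])                        # simple residual block
x = layers.Conv3D(16, (5, 3, 5), padding="same", activation="relu")(x)
outputs = layers.Conv3D(3, (5, 3, 5), padding="same", activation="sigmoid")(x)  # normalized output

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss="mse")
model.summary()
```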
### Explainability of the neural network
Despite the excellent results achieved with deep learning, the relationships between inputs and outputs are complex, and it is, in general, challenging to explain the predictions based on a particular input field. The SHAP framework [25] is used to evaluate the contribution of each feature (coherent structure in this case) to the prediction of the next flow field. In this work, the importance of the Reynolds-stress structures is of interest. Thus, a mask function is defined to convert from the original space into the space of the structures. The importance is calculated as the influence that each structure has on the prediction by means of the SHAP-kernel method [25]. The workflow of this method is shown in Figure 9.
Figure 8: **Schematic architecture of the convolutional neural network.** (Top) Architecture of the CNN including convolutional and residual blocks. The various activation functions, namely the rectified linear unit (ReLU) and the sigmoid, are also shown. (Bottom) Simplified schematic of the convolution operation.
The SHAP-kernel method combines two distinct components, LIME [65] and Shapley values, and the explanation is represented as an additive feature-attribution method. The SHAP values explain the CNN prediction as a sum of the marginal contributions of the input features through a linear-regression model:
\[g(q^{\prime})=\phi_{0}+\sum_{j=1}^{N_{Q}}\phi_{j}q^{\prime}_{j}, \tag{5}\]
where \(g\) is the linear-regression function, \(\phi_{0}\) the reference output of \(g\), \(\phi_{j}\) the SHAP values, \(q^{\prime}_{j}\) the binary value of the input feature (coherent-structure presence: 1 if present and 0 if absent) and \(N_{Q}\) is the number of features. When a feature is absent, the grid points contained in the structure are substituted by the corresponding nodes in a reference matrix \(\mathbf{U}_{r}\). In the
Figure 9: **Simplified SHAP-kernel algorithm.** The SHAP-kernel algorithm can be divided into different stages. In the first one, the domain is segmented. Then, the results are predicted using the CNN for each structure. The error between this prediction and the simulated solution is used to determine the SHAP value of each feature.
present work, the reference is defined as a null velocity-fluctuation matrix to neglect the turbulence in the deleted Q events. Therefore, as the input of the CNN is a velocity-fluctuation matrix, the reference matrix contains all zeros: \(\mathbf{U}_{r}=\left[0\right]_{67\times 32\times 64\times 3}\). The SHAP values are optimized by reducing the mean-squared-error loss function, which is given as follows for the training data \(Q^{\prime}\):
\[\mathcal{L}(\hat{f},g,\pi_{x})=\sum_{q^{\prime}\in Q^{\prime}}\left[\hat{f}(h_ {x}(q^{\prime}))-g(q^{\prime})\right]^{2}\pi_{x}(q^{\prime}), \tag{6}\]
where the squared error of each \(q^{\prime}\) is obtained by calculating the difference between the error of the CNN model with respect to the simulation, \(\hat{f}\), and the linear model, \(g\). To obtain these predictions, the features must be mapped through the function \(h_{x}\), which assigns each feature of the \(q^{\prime}\) vector to the group of grid points corresponding to the coherent structure. The weight assigned to each coalition of features, \(\pi_{x}\), is calculated as follows, where \(|q^{\prime}|\) is the number of present features:
\[\pi_{x}(q^{\prime})=\frac{N_{Q}-1}{\left(\begin{array}{c}N_{Q}\\ |q^{\prime}|\end{array}\right)|q^{\prime}|(N_{Q}-|q^{\prime}|)}. \tag{7}\]
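Equation (7) translates directly into a small function; note that the empty and full coalitions, for which the expression diverges, are handled separately in the SHAP-kernel algorithm.

```python
# SHAP-kernel weight of a coalition with n_present features out of n_features (equation 7).
from math import comb

def shap_kernel_weight(n_present, n_features):
    assert 0 < n_present < n_features              # empty and full coalitions excluded
    return (n_features - 1) / (
        comb(n_features, n_present) * n_present * (n_features - n_present)
    )

print(shap_kernel_weight(2, 5))                    # weight of a coalition with 2 of 5 structures
```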
Since the CNN provides a \(67\times 32\times 64\times 3\) output tensor, an additional layer must be added to the neural-network model to obtain a single output to calculate the function \(\hat{f}\). Thus, the objective function to evaluate is the mean-squared error of the three velocity components in each field, which is given by:
\[\hat{f}(h_{x}(q^{\prime}))=\frac{\sum_{ijkl}\sqrt{\left(\mathbf{U}_{s_{ijkl}}- \mathbf{U}_{p_{ijkl}}(h_{x}(q^{\prime}))\right)^{2}}}{67\times 32\times 64 \times 3}, \tag{8}\]
where \(s\) and \(p\) denote data from the original simulation and the CNN prediction, respectively.
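A direct transcription of Equation (8) is shown below; since the square root of a squared difference is taken point by point, the expression amounts to the mean absolute difference between the simulated and predicted tensors.

```python
# Scalar objective f_hat of equation (8) for toy simulated and predicted tensors.
import numpy as np

def f_hat(U_s, U_p):
    """Mean point-wise error between simulation and prediction."""
    return np.sqrt((U_s - U_p) ** 2).sum() / U_s.size

rng = np.random.default_rng(8)
U_s = rng.standard_normal((67, 32, 64, 3))
U_p = U_s + 0.1 * rng.standard_normal((67, 32, 64, 3))
print(f_hat(U_s, U_p))
```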
Since the reference tensor neglects the velocity fluctuations (it contains the normalized value of a null fluctuation velocity in the three components), the reference value of the error (_i.e._ the error when the prediction of the reference matrix is evaluated) is higher than the error when any Q event (feature) is included. This definition has two important consequences: first, the prediction error is lower when the reference values are used in the areas with low velocity fluctuations (background structure), as they are similar to the reference matrix; thus, these features are expected to exhibit a lower SHAP value. Second, as including the structures reduces the mean error with respect to the reference, all the SHAP values should be negative. Finally, it is also important to note that in the present application the SHAP value quantifies the modification that including a feature produces in the prediction error. In this case, neglecting the velocity fluctuation of the whole computational domain produces the highest prediction error. Therefore, the SHAP values are negative and including a feature (coherent structure) reduces this error. Consequently, the importance of each feature is determined by its modulus.
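A minimal sketch of the masking step underlying these remarks is shown below: a binary coalition vector selects which structures are kept, and the grid points of the absent structures are replaced by the zero-fluctuation reference. Names and array shapes are illustrative.

```python
# Sketch of the mapping h_x: build the CNN input for a given coalition of structures.
import numpy as np

def h_x(q_prime, u_field, structure_masks):
    """Replace absent structures by the zero-fluctuation reference U_r."""
    out = u_field.copy()
    for present, mask in zip(q_prime, structure_masks):
        if not present:
            out[mask] = 0.0                        # reference: null velocity fluctuations
    return out

rng = np.random.default_rng(7)
u_field = rng.standard_normal((16, 16, 16, 3))
structure_masks = [rng.random((16, 16, 16)) > 0.95 for _ in range(4)]
masked_input = h_x([1, 0, 1, 0], u_field, structure_masks)
print(masked_input.shape)
```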
## Acknowledgments
The deep-learning-model training was enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS)
at Berzelius (NSC), partially funded by the Swedish Research Council through grant agreement no. 2022-06725. This project has been partially funded by the Spanish Ministry of Science, Innovation, and University through the University Faculty Training (FPU) program with reference FPU19/02201 (AC). The data has been obtained with support of grant PID2021-128676OB-I00 funded by MCIN/AEI/10.13039/ 501100011033 and by "ERDF A way of making Europe", by the European Union (SH). RV acknowledges the financial support from ERC grant no. '2021-CoG-101043998, DEEPCONTROL'. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
**Author Contributions**
**Cremades, A.:** Methodology, Software, Validation, Investigation, Writing - Original Draft, Visualization **Hoyas, S.:** Data curation, Resources, Writing - Original Draft, Funding acquisition **Quintero, P.:** Writing - Review & Editing, Funding acquisition **Linkmann, M.:** Writing - Review & Editing **Lellep, M.:** Writing - Review & Editing **Vinuesa, R.:** Conceptualization, Project definition, Methodology, Resources, Writing - Review & Editing, Supervision, Project administration, Funding acquisition.
**Data availability**
The data and codes used to produce this study will be made available for open access as soon as the article is published.
|
2308.14160
|
A Unified Transformer-based Network for multimodal Emotion Recognition
|
The development of transformer-based models has resulted in significant
advances in addressing various vision and NLP-based research challenges.
However, the progress made in transformer-based methods has not been
effectively applied to biosensing research. This paper presents a novel Unified
Biosensor-Vision Multi-modal Transformer-based (UBVMT) method to classify
emotions in an arousal-valence space by combining a 2D representation of an
ECG/PPG signal with the face information. To achieve this goal, we first
investigate and compare the unimodal emotion recognition performance of three
image-based representations of the ECG/PPG signal. We then present our UBVMT
network which is trained to perform emotion recognition by combining the 2D
image-based representation of the ECG/PPG signal and the facial expression
features. Our unified transformer model consists of homogeneous transformer
blocks that take as an input the 2D representation of the ECG/PPG signal and
the corresponding face frame for emotion representation learning with minimal
modality-specific design. Our UBVMT model is trained by reconstructing masked
patches of video frames and 2D images of ECG/PPG signals, and contrastive
modeling to align face and ECG/PPG data. Extensive experiments on the
MAHNOB-HCI and DEAP datasets show that our Unified UBVMT-based model produces
comparable results to the state-of-the-art techniques.
|
Kamran Ali, Charles E. Hughes
|
2023-08-27T17:30:56Z
|
http://arxiv.org/abs/2308.14160v1
|
# A Unified Transformer-based Network for Multimodal Emotion Recognition
###### Abstract
The development of transformer-based models has resulted in significant advances in addressing various vision and NLP-based research challenges. However, the progress made in transformer-based methods has not been effectively applied to biosensing research. This paper presents a novel Unified Biosensor-Vision Multi-modal Transformer-based (UBVMT) method to classify emotions in an arousal-valence space by combining a 2D representation of an ECG/PPG signal with the face information. To achieve this goal, we first investigate and compare the unimodal emotion recognition performance of three image-based representations of the ECG/PPG signal. We then present our UBVMT network which is trained to perform emotion recognition by combining the 2D image-based representation of the ECG/PPG signal and the facial expression features. Our unified transformer model consists of homogeneous transformer blocks that take as an input the 2D representation of the ECG/PPG signal and the corresponding face frame for emotion representation learning with minimal modality-specific design. Our UBVMT model is trained by reconstructing masked patches of video frames and 2D images of ECG/PPG signals, and contrastive modeling to align face and ECG/PPG data. Extensive experiments on the MAHNOB-HCI and DEAP datasets show that our Unified UBVMT-based model produces comparable results to the state-of-the-art techniques.
Emotion recognition, Transformers, Biosensors, Multi-modal representation learning.
## 1 Introduction
Over the past few years, there has been an increasing interest in adopting a bio-sensing perspective for emotion analysis. Certainly, the popularity of bio-sensing research extends beyond affective computing. Various fields such as robotics [1, 2], health [3, 4], and virtual reality [5] have also embraced bio-sensing as a valuable research tool. Bio-sensing systems, including those that measure electrocardiogram (ECG), photoplethysmography (PPG), electroencephalogram (EEG), galvanic skin response (GSR), etc., have been available for decades; however, their size and complexity have limited their use to controlled laboratory settings and hospitals. The emergence of wearable biosensing systems has fueled the recent interest in utilizing bio-sensing systems for various applications by simplifying and speeding up data collection.
Several research studies have demonstrated the feasibility of recognizing human emotions through facial expressions captured in images and videos [6, 7, 8]. Although facial expression recognition systems have shown remarkable success on databases captured under controlled conditions, their performance significantly degrades [9, 10] when they are applied to real-life situations. The system's unreliability in natural environments is primarily due to various factors, such as varying head pose, illumination conditions, and occlusion of different parts of the face. Additionally, recognizing emotions from facial expressions may not be entirely dependable since they can be easily concealed or manipulated. Furthermore, facial expressions can be influenced by social and cultural differences, as human expressiveness varies among individuals. In contrast, physiological signals offer a more accurate reflection of emotions and their subtle changes. Therefore, multimodal emotion recognition techniques that combine facial information with physiological data have the potential to compensate for the drawbacks of unimodal methods and achieve more precise recognition outcomes [11, 12].
Many emotion recognition techniques have been proposed in the past years combining different modalities. For instance, in [13] Shang et al. recognized emotions by fusing EEG, EOG, and EMG signals. Similarly, in [14], Miranda et al. combined EEG, ECG, and GSR modalities to perform emotion analysis. In [15], Koelstra et al. classified emotions along the valance and arousal axes by leveraging face information and EEG signals. Most of these multimodal emotion recognition methods combine different modalities with the
Fig. 1: The framework of UBVMT-based multimodal emotion recognition
EEG signal. However, several research studies show that there is a strong correlation between features extracted from the ECG signal and human emotions. For instance, in [16], Ferdinando et al. extracted ECG features that represent the statistical distribution of dominant frequencies by applying spectrogram analysis to classify emotions. Similarly, in [17], time and frequency domain features are extracted from the ECG signal to perform emotion analysis. Building on those approaches, this paper presents a novel multimodal emotion recognition technique that fuses information from face video frames and ECG/PPG data. To the best of our knowledge, this is the first study that investigates the fusion of face information with the ECG/PPG signal to classify emotions.
Many attempts have been made to create effective techniques for integrating multimodal information. Initially, methods involved simply combining high-level features from all modalities to predict outcomes (known as "early fusion") or adding up decisions made by each unimodal system with learnable weights (called "late fusion") to produce the final inference. While these methods outperformed unimodal techniques to some extent, the lack of inter-modality interactions during training limited the potential improvement. In recent times, Transformers [18] have emerged as a highly successful approach for multimodal representation learning. Due to the advances in attention mechanisms and transformers, later studies have focused predominantly on using these techniques to develop more sophisticated multimodal fusion methods [19, 20, 21, 22, 23]. However, most of these transformer-based multimodal emotion recognition techniques leverage video, audio, and text modalities to learn joint emotion representation. In this paper, we present a unified transformer-based model that employs ECG/PPG data along with face information to recognize emotions. The main framework of the proposed method is shown in Figure 1. Based on our literature survey, no previous work has investigated the application of transformers in learning emotion representation by using face and ECG/PPG modalities.
Inspired by the compact architecture of the recently published Textless Vision-Language Transformer (TVLT) [22], we present a Unified Biosensor-Vision Multimodal Transformer (UBVMT) network to recognize emotions by fusing face and ECG/PPG data. We first investigated the effectiveness of various image-based representations of ECG/PPG signals for biosensor-based unimodal emotion recognition. Several previous methods have employed an image-based representation of an ECG/PPG signal for heart rate estimation [24, 25, 26, 27, 28]. For instance, in [24], Song et al. obtained a spatiotemporal representation of a pulse signal in a time-delayed way by constructing a Toeplitz matrix. The Toeplitz matrix is then converted into an image that is fed to a CNN to estimate heart rate information. The simplicity of the Toeplitz representation allows it to preserve both the morphological and chronological details of the 1D pulse signal. As a result, the CNN can accurately extract the correct HR values from input feature images.
In [25], average pooling is applied to various blocks within an ROI region. The spatiotemporal maps are then generated by arranging these temporal sequences into rows, which are then fed to a deep network to get HR values. In [26], the sample mean sequences of the R, G, and B channels from the ROI of videos are extracted, and Short-Time Fourier Transform (STFT) is applied to construct the 2D time-frequency representations of the sequences. In [27], a spatiotemporal map is first computed from the input video as a representation of the BVP signal, and the spatiotemporal images are then mapped to BVP signals using the generative adversarial learning technique. In [28], Khare et al. converted the time-domain EEG biosignals into a time-frequency representation by employing Smoothed Pseudo Wigner-Ville distribution (SPWVD).
Apart from the image-based representation of ECG signals for heart rate estimation, some works [29] and [30] have extracted deep learning features from the scalogram of biosensor signals to recognize cognitive load and hypertension risk stratification, respectively. They converted the PPG signals to Scalograms by using Continuous Wavelet Transform. Scalograms were used instead of spectrograms as the image representation for PPG signals. This is because scalograms provide better highlighting of the low-frequency or rapidly changing frequency components of the signal as compared to spectrograms.
In this paper, we first investigate and compare the performance of three image representations of the ECG/PPG signal for emotion recognition. More specifically, we perform unimodal emotion recognition by converting the time domain ECG/PPG signal into 1. spatiotemporal maps [24], 2. time-frequency representation by employing Smoothed Pseudo Wigner-Ville distribution (SPWVD) [28], and 3. Scalograms [29][30]. We show that the best emotion recognition performance is obtained by employing the scalogram representation of the ECG and PPG signals. Therefore, our Unified Biosensor-Vision Multimodal Transformer (UBVMT) network is trained and validated by using the scalogram representation of the ECG/PPG signal along with the face images. The effectiveness of the proposed UBVMT-based multimodal emotion recognition technique is validated on two datasets: the MAHNOB-HCI [31] and DEAP [32] datasets. Extensive experiments show that the proposed approach produces comparable results to the state-of-the-art emotion recognition techniques. Overall, the main contributions of this paper are as follows:
* Comparison of the performance of three image-based representations, namely spatiotemporal maps, time-frequency representation, and scalograms of the ECG/PPG signal for emotion recognition, and evidence that scalograms better represent the emotion features contained by the ECG/PPG signals.
* A novel technique that employs transformer architecture for biosensor-based multimodal emotion recognition.
* The Unified Biosensor-Vision Multimodal Transformer (UBVMT) network consisting of homogenous blocks that learn vision-and-biosensor emotion representation with minimal modality-specific design, thus, making it compact and applicable in real-time emotion analysis.
* Experimental results showing that the proposed Unified Biosensor-Vision Multimodal Transformer-based (UBVMT) technique learns effective multimodal emotion representation by fusing face with ECG/PPG data.
* A baseline for emotion recognition research created by the fusion of face data with an ECG/PPG signal.
## 2 Related Work
Psychologically, human emotions can be characterized based on two main frameworks: categorical or dimensional representations. In the categorical framework of emotions, emotions are classified into distinct labels such as joy, sadness, anger, happiness, fear, and so on. This approach considers emotions as discrete categories rather than continuous dimensions. In dimensional conceptualizations of emotions, a commonly used framework involves representing emotions within a two-dimensional space. In this space, valence is positioned along one axis, while arousal is positioned along the other [33]. Valence, often positioned on the horizontal axis, signifies the extent of pleasantness or unpleasantness. On the other hand, arousal, typically placed on the vertical axis, represents the level of activation or energy associated with the emotion. Within a two-dimensional (2D) space, as shown in Figure 2, emotions are portrayed by their valence and arousal level. While the categorical approach to emotions is conceptually straightforward, it faces certain challenges. It struggles to represent compound emotions that do not fit neatly into a single category, and it does not provide a means to quantify the degree or intensity of an emotional state. Therefore, in this paper, a novel multimodal emotion recognition technique is developed to classify emotions along the valence-arousal axis.
### _Emotion Recognition with ECG/PPG Signals_
Previous research shows that there is a strong correlation between ECG/PPG signals and human emotion. In [34], Yu et al. converted the rPPG signal of an input image into time-frequency domain spectrogram images to classify the short-term emotions. In [35], Ismail et al. employed ECG and PPG signals to develop an emotion recognition system, and compared the performance of both signals for classifying emotions. Lee et al. in [36] investigated the ability of PPG signals to recognize emotions along the arousal-valence axis. Sepulveda el al. in [37] performed emotion recognition from ECG signals using a wavelet scattering algorithm. Emotion features of the ECG signal are extracted at different time scales leveraging the wavelet scattering technique, and the extracted features are then fed into different classifiers to perform emotion analysis. Sarkar et al. in [38] exploited a self-supervised deep multi-task learning framework to develop an emotion recognition algorithm using the ECG signals. Mellouk et al. presented a rPPG signal-based emotion classification method using a deep learning architecture, which combines a one-dimensional convolution neural network (1DCNN) and a long short-term memory (LSTM) [39]. In [16], Ferdinando et al. extracted emotion features from ECG signals using spectrogram analysis of intrinsic mode function after applying the bivariate empirical mode decomposition to ECG. Similarly, in [17], Ferdinando et al. derived heart rate variability (HRV) features from ECG signals to categorize emotions in an arousal-valance space. In [29], Gasparini et al. used a scalogram-based representation of a PPG signal to extract deep learning features for the recognition of cognitive load. Gasparini et al. showed that the deep learning features outperformed hand-crafted features to classify cognitive load, especially by leveraging feature selection methods to avoid the curse of dimensionality. Similarly, Liang et al. in [30], converted the PPG signals into scalogram-based representations using continuous wavelet transforms that are input to a deep learning method for the classification and evaluation of hypertension.
### _Emotion Recognition with Facial Images_
Facial images and videos have been widely used by the research community to recognize emotions. Mostly, facial information is used to analyze emotions by using facial expression recognition techniques [40, 41, 42, 43, 44]. However, in this paper, a transformer-based technique is presented that estimates the continuous dimensions related to emotions. Based on the dimensional approach, affective behavior can be characterized using various underlying continuous dimensions. These dimensions offer a more accurate representation of the emotions individuals experience in their daily lives. As noted in Figure 2, two widely utilized dimensions are valence, which reflects the positivity or negativity of an emotional state, and arousal, which gauges the intensity of emotional activation [45, 46, 47].
To estimate valence and arousal from facial images, Dimitrios et al. [48] used a Convolutional and Recurrent (CNN-RNN) deep neural architecture to extract emotion features for dimensional emotion recognition. Deng et al. [49] used the ResNet-50 model for multi-task expression recognition and implemented the teacher-student architecture to enhance the training data without labels. This approach aims to address the imbalanced data distribution across different tasks. In [50], Kuhnke et al. utilized facial landmarks to align the image and eliminate conflicting data. They achieve this by leveraging the correlation among different representations to generate pseudo-labels. In [51], the features extracted from a 3D-CNN are inputted into the 3D VGG and 2D SENet-101 networks. Subsequently, Gated Recurrent Units (GRUs) are employed to enhance the model's ability to learn temporal features.
### _Multi-modal Emotion Recognition_
There have been many studies focusing on the analysis of multimodal systems for recognizing human emotions.
Fig. 2: Arousal valence emotion model
These studies have demonstrated that, by integrating information from multiple modalities, the performance of models for recognizing emotions can be enhanced [52]. Therefore, in [53], Yin et al. fused information from the ECG signal with Electroencephalogram (EEG), Electrooculography (EOG), Galvanic Skin Response (GSR), Electromyography (EMG), skin temperature, blood volume, and respiration to recognize emotions. Miranda et al. [14] recorded EEG, GSR, and ECG signals using wearable sensors and combined EEG and GSR information with the ECG signal to categorize human emotions. Soleymani et al. [31] employed Hidden Markov Models (HMMs) to classify emotions by integrating ECG features with EEG, GSR, respiration, and skin temperature data. In [54], Stamos et al. fused emotion features extracted from the ECG signal with the features obtained from the EEG signal to perform emotion analysis. Santamaria et al. [55] integrated the modalities of ECG and GSR to recognize emotions by employing a Deep Convolutional Network (DCNN). Elalamy et al. [56] performed emotion analysis using the spectrogram and recurrence plots (RP) of ECG, EDA, and Photoplethysmography (PPG) signals individually, and investigated the performance by combining this information for multi-modal emotion recognition. Most of the aforementioned techniques fuse the ECG signal with other modalities such as EEG, EMG, GSR, etc., for multi-modal emotion analysis. But, unlike the EEG signal, which has been combined with the facial information as proposed in [15, 57, 58], the ECG signal has not been fused with facial features to perform emotion recognition. In this paper, a Unified Biosensor-Vision Multimodal Transformer-based (UBVMT) emotion recognition technique is presented that integrates ECG/PPG data with facial information.
## 3 Research methods
This section discusses various 1D-to-2D transformation methods to extract emotion features from 2D representation of ECG/PPG signals. We then present our Unified Biosensor-Vision Multimodal Transformer (UBVMT) architecture that leverages the 2D representation of ECG/PPG signals along with face information to recognize emotions.
### _The 2D Representation of ECG/PPG Signals_
In this paper, one of our goals is to find an effective method that can be used to transform the 1D time domain signal of ECG/PPG data into an image-based representation for emotion recognition. Therefore, in this section, we investigate the effectiveness of three different 1D-to-2D transformation techniques to perform 2D image-based emotion analysis using ECG/PPG data. More specifically, we present a unimodal emotion recognition network using ResNet-18 [62] by converting the time domain ECG and PPG signals into (1) spatiotemporal maps [24], (2) time-frequency representation by employing Smoothed Pseudo Wigner-Ville distribution (SPWVD) [28], and (3) Scalograms [29, 30] as shown in Figure 3.
#### 3.1.1 Spatiotemporal Representation of ECG/PPG signal
In this section, a novel emotion recognition technique using a spatiotemporal representation of an ECG/PPG signal is presented. Song et al. [24] converted a 1D ECG and PPG signal into a 2D spatiotemporal map to estimate heart rate information. It has been reported that the performance degradation of the conventional PPG signals for heart rate estimation due to noise can be overcome by employing deep learning techniques on the spatiotemporal maps of the PPG signal [24]. Therefore, in this section, we apply the technique proposed in [24] to convert the 1D ECG/PPG signal into a 2D spatiotemporal map and perform emotion analysis using the spatiotemporal maps as input to the classifier. The spatiotemporal feature maps can retain both the morphological and chronological features of ECG/PPG signals.
_Construction of ECG/PPG Spatiotemporal Map:_ The spatiotemporal feature map of an ECG/PPG signal is constructed by creating a square Toeplitz matrix. Suppose the 1D input signal \(S=(s_{1},s_{2},...,s_{P})\) has P samples and P is an even number. The first row of the matrix consists of \(s_{1}\) to \(s_{P/2}\) samples. Similarly, the second row contains samples from the second point, i.e., \(s_{2}\) to the \((P/2+1)th\) sample, and so on. Therefore, a square Toeplitz matrix T with a size equal to \(P/2\) is constructed as:
\[T=\begin{bmatrix}s_{1}&s_{2}&\cdots&s_{P/2}\\ s_{2}&s_{3}&\cdots&s_{P/2+1}\\ \vdots&\vdots&\ddots&\vdots\\ s_{P/2}&s_{P/2+1}&\cdots&s_{P-1}\end{bmatrix}\]
A gray image is constructed by converting the matrix T into an image representation. The obtained gray image has a clear structure because the input signal is quasiperiodic. The second row of Figure 4 shows the spatiotemporal image representation of 1D signals. As can be seen in Figure 4, the vertical patterns preserve the period information of the 1D signal, which suggests that the 2D Toeplitz representation can effectively capture and portray the periodic nature of a 1D signal. The morphological and chronological information of the 1D signal is preserved in this simple 2D spatiotemporal representation; therefore, emotion features can be extracted from these 2D maps of the ECG/PPG signals. The spatiotemporal image representations of ECG/PPG signals were then adjusted to the size of \(224\times 224\times 3\) and fed to a pre-trained ResNet-18 [62] to perform emotion analysis.
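As an illustration, a possible NumPy implementation of this map is sketched below; it is not the code of [24], and the use of PIL for resizing to the CNN input resolution is an assumption of this example.

```python
import numpy as np
from PIL import Image

def spatiotemporal_map(signal, out_size=224):
    """Build the square spatiotemporal map T of a 1D ECG/PPG segment: row i holds
    samples s_i ... s_{i+P/2-1}, so the quasi-periodicity of the pulse appears as
    vertical stripes in the resulting gray image."""
    s = np.asarray(signal, dtype=float)
    P = len(s) - (len(s) % 2)                                 # enforce an even number of samples
    s, half = s[:P], P // 2
    T = np.stack([s[i:i + half] for i in range(half)])        # (P/2, P/2) matrix
    T = (255 * (T - T.min()) / (np.ptp(T) + 1e-8)).astype(np.uint8)
    img = Image.fromarray(T).resize((out_size, out_size))     # CNN input resolution
    return np.repeat(np.array(img)[..., None], 3, axis=-1)    # 224 x 224 x 3
```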
#### 3.1.2 Time-frequency Representation of ECG/PPG Signal
To leverage the effectiveness of Convolutional Neural Networks (CNNs), Khare et al. [28] converted 1D EEG signals
Fig. 3: Unimodal emotion recognition
into 2D maps using a Time-Frequency (TF) representation for emotion recognition. More specifically, filtered EEG time-domain signal is transformed into time, frequency, and amplitude representation by employing Smoothed Pseudo-Wigner-Ville Distribution (SPWVD). The transformation of time-domain signals into Time-Frequency (TF) representation enables the preservation of signal information in the spectral domain. TF map is a representation that combines time, frequency, and amplitude in a spatial format simultaneously. Therefore, a time-frequency representation of the ECG/PPG signal can also be obtained by using other time-frequency analysis techniques such as Short-Time Fourier Transform (STFT), Wigner-Ville distribution, and so on. The time-frequency representation obtained by using STFT is known as a spectrogram. To perform Short-Time Fourier Transform (STFT), certain parameters need to be defined, such as the window size, shape, and sampling frequency. It is crucial to maintain a consistent window length across the entire signal. However, the resulting spectrogram from STFT often exhibits limited resolution due to the trade-off between time and frequency localization. In contrast, the utilization of the Wigner-Ville distribution for obtaining the time-frequency representation results in the presence of cross-terms and attenuation for low frequencies. As reported in [59], SPWVD offers enhanced time-frequency resolution, effectively resolving the limitations associated with STFT. The challenges associated with the Wigner-Ville distribution are tackled in SPWVD by incorporating a cross-term reducing window in the frequency domain.
In this paper, one of our goals is to find an effective method that can be used to transform the 1D time domain signal of ECG/PPG into an image-based representation. Therefore, to address the aforementioned constraints of STFT and the Wigner-Ville distribution, in this section, the technique of Smoothed Pseudo-Wigner-Ville Distribution (SPWVD) is employed to convert time-domain ECG/PPG signals into a time-frequency representation. The performance of a SPWVD-based TF representation of ECG/PPG signals is then investigated for emotion recognition.
_Construction of ECG/PPG TF Map:_ SPWVD provides a straightforward depiction of the localization of signal energy in both time and frequency domains. The choice of the length and type of the cross-term reducing window in both the time and frequency domains can be made independently. Due to the independent selection of the length and type of the cross-term reducing window, SPWVD exhibits favorable time-frequency cluster characteristics. The representation of SPWVD in mathematical terms can be expressed as [59, 60].
\[SPWVD(t,f)=\int_{t_{1}}\int_{f_{1}}u(t-t_{1})\,v(f-f_{1})\,W(t_{1},f_{1})\,dt_{1}\,df_{1} \tag{1}\]
\[W(t,f)=\int_{-\infty}^{\infty}x(t+\frac{\tau}{2})x^{*}(t-\frac{\tau}{2})e^{-j2 \pi f\tau}d\tau \tag{2}\]
where \(u(t)\) represents the smoothing window in the time domain, \(v(f)\) denotes the smoothing window in the frequency domain, \(\tau\) corresponds to a lag, and \(x(t)\) is the input signal. Controlling the smoothing scales in both the time and frequency domains is straightforward. The length of the windows for \(u(t)\) and \(v(f)\) can be chosen independently. The TF representation of time-domain ECG/PPG signals obtained by employing the SPWVD technique is shown in the third row of Figure 4.
To mitigate cross-terms in both time and frequency domains, the Kaiser window is employed. However, selecting a window size that is too small may lead to diminished resolution, while overly large windows can significantly increase the image size. Consequently, a medium-sized window with a length of 31, as suggested in [28], is chosen. For efficient computation, the window size is maintained at \(2^{n}-1\), where \(n\) represents the number of bits. The TF representations of time-domain ECG/PPG signals are then adjusted to the size of \(224\times 224\times 3\), and fed to a pre-trained ResNet-18 [62] to perform emotion recognition.
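A naive discrete version of Eqs. (1)-(2) can be sketched as follows. This is a simplified, unoptimized illustration rather than a reference implementation; the Kaiser windows of length 31 follow the text, while the shape parameter `beta` and the handling of the frequency axis are assumptions of this example.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.signal.windows import kaiser

def spwvd(x, time_win=31, lag_win=31, beta=8.0):
    """Simplified smoothed pseudo Wigner-Ville distribution of a 1D segment.
    g smooths in time (u in Eq. (1)); h windows the lag variable and hence
    smooths in frequency (v in Eq. (1))."""
    z = hilbert(np.asarray(x, dtype=float))          # analytic signal
    N = len(z)
    g = kaiser(time_win, beta); g /= g.sum()
    h = kaiser(lag_win, beta); h /= h.max()
    Lg, Lh = time_win // 2, lag_win // 2
    tfr = np.zeros((N, N), dtype=complex)
    for n in range(N):                               # time index
        tau_max = min(n, N - 1 - n, Lh)
        for tau in range(-tau_max, tau_max + 1):     # lag index
            acc = 0.0 + 0.0j
            for p in range(-Lg, Lg + 1):             # time smoothing
                i1, i2 = n + p + tau, n + p - tau
                if 0 <= i1 < N and 0 <= i2 < N:
                    acc += g[p + Lg] * z[i1] * np.conj(z[i2])
            tfr[tau % N, n] = h[tau + Lh] * acc
    return np.abs(np.fft.fft(tfr, axis=0))[: N // 2]  # positive frequencies x time
```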
#### 3.1.3 Scalogram of ECG/PPG Signal
Gasparini et al. [29] converted the monodimensional photoplethysmography (PPG) data into a bidimensional representation, and applied a pre-trained CNN to classify the different levels of the subject's hypertension. More specifically, the 1D PPG signals were converted into scalograms by leveraging Continuous Wavelet Transform [61]. Gasparini et al. opted to utilize the scalogram instead of the spectrogram as the image representation for PPG signals. The reason for this choice is that scalograms offer enhanced highlighting of low-frequency or rapidly changing frequency components in the signal, compared to spectrograms. Similarly, in [30], Liang et al. converted 1D PPG signals into 2D scalograms by using Continuous Wavelet Transform to classify hypertension. In this paper, we investigate the performance of scalogram representation of ECG/PPG signals for deep learning-based emotion recognition.
Fig. 4: 2D representations of ECG/PPG signal
_Construction of ECG/PPG scalograms:_ Continuous Wavelet Transform (CWT) has been used for decades as a valuable technique for analyzing both time and frequency information. In this paper, each segment of the ECG/PPG signal was transformed into a time-frequency representation, known as a scalogram, using the continuous wavelet transform method. A scalogram represents the absolute values of the continuous wavelet transform coefficients of a signal, displayed as a graph of time and frequency. In contrast to a spectrogram, a scalogram provides improved capability in identifying the low-frequency or rapidly changing frequency components of the signal. To convert the PPG signal into a scalogram, we obtained the absolute values of the wavelet coefficients for each signal segment using the analytic Morse (3,60) wavelet. The scalogram representation of ECG/PPG signals obtained by employing the CWT technique is shown in the fourth row of Figure 4. The scalograms were then adjusted to the size of \(224\times 224\times 3\), and fed to a pre-trained ResNet-18 [62] to perform emotion recognition.
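A possible way to generate these scalograms in Python is sketched below. PyWavelets is an assumed dependency of this example, and its Morlet wavelet is used as a stand-in for the analytic Morse (3,60) wavelet mentioned above, which is not available in that library.

```python
import numpy as np
import pywt

def ecg_ppg_scalogram(segment, fs, n_scales=127):
    """Scalogram (absolute CWT coefficients) of one ECG/PPG segment."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(segment, scales, "morl", sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)                                   # scales x time
    # map to 8-bit gray levels so the scalogram can be resized to 224 x 224 x 3
    scalogram = 255 * (scalogram - scalogram.min()) / (np.ptp(scalogram) + 1e-8)
    return scalogram.astype(np.uint8)
```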
### _The Unified Transformer-based Multi-modal Network_
In this section, we discuss our Unified Biosensor-Vision Multimodal Transformer (UBVMT) network where homogeneous transformer blocks take 2D representations of ECG/PPG signals and raw visual inputs for multi-modal emotion representation learning with minimal modality-specific design. UBVMT is trained by employing masked autoencoding [63] and contrastive modeling [22] to learn effective emotion representation. The overall architecture of our Unified Biosensor-Vision Multi-modal Transformer (UBVMT) network is shown in Figure 5. The objective of masked autoencoding is to reconstruct masked patches of 2D representations of ECG/PPG signals and face video frames, while contrastive modeling is applied to align ECG/PPG and face information. We argue that, due to the unified architecture of UBVMT, the computational redundancy and complexity of our method is reduced as compared to conventional transformer-based multi-modal emotion recognition techniques.
The input to the UBVMT is the integration of face patch embedding, ECG/PPG patch embedding, and modality embedding. To obtain face embeddings, the face image of \(224\times 224\times 3\) pixels is divided into a list of \(16\times 16\) sized patches. The pixel values of each patch are normalized, and a linear projection layer is used to convert face patches into a 768-dimensional patch embedding. For an input face image of size \(224\times 224\times 3\), the size of the resultant face embedding is \(14\times 14\). The spatial embedding involves incorporating spatial information for each input patch by adding a distinct trainable vector to the height and width axes of the \(14\times 14\) embeddings. For ECG/PPG embedding, the 2D representation of an ECG/PPG signal is divided into patches, and a linear projection layer is applied on each patch to obtain a 768-dimensional patch embedding. Similar to the face modality, a patch size of \(16\times 16\) is used, and trainable temporal and frequency embeddings are utilized to denote the temporal and frequency information of the patches.
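The tokenization described above can be sketched with a standard ViT-style patch projection, shown below in PyTorch. This is an illustration under the stated patch size and hidden dimension, not the exact UBVMT code; using a strided convolution as the per-patch linear projection is a common equivalent formulation.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Patchify + linear projection shared by face frames and 2D ECG/PPG maps:
    16 x 16 patches of a 224 x 224 x 3 input become 14 x 14 = 196 tokens of size 768."""
    def __init__(self, patch=16, in_ch=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                          # x: (B, 3, 224, 224)
        tokens = self.proj(x)                      # (B, 768, 14, 14)
        return tokens.flatten(2).transpose(1, 2)   # (B, 196, 768)

# learnable modality embeddings added to the tokens of each modality
# (registered inside the full model in practice)
face_type = nn.Parameter(torch.zeros(1, 1, 768))
bio_type = nn.Parameter(torch.zeros(1, 1, 768))
```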
#### 3.2.1 The architecture of UBVMT
Similar to the transformer model proposed in [18], UBVMT has an encoder E which has 12 layers (hidden size 768), and a decoder D with 8 layers (hidden size 512). After pretraining, we only employ the encoder \(E\) part of the transformer and finetune it for emotion recognition.
#### 3.2.2 Pretraining UBVMT
UBVMT is pre-trained by employing masked autoencoding [63] and contrastive modeling [22] to learn effective emotion representation. The pretraining objective function \(\mathcal{L}\) is the weighted sum of masked autoencoding loss \(\mathcal{L}_{\mathcal{M}}\) and the contrastive modeling loss \(\mathcal{L}_{\mathcal{C}}\):
\[\mathcal{L}=\lambda_{M}\mathcal{L}_{\mathcal{M}}+\lambda_{C}\mathcal{L}_{ \mathcal{C}} \tag{3}\]
where \(\lambda_{M}=0.4\) and \(\lambda_{C}=1\).
_Masked Autoencoding Objective Function:_ The main objective of masked autoencoding is to learn effective unimodal representations in biosensor-and-vision settings. This objective involves masking random patches of ECG/PPG images and the face video frames, allowing us to reconstruct missing inputs effectively. Specifically, we implement a random dropout mechanism on a portion of ECG/PPG embedding \(x^{B}\) and face embedding \(x^{F}\). Subsequently, we input the remaining patch embeddings to the encoder \(E\). To generate the input to the decoder \(D\), we add the dropped embeddings as trainable vectors labeled as [MASK] and position them in the same locations as the original inputs (indicated by gray boxes in Figure 5. Additionally, the corresponding positional, and frequency embeddings, which are separately parametrized, are incorporated into the decoder input. The objective function of masked autoencoding is a mean squared error between the reconstructed and original ECG/PPG images and face video frames:
\[\mathcal{L}_{\mathcal{M}}=\frac{1}{N_{m}^{B}}\sum_{i\in\text{masked}}\|x_{i}^ {B}-\hat{x}_{i}^{B}\|_{2}^{2}+\frac{1}{N_{m}^{F}}\sum_{j\in\text{masked}}\|x_{j }^{F}-\hat{x}_{j}^{F}\|_{2}^{2} \tag{4}\]
where \(N_{m}^{B}\) and \(N_{m}^{F}\) are the number of masked patches for ECG/PPG images and face video frames, respectively. Loss \(\mathcal{L}_{\mathcal{M}}\) is computed only on masked patches. Note that the ECG/PPG and face output of the encoder are fed to the decoder separately.
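A compact way to write the loss of Eq. (4) is shown below; the tensor layout (patch-flattened inputs and boolean masks) is an assumption of this sketch.

```python
import torch

def masked_reconstruction_loss(x_bio, rec_bio, mask_bio, x_face, rec_face, mask_face):
    """Eq. (4): mean-squared reconstruction error evaluated on masked patches only.
    x_*, rec_*: (B, N_patches, patch_dim); mask_*: (B, N_patches) booleans (True = masked)."""
    def per_modality(x, rec, mask):
        err = ((x - rec) ** 2).mean(dim=-1)              # per-patch squared error
        return (err * mask).sum() / mask.sum().clamp(min=1)
    return per_modality(x_bio, rec_bio, mask_bio) + per_modality(x_face, rec_face, mask_face)
```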
_Contrastive Modeling Objective Function:_ The main objective of contrastive modeling is to perform ECG/PPG-vision matching and learn an effective cross-modal representation, as shown in Figure 5. For every face image, a positive vision-ECG/PPG pair \((x^{F+};x^{B})\) is generated. Additionally, we form half of the vision-ECG/PPG pairs within a batch as mismatched (negative) pairs \((x^{F-};x^{B})\) by substituting the face images \(x^{F+}\) with randomly selected face images \(x^{F-}\) from the training dataset.
Similar to previous multimodal transformers [64, 65, 66, 22], we incorporate a linear layer with sigmoid activation as the classification head. This is applied to the encoder output of the first [CLS] token, resulting in the matching probability \(p\) by employing binary cross-entropy loss:
\[\mathcal{L}_{\mathcal{C}}=-y\log p \tag{5}\]
where the value of y is set to 1 when the input vision-ECG/PPG pair \((x^{F};x^{B})\) is positive (match), and 0 otherwise. Throughout the training process, \(\mathcal{L}_{\mathcal{M}}\) and \(\mathcal{L}_{\mathcal{C}}\) are computed using separate forward passes.
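The matching head and the combined objective of Eq. (3) can be sketched as follows in PyTorch. Note that the snippet uses the full binary cross-entropy, whereas Eq. (5) writes only its positive-label term; the layer sizes follow the text and everything else is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingHead(nn.Module):
    """Vision-ECG/PPG matching probability p from the encoder output of the [CLS] token."""
    def __init__(self, dim=768):
        super().__init__()
        self.fc = nn.Linear(dim, 1)

    def forward(self, cls_token):                       # (B, 768)
        return torch.sigmoid(self.fc(cls_token)).squeeze(-1)

def pretraining_loss(loss_masked, p, y, lambda_m=0.4, lambda_c=1.0):
    """Eq. (3): weighted sum of the masked-autoencoding and matching terms;
    y = 1 for matched (positive) vision-ECG/PPG pairs and 0 otherwise."""
    loss_match = F.binary_cross_entropy(p, y.float())
    return lambda_m * loss_masked + lambda_c * loss_match
```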
## 4 Experimental Details
### _Database Description_
The effectiveness of transformer models to learn multimodal representation is improved by pretraining them on large datasets [65, 66, 67]. As such, UBVMT is pre-trained on the large, comprehensive CMU-MOSEI [68] dataset. Similar to [24, 25, 69], rPPG signals are extracted from the CMU-MOSEI video clips using the MTTS-CAN [70] method. After pretraining, the MAHNOB-HCI [31] and DEAP [32] datasets are used for multimodal emotion analysis.
The **CMU-MOSEI** dataset comprises 23,454 movie review clips, totaling over 65.9 hours of YouTube video content. It features contributions from 1000 speakers and encompasses 250 distinct topics. Videos having non-human faces and faces with more than \(80^{\circ}\) head rotations from the frontal position are discarded from the dataset. From each video, multiple video clips of 1.1 secs are extracted, with the number of clips being dependent on the length of each video. For our case, 90,037 video clips are extracted from the entire CMU-MOSEI dataset. These video clips are then fed to the MTTS-CAN [70], and only the pulse signals from the MTTS-CAN are used to obtain the rPPG signals of these video clips.
The **MAHNOB-HCI**[31] (Multimodal Human-Computer Interaction) dataset is a widely used publicly available dataset designed for research in affective computing and multimodal emotion recognition. The dataset includes 527 facial video recordings of 27 participants engaged in various tasks and interactions, while their physiological signals such as 32-channel electroencephalogram (EEG), 3-channel electrocardiogram (ECG), 1-channel galvanic skin response (GSR), and facial expressions are captured. The ECG signals were sampled at a rate of 256 Hz. The ECG data is precisely synchronized and aligned with the face video recordings. Similar to [71], only the EXG2 signal from the 3 ECG channel system is extracted for emotion analysis.
The Database for Emotion Analysis using Physiological Signals **(DEAP)**[32] is a widely used and publicly available database designed for research in emotion analysis and affective computing. The DEAP database contains data from 32 participants, aged between 19 and 37 (\(50\%\) female), who were recorded watching 40 one-minute music videos. Each participant was asked to evaluate each video by assigning values from 1 to 9 for arousal, valence, dominance, like/dislike, and familiarity. Face video was recorded for 22 out of the 32 participants, and the proposed UBVMT method is evaluated using this particular group of subjects. For each dataset, we conduct a subject-independent 10-fold cross-validation evaluation.
### _Pre-processing Steps_
The MAHNOB-HCI emotion elicitation data contains unstimulated baseline and stimulated response ECG signals. Furthermore, the database includes a synchronization signal that facilitates the separation of the two. Our experiments exclusively utilized the ECG signals recorded during the stimulated phase. To remove motion artifacts, the smoothed signal is subtracted from the original signal [31]. Subsequently, similar to [16, 17], a notch filter was applied at \(60\) Hz to eliminate power-line interference. Baseline drift was mitigated by implementing a highpass filter at \(0.4\) Hz. Additionally, other noises were eliminated using a low-pass filter set at \(200\) Hz. Due to the variability in the length of the ECG signals, they are partitioned into segments of 5 seconds each.
The PPG and rPPG signals from the DEAP and CMU-MOSEI datasets are preprocessed and segmented into segments of pulses corresponding to 1.1 secs following [36].
Fig. 5: Multimodal emotion recognition
The PPG signal can be affected by disturbances, such as movement at the sensor attachment site. Consequently, PPG signals in the DEAP dataset contain movement noise. Hence, before segmenting PPG signals into single pulses of 1.1 secs, a high-order polynomial (of order 50) is fitted to the PPG signal. Subsequently, the fitted curve is subtracted from the original PPG signal, effectively eliminating any movement noise. To partition the PPG signal into single pulses of 1.1 secs, we first find the peaks of the signal and then use each peak as the center of the segmenting window. The next pulse is segmented by moving the center of the segmenting window to the next peak, and so on. The biosignal data vary from person to person; as such, the PPG/rPPG signals are normalized to mitigate variations in PPG signal size among individuals. We must exercise caution to retain the personal characteristics of the signal, as they vary depending on the emotions. The PPG signals are normalized by employing the personal maximum and minimum [36]:
\[\bar{z_{i}}=\frac{(z_{i}-min_{person})}{(max_{person}-min_{person})}\times\alpha \tag{6}\]
where \(z_{i}\) is the PPG signal, \(\bar{z_{i}}\) is the normalized signal, and \(\alpha\) is set to \(1000\) to normalize \(z_{i}\) between \(0\) and \(1000\). After extracting the signal segments and their corresponding video frames, we balance the dataset by discarding some segments that contain neutral facial expressions. We employ the off-the-shelf TER-GAN [6] FER model trained on the in-the-wild AffectNet [10] dataset to detect segments containing non-neutral facial expressions.
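These preprocessing steps can be summarized in a short sketch. The polynomial fit uses a rescaled domain for numerical stability, and the minimum peak distance passed to `find_peaks` is a heuristic assumption; neither detail is specified in the text.

```python
import numpy as np
from scipy.signal import find_peaks

def preprocess_ppg(ppg, fs, poly_order=50, alpha=1000.0, pulse_len_s=1.1):
    """Detrend, normalize (Eq. (6)), and peak-segment one subject's PPG recording."""
    t = np.arange(len(ppg))
    trend = np.polynomial.Polynomial.fit(t, ppg, poly_order)(t)   # order-50 polynomial fit
    clean = ppg - trend                                           # remove movement noise
    z = alpha * (clean - clean.min()) / (clean.max() - clean.min())
    half = int(pulse_len_s * fs / 2)                              # 1.1 s window centered on each peak
    peaks, _ = find_peaks(z, distance=int(0.5 * fs))
    segments = [z[p - half:p + half] for p in peaks if half <= p < len(z) - half]
    return np.array(segments)
```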
### _Unimodal Emotion Recognition Details_
This section presents the experimental details of the unimodal emotion recognition experiments that were performed to investigate and compare the performance of the three 2D representations of ECG/PPG signals. These experiments were conducted by inputting only the 2D image-based representations of the ECG signal (in the case of MAHNOB-HCI dataset), and the PPG signal (in the case of DEAP dataset) to a pre-trained ResNet-18 [62]. ResNet-18 was employed to classify emotions in an arousal-valence space using both the MAHNOB-HCI and DEAP datasets. In the case of the MAHNOB-HCI dataset, the arousal dimension consisted of three categories: 'calm','medium', and 'activated', while the valence dimension included the categories 'unpleasant', 'neutral', and 'pleasant', similar to the class distribution presented in [17]. The arousal and valence classes in the DEAP dataset are annotated on a 1-9 scale. Therefore, following [36], the arousal and valence are split into two binary classes based on a threshold of 5, indicating high and low arousal, and high and low valence, respectively.
ResNet-18 is finetuned by using the Adam [72] optimization algorithm with a learning rate of 0.001, a batch size of 128, and a cross-entropy loss function. The ResNet-18 is trained for 50 epochs, and 10-fold cross-validation is employed for the evaluation of the 2D ECG/PPG representation-based unimodal emotion recognition technique. The best emotion recognition performance is obtained by converting 1D ECG and PPG signals into the 2D scalogram representation. Further details and a comparison of the performance of the three 2D representation techniques are discussed in Section 5.1.
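The fine-tuning loop described above can be sketched as follows; the data loader of \(224\times 224\times 3\) maps and labels is assumed to exist, and the ImageNet weights stand in for the unspecified pre-training of ResNet-18.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def finetune_resnet18(train_loader, n_classes, epochs=50, lr=1e-3):
    """Fine-tune an ImageNet-pretrained ResNet-18 on 2D ECG/PPG maps
    (n_classes = 3 per axis for MAHNOB-HCI, 2 per axis for DEAP)."""
    model = resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)      # settings from the text
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for maps, labels in train_loader:                        # maps: (B, 3, 224, 224)
            optimizer.zero_grad()
            loss = criterion(model(maps), labels)
            loss.backward()
            optimizer.step()
    return model
```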
### _Multimodal Emotion Recognition Details_
Multimodal emotion recognition is performed by applying our novel Unified Biosensor-Vision Multi-modal Transformer (UBVMT) network in an arousal-valence space using the facial expression information and 2D representation of ECG/PPG data from the MAHNOB-HCI and DEAP datasets, respectively. As far as we are aware, we are the first ones to model the correlation between the face and ECG/PPG data for emotion recognition employing a transformer network. For the extraction of the face information, since the ECG and PPG signals are synchronized with the facial videos of the participants in both datasets, we partitioned the videos into clips of 5 secs for the MAHNOB-HCI dataset and 1.1 secs for the DEAP database. Subsequently, following [7], we extract the first frame from each video clip, pass it to MTCNN [77] to extract only the face region as a facial expression representation, and input it to the UBVMT network along with the 2D representation of the corresponding ECG/PPG signal. As mentioned in Section 4.3, the best unimodal emotion recognition performance is obtained by transforming the 1D ECG and PPG signals into a 2D scalogram representation. Therefore, multimodal emotion analysis is performed by combining the facial expression images with the scalograms of ECG and PPG signals using our proposed UBVMT network.
UBVMT is trained using the Adam [72] optimizer, with a batch size of 4, learning rate of 1e-4, and using a cosine schedule [73] with a decay rate set at 0.001. To formulate the pretraining objective in equation 3, the values of \(\lambda_{M}\) and \(\lambda_{C}\) are set to 0.4 and 1, respectively. Following MAE [63] and TVLT [22], a random masking strategy is applied, where \(75\%\) of the face and ECG/PPG patches are randomly masked. After pretraining, the encoder of UBVMT is detached, and a two-layer MLP is added on top of the encoder representation for emotion analysis. The encoder plus the MLP layers are fine-tuned using the Adam [72] optimizer, the learning rate of 1e-4, and a decay rate of 0.001. All the training and validation processes are carried out using dual NVIDIA Tesla V100 GPUs.
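The emotion head attached to the pretrained encoder can be sketched as below; the text only specifies a two-layer MLP, so the hidden width and activation are assumptions of this example.

```python
import torch.nn as nn

class EmotionHead(nn.Module):
    """Two-layer MLP on top of the UBVMT encoder representation of the [CLS] token,
    fine-tuned jointly with the encoder (Adam, lr = 1e-4, decay rate 0.001)."""
    def __init__(self, dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, cls_token):          # cls_token: (B, 768)
        return self.mlp(cls_token)
```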
TABLE II: DEAP dataset: Performance comparison of 2D representations of the PPG signal for unimodal emotion recognition.

| Method | Valence | Arousal |
| --- | --- | --- |
| Spatio-temporal maps [24] | 60.26 | 62.71 |
| SPWVD [28] | 64.03 | 65.19 |
| Scalogram [29, 30] | **76.51** | **77.02** |

TABLE I: MAHNOB-HCI dataset: Performance comparison of 2D representations of the ECG signal for unimodal emotion recognition.

| Method | Valence | Arousal |
| --- | --- | --- |
| Spatio-temporal maps [24] | 37.62 | 41.04 |
| SPWVD [28] | 38.39 | 42.75 |
| Scalogram [29, 30] | **42.91** | **49.14** |
## 5 Results and Analysis
In this section, the experimental results of both the unimodal emotion recognition and the multimodal emotion analysis using the proposed UBVMT method are discussed in detail.
### _Results and Analysis of Unimodal Method_
The unimodal emotion recognition performance of the three 2D representations of ECG/PPG signals for MAHNOB-HCI and DEAP datasets are presented in Table I and Table II as an average of the 10-fold cross-validation. As it can be seen, the scalogram representation of both ECG and PPG signals outperforms the emotion recognition performance of the spatiotemporal maps and the SPWVD-based 2D representation. The superior performance of scalogram representation is because it is more effective at identifying the low-frequency or rapidly-changing frequency components of the signal, as both ECG and PPG are low-frequency signals. Similarly, the emotion recognition accuracy of SPWVD is higher than the accuracy of spatio-temporal maps because SPWVD offers better time-frequency resolution, and thus captures emotion features more effectively than the spatio-temporal maps.
The performance of our unimodal emotion recognition method is also compared with the state-of-the-art unimodal emotion recognition techniques on both the MAHNOB-HCI and DEAP datasets, as presented in Table III. For the MAHNOB-HCI dataset, we compare our result with the ECG-based emotion analysis technique proposed by Ferdinando et al. [17] and rPPG-based method of Yu et al. [76]. In [17], Ferdinando et al. extracted heart rate variability (HRV) features from ECG signals to classify emotions. As it can be seen in Table III, our scalogram-based method outperforms the emotion recognition technique proposed in [17]. In [76], Yu et al. extracted ten-dimensional HRV features from rPPG signals of MAHNOB-HCI video clips, and fed them to a support vector machine for emotion classification. Table III shows that they achieve a higher recognition accuracy in the case of valence, while our scalogram-based method outperforms their method in arousal categorization. Similarly, in Table III, the performance of our PPG-based emotion classification method is compared with the state-of-the-art emotion analysis techniques using the DEAP database. For the comparison of the techniques involving only the PPG signal, our 2D scalogram-based method is compared with the PPG-based technique proposed by Lee et al. [36] and various 2D representations of PPG signal employed by Elalamy et al. [56]. Table III shows that our scalogram-based PPG technique is more effective than the one-dimensional convolutional neural network-based (1D CNN) emotion analysis method proposed by Lee et al. [36]. Similarly, our technique produces higher recognition accuracy than the 2D spectrogram-based method proposed by Elalamy et al. [56]. Also, our 2D scalogram representation outperforms the emotion recognition performance of the Recurrence PLot-based (RP) 2D representation technique used in [56]. Most of the physiological-based emotion analysis methods employ EEG signals for emotion recognition more than any other physiological signal due to its capability to capture emotion features more effectively. Therefore, in Table III, we also compare our unimodal method with the state-of-the-art unimodal emotion analysis techniques using EEG signals. Table III shows that our scalogram-based PPG representation produces higher recognition accuracy as compared to the EEG-based emotion recognition techniques of Patras et al. [32], Chung et al. [74] and Campos et al. [75]. These results imply that the scalogram-based 2D representation of the PPG signal can be used as an alternative to the EEG signal. Obtaining and utilizing EEG signals, the most prevalent bio-signal in emotion recognition, can be inconvenient due to the high cost of EEG measurement devices and the cumbersome measurement process, especially for participants from neurodiverse populations such as kids with ASD.
### _Results and Analysis of Multimodal Method_

The results of the proposed multimodal emotion recognition method are shown in Table IV. However, since our method provides a baseline for multimodal emotion recognition by fusing the ECG/PPG signal with the face information, there are no state-of-the-art methods with which we can compare our results. Nonetheless, as can be seen in Table IV, the proposed method outperforms the multimodal techniques fusing EEG information with other physiological signals, especially for the recognition of arousal. For the MAHNOB-HCI dataset, we compare our results with the EEG-based multimodal methods proposed by Soleymani et al. [31] and Koelstra et al. [15]. Here again, the proposed method outperforms both of these techniques by a large margin in the recognition of arousal by producing an average accuracy of \(83.84\%\). In contrast, the method proposed by Soleymani et al. [31] achieves the highest accuracy of \(76.10\%\) for the classification of the valence class. We argue that the superior performance of our technique in recognizing the arousal class is because UBVMT is pre-trained by employing an rPPG signal, and it has been reported in past research that PPG signals are more effective in categorizing arousal [29]. Therefore, given a large enough ECG dataset to pre-train the UBVMT network, we posit that we can improve the classification accuracy of our method for the valence class as well. For the DEAP dataset, we compare our results with the multimodal techniques fusing PPG information with other signals like EEG, EDA, GSR, etc. As can be seen in Table IV, our method outperforms the state-of-the-art multimodal emotion classification techniques both in recognizing valence and arousal with average accuracies of \(81.53\%\) and \(82.64\%\), respectively. It is interesting to note that our bimodal method outperforms the multimodal technique proposed by Yin et al. [53], where the information from EEG, ECG, EOG, GSR, EMG, skin temperature, blood volume, and respiration signals are fused to classify emotions. Similarly, comparing our technique with the bimodal method proposed by Siddharth et al. [7], where EEG and face information are fused for emotion analysis, Table IV shows that the fusion of PPG and face information using UBVMT is more effective in recognizing emotions. Our PPG-face method is also more accurate in classifying emotions compared to the fusion of EEG, EOG, and EMG signals [13]. In [56], similar to our approach, Elalamy et al. transformed the 1D PPG and EDA signals into 2D representations and fused them to categorize emotions using deep networks. Table IV shows that our scalogram-based PPG representation, when fused with the face data, produces higher recognition accuracy by employing our proposed technique. The confusion matrices and the F1 scores of the multimodal emotion recognition results are shown in Figure 6 and Table V, respectively.
## 6 Conclusion
In this paper, a novel approach called Unified Biosensor-Vision Multi-modal Transformer-based (UBVMT) method is presented to classify emotions in an arousal-valence space. The proposed technique combines a 2D representation of an ECG/PPG signal with facial information for emotion recognition. Initially, we investigated and compared the unimodal emotion recognition performance using three image-based representations of the ECG/PPG signal. Then, the Unified Biosensor-Vision Multi-modal Transformer-based network is used for emotion recognition by combining the 2D image-based representation of the ECG/PPG signal and facial information. Our unified transformer model comprises homogeneous transformer blocks that take the 2D representation of the ECG/PPG signal and associated face frame as input for emotion representation learning with minimal modality-specific design. The UBVMT model is trained using a reconstruction loss involving masked patches of video frames and 2D images of ECG/PPG signals, along with contrastive modeling to align face and ECG/PPG data. Extensive experiments on the MAHNOB-HCI and DEAP datasets demonstrate that our Unified Biosensor-Vision Multi-modal Transformer-based model achieves comparable results to state-of-the-art techniques.
## Acknowledgments
The authors would like to thank Mustansar Fiaz, Sachin Shah, Robinson Vasquez, Lisa Dieker, Shaunn Smith, Rebecca Hines, Ilene Wilkins, Kate Ingraham, Caitlyn Bukaty, Karyn Scott, Eric Imperiale, Wilbert Padilla, and Maria Demesa for the discussions and suggestions throughout this research. This research was supported in part by National Science Foundation Grant 2114808 and by U.S. Department of Education Grants H327S210005 and H327S200009. The lead author (KA) was also supported in part by the University of Central Florida's Preeminent Postdoctoral Program (P3). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
|
2303.16405
|
Topological error correcting processes from fixed-point path integrals
|
We propose a unifying paradigm for analyzing and constructing topological
quantum error correcting codes as dynamical circuits of geometrically local
channels and measurements. To this end, we relate such circuits to discrete
fixed-point path integrals in Euclidean spacetime, which describe the
underlying topological order: If we fix a history of measurement outcomes, we
obtain a fixed-point path integral carrying a pattern of topological defects.
As an example, we show that the stabilizer toric code, subsystem toric code,
and CSS Floquet code can be viewed as one and the same code on different
spacetime lattices, and the honeycomb Floquet code is equivalent to the CSS
Floquet code under a change of basis. We also use our formalism to derive two
new error-correcting codes, namely a Floquet version of the $3+1$-dimensional
toric code using only 2-body measurements, as well as a dynamic code based on
the double-semion string-net path integral.
|
Andreas Bauer
|
2023-03-29T02:32:18Z
|
http://arxiv.org/abs/2303.16405v3
|
# Topological error correcting processes from fixed-point path integrals
###### Abstract
We propose a unifying paradigm for analyzing and constructing topological quantum error correcting codes as dynamical circuits of geometrically local channels and measurements. To this end, we relate such circuits to discrete fixed-point path integrals in Euclidean spacetime, which describe the underlying topological order: If we fix a history of measurement outcomes, we obtain a fixed-point path integral carrying a pattern of topological defects. As an example, we show that the stabilizer toric code, subsystem toric code, and CSS Floquet code can be viewed as one and the same code on different spacetime lattices, and the honeycomb Floquet code is equivalent to the CSS Floquet code under a change of basis. We also use our formalism to derive two new error-correcting codes, namely a Floquet version of the \(3+1\)-dimensional toric code using only \(2\)-body measurements, as well as a dynamic code based on the double-semion string-net path integral.
###### Contents
* I Introduction
* II From fixed-point path integrals to error-correcting codes
* II.1 Fixed-point path integrals
* II.2 Dynamic codes
* II.3 Imaginary versus real time
* II.4 From path integrals to circuits
* III Known codes in terms of path integrals
* III.1 Stabilizer toric code
* III.2 Subsystem toric code
* III.3 CSS Floquet code
* III.4 Honeycomb Floquet code
* IV New codes from tensor-network path integrals
* IV.1 Floquet toric code in 3+1 dimensions
* IV.2 Dynamic double-semion string-net code
* V Discussion and outlook
* Acknowledgments
## I Introduction
One of the most promising routes towards scalable fault-tolerant quantum computation is topological quantum computation, where logical quantum information is stored in the ground state space of a topological phase on a topologically non-trivial spatial configuration [1; 2; 3]. Topological order has been shown to be robust under arbitrary local perturbations [4]. In a similar vein, topological quantum error correction (QEC) is believed to provide a threshold for arbitrary local noise. Despite the similarity, these two notions of robustness are technically very different: Whereas topological order concerns ground-state properties, captured by the imaginary-time evolution, topological QEC executes a dissipative real-time evolution including syndrome measurements and corrections.
Both topological phases and topological QEC can be described by path integrals that are discrete in space and time. For QEC, these are mixed-state circuits of quantum channels and measurements. For topological phases, they are fixed-point models in the form of state-sum TQFTs [5; 6; 7; 8; 9] or tensor-network path integrals [10]. In this paper, we present a picture for topological QEC, at whose core is the relation between the two types of discrete path integrals. Concretely, there exists a history of "trivial" measurement outcomes (often \(+1\) for Pauli-based codes) such that the QEC circuit becomes a fixed-point path integral. The other histories of measurement outcomes then correspond to the same path integral including a different pattern of topological defects such as anyons. The path integral is locally invariant under certain changes of the positions of the defects, giving rise to equivalences between different defect patterns. The corrections correspond to the insertion of additional segments of defects, which are chosen by the classical decoder to ensure that the total pattern of defects is equivalent to the trivial one. The correspondence between QEC circuits and path integrals with defects provides a single simple criterion for topological fault tolerance.
Our formalism has two major practical applications. The first application is to systematically analyze existing codes. In particular, the correspondence to path integrals can be used to assign a topological phase to any topological code. This phase determines the logical dimension on different topologies, the possible boundary conditions, anyons, or other sorts of defects that can be introduced, as well as the logical operations that can be performed. Codes within the same phase can be seen as distinct microscopic representations of one another. To illustrate this, we focus on recently developed _Floquet codes_[11; 12; 13; 14]. These are specified by a sequence of gauge checks measured in a fixed schedule. Since the
checks are non-commuting, they can be analyzed using the formalism of _subsystem codes_[15; 16; 17]. However, due to the fixed schedule, they manage to protect a certain number of logical qubits even though the subsystem formalism predicts fewer or none at all. Lately, there has been a quest to better understand the relation between stabilizer, subsystem, and Floquet codes, and among different Floquet codes. Our formalism helps to establish direct relations between different codes, often by finding that they belong to a common phase. Concretely, we find that the stabilizer toric code, the subsystem toric code, and the CSS Floquet code correspond to the same path integral on different spacetime lattices, and thus belong to a single code family in our spacetime perspective. This can be seen as a spacetime analogue of viewing stabilizer toric codes on different spatial lattices as part of a single code family. We also find that the underlying path integrals of the CSS Floquet code and the honeycomb Floquet code are equal up to a local change of basis. All four codes belong to the toric code phase. They only differ by the microscopic representation of the underlying path integral, as well as by the locations of the defect segments corresponding to the non-trivial measurement outcomes.
The second application is to systematically construct new codes. Roughly speaking, we start with a fixed-point path integral and interpret it as a non-unitary circuit. Then we turn each non-unitary operator into an instrument that measures the absence or presence of a topological defect. The circuit of instruments then defines a fault tolerant dynamic code. By making use of the rich and developed mathematical theory of fixed-point models, this yields a great variety of new dynamic topological codes. First, we can start from different models in different families of fixed-point path integrals, corresponding to different phases. Further, topological fixed-point path integrals have a notion of exact combinatorial topological invariance, which is at the heart of their success in classifying topological phases. So we can put the fixed-point path integrals on arbitrary spacetime lattices. Finally, even if the path integral and lattice are fixed, they can be turned into a non-unitary circuit in various ways by choosing different causal orderings. An interesting feature of our approach that goes beyond much of the quantum error-correction literature is that there is no necessity for the resulting codes to be based on Pauli/Clifford measurements or operations. Concretely, we illustrate the capability of finding new codes through two examples. First, we construct a Floquet version of the \(3+1\)-dimensional toric code that uses only \(2\)-body \(XX\) and \(ZZ\) measurements. The code lives on a triangulation with \(4\)-colorable vertices, with a qubit on every left-handed tetrahedron. In each of \(8\) rounds we perform measurements on the qubits adjacent to each edge of a certain type. Second, we present a non-Pauli dynamic code based on the double-semion Turaev-Viro path integral. We sketch a presentation of this code as a circuit of common \(2\) and \(3\)-qubit gates and measurements. Due to its relatively large spacetime overhead it merely serves as an illustrative example rather than as a practical QEC code.
The structure of the remainder of this work is as follows. In Section II, we review fixed-point path integrals and their defects, and introduce the main definition of a fixed-point path integral code. In Section III, we use our formalism to analyze four examples of existing codes as mentioned above. In Section IV, we construct the two new dynamic codes mentioned above.
## II From Fixed-Point Path Integrals to Error-Correcting Codes
### Fixed-point path integrals
In this section, we review fixed-point path integrals for topological phases, which are the key to understanding and constructing topological QEC codes in this work. Fixed-point path integrals are defined on lattices representing a discrete Euclidean-signature spacetime. The most common formulation of such path integrals is as _state sums_[5; 6; 7; 8; 9]. To this end we associate discrete variables to certain types of places (for example, all edges) in the lattice, and weights to other places (for example, all volumes). Each weight depends on the configuration of the nearby variables. We then perform a sum over all configurations of the variables, where the summand is given by the product of all the weights. Such path integrals are commonly used as partition functions in classical statistical physics on a space-only lattice, for example in the Ising model. Here we will use them on a spacetime lattice to represent the imaginary-time evolution of a quantum system.
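As a toy illustration of this "sum over configurations of a product of local weights" structure (a deliberately simple classical example, not one of the topological path integrals discussed below; the coupling \(K\) and system size are arbitrary choices), the following sketch evaluates a small Ising-like state sum by brute force.

```python
import itertools
import numpy as np

def bond_weight(s_i, s_j, K=0.7):
    """Local weight attached to one bond; it favours equal neighbouring variables."""
    return np.exp(K if s_i == s_j else -K)

def state_sum(n_sites=8, K=0.7):
    """Sum over all configurations of the variables of the product of local weights
    (here: binary variables on the sites of a ring, one weight per bond)."""
    Z = 0.0
    for config in itertools.product((0, 1), repeat=n_sites):
        weight = 1.0
        for i in range(n_sites):
            weight *= bond_weight(config[i], config[(i + 1) % n_sites], K)
        Z += weight
    return Z

print(state_sum())
```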
An equivalent formulation that is better suited for our purpose is that of _tensor-network path integrals_[10]. These are tensor networks whose tensors are located at some places (for example, all volumes) of the spacetime lattice, and nearby tensors share bonds (for example, at every face, connecting the two adjacent volumes). Note that this is different from MPS or PEPS, which live in space only and describe states, not path integrals. In particular, tensor-network path integrals have no open indices except for when we cut the tensor network at some "spatial" surface, and there is no distinction between "physical" and "virtual" indices.
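The same toy quantity can also be written as a tensor-network path integral with no open indices: one 2-index tensor per bond of the ring (the transfer matrix familiar from the 1D Ising model), contracted all the way around. This is only meant to make the equivalence of the two formulations concrete; the tensors appearing in the topological path integrals below are different objects.

```python
import numpy as np

def bond_tensor(K=0.7):
    # Two-index tensor per bond; its indices are the two adjacent variables.
    return np.array([[np.exp(K), np.exp(-K)],
                     [np.exp(-K), np.exp(K)]])

def tensor_network_Z(n_sites=8, K=0.7):
    """Contract the closed ring of bond tensors; the trace glues the last
    index back onto the first, so the network has no open indices."""
    T = bond_tensor(K)
    return np.trace(np.linalg.matrix_power(T, n_sites))

print(tensor_network_Z())  # agrees with the brute-force state sum above
```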
Topological fixed-point path integrals have one single powerful property that makes them exactly solvable, namely discrete topological invariance. To this end, we first define the path integral not only on regular lattices, but on arbitrary triangulations or cellulations. Let us consider here the case of \(2+1\) dimensions, where most topological error correction takes place. One possibility is to put the same \(4\)-index tensor (in black) onto every
tetrahedron of a 3-dimensional triangulation (in orange),
[Unrecoverable from the source: the tensor-network diagrams and equations of this subsection, which define the toric-code path integral in terms of \(\delta\)-tensors and \(\mathbb{Z}_{2}\)-tensors placed on the cells of the triangulation, together with the local tensor-network moves (such as splitting a 4-index tensor into two contracted 3-index tensors) that impose discrete topological invariance; these are referred to later in the text as Eqs. (2), (3), (7)-(9), and (13).] For the toric
code, the equations above are a subset of the \(ZX\) calculus [19; 20].
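For concreteness, the following sketch spells out the two tensor types in the standard conventions (an assumption on our part, since the defining diagrams above could not be recovered): the \(\delta\)-tensor is nonzero only when all of its indices agree, and the \(\mathbb{Z}_{2}\) (parity) tensor only when its indices sum to zero mod 2; up to normalization these are the Z- and X-spiders of the \(ZX\) calculus. The code checks two identities of the kind used later: splitting a 4-index tensor into two contracted 3-index tensors (cf. the splitting moves around Eqs. (7) and (8)), and the Hadamard colour change relating the two tensor types.

```python
import itertools
import numpy as np

def delta_tensor(n):
    """n-index delta-tensor: entry 1 iff all indices are equal, else 0."""
    t = np.zeros((2,) * n)
    t[(0,) * n] = t[(1,) * n] = 1.0
    return t

def parity_tensor(n):
    """n-index Z2 (parity) tensor: entry 1 iff the indices sum to 0 mod 2."""
    t = np.zeros((2,) * n)
    for idx in itertools.product((0, 1), repeat=n):
        if sum(idx) % 2 == 0:
            t[idx] = 1.0
    return t

# Splitting a 4-index tensor into two 3-index tensors contracted over one bond
# holds exactly for both tensor types ("spider fusion" read backwards).
for make in (delta_tensor, parity_tensor):
    split = np.einsum('abe,ecd->abcd', make(3), make(3))
    assert np.allclose(split, make(4))

# Contracting a Hadamard onto every index turns one type into the other,
# up to an overall factor 2**(1 - n/2) (the ZX colour-change rule).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
n = 4
recoloured = np.einsum('abcd,ai,bj,ck,dl->ijkl', delta_tensor(n), H, H, H, H)
assert np.allclose(recoloured, 2 ** (1 - n / 2) * parity_tensor(n))
print("splitting and colour-change identities hold")
```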
We could also extend the definition of the path integral to manifolds with boundary, for example by adding two additional tensors associated to boundary edges and faces and imposing a boundary version of topological invariance. More generally, we could introduce any sort of defects, which are lower-dimensional manifolds along which the path integral is altered, including domain walls, twist defects, corners between boundaries, anyons, and so on. In order to turn path integrals into fault tolerant circuits, we will use a special kind of defects that we will refer to as _syndrome defects_. For most examples in this paper, the syndrome defects will be anyons, which live on 1-dimensional worldlines inside a 3-dimensional space-time. For the toric code, there are two generating types of anyon worldlines, namely \(e\) and \(m\) anyons. \(e\) anyon worldlines are closed paths of edges in the lattice. We can introduce an \(e\) anyon by replacing all \(\delta\)-tensors on the worldline by a _charged \(\delta\)-tensor_,
[Unrecoverable from the source: the diagram defining the charged \(\delta\)-tensor, together with the subsequent introduction of \(m\) anyon worldlines and of the (co-)cycle condition on defect patterns that is referred to below.]
In general, one could use different types of syndrome defects whose configurations obey different local constraints than the cocycle condition, and have different equivalences than homology. Examples for this are topological phases with non-abelian anyons that are only topologically (not homologically) invariant, or fracton phases where syndrome defects are to some extent restricted to rigid submanifolds. Also, homological syndrome defects can have a higher dimension, for example they could be located on _2-cycles_, which are closed-membrane configurations as used in Section IV.1. All other examples in this paper just use closed-loop homological syndrome defects such as the \(e\) and \(m\) anyons in the toric code.
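As a small illustration of what "closed-loop" means operationally for such defect patterns (a hypothetical helper with an ad-hoc edge encoding; it is not part of the construction above): on a periodic cubic spacetime lattice, a set of occupied edges is a 1-cycle exactly when every vertex touches an even number of occupied edges.

```python
L = 4  # linear size of a periodic cubic spacetime lattice (toy value)

def is_closed(edges, L=L):
    """1-cycle condition: every vertex is touched by an even number of occupied
    edges.  An edge is encoded as (vertex, direction) with direction in {0,1,2}."""
    degree = {}
    for (x, y, z), d in edges:
        head = [x, y, z]
        head[d] = (head[d] + 1) % L
        for v in ((x, y, z), tuple(head)):
            degree[v] = degree.get(v, 0) + 1
    return all(deg % 2 == 0 for deg in degree.values())

# A worldline winding once around the periodic third direction is closed ...
loop = [((0, 0, t), 2) for t in range(L)]
print(is_closed(loop))        # True
# ... while removing one segment leaves two open endpoints (a broken worldline).
print(is_closed(loop[:-1]))   # False
```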
There is a natural equivalence relation for path integrals. Namely, two tensor-network path integrals \(X\) and \(Y\) are equivalent if they are related by applying local tensor-network equations. Applying such an equation means to remove the left-hand side from the tensor network and insert the right-hand side, or vice versa. More precisely, we apply such equations in parallel everywhere in the tensor network, for a constant number of rounds. Alternatively, applying the tensor-network equations only inside some region \(A\) yields a domain wall between \(Y\) on \(A\) and \(X\) on the complement \(\overline{A}\). By applying the tensor-network equations we can make \(A\) larger or smaller, which freely moves around the domain wall, making it a topological domain wall. Since we can also remove, fuse, or generate new \(A\) islands, the domain wall is also _invertible_. Equivalence classes under invertible domain walls will be called _fixed-point phases_. Fixed-point phases are the natural notion of phases of matter for fixed-point path integrals, analogous to how local unitary circuits can be used to define phases in fixed-point Hamiltonians. Exact tensor-network equations still provide an interesting equivalence relation for general (non-fixed-point) path integrals. However, for them to capture phases of matter in this general context, some notion of approximation will be necessary. This is because applying exact tensor-network equations cannot change the correlation length of a path integral. In this light, consider tensor-network equations imposing topological invariance such as Eq. (2) or Eq. (9): They imply that the path integral on one lattice is in the same fixed-point phase as that same path integral on another lattice, for any way of superimposing the two lattices. For more detail and examples we refer the reader to Ref. [10].
### Dynamic codes
In this paper we are thinking of QEC as a dynamic process, or more technically, as a circuit executed in spacetime. An error-correcting process needs to be able to filter out noise introduced into the system by extracting entropy. Thus, the corresponding circuits are circuits of quantum channels rather than unitaries. It is useful to consider channels that simultaneously act on classical and quantum degrees of freedom, even though these can always be embedded into purely quantum channels. Mathematically, such a quantum/classical hybrid channel is a tensor where every classical or quantum degree of freedom, either at the input or the output of the channel, corresponds to one index. More precisely, a qu-\(d\)-it is represented by a pair of \(d\)-dimensional indices, one for the ket and one for the bra part, whereas a classical \(d\)-it is just a \(d\)-dimensional index. For example, a channel with one quantum and one classical input, and two quantum and one classical output can look like:
\[\includegraphics[width=142.26378pt]{Fig-142.eps}, \tag{16}\]
where the time direction is from bottom to top, like everywhere in this paper.
A proper channel needs to fulfill two conditions: First, it needs to be _completely positive_: For every fixed value of the classical indices, block all ket indices and all bra indices such that the tensor becomes a matrix. This matrix has to be non-negative, for example,
\[\includegraphics[width=142.26378pt]{Fig-142.eps}\to[M^{ij}]_{(abc),(def)} \geq 0\quad\forall i,j\;. \tag{17}\]
Second, it needs to be _trace preserving_: When closing all quantum output (double-)indices with a trace, and all classical output indices with a sum, we obtain a trace and sum at all classical and quantum input indices, for example,
\[\includegraphics[width=142.26378pt]{Fig-142.eps}=\includegraphics[width=142.26378pt]{ Fig-142.eps}. \tag{18}\]
Here the black dot is the \(\delta\)-tensor in Eq. (3) with one index, that is, a vector with all entries equal to 1.
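As a minimal numerical version of these two conditions (using a Kraus/Choi encoding rather than the diagrammatic bookkeeping above, which is an equivalent but different representation), consider the simplest hybrid channel: a projective single-qubit \(Z\) measurement whose outcome is written to one classical bit.

```python
import numpy as np

# One Kraus operator per value of the classical output index.
kraus = {0: np.diag([1.0, 0.0]),   # classical outcome 0: project onto |0>
         1: np.diag([0.0, 1.0])}   # classical outcome 1: project onto |1>

# Complete positivity: for every fixed classical outcome, the Choi matrix of
# the map rho -> K rho K^dagger must be positive semidefinite.
omega = np.eye(2).reshape(4, 1)    # unnormalised maximally entangled vector
for K in kraus.values():
    KI = np.kron(K, np.eye(2))
    choi = KI @ (omega @ omega.T) @ KI.conj().T
    assert np.linalg.eigvalsh(choi).min() > -1e-12

# Trace preservation: tracing the quantum output and *summing* over the
# classical output has to give the identity on the input.
assert np.allclose(sum(K.conj().T @ K for K in kraus.values()), np.eye(2))
print("completely positive and trace preserving")
```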
In _topological_ QEC, we demand that the circuit be geometrically local. Only then is it fair to assume that the noise occurring in the process is also local. The great achievement of topological QEC is fault tolerance with respect to arbitrary local noise, and thereby any noise that is possible. We will refer to this type of QEC, where the complete circuit of quantum/classical channels is geometrically local, as _fully local QEC_. Topological QEC has the additional property of being uniform in spacetime, or at least of scaling in a uniform way. Fully local topological QEC is not only of practical but also of fundamental physical interest since it might provide a model for the process of cooling a topologically ordered material. Fully local QEC can also be viewed as self-correction using engineered dissipation, formulated in discrete time. Examples for fully local QEC circuits are given by cellular automaton decoders [21]. While fault tolerant fully
local decoders are known to exist in \(4+1\) dimensions, the situation is unclear in \(3+1\) and \(2+1\) dimensions.
Since the feasibility of fully local topological QEC in low dimensions is an open question, we consider _quantum-local QEC_ as a second type of topological QEC, where only the quantum part of the circuit is assumed to be local. Quantum-local QEC consists of a geometrically local circuit of channels with additional open classical inputs and outputs. These inputs and outputs are then coupled to a purely classical _decoder_ that is not implemented by a classical circuit in the same spacetime, but treated as a black box that can be evaluated instantly and without noise. In practice, the efficiency of this decoder is of course still of great importance, and any reasonable decoder should be executable in at most a polynomially larger spacetime. An example for this is minimum-weight-perfect-matching decoding of the toric code: The quantum parts, namely the stabilizer measurements and corrections, are local, while the classical decoding algorithm has more-than-constant runtime even if we allowed for instant non-local communication. From a fundamental point of view, quantum-local QEC is not scalable with a fault tolerant threshold. This is because for large enough system sizes the quantum circuit has to wait for the results of the decoder, and during this waiting time additional errors accumulate. Nonetheless, quantum-local QEC might have a practical impact, since current implementations of qubits are by orders of magnitude larger, slower, and noisier, than classical information technology. A toy example for quantum-local QEC in a \(1+1\)-dimensional spacetime looks like
(19)
where we have semi-transparently drawn some of the classical bonds connecting the circuit and the decoder \(D\), and omitted the remaining ones. Note that for a real topological error-correcting circuit we would need at least \(2+1\) spacetime dimensions. The shown example has a special layout where we first apply only hybrid channels without classical inputs for a time \(T\sim L\). Physically, such hybrid channels are known as _instruments_ in quantum information theory and describe measurements, which are 2-qu-\(d\)-it measurements in the example above. We will refer to the recorded classical outputs/measurement results as _spacetime syndrome_. Then, at time \(T\), we perform a constant-depth _correction_ layer of quantum channels with an additional classical input, which are single-qu-\(d\)-it operators in the example above. The inputs to these correction channels are obtained from applying the decoding algorithm \(D\) to the spacetime syndrome.
Note that in general, corrections could also be applied in every time step like measurements, and not only after a time \(T\sim L\). This might be necessary for example for topological error correction based on non-abelian phases. However, for all examples considered in this paper, a layout as in Eq. (19) works.
### Imaginary versus real time
In topological quantum computation, we store information in the ground space of topologically ordered models defined on spatial configurations of non-trivial topology. In order to perform logical operations, we change the topology, either by adiabatic variation of the model parameters or by code deformation. It is in principle possible to perform computation by only changing the topology of some bare spatial manifold. However, the set of accessible logic gates becomes much richer if we introduce defects, such as boundaries, anyons, domain walls, twist defects, and so on. We refer to such defects as _computational defects_ to stress that they serve a very different purpose from the syndrome defects introduced in Section II.1. Computational defects are also necessary for implementing computation in practice, where we need to faithfully embed the topological manifolds into the Euclidean space we happen to live in. Consider the following two examples of processes involving defects, with time flowing from bottom to top,
(20)
The left side shows a process where two of four anyons on a disk are exchanged. 1 The right side shows a code on a rectangle with two different types of boundary like the surface code, blue at the back and front and green on the left and right. An anyon is moved from the green boundary on the right to the green boundary on the left.
Footnote 1: Note that for this to define a non-trivial logical operation, the anyons have to be non-Abelian, or we have to replace anyon worldlines by tube-like holes of extensive diameter.
The operations implemented on the logical quantum degrees of freedom only depend on the topological phase of the bulk, boundaries, anyons, etc. Since the phase is a ground state property, it is captured by a path integral in a spacetime with an imaginary time direction, that is, with a Euclidean signature. Now assume we are given a
Euclidean fixed-point path integral for the bulk, boundary, anyons, and other computational defects involved. The logical operations corresponding to a spacetime process are obtained by simply evaluating this path integral, which can be done for a minimal cellulation. Note that the blue and green boundaries in Eq. (20) are _physical boundaries_ where the tensor-network path integral terminates at some special tensors and bonds without open indices. In contrast, the gray-red boundaries at the bottom and top are _spatial boundaries_, where we simply cut the tensor network resulting in open indices. So the evaluation yields a linear operator from the bottom to the top open indices of the path integral. This operator is only non-zero inside the ground state subspace at both input and output, and restricted to this ground state subspace yields the logical operation.
In other words, performing topological quantum computation is the same as executing the imaginary time evolution of some topological phase on some spacetime manifold, possibly including computational defects. However, in the real world, we can only perform real time evolution. Real time evolution can be described by a tensor-network path integral as well, namely as a unitary circuit. However, the tensors of the imaginary time path integrals are not at all unitaries, and therefore it is impossible to execute the Euclidean path integral in the real world. In this paper, we will understand how topological QEC is precisely a solution to this problem. That is, topological QEC constructs a real-time path integral that is equal to a given imaginary-time fixed-point inside the ground-state subspace. As argued in Section II.2, the resulting real-time path integrals will in fact not be unitary but circuits of quantum/classical hybrid channels.
Let us now try to formalize the relation between the imaginary-time path integrals and a corresponding fault-tolerant real-time fully local uniform QEC circuit on a high level. To this end, we consider the "transfer" operator corresponding to executing one time period of the circuit. The following three conditions should hold: (1) There is a set of highest-magnitude eigenvalues, which are contained in an interval that shrinks exponentially with the system size \(L\). (2) The remaining eigenvalues are separated from the highest-magnitude subset by a gap that shrinks at most polynomially with \(L\). The dimension of the high-magnitude subspace equals the ground state degeneracy of the Euclidean fixed-point path integral of the phase. (3) In order to perform computation, we consider a circuit that varies in time. The circuit acts like the imaginary-time path integral when restricted to the high-magnitude subspace, and decouples from the orthogonal complement, up to an error exponentially small in \(L\).
Note that for the transfer operator of the imaginary-time evolution, the gap in (2) does not shrink at all, but is constant. However, for a real-time fully local QEC process, this gap must shrink at least like \(L^{-1}\) due to the finite propagation speed of information. Namely, if we insert an "error" operation of size \(L\) into the circuit, then it takes time \(L\) to correct this error and return to the steady state. In contrast, a gapped operator returns to the steady state from any starting point at a system-size independent rate. For a quantum-local circuit, a similar relation could be formulated, though one might need to consider the transfer operator for an extended time \(T\) instead of only one time period.
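The scaling argument in the last paragraph can be made concrete with generic linear algebra (toy numbers, not the actual transfer operators of any code): if the second-largest eigenvalue modulus of the transfer operator is \(1-g\), a perturbation decays like \((1-g)^{t}\), so the relaxation time scales like \(1/g\).

```python
import numpy as np

def relaxation_time(gap, tol=1e-3):
    """Number of applications of an operator with subleading eigenvalue modulus
    (1 - gap) needed to shrink a perturbation below tol."""
    return int(np.ceil(np.log(tol) / np.log(1.0 - gap)))

# A constant gap gives a system-size independent relaxation time ...
print(relaxation_time(0.5))                              # about 10 steps
# ... while a gap closing like 1/L gives a relaxation time growing with L,
# consistent with the finite propagation speed of a local circuit.
print([relaxation_time(1.0 / L) for L in (4, 8, 16, 32)])
```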
### From path integrals to circuits
The three conditions formulated in the previous section are neither simple to verify for a generic circuit, nor do they directly help with constructing such QEC circuits. In this section we describe an explicit general method to construct topological QEC circuits from topological fixed-point path integrals. In fact, we will only construct the quantum part of the QEC circuit. Which classical decoder works depends on the nature of the syndrome defects used. However, we do propose a type of decoder that works for all examples given in this paper.
We start by putting the path integral on some regular lattice and choosing a time direction. Then we interpret the tensor network as a geometrically local circuit of operators, where each operator corresponds to a single tensor, or a patch of a few tensors. The indices of each tensor or patch are divided into input and output in accordance with the chosen time direction. This can always be done, however the resulting operators, like
(21)
are not in general unitaries, or equivalently, stacking two copies does not result in a channel that is normalized as in Eq. (18),
(22)
In fact, it never happens that all operators are unitary, since the operator corresponding to a full layer of imaginary-time evolution is a projector of low rank, and thus not a unitary.
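To make this non-unitarity concrete with a minimal example (the particular tensor and input/output split are illustrative choices, not necessarily the ones depicted in Eq. (21)): reading the 4-index \(\delta\)-tensor as an operator from two input indices to two output indices yields a rank-2 projector, so \(T^{\dagger}T\) is not the identity and the stacked pair of copies is not a normalized channel, as in Eq. (22).

```python
import numpy as np

# 4-index delta tensor, read as an operator from two inputs to two outputs.
delta4 = np.zeros((2, 2, 2, 2))
delta4[0, 0, 0, 0] = delta4[1, 1, 1, 1] = 1.0
M = delta4.reshape(4, 4)   # rows: output index pair, columns: input index pair

print(np.allclose(M.conj().T @ M, np.eye(4)))            # False: not an isometry
print(np.allclose(M @ M, M), np.linalg.matrix_rank(M))   # True 2: a rank-2 projector
```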
Even though \(T_{1}\) does not define a channel, it can always occur as part of an instrument. To this end, we choose further tensors \(T_{2},T_{3},\ldots\), that we combine into one single tensor using an additional classical output index,
(23)
We then use this tensor to define an instrument,
(24)
The small dot on the right denotes a \(\delta\)-tensor as defined in Eq. (3), though here it serves a different function and the bond dimension can be different from 2. The normalization condition in Eq. (18) of this instrument reduces to the following condition for \(T\):
\[\tikzfig{fig:1} \tag{25}\]
In other words, we are looking for tensors \(T_{2},T_{3},\ldots\), such that the collection \(\mathbf{T}=(T_{1},T_{2},T_{3},\ldots)\) forms an isometry.
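One generic way to complete a given operator to such an isometry, assuming its singular values are at most 1, is to take \(T_{2}=\sqrt{\mathbb{1}-T_{1}^{\dagger}T_{1}}\); the codes below instead make structured choices tied to syndrome defects, so the following sketch is only an illustration of the condition in Eq. (25) and of how the resulting instrument produces an outcome distribution.

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.conj().T

def complete_to_isometry(T1):
    """Generic completion: T2 such that T1^+ T1 + T2^+ T2 = identity."""
    return psd_sqrt(np.eye(T1.shape[1]) - T1.conj().T @ T1)

def apply_instrument(rho, ops):
    """Classical outcome i occurs with probability tr(T_i rho T_i^+)."""
    return [(i, np.trace(T @ rho @ T.conj().T).real) for i, T in enumerate(ops)]

# Toy T1: projector onto the +1 eigenspace of Z x Z on two qubits.
Z = np.diag([1.0, -1.0])
T1 = (np.eye(4) + np.kron(Z, Z)) / 2
T2 = complete_to_isometry(T1)
assert np.allclose(T1.conj().T @ T1 + T2.conj().T @ T2, np.eye(4))   # Eq. (25)

rho = np.full((4, 4), 0.25)                 # the state |++><++|
print(apply_instrument(rho, [T1, T2]))      # outcome probabilities sum to 1
```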
We now turn the fixed-point path integral into a circuit of instruments as in Eq. (24). If we happen to always get the trivial measurement outcome corresponding to \(T_{1}\), then we have successfully executed the imaginary-time fixed-point path integral. However, if some of the outcomes are non-trivial, we have performed another path integral including some tensors \(T_{2},T_{3},\ldots\), and need to apply corrections. In order to know how to correct non-trivial outcomes, also these must correspond to an exactly solvable fixed-point path integral of some sort. This is where we use the syndrome defects such as anyons: We choose \(T_{2},T_{3},\ldots\) such that each of these tensors corresponds to a piece of fixed-point path integral that includes one or more segments of syndrome defect. Then every configuration of classical outputs corresponds to a topological path integral with a pattern of syndrome defects. The corrections are then implemented by classically controlled operations in the circuit that insert additional segments of syndrome defects depending on the classical control. This motivates the following definition.
**Definition 1**.: A _fixed-point path integral code_ is a uniform geometrically local circuit of quantum channels with additional classical inputs and outputs, such that the following holds:
* When fixing a configuration of classical inputs and outputs, the circuit becomes a mixed-state tensor-network path integral. This path integral is a stack of two copies of the same (pure-state) path integral, with one of them complex conjugated.
* This path integral is (in the same fixed-point phase as) a fixed-point path integral for a topological phase, including a pattern of syndrome defects. This pattern only depends locally on the classical inputs and outputs.
In order to turn a fixed-point path integral code into a completely specified process, we have to couple the classical inputs and outputs to a classical decoder \(D\). Very vaguely speaking, the resulting process is error correcting if \(D\) yields a total defect pattern (formed by the outputs and inputs) that is equivalent to the trivial one. If there is noise, the total defect pattern does not fulfill the local constraints, so instead we take the closest defect pattern that does. More concretely, let us give a decoder that works if the defect patterns form (co-)cycles, which is the case for all examples given in this paper. This can be viewed as a generalization of decoding the toric code in the presence of measurement errors [2].
**Proposition 1**.: A fixed-point path-integral code whose syndrome defects are (co-)cycles can be turned into a complete fault tolerant process as follows. The overall circuit layout is that of Eq. (19), where we first record measurement outcomes for a time \(T\sim L\) (\(L\) is the linear system size), and then perform corrections at time \(T\). Thereby, we need to insert enough controlled operations at time \(T\) to be able to close off any measured defect pattern. The classical decoder \(D\) is given as follows:
1. Consider the (co-)chain(s) corresponding to the recorded spacetime syndrome by definition of the fixed-point path integral code. Choose a minimum-weight fix turning the (co-)chain(s) into (co-)cycle(s). Thereby, treat the time-like boundary at time \(T\) as "open", such that (co-)cycles can freely terminate there. In contrast, treat the initial time-like boundary at time \(0\) as "closed", such that (co-)cycles are not allowed to terminate there.
2. Consider the endpoints of the (co-)cycle(s) at time \(T\). Choose a set of defect segments at time \(T\) that together with the fixed (co-)cycle(s) in the spacetime forms homologically trivial (co-)cycle(s). This set of defects determines the input to the classically controlled correction operations.
Let us give a rough argument for why this process has a fault tolerant threshold under local noise; a detailed proof will appear elsewhere. If we perform the circuit without noise, then the classical outputs correspond to a defect pattern consisting of (co-)cycle(s). Otherwise the path integral evaluates to zero as in Eq. (13), and the corresponding configuration of outcomes is measured with probability zero. However, if we perturb the circuit by adding (weak) noise, the (co-)cycle(s) are (slightly) broken. We find that (1) the probability that they are broken everywhere inside a connected region is exponentially small in the size of that region, (2) two cycles of different homology classes differ inside a region of at least size \(\sim L\), and (3) the number of connected regions of size \(L\) is at most exponential in \(L\). Thus, for weak enough noise, the probability for the minimum-weight fix to yield the wrong cohomology class is exponentially small in \(L\).
For all examples in this paper except for Section IV.1, the syndrome defects are anyon worldlines. In this case, the correction operators closing the spacetime string net pattern at time \(T\) are known as _string operators_. Fixing the string net pattern in the presence of noise means pairing up the string endpoints in spacetime. A polynomial-time algorithm solving this problem is known as _minimum weight perfect matching_[22].
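For small instances, the matching step can be spelled out by brute force (exponential in the number of endpoints, so purely an illustration; the polynomial-time matching algorithm cited above is what one would use in practice). The distance function and coordinates below are arbitrary stand-ins for the actual spacetime metric of a given code.

```python
def manhattan(a, b):
    """Toy distance between two defect endpoints (x, y, t) in spacetime."""
    return sum(abs(u - v) for u, v in zip(a, b))

def min_weight_pairing(points):
    """Brute-force minimum-weight perfect matching of an even number of
    recorded syndrome endpoints."""
    if not points:
        return 0, []
    first, rest = points[0], points[1:]
    best_cost, best = float('inf'), []
    for i, partner in enumerate(rest):
        cost, pairs = min_weight_pairing(rest[:i] + rest[i + 1:])
        cost += manhattan(first, partner)
        if cost < best_cost:
            best_cost, best = cost, [(first, partner)] + pairs
    return best_cost, best

# Four endpoints of broken anyon worldlines in the spacetime syndrome.
endpoints = [(0, 0, 1), (0, 1, 1), (5, 5, 2), (5, 6, 3)]
print(min_weight_pairing(endpoints))
# -> pairs the two nearby endpoints with each other, total weight 3
```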
## III Known codes in terms of path integrals
In this section, we consider four different examples of fixed-point path-integral codes, which we all find to be equivalent to existing codes, namely the stabilizer toric code, subsystem toric code, CSS Floquet code, and honeycomb Floquet code. The first three examples are all based on the toric-code path integral introduced in Section II.1, which we put on different spacetime lattices with different choices of time direction. The fourth example differs from the previous ones by only a change of basis of the tensor-network path integral.
### Stabilizer toric code
As a first example let us consider the first of all topological error-correcting codes, namely the toric code on a square lattice [1; 2]. The underlying tensor-network path integral is the toric-code path integral from Section II.1 on a cubic lattice, whose unit vectors we call \(x\), \(y\), and \(z\). The time direction \(t\) is coincident with \(z\),
(26)
where the background cubic lattice is in orange and the tensor-network diagram is in black. We now view the tensor-network path integral as a circuit of operators, where each operator corresponds to one or a few tensors. There are two types of operators, as marked above in semi-transparent blue. Both operators act on 4 qubits that correspond to \(t\)-directed bonds in the tensor-network diagram. Specifically, there is an operator \(T_{1}\) at each \(xy\) face, and an operator \(V_{1}\) at every \(t\) edge,
(27)
Note that these diagrams are identical to well-known \(ZX\) diagrams for the vertex and plaquette terms of the toric code [23; 24; 25]. In order to get the decomposition, we need to split up all the 4-index \(\mathbb{Z}_{2}\) tensors at the \(xt\) and \(yt\) faces into two 3-index \(\mathbb{Z}_{2}\)-tensors,
(28)
As shown, this splitting up can be represented geometrically as dividing each plaquette into two triangles. After this, \(V_{1}\) corresponds to a \(t\) edge together with the adjacent triangles. As shown (see also Eq. (8)), there are two different ways to split up the plaquette/tensor. As we will discuss more later, these correspond to different orderings in which \(V_{1}\) at neighboring \(t\) edges act on the same qubit. Dually, we need to split each 4-index \(\delta\)-tensor at an \(x\) or \(y\) edge into two 3-index \(\delta\)-tensors. Geometrically, this corresponds to splitting a 4-valent edge into two 3-valent edges separated by a 2-gon face, yielding a configuration as shown in Eq. (7). After this, \(T_{1}\) corresponds to an \(xy\) face together with the adjacent 3-valent edges. Neither \(T_{1}\) nor \(V_{1}\) is unitary, which is not a surprise given that the path integral represents an imaginary, and not a real, time evolution. In fact, \(T_{1}\) is the projector onto the \(+1\) eigenspace of the Pauli operator \(Z_{0}Z_{1}Z_{2}Z_{3}\), and \(V_{1}\) the projector onto the \(+1\) eigenspace of \(X_{0}X_{1}X_{2}X_{3}\). To fix this, we define a second projector \(T_{m}\) corresponding to an \(xy\) face carrying a segment of \(m\) worldline,
(29)
This way, \(T_{1}\) is extended to an isometry \(\mathbf{T}\),
\[\mathbf{T}\coloneqq(T_{1},T_{m})=\raisebox{-14.226378pt}{\includegraphics[]{ tm}}. \tag{30}\]
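Assuming that \(T_{m}\) is simply the complementary projector onto the \(-1\) eigenspace of \(Z_{0}Z_{1}Z_{2}Z_{3}\) (which is how we read Eq. (29): the \(m\) worldline flips the measured parity), the following sketch numerically checks the isometry condition of Eq. (25) for this \(\mathbf{T}\) and the projective-measurement statement made in the next sentence.

```python
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0])
ZZZZ = reduce(np.kron, [Z, Z, Z, Z])

T1 = (np.eye(16) + ZZZZ) / 2    # projector onto the +1 eigenspace (stated above)
Tm = (np.eye(16) - ZZZZ) / 2    # assumption: complementary -1 projector, Eq. (29)

# Isometry condition of Eq. (25): the stacked pair (T1, Tm) is an isometry.
assert np.allclose(T1.conj().T @ T1 + Tm.conj().T @ Tm, np.eye(16))

# The instrument I[T] is a projective Z0Z1Z2Z3 measurement: the two outcome
# maps are orthogonal projectors onto the +1 and -1 eigenspaces.
assert np.allclose(T1 @ T1, T1) and np.allclose(Tm @ Tm, Tm)
assert np.allclose(T1 @ Tm, np.zeros((16, 16)))
print("I[T] implements a projective ZZZZ measurement")
```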
\(\mathbf{T}\) defines an instrument \(I[\mathbf{T}]\) via Eq. (24), which is in fact just a projective \(Z_{0}Z_{1}Z_{2}Z_{3}\) measurement. Dually, we can define an operator \(V_{e}\) carrying an \(e\) anyon segment along a \(t\) edge,
(31)
This gives rise to an isometry \(\mathbf{V}\),
\[\mathbf{V}\coloneqq(V_{1},V_{e})=\raisebox{-14.226378pt}{\includegraphics[]{ tm}}. \tag{32}\]
\(\mathbf{V}\) yields an instrument \(I[\mathbf{V}]\), which is just a projective \(X_{0}X_{1}X_{2}X_{3}\) measurement. The presence of the _Hadamard_ matrix,
\[H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix},\]
[Unrecoverable from the source: the remainder of this subsection (the discussion following the Hadamard matrix, the single-qubit Pauli correction operators, and the decoding of the stabilizer toric code), Section III.2 on the subsystem toric code, and the opening of Section III.3.]

### CSS Floquet code

The underlying tensor-network path integral is again the cubic-lattice toric-code path integral, but instead of \(z\), we choose \(t=x+y+z\) as the time direction,
(45)
The operators of the circuit are now individual tensors at the edges and faces as marked in blue above. Traversing the path integral in the \(t\) direction gives a natural direction to each tensor, acting as 2-qubit operators
[Unrecoverable from the source: the equations between (45) and (55), which define the 2-qubit operators \(T_{1}\) and \(V_{1}\) at the edges and faces, extend them to the instruments \(I[\mathbf{T}]\) and \(I[\mathbf{V}]\) (in particular Eqs. (49) and (51), referenced below), and fix the order in which these instruments are applied.] One aim of this construction
is to arrive at a circuit that consists only of 2-body measurements without any swap operations. This fully determines the time-like strings by the way the inputs and outputs are paired in Eq. (49) and Eq. (51). Geometrically, these time-like strings are sequences of adjacent faces and edges. In the space projection, there is one such sequence for every triangle \(F\) as follows,
\[\begin{array}{c}\includegraphics[width=142.26378pt]{Fig4}.\end{array} \tag{55}\]
Here the labels 0, 2, 4 correspond to projections of edges, and the labels 1, 3, 5 at triangles correspond to projections of faces formed by this triangle together with \(F\). Then the sequence \(0-1-2-3-4-5\) is the time-like string within one \(t\) period.
As we have seen, there is one qubit associated to each triangle. For each edge, the instrument \(I[\mathbf{T}]\) acts on the qubits at the two triangles adjacent to its projection. For each face, \(I[\mathbf{V}]\) acts on the qubits at the two triangles contained in its projection. Note that the instruments \(I[\mathbf{T}]_{01}\) act on the same pairs of qubits as the instruments \(I[\mathbf{V}]_{120}\), and analogously for cyclic permutations of the numbers/colors. Taking into account that \(I[\mathbf{T}]\) is a \(Z_{0}Z_{1}\) measurement and \(I[\mathbf{V}]\) is an \(X_{0}X_{1}\) measurement, we can rewrite Eq. (53) as
\[\begin{array}{c}\to ZZ_{01}\to XX_{20}\to ZZ_{12}\\ \to XX_{01}\to ZZ_{20}\to XX_{12}\to\.\end{array} \tag{56}\]
After going to the dual hexagonal lattice, we recover the CSS Floquet code as introduced in Refs. [12; 13; 14].
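For concreteness, the period-6 schedule of Eq. (56) can also be spelled out as plain data. The following minimal Python sketch is only an illustration: the Pauli types and label pairs are copied verbatim from Eq. (56), and the helper name `round_checks` is our own.

```python
# Minimal sketch of the period-6 CSS Floquet schedule of Eq. (56).
# Each entry is (Pauli type, label pair); how the pairs map onto
# concrete lattice edges and faces follows the construction above.
SCHEDULE = [
    ("ZZ", (0, 1)),
    ("XX", (2, 0)),
    ("ZZ", (1, 2)),
    ("XX", (0, 1)),
    ("ZZ", (2, 0)),
    ("XX", (1, 2)),
]

def round_checks(t):
    """Return the check type and label pair measured in round t."""
    return SCHEDULE[t % 6]

for t in range(8):
    print(t, *round_checks(t))
```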
Let us briefly describe the general decoding procedure in Proposition 1 for the present code. The spacetime lattice in which we fix the measured \(e\) 1-chain (\(m\) 2-cochain) is the rotated modified cubic lattice. If we choose the correction time \(T\) after the \(I[\mathbf{T}]_{01}\) measurements, the spatial slice of the lattice looks like
\[\begin{array}{c}\includegraphics[width=142.26378pt]{Fig4}.\end{array} \tag{57}\]
The fixed spacetime \(e\) 1-cycle (\(m\) 2-cocycle) terminates at a 0-cycle (2-cocycle) on this spatial lattice. It is closed in a homologically trivial way by inserting 2-valent edges (2-gon faces) potentially carrying \(e\) (\(m\)) anyon worldlines similar to Eq. (38). The static toric code on this spatial lattice also coincides with the _instantaneous stabilizer group_ of the code at time \(T\). In contrast to the stabilizer and subsystem toric code, the edges (dual edges) where measurements potentially yield \(e\) (\(m\)) anyon worldlines are not aligned with the \(t\) direction. Furthermore, the graph formed by these edges (dual edges) is much more connected. In the absence of noise, any 1-cycle (2-cocycle) supported on these edges (dual edges) is measured with equal probability. So, as for the subsystem toric code, the measurement results are non-deterministic, but now they are even more fluctuating and may include homologically non-trivial loops. This is not a problem for decoding though, since these homologically non-trivial loops are recorded and can be corrected.
We have seen that the CSS Floquet code and the stabilizer toric code are both based on the cubic-lattice toric code path integral, but with different time directions. If we superimpose the two cubic lattices such that the time directions align, the path integrals are different. Nonetheless, they are in the same fixed-point phase as defined at the end of Section II.1. The tensor-network equations applied to get from the cubic lattice to the rotated cubic lattice are the equations imposing topological invariance, for example, Eq. (8) and Eq. (9). So the time evolutions of the two codes postselected to the trivial 0 (or \(+1\in\{\pm 1\}\)) measurement outcomes are locally equivalent. In both codes, the non-trivial 1 (or \(-1\in\{\pm 1\}\)) outcomes correspond to \(e\) or \(m\) anyon worldline segments. However, the positions of these segments in spacetime are different for the two codes. The subsystem toric code, as well as the honeycomb Floquet code discussed in the following section, are related in the same way.
### Honeycomb Floquet code
In this section, we consider the honeycomb Floquet code introduced in Ref. [11]. The underlying tensor-network path integral will be referred to as the _honeycomb path integral_. It has the same geometry as the cubic-lattice toric-code path integral used in previous sections. However, it involves a third kind of tensor,
\[\begin{array}{c}\includegraphics[width=142.26378pt]{Fig4}.\end{array} \tag{58}\]
We will refer to this tensor as \(\mathbb{C}\)_-tensor_ since it is related to the two-dimensional real algebra of complex numbers. Note that the tensor depends on a choice of arrow direction at each index, which we indicate by an arrow at the incoming indices. The honeycomb path integral has \(\delta\)-tensors at every \(z\) edge and every \(xy\) face, \(\mathbb{Z}_{2}\)-tensors at every \(x\) edge and \(yz\) face, and \(\mathbb{C}\)-tensors at every \(y\) edge
and \(xz\) face,
\[\begin{array}{c}\includegraphics[width=142.26378pt]{Fig4}.\end{array} \tag{59}\]
All in all, we find that the condition of Definition 1 still holds, just that now each measurement outcome corresponds to multiple defect segments of different types.
Let us now look at the combinatorics of the resulting circuit. The overall geometry is as for the CSS Floquet code in Eq. (53), just that the type of measurement now depends on the orientation of the edge or face and not on the time step:
\[\begin{split}\to&\ (I[\mathbf{T}]_{z01},I[\mathbf{V}]_{x01},I[\mathbf{W}]_{y01})\\ &\ \ \to(I[\mathbf{T}]_{xy012},I[\mathbf{V}]_{yz012},I[\mathbf{W}]_{xz012})\\ &\ \ \to(I[\mathbf{T}]_{z12},I[\mathbf{V}]_{x12},I[\mathbf{W}]_{y12})\\ &\ \ \to(I[\mathbf{T}]_{xy120},I[\mathbf{V}]_{yz120},I[\mathbf{W}]_{xz120})\\ &\ \ \to(I[\mathbf{T}]_{z20},I[\mathbf{V}]_{x20},I[\mathbf{W}]_{y20})\\ &\ \ \to(I[\mathbf{T}]_{xy201},I[\mathbf{V}]_{yz201},I[\mathbf{W}]_{xz201})\to\;.\end{split} \tag{68}\]
After projecting the cubic lattice along time as in Eq. (54), \(x\), \(y\), and \(z\) refer to the three different directions of edges in the resulting triangular lattice. The measurements at \(x01\) and \(yz120\) (and analogous pairs) act on the same pair of qubits and are in fact the same type of measurement. We thus find that the circuit repeats already after three rounds, yielding
\[\begin{split}\to&\ (ZZ_{z01},XX_{x01},YY_{y01})\\ &\ \ \to(ZZ_{z20},XX_{x20},YY_{y20})\\ &\ \ \to(ZZ_{z12},XX_{x12},YY_{y12})\to\;.\end{split} \tag{69}\]
After going to the dual hexagonal lattice, we obtain the honeycomb code as presented in [11].
It has been argued in Ref. [11] that the honeycomb Floquet code is closely related to the toric code since the instantaneous stabilizer group of the former is equivalent to the latter. Here we will make this relation precise by showing that the underlying path integrals are in the same fixed-point phase. The sequence of tensor-network equations transforming the toric-code path integral into the honeycomb path integral (or vice versa) is as follows: We first insert a resolution of the identity, \(\mathbb{1}=GG^{-1}\) at every bond. \(G\) is an invertible matrix that depends on the bond within a unit cell, but not on the unit cell. Then we contract each 4-index tensor with the four surrounding matrices \(G\) or \(G^{-1}\), yielding a new 4-index tensor at that place. Note that this is just a complicated way of saying that the two tensor networks are equivalent up to a basis change at every bond. The matrices \(G\) are built from the Hadamard matrix \(H\) in Eq. (33), together with the following two matrices,
\[\begin{split}\includegraphics[height=85.358268pt]{figs.eps}:=S \coloneqq\begin{pmatrix}1&0\\ 0&i\end{pmatrix}\,\quad\includegraphics[height=85.358268pt]{figs.eps}:=U \coloneqq HSH\.\end{split} \tag{70}\]
\(H\), \(S\), and \(U\) are all unitary,
\[H^{\dagger}H=S^{\dagger}S=U^{\dagger}U=\mathbb{1}\;. \tag{71}\]
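As a small numerical sanity check (assuming the standard \(2\times 2\) Hadamard matrix for \(H\) and the matrices of Eq. (70)), the unitarity of \(H\), \(S\), and \(U=HSH\) can be verified directly:

```python
import numpy as np

# Check that the basis-change building blocks are unitary.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard, cf. Eq. (33)
S = np.array([[1, 0], [0, 1j]])                # Eq. (70)
U = H @ S @ H                                  # Eq. (70)

for name, M in (("H", H), ("S", S), ("U", U)):
    assert np.allclose(M.conj().T @ M, np.eye(2)), name
print("H, S, and U = HSH are unitary")
```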
In order to find \(G(a,b,+)\), we write out all potential \(G(a,\ldots,+)\) and \(G(\ldots,b,+)\) and take any common element. A solution is given by
\[\begin{array}{l|l}a\ -\ b&G(a,b,+)\\\hline x\ -\ xy&\cdots\\\cdots&\cdots\end{array} \tag{72}\]

## IV New codes

### Floquet toric code in \(3+1\) dimensions

As a first new code, we construct a Floquet version of the \(3+1\)-dimensional toric code. We consider the toric-code path integral on a 4-dimensional hypercubic lattice with unit vectors \(w\), \(x\), \(y\), and \(z\), choose the time direction \(t=w+x+y+z\), and split the tensors and their indices into inputs and outputs,
\[T_{1}\coloneqq\vbox{\hbox{\includegraphics[width=142.26378pt]{Fig4}}}\;. \tag{73}\]

To picture the spatial geometry of the resulting circuit, we project the lattice along
the \(t=w+x+y+z\) axis. To this end, we choose new basis vectors
\[\begin{split}\overline{x}&=\frac{1}{2}w+\frac{1}{2}x- \frac{1}{2}y-\frac{1}{2}z\;,\\ \overline{y}&=\frac{1}{2}w-\frac{1}{2}x+\frac{1}{2}y- \frac{1}{2}z\;,\\ \overline{z}&=\frac{1}{2}w-\frac{1}{2}x-\frac{1}{2}y+ \frac{1}{2}z\;,\end{split} \tag{81}\]
orthogonal to \(t\). The projected 0 and 2 vertices then form a cubic lattice \(A\) with unit vectors \(\overline{x}\), \(\overline{y}\), and \(\overline{z}\). The 1 and 3 vertices form a second cubic lattice \(B\) shifted by \(\frac{1}{2}(\overline{x}+\overline{y}+\overline{z})\), such that the vertices of \(A\) are the centers of the cubes of \(B\) and vice versa. Within \(A\), 0 and 2 vertices alternate in a checkerboard manner, and the same for 1 and 3 vertices within \(B\). The projected edges have length \(\sqrt{\frac{3}{4}}\) and connect each \(B\) vertex with the 8 corner vertices of the corresponding \(A\) cube, and vice versa. The edges of the \(A\) and \(B\) lattice themselves are not projected edges of the 4-dimensional cubic lattice. The following depicts a section of the lattice with four layers of vertices in \(\overline{y}\) direction, projected edges in gray, edges of \(A\) and \(B\) in black, and edges connecting vertices of the two back layers dotted:
(82)
The edges of \(A\) and \(B\) together with all the projected edges define a triangulation where each tetrahedron has one 0, one 1, one 2, and one 3 vertex. The projections of spacetime faces are rhombi consisting of two triangles. The projections of the spacetime cubes are (rhombic) cubes consisting of 6 tetrahedra, 3 left-handed and 3 right-handed ones. If a cube is adjacent to a face, then one of the right-handed tetrahedra contains one of the triangles of the face.
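The basis change of Eq. (81) is easy to check numerically: the three vectors are orthonormal and orthogonal to \(t=w+x+y+z\). A short NumPy sketch (coordinates in the \((w,x,y,z)\) basis, variable names our own):

```python
import numpy as np

# Vectors of Eq. (81), written in the (w, x, y, z) basis.
t = np.array([1, 1, 1, 1])
xbar = np.array([1,  1, -1, -1]) / 2
ybar = np.array([1, -1,  1, -1]) / 2
zbar = np.array([1, -1, -1,  1]) / 2

basis = [xbar, ybar, zbar]
for v in basis:
    assert np.isclose(v @ t, 0)   # orthogonal to the time direction
    assert np.isclose(v @ v, 1)   # unit length
for i in range(3):
    assert np.isclose(basis[i] @ basis[(i + 1) % 3], 0)  # mutually orthogonal
print("xbar, ybar, zbar span the spatial slice orthogonal to t")
```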
As usual, qubits can be identified by following the time-line of the bonds in the tensor-network/circuit diagram. There is one such timeline for every tetrahedron \(F\) that is right-handed relative to the vertex ordering 0123,
(83)
Let \(F_{i,i+1,i+2}\) be the spacetime face whose projection is spanned by the \((i,i+1)\) and \((i+1,i+2)\) edges of the tetrahedron, where all numbers are understood mod 4. Let \(F_{i,i+1,i+2,i+3}\) be the spacetime cube whose projection is spanned by the \((i,i+1)\), \((i+1,i+2)\), and \((i+2,i+3)\) edges of the tetrahedron. Then, within a fixed \(t\)-period, the timeline of bonds is given by the following sequence of adjacent faces and cubes,
\[\begin{split}F_{012}-F_{0123}-F_{123}-F_{1230}-F_{230}\\ -F_{2301}-F_{301}-F_{3012}-\;.\end{split} \tag{84}\]
To go from the face \(F_{i,i+1,i+2}\) to the face \(F_{i+1,i+2,i+3}\) inside the projection of the cube \(F_{i,i+1,i+2,i+3}\), we have to either rotate left or right when looking in the direction \(i\to i+3\). Since the tetrahedron is right-handed relative to the orderings 0123 and 2301 but left-handed for 1230 and 3012, we rotate right for \(i=0\) and \(i=2\), and left for \(i=1\) and \(i=3\). This fits our choice of labeling the faces of each cube by \(i_{0}\), \(\ldots\), \(o_{2}\), which we have discussed in the paragraph after Eq. (79): As can be seen in Eq. (78), in order to go from \(i_{x}\) to \(o_{x}\) we turn either right or left in the spatial projection of the cube when looking from bottom to top. We turn left for even time steps (\(p=0\), which we identify with \(i=1\) or \(i=3\)), and right for odd time steps (\(p=1\), which is \(i=0\) or \(i=2\)).
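The alternating face/cube pattern of Eq. (84) follows from the index rule \(F_{i,i+1,i+2}\), \(F_{i,i+1,i+2,i+3}\) with indices taken mod 4; the following sketch simply regenerates the sequence:

```python
# Regenerate the timeline of Eq. (84): faces F_{i,i+1,i+2} alternating
# with cubes F_{i,i+1,i+2,i+3}, all indices mod 4.
def face(i):
    return tuple((i + k) % 4 for k in range(3))

def cube(i):
    return tuple((i + k) % 4 for k in range(4))

timeline = []
for i in range(4):
    timeline += [("F", face(i)), ("F", cube(i))]

print(timeline)
# faces 012, 123, 230, 301 interleaved with cubes 0123, 1230, 2301, 3012
```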
As we have seen, there is one qubit associated to every right-handed tetrahedron. Each instrument \(I[\mathbf{T}]\) at a spacetime face acts on the qubits at the two right-handed tetrahedra adjacent to the two triangles that are contained in the projection of the face. Alternatively, these two right-handed tetrahedra are the ones adjacent to the diagonal \((i,i+2)\) edge of a \((i,i+1,i+2,i+3)\) cube.
So in total we obtain the following dynamic code. Consider two shifted cubic lattices \(A\) and \(B\) together with all length-\(\sqrt{\frac{3}{4}}\) edges connecting \(A\) and \(B\), defining a triangulation whose vertices are 4-colorable as 0, 1, 2, or 3. There is one qubit at every right-handed tetrahedron. The sequence of measurements consists of 8 rounds,
\[\begin{split} ZZ_{02}\rightarrow(XX,XX)_{30}\to ZZ_{ 13}\rightarrow(XX,XX)_{01}\\ \to ZZ_{20}\rightarrow(XX,XX)_{12}\to ZZ_{31}\\ \rightarrow(XX,XX)_{23}\rightarrow\;.\end{split} \tag{85}\]
In each round, we either measure \(Z_{0}Z_{1}\) on the two right-handed tetrahedra adjacent to each edge of the specified type, or we measure \(X_{0}X_{1}\) and \(X_{1}X_{2}\) on the three right-handed tetrahedra adjacent to each edge of that type. Note that the rounds 0 and 4 (numbered starting from 0), as well as 2 and 6, in Eq. (85) are identical.
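For bookkeeping, the 8-round schedule of Eq. (85) can be written down as data, which makes the repetition of rounds 0/4 and 2/6 explicit (edge types are unordered colour pairs, so the 02 and 20 edges are the same set of edges):

```python
# The 8-round schedule of Eq. (85); edge types as unordered colour pairs.
rounds = [
    ("ZZ",    frozenset({0, 2})),
    ("XX,XX", frozenset({3, 0})),
    ("ZZ",    frozenset({1, 3})),
    ("XX,XX", frozenset({0, 1})),
    ("ZZ",    frozenset({2, 0})),
    ("XX,XX", frozenset({1, 2})),
    ("ZZ",    frozenset({3, 1})),
    ("XX,XX", frozenset({2, 3})),
]

# Rounds 0 and 4, as well as 2 and 6, repeat the same measurements.
assert rounds[0] == rounds[4] and rounds[2] == rounds[6]
```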
This Floquet code can be generalized to arbitrary triangulations with 4-colored vertices. In every round, we measure \(Z_{0}Z_{1}\), \(Z_{1}Z_{2}\), \(\ldots\)\(Z_{i-1}Z_{i}\) on the set of right-handed tetrahedra adjacent to the specified type of edges
in the lattice, or the same for \(X\) instead of \(Z\). The Poincaré dual to such a lattice has 4-colorable volumes and is used in the definition of the 3-dimensional color code [28]. However, our code involves only half of the qubits. The dual lattice of the triangulation depicted in Eq. (82) is known as _bitruncated cubic honeycomb_[29]. The volumes are bitruncated cubes,
\[\includegraphics[width=142.26378pt, width=142.26378pt]{figures/b1.eps}\,. \tag{86}\]
The drawn volume is dual to a 3 vertex. The blue shaded 6-gon faces are dual to 23 edges, and the red shaded 6-gon faces to 30 edges. The green shaded 4-gon faces are dual to 13 edges. The red, green, and blue edges are dual to 123-triangles, 230-triangles, and 301-triangles, respectively. The overall lattice also contains faces dual to 01 edges, 12 edges and 02 edges, as well as edges dual to 012 triangles, but none of these are contained in the boundary of the 3 volume shown above. There are qubits on all the full vertices, and none at the empty vertices. The measurements in the dual lattice take place on the faces and involve the qubits at the vertices. For example, the \(ZZ_{13}\) measurements take place simultaneously on all green 4-gon faces shown above.
Let us briefly look at the decoding procedure from Proposition 1 for the present code. The spacetime syndrome measured over some time \(T\sim L\) consists of one outcome at every face and two outcomes at every cube of the hypercubic lattice. The syndrome yields an \(e\) 2-chain and an \(m\) 3-cochain inside the 4-dimensional modified hypercubic lattice, supported on the pillow-like volumes and the dividing \(f\) and \(g\) faces. The coboundary of the \(m\) 3-cochain is a (0-dimensional) 4-cocycle, and the boundary of the \(e\) 2-chain is a (1-dimensional) 1-cycle. We then use the classical decoder \(D\) to find a low-weight fix that turns \(e\) into a 2-cycle and \(m\) into a 3-cocycle. For closing off the \(e\) 2-cycle and \(m\) 3-cocycle, we choose \(T\) to be after a round of \(I[\mathbf{T}]_{123}\) instruments. The corresponding spatial slice of the modified hypercubic lattice at this time is obtained by (1) taking only the 123 faces in the lattice in Eq. (82), and (2) replacing every face by two copies separated by a pillow-like volume. The non-pillow volumes of this spatial slice are rhombic dodecahedra, each formed by the four 0123 cubes adjacent to a 0 vertex in Eq. (82). Each 123 face in Eq. (82) has two adjacent qubits, so there is one qubit for every face of the spatial slice. The \(m\) 3-cocycle restricted to this spatial slice is again a 3-cocycle, that is, a collection of rhombic dodecahedra and pillow volumes. We close this 3-cochain by a 2-cochain, and apply a Pauli-\(X\) operator to the qubits at each face of this 2-cochain. The \(e\) 2-cycle restricted to the spatial slice becomes a 1-cycle, that is, a collection of edges. We close this 1-cycle by a 2-chain, and apply a Pauli-\(Z\) operator to the qubits at each face of this 2-chain. Note that the \(e\) part of the syndrome could also be corrected by a local cellular automaton shrinking the corresponding 1-cycle in each time step using a mechanism similar to _Toom's rule_[21].
### Dynamic double-semion string-net code
In this section we will give an example for a non-Pauli fixed-point path integral code, which is based on the double-semion Turaev-Viro/Dijkgraaf-Witten model [6; 7], the state-sum version of the double-semion string-net model [30; 31]. Note that there are in fact non-Pauli as well as Pauli stabilizer codes for this phase (and any Abelian non-chiral anyon model) [32; 33; 34]. Here, we present a dynamic non-Pauli and non-stabilizer code. This code can be seen as somewhere between stabilizer and Floquet codes, since the anyon worldlines forming the spacetime syndrome move in a fixed direction, but this direction does not coincide with the \(t\) direction. Apart from this, our code has some similarities to recent protocols for syndrome extraction for the non-Abelian double-Fibonacci string-net model presented in Ref. [35]. The goal here is not to produce a particularly practical code, but rather to demonstrate the applicability of our framework beyond the toric-code phase.
We consider a path integral defined on any \(2+1\)-dimensional triangulation with a branching structure, that is, a direction for all the edges that is acyclic around every triangle. As for the toric code, the path integral is a sum over \(\mathbb{Z}_{2}\)-valued 1-cocycles \(A\), but now there is a non-trivial action \((-1)^{A\cup A\cup A}\). That is, the state sum has an additional weight,
\[\omega(a,b,c)=\omega_{a,b,c}=(-1)^{abc}\;, \tag{87}\]
at every tetrahedron,
\[\includegraphics[width=142.26378pt, width=142.26378pt]{figures/b1.eps}\,. \tag{88}\]
The state sum can be written as a tensor network with one \(\delta\)-tensor at every edge, one 3-index \(\mathbb{Z}_{2}\) tensor at every face, and one 3-index \(\omega\) tensor at every tetrahedron. This path integral is invariant under Pachner moves including the one depicted in Eq. (2). The equations corresponding to this invariance are equivalent to the fact that \(\omega\) is a \(\mathbb{Z}_{2}\) group 3-cocycle. The string-net picture of this model is obtained by considering space-only Poincaré dual lattices.
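The fact that \(\omega(a,b,c)=(-1)^{abc}\) is a \(\mathbb{Z}_{2}\) group 3-cocycle can be confirmed by brute force. The sketch below checks the standard additively written 3-cocycle condition; that this is the relevant condition is our reading of the invariance equations mentioned above.

```python
from itertools import product

# omega(a,b,c) = (-1)^{abc} from Eq. (87).
def omega(a, b, c):
    return (-1) ** (a * b * c)

# Z2 group 3-cocycle condition (group written additively, mod 2):
#   w(b,c,d) w(a,b+c,d) w(a,b,c) = w(a+b,c,d) w(a,b,c+d)
for a, b, c, d in product((0, 1), repeat=4):
    lhs = omega(b, c, d) * omega(a, (b + c) % 2, d) * omega(a, b, c)
    rhs = omega((a + b) % 2, c, d) * omega(a, b, (c + d) % 2)
    assert lhs == rhs
print("omega is a Z2 group 3-cocycle")
```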
We can equip this path integral with anyon worldlines. Geometrically, these worldlines are represented by sequences of cylinder-like 3-cells or _tube segments_ embedded into the triangulation. The boundary of such a tube
segment consists of two _anyon 1-gons_ (in red at the bottom and top) and one rectangle (wrapping around the side) which can be divided into two triangles,
(89)
There are further tube segments attached to the two anyon 1-gons, and ordinary tetrahedra attached to the two triangles. There are no additional state-sum variables other than the group elements at each edge, but there is an additional state-sum weight
\[\rho_{g,h} \tag{90}\]
at each tube segment with \(\mathbb{Z}_{2}\) variables as in Eq. (89). There are four types of anyons, 1, \(s\), \(\bar{s}\), and \(s\bar{s}\), and the associated weights are
\[\begin{split}\rho^{1}_{g,h}&=\delta_{g,0}\;,\\ \rho^{s}_{g,h}&=\delta_{g,1}i^{h}\;,\\ \rho^{\bar{s}}_{g,h}&=\delta_{g,1}(-i)^{h}\;,\\ \rho^{s\bar{s}}_{g,h}&=\delta_{g,0}(-1)^{h}\;.\end{split} \tag{91}\]
The different \(\rho^{x}\) are irreducible representations of the _tube algebra_ defined by \(\omega\)[36, 37]. For a review of defects in the path integral language used here, see Appendix D of Ref. [38]. The string-net analogue of this way of introducing anyons as explicit defects is given in Ref. [39].
We now consider this path integral on a triangulation consisting of two cubic lattices \(A\) and \(B\) with unit vectors \(x\), \(y\), and \(z\), shifted relative to each other by \(\frac{1}{2}x+\frac{1}{2}y+\frac{1}{2}z\). Each tetrahedron is formed by one \(A\) edge, one nearby \(B\) edge, as well as four length-\(\sqrt{\frac{3}{4}}\) edges connecting \(A\) vertices with nearby \(B\) vertices. So this is the same as the lattice depicted in Eq. (82), just that we color all \(A\) vertices red and all \(B\) vertices green. The branching structure can be chosen such that for every directed edge with associated vector \(ax+by+cz\), we have \(a+b+c>0\).
We turn the path integral into a circuit of operators by choosing \(t=z\) as the time direction. There are two kinds of operators in the circuit, which correspond to different volumes as follows. For every \(t\) edge, there is an operator \(T_{1}\) consisting of the four adjacent tetrahedra, acting on 8 qubits (here with coloring for a \(t\) edge of \(B\)),
(92)
\(P_{\text{cocycle}}\) acting on a triangle with edge labels \(a\), \(b\), and \(c\) is the projector onto the _cocycle subspace_, spanned by the configurations that fulfil \(a+b=c\). Here and in the following, we also use \(P_{\text{cocycle}}\) for the product of \(P_{\text{cocycle}}\) on all the triangles that are currently acted on. As shown, \(T_{1}\) contains the \(\omega\)-tensors of the involved tetrahedra, and the \(\mathbb{Z}_{2}\)-tensors at the internal and bottom faces. The \(\delta\)-tensors at the edges of the lattice are split between the adjacent volumes.
For every \(x\) or \(y\) edge of \(A\) or \(B\) there is an operator \(V_{1}\) consisting of the tetrahedron spanned by this edge and the \(y\) or \(x\) edge of \(B\) or \(A\) whose center is shifted by \(\frac{1}{2}t\),
(93)
Neither \(T_{1}\) nor \(V_{1}\) is unitary, since we have
\[T_{1}=P_{\text{cocycle}}T_{1}=T_{1}P_{\text{cocycle}}=P_{\text{cocycle}}T_{ 1}P_{\text{cocycle}}\;, \tag{94}\]
and the same for \(V_{1}\) instead of \(T_{1}\). So the support of \(T_{1}\) and \(V_{1}\) is contained in the cocycle subspace of the involved triangles. Restricted to this cocycle subspace, \(V_{1}\) is indeed unitary,
\[V_{1}^{\dagger}V_{1}=P_{\text{cocycle}}=\raisebox{-14.226378pt}{\includegraphics[ ]{fig/P_1\(1\)2_2\(3\)4_5\(6\)7_8\(9\)9_100102}}\;. \tag{95}\]
On the right, we have depicted the corresponding volume that arises from gluing the tetrahedron with a reflected copy. 2 This is not the case for \(T_{1}\), whose support is
contained in but not equal to the cocycle subspace. We will now show how to extend \(T_{1}\) to an isometry that is fully supported on the cocycle subspace, and later extend both \(T_{1}\) and \(V_{1}\) to the full Hilbert space using a different method. To this end, we slightly modify the spacetime lattice to incorporate anyon worldlines running along the \(x+y+t\) direction. We consider all the edges aligned with the \(x+y-t\) direction. We split every such edge into two edges separated by a 2-gon perpendicular to the \(x+y+t\) direction. Then we insert an anyon 1-gon into each such 2-gon, at the vertex with the smaller \(t\) component, for example,
\[\includegraphics[width=14.226378pt, width=14.226378pt]{figs_1.eps}\to \includegraphics[width=14.226378pt, width=14.226378pt]{figs_2.eps}. \tag{96}\]
The \(T_{1}\) volume then gets two anyon 1-gons at its boundary, which we connect using an anyon tube along the \(x+y+t\) edge,
\[\includegraphics[width=14.226378pt, width=14.226378pt]{figs_2.eps}. \tag{97}\]
With this, we can replace \(T_{1}\) by a collection of partial isometries \(\mathbf{T}=(T_{x})_{x\in\{1,s,\bar{s},s\bar{s}\}}\),
\[\begin{split} T_{x}\coloneqq\includegraphics[width=14.226378pt, width=14.226378pt]{figs_3.eps}\,\end{split}\, \tag{98}\] \[\begin{split}\omega_{e,f,f+j+y}\omega_{f,f+j+y,g+j+y}\omega_{i +y,g+j+y,j}\omega_{e,g,j}\omega_{e,j,g}\end{split}\.\]
Here we have used a cellulation of the volume with one anyon tube and 7 tetrahedra. \(\mathbf{T}\) is indeed an isometry when restricted to the cocycle subspace,
\[\mathbf{T}^{\dagger}\mathbf{T}=\sum_{x}T_{x}^{\dagger}T_{x}=P_{\text{cocycle}}. \tag{99}\]
In order to see this, we compute \(T_{x}^{\dagger}T_{x}\) by gluing Eq. (98) with a time-reflected copy and using the topological invariance, yielding a projector,
\[T_{x}^{\dagger}T_{x}=\includegraphics[width=14.226378pt, width=14.226378pt]{figs_3.eps}\, \tag{100}\]
where the bottom and top 1-gon are connected via a tube segment along the \(t\) edge. 3

Footnote 3: This is a projector since gluing two copies of this volume stacked on top of each other yields the same volume, which corresponds to the equation \(P^{2}=P\).

Then we compute the sum over all tube segments,
\[\rho_{g,h}^{1}+\rho_{g,h}^{s}+\rho_{g,h}^{\bar{s}}+\rho_{g,h}^{s\bar{s}}= \delta_{h,0}. \tag{101}\]
Setting \(h\) to \(0\) geometrically corresponds to removing the anyon tube and the \(t\) edge in Eq. (100), and identifying the loop edges at the top and bottom. So we obtain the following volume of solid-torus topology:
\[\mathbf{T}^{\dagger}\mathbf{T}=\sum_{x}T_{x}^{\dagger}T_{x}=\includegraphics[width=14.226378 pt]{figs_3.eps}\ =P_{\text{cocycle}}. \tag{102}\]
For the last equation we have used that this spacetime volume can be obtained from gluing one volume as in Eq. (95) for every pair of neighboring triangles. With this, using \(\mathbf{T}\) in Eq. (24) defines an instrument \(I[\mathbf{T}]\) restricted to the cocycle subspace.
We will now discuss how to extend \(I[\mathbf{T}]\) and \(I[\mathbf{V}]\) to full instruments also outside the cocycle subspace. The first step is to choose arbitrary extensions \(\widetilde{\mathbf{T}}\) and \(\widetilde{\mathbf{V}}\) to the full Hilbert space. 4

Footnote 4: In general, this might also involve enlarging the output dimension by adding new measurement outcomes. This is not necessary in the present case though.

However, the circuit consisting of the extended instruments \(I[\widetilde{\mathbf{T}}]\) and \(I[\widetilde{\mathbf{V}}]\) clearly violates Definition 1. This can be fixed by introducing a new channel \(C\) to the circuit, with the following task: \(C\) measures whether the cocycle constraint is violated at any of the triangles, and maps back to the cocycle subspace if yes. Roughly speaking, this works because (1) \(\widetilde{\mathbf{T}}\) and \(\widetilde{\mathbf{V}}\) still preserve the cocycle subspace,
\[\begin{split}\widetilde{T}_{x}\circ P_{\text{cocycle}}& =T_{x}=P_{\text{cocycle}}\circ T_{x}\,\\ \widetilde{V}_{x}\circ P_{\text{cocycle}}&=V_{x}=P_{ \text{cocycle}}\circ V_{x}\,\end{split} \tag{103}\]
and (2) \(P_{\text{cocycle}}\) consists of the same triangle terms for each isometry.
Concretely, it suffices to apply a channel \(C\) before every \(I[\widetilde{\mathbf{T}}]\) instrument. The space that \(\widetilde{\mathbf{T}}\) acts on is given by
\[\includegraphics[width=14.226378pt]{figs_3.eps}\, \tag{104}\]
and \(C\) acts on that same space. \(C\) is the product of one 3-qubit channel \(C^{t}\) for each of the 5 different triangles,
\[C^{t}_{c,e,f}\to C^{t}_{d,e,g}\to C^{t}_{f,a,h}\to C^{t}_{g,b,i}\to C^{t}_{h,i,j}. \tag{105}\]
Each instrument \(C^{t}\) acts on the qubits at the three edges of the triangle, as indicated by the labels which refer to Eq. (104). The 3-qubit instrument \(C^{t}_{a,b,c}\) is defined as follows. First we measure \(x=a+b+c\mod 2\), which is the same as a \(Z_{0}Z_{1}Z_{2}\) measurement just that we label the outcome with \(x\in\{0,1\}\) instead of \(\pm 1\). Then we apply a classically controlled operation \(c\to c+x\), which is the same as a CNOT after turning the classical bit \(x\) into a qubit. In other words, \(C^{t}_{a,b,c}\) fixes the cocycle condition by flipping the edge \(c\), and \(C\) pushes potential cocycle constraint violations into the anyon 1-gon. It is easy to see that \(C\) (1) maps everything into the cocycle subspace,
\[C=(P_{\text{cocycle}}\otimes P_{\text{cocycle}})\circ C\;, \tag{106}\]
and (2) acts as the identity inside the cocycle subspace,
\[C\circ(P_{\text{cocycle}}\otimes P_{\text{cocycle}})=P_{\text{cocycle}} \otimes P_{\text{cocycle}}\;. \tag{107}\]
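A minimal Kraus-operator sketch of the 3-qubit channel \(C^{t}\) (a toy implementation of ours, with qubit order \((a,b,c)\)) makes these two properties concrete:

```python
import numpy as np
from functools import reduce

# Toy model of C^t: measure the parity x = a+b+c mod 2 and flip the
# edge qubit c if x = 1. Qubit order is (a, b, c), basis index 4a+2b+c.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
X_c = reduce(np.kron, (I2, I2, X))

parity = np.array([bin(n).count("1") % 2 for n in range(8)])
P_even = np.diag((parity == 0).astype(float))   # cocycle subspace a+b+c = 0
P_odd = np.diag((parity == 1).astype(float))

K0 = P_even          # outcome x = 0: do nothing
K1 = X_c @ P_odd     # outcome x = 1: flip c

# The Kraus operators form a channel ...
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(8))
# ... that maps everything into the cocycle subspace (cf. Eq. (106)) ...
assert np.allclose(P_even @ K0, K0) and np.allclose(P_even @ K1, K1)
# ... and acts as the identity on it (cf. Eq. (107)).
assert np.allclose(K0 @ P_even, P_even) and np.allclose(K1 @ P_even, 0)
```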
With this, the complete QEC circuit consists of 6 rounds of channels/instruments. First we apply \(I[\widetilde{\mathbf{T}}]\) for every \(t\) edge of \(A\) whose center is within a fixed \(xy\) plane of the \(B\) lattice, and apply the corresponding channel \(C\) before that. Then we apply \(I[\widetilde{\mathbf{V}}]\) at all \(x\) and all \(y\) edges of \(B\) inside this \(xy\) plane. We then shift the \(xy\) plane by \(\frac{1}{2}t\) and perform the same instruments with \(A\) and \(B\) exchanged. In total we obtain
\[\begin{split}&\to C_{At}\to I[\widetilde{\mathbf{T}}]_{At}\to(I[ \widetilde{\mathbf{V}}]_{Bx},I[\widetilde{\mathbf{V}}]_{By})\\ &\to C_{Bt}\to I[\widetilde{\mathbf{T}}]_{Bt}\to(I[ \widetilde{\mathbf{V}}]_{Ax},I[\widetilde{\mathbf{V}}]_{Ay})\to\;.\end{split} \tag{108}\]
Let us now show that this circuit defines a valid path-integral QEC circuit according to Definition 1. To this end, we use the tensor-network equations Eq. (106), Eq. (103), and Eq. (107) to transform the circuit in Eq. (108) into the circuit
\[\begin{split}&\to I[\mathbf{T}]_{At}\to(I[\mathbf{V}]_{Bx},I[ \mathbf{V}]_{By})\\ &\to I[\mathbf{T}]_{Bt}\to(I[\mathbf{V}]_{Ax},I[\mathbf{V}]_{ Ay})\to\;.\end{split} \tag{109}\]
Specifically, applying Eq. (106) to all channels \(C_{At}/C_{Bt}\) inserts \(P_{\text{cocycle}}\) on all triangles of the corresponding spatial cut of the lattice (here coloring like before \(C_{Bt}\)),
\[\begin{split}\includegraphics[scale=0.6]{fig-1.eps}\end{split} \tag{110}\]
Then applying Eq. (103) moves \(P_{\text{cocycle}}\) to different spatial cuts. Finally, applying Eq. (107) removes all the channels \(C_{Bt}/C_{At}\). The remaining \(P_{\text{cocycle}}\) can be absorbed into the following \(I[\mathbf{T}]_{Bt}/I[\mathbf{T}]_{At}\) using Eq. (94). The transformation implies that the circuit in Eq. (108) is in the same fixed-point phase as the circuit in Eq. (109). Since for the circuit in Eq. (109), every spacetime syndrome corresponds to a fixed-point path integral with anyon worldlines, the circuit in Eq. (108) fulfils Definition 1 as well.
Depending on how we map the circuit onto a fixed set of qubits, \(I[\widetilde{\mathbf{T}}]\) acts on at least 10 qubits. So in order to implement it in practice we should decompose it into smaller gates. Surely, any gate can be written as a circuit using a small fixed universal gate set, but this circuit might be approximate and finding it might be hard for such a large operator. However, a first decomposition can be obtained by decomposing the volume in Eq. (98) into tetrahedra or at least smaller volumes. Let us give such a decomposition as a sequence of spatial lattices that we get from gluing these smaller volumes step by step,
\[\begin{split}\includegraphics[scale=0.6]{fig-1.eps}\end{split} \tag{111}\]
In the first step we glue two tetrahedra, applying twice a 5-qubit operator \(U_{1}\). The same happens in the last step with an operator \(R_{1}\). \(U_{1}\) and \(R_{1}\) are the same as \(V_{1}\) shown in Eq. (93) except that the involved edges have different directions. In the second step, the volume we glue can be cellulated with an anyon tube together with two tetrahedra, defining an operator \(S_{x}\) acting on 6 qubits,
\[\begin{split}\includegraphics[scale=0.6]{fig-1.eps}\end{split} \tag{112}\]
In the third step, we glue a tetrahedron at a single face, yielding a 6-qubit operator \(W_{1}\),
\[\begin{split}\includegraphics[scale=0.6]{fig-1.eps}\end{split} \tag{113}\]
As discussed before, we now arbitrarily extend \(\mathbf{U}\), \(\mathbf{R}\), \(\mathbf{S}\), and \(\mathbf{W}\) into isometries \(\widetilde{\mathbf{U}}\), \(\widetilde{\mathbf{R}}\), \(\widetilde{\mathbf{S}}\), and \(\widetilde{\mathbf{W}}\) supported on the full Hilbert space. Then, we replace the instrument \(I[\widetilde{\mathbf{T}}]\) by a sequence of up-to-6-qubit instruments
\[(I[\widetilde{\mathbf{U}}],I[\widetilde{\mathbf{U}}])\to I[\widetilde{ \mathbf{S}}]\to I[\widetilde{\mathbf{W}}]\to(I[\widetilde{\mathbf{R}}],I[ \widetilde{\mathbf{R}}])\;. \tag{114}\]
To extend the operators, we essentially just remove the \(P_{\rm cocycle}\) terms from the corresponding definitions. This way, \(V_{1}\) in Eq. (93) becomes a unitary
\[\widetilde{V}_{1}\left|d,a,b,e\right\rangle=\omega_{d,a,b}\left|d,a,b,e+d+b \right\rangle\;, \tag{115}\]
acting trivially on the label \(c\). This unitary can be written as a circuit of controlled-\(X\) and controlled-controlled-\(Z\) gates,
\[\begin{array}{c}\includegraphics[width=142.26378pt]{fig/cc}\end{array}. \tag{116}\]
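The stated decomposition can be verified numerically. The sketch below builds \(\widetilde{V}_{1}\) from Eq. (115) and compares it with one concrete gate choice, a controlled-controlled-\(Z\) on \((d,a,b)\) together with controlled-\(X\) gates from \(d\) and from \(b\) onto \(e\); this is a consistent realisation of Eq. (115), not necessarily the exact circuit drawn in Eq. (116).

```python
import numpy as np
from itertools import product

def idx(d, a, b, e):
    """Basis index for |d,a,b,e>."""
    return 8 * d + 4 * a + 2 * b + e

# \tilde V_1 directly from its definition in Eq. (115).
V_def = np.zeros((16, 16))
for d, a, b, e in product((0, 1), repeat=4):
    V_def[idx(d, a, b, (e + d + b) % 2), idx(d, a, b, e)] = (-1) ** (d * a * b)

def perm_unitary(f):
    """Unitary permuting basis states |d,a,b,e> -> |f(d,a,b,e)>."""
    U = np.zeros((16, 16))
    for d, a, b, e in product((0, 1), repeat=4):
        U[idx(*f(d, a, b, e)), idx(d, a, b, e)] = 1.0
    return U

CX_de = perm_unitary(lambda d, a, b, e: (d, a, b, (e + d) % 2))
CX_be = perm_unitary(lambda d, a, b, e: (d, a, b, (e + b) % 2))
CCZ_dab = np.diag([(-1.0) ** (d * a * b)
                   for d, a, b, e in product((0, 1), repeat=4)])

assert np.allclose(CCZ_dab @ CX_de @ CX_be, V_def)
print("CCZ(d,a,b) with CX(d->e) and CX(b->e) reproduces Eq. (115)")
```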
\(S_{x}\) in Eq. (112) is a map from 6 to 3 qubits. Since there are 4 anyons and thus 4 measurement results \(x\), we need to measure one further qubit to turn \(\mathbf{S}\) into an isometry on the full Hilbert space. In order to fulfil Definition 1, the measurement outcome for this further qubit must be deterministic inside the cocycle subspace. This can be done by measuring the cocycle constraint, e.g., on the \((a,c,e)\) triangle in Eq. (112). Using \(\omega_{e,f,a}\omega_{e,a,f}=1\) and \(f=d+e\) inside the cocycle subspace, we obtain an isometry
\[\widetilde{S}_{x}\left|c,e,d,a\right\rangle=\rho_{d+e,a}^{x}\left|c,c+e+a \right\rangle\;, \tag{117}\]
acting trivially on \(b\) and \(f\). \(\widetilde{S}_{x}\) can be expressed as a circuit,
\[\begin{array}{c}\includegraphics[width=142.26378pt]{fig/cc}\end{array}. \tag{118}\]
Here we have split \(x\rightarrow(x_{0},x_{1})\) into two qubits using \(1\rightarrow(0,0)\), \(s\rightarrow(1,0)\), \(\bar{s}\rightarrow(1,1)\), and \(s\bar{s}\rightarrow(0,1)\). So the qubits labeled \(x_{0}\), \(x_{1}\), and \(c+e+a\) are measured after applying the above isometry. \(\rho\) is a 2-qubit gate which in fact equals a Hadamard on the \(a\) qubit followed by a controlled-\(S\) gate. The operator \(W_{1}\) in Eq. (113) becomes an isometry
\[\widetilde{W}_{1}\left|a,b,c\right\rangle=\sum_{y}\omega_{y,y+b,c}\left|a,b,c, y,y+b,y+a\right\rangle\;. \tag{119}\]
\(\widetilde{W}_{1}\) can be written as a circuit,
\[\begin{array}{c}\includegraphics[width=142.26378pt]{fig/cc}\end{array}. \tag{120}\]
We have thus decomposed our QEC process as a circuit of common 2- and 3-qubit gates. For a practical implementation it might again be useful to write this circuit in terms of measurements and unitaries acting on qubits on a fixed spatial lattice. This is straightforward, but might involve auxiliary qubits and swap operations.
## V Discussion and Outlook
In this paper we have proposed a perspective on topological quantum error correction based on topological fixed-point path integrals. Our approach provides a unified view on topological stabilizer, subsystem, and Floquet codes, as demonstrated in Section III. In particular, we have seen that the stabilizer toric code, subsystem toric code, and CSS Floquet code can be considered the same code on different spacetime lattices. The approach can also describe topological QEC codes that are not based on Pauli/Clifford operations as we have demonstrated in Section IV.2. As summarized in Definition 1 and Proposition 1, we have given a simple unified criterion for when a circuit of measurements forms a fault-tolerant topological error-correcting code. Namely that, for every spacetime history of measurement outcomes, we obtain a topological fixed-point path integral including syndrome defects.
Our framework provides a way to systematically construct new codes. To this end, we start with some known fixed-point path integral, and possibly apply some tensor-network equations to obtain another path integral in the same fixed-point phase. Then we interpret this path integral as a circuit of operators by setting a time direction. Dressing every operator with segments of syndrome defects, we obtain a circuit of instruments with the desired properties. We have demonstrated this with two examples in Section IV. First, we have presented a Floquet version of the \(3+1\)-dimensional toric code, by considering the tensor-network path integral on a hypercubic lattice and traversing it in the \(t=x+y+z+w\) direction. The model has qubits living on the right-handed tetrahedra of a triangulation with 4-colored vertices. The code cycles through 8 rounds, in each of which we perform 2-body measurements among the qubits adjacent to edges of a certain type. Second, we have constructed a Floquet code based on the double-semion string net. This code is not designed to be particularly practical for implementation, but is decomposed into a sequence of common 2- and 3-qubit gates.
While this paper was being finalized, Ref. [25] appeared on the arXiv which proposes a similar perspective based on the \(ZX\) calculus. In that reference, it was independently recognized that the tensor-network diagrams for the stabilizer toric code and CSS Floquet code are the same, just traversed in a different direction. In addition to this, our work provides a clear physical interpretation of the tensor networks as topological fixed-point path integrals including topological defects. We also give a neat
geometric interpretation of the phaseless \(ZX\) diagrams as cellulations, the \(ZX\) rules as topological invariance, and the _Pauli webs_ or detection cells as volumes and vertices. As can be seen from Ref. [25], fusion-based topological quantum computing [40] is also described by our formalism. This holds true for topological measurement-based quantum computing [41] in general. A relation between the fusion-based model and Floquet CSS codes has also been pointed out in Ref. [42]. In contrast to all of the above examples, our formalism is not limited to the \(ZX\) calculus or stabilizer framework, but works for arbitrary tensor-network path integrals, as demonstrated in Section IV.2. 5
Footnote 5: Even though any tensor can be written as a \(ZX\) diagram, it can be beneficial to work with elementary operations that are not elementary \(ZX\) tensors.
The framework can be generalized in various directions. First, topological state-sum path integrals do not cover all zero-correlation length path integrals, and similarly not all gapped phases. Exceptions can be obtained from topological path integrals by inserting a rigid network of topological defects, which we refer to as _foliation defects_. To this end, we choose some cubic "superlattice" with a potentially larger unit cell than the topological path integral. Then (in 2+1 dimensions) we introduce domain walls at all superlattice faces, which meet at 1-dimensional foliation defects along the edges, which in turn meet at the vertices. Examples for this in 2+1 dimensions seem to yield topological path integrals again after choosing a larger unit cell, and thus correspond to a "weak breaking of translation symmetry", as we have seen in Section III.4. In 3+1 dimensions however, topological defect networks can describe fracton phases [43], and potentially more if we also insert foliation defects perpendicular to time [44]. Floquet codes based on fracton phases have been presented in Refs. [45; 13].
A second straightforward generalization is to consider spacetime lattices that change with time. By changing the topology of the spatial configuration, we obtain circuits that not only fault-tolerantly store, but also process logical information. Both storing and processing of logical information becomes much more versatile if we equip the topological path integral with computational defects such as boundaries, domain walls, or other sorts of interfaces and defects. For example, we can then perform computation via braiding with anyons or via lattice surgery with boundaries.
Another direction is to consider path integrals where the defects that we use for error correction (such as anyons) do not possess abelian fusion rules. In this case the scheme of Proposition 1 outlined in Eq. (19) cannot work, since there is not necessarily a unique way to perform a correction. For example, consider a path integral QEC circuit based on the double-Fibonacci phase, and assume we measure the following spacetime syndrome on a torus,
\[\includegraphics[width=142.26378pt, width=142.26378pt]{Fig1}, \tag{121}\]
with the left and right, as well as front and back identified. There are two ways of fixing the syndrome inside the red dashed circle, namely
\[\text{[diagram: the two inequivalent ways of closing the syndrome inside the red dashed circle]} \tag{122}\]
which correspond to different logical operations acting on the ground space on a torus. There is no way to find out which superposition of these logical operations will correctly undo the error that occurred. A decoding strategy that has been tested successfully is based on a hierarchical decomposition of the lattice into _colonies_ [46; 47]. A different strategy that might work is to "continuously" apply small corrections in every timestep instead of one large correction after a large time \(T\sim L\). That is, in every time step, we choose a new low-weight fix of the spacetime syndrome in all of its past. Then we consider the set of string operators that could be used to close the repaired spacetime syndrome in a cohomologically trivial way inside the current spatial cut. We pick a low (e.g., minimum) weight representative from this set. Then, we apply only a single segment of this closing string operator near each of its endpoints. Independent of the choice of classical decoder, it will be interesting to see whether and how our framework can be used to construct syndrome extraction circuits for arbitrary non-abelian phases.
Another very interesting question concerns chiral phases, that is, topological phases in \(2+1\) dimensions whose anyon theory is described by a unitary modular tensor category that is not a Drinfeld center. It is a common belief that chiral phases do not allow for exactly solvable fixed-point zero-correlation length descriptions, and no such descriptions are known to date. Concretely, it has been shown that chiral phases do not admit commuting-projector Hamiltonian models [48]. However, there are indications that going from Hamiltonians to discrete path integrals might resolve this problem [49]. In contrast to condensed matter physics, discrete path integrals (i.e., circuits) are much more common in topological QEC. Thus, it is natural to look there for candidates of chiral topological fixed-point path integrals. Indeed, subsystem codes based on chiral topological phases exist. Already more than a decade ago, Ref. [16] presented a subsystem code that appears to be in a 3-fermion phase. Recently, subsystem codes based on arbitrary (including chiral) abelian anyon theories have been constructed in Ref. [50] using a mechanism of "gauging out" anyons. A clear definition for the topological
phase of a code can be obtained by applying our formalism in the reverse direction. To this end, we consider a history of measurement outcomes (usually +1 for Pauli codes) that does not require any correction, and take the path integral for this spacetime syndrome. Applying this to chiral subsystem codes yields a discrete path integral which we have good reasons to believe is in a chiral phase. It will be very interesting to see whether these path integrals do genuinely represent chiral phases, and whether one can show their discrete topological invariance.
###### Acknowledgements.
I would like to thank Julio Magdalena de la Fuente, Alex Townsend-Teague, Alexander Nietner, Ansgar Burchards, Jens Eisert, Margarita Davydova, Shankar Balasubramanian, and David Aasen for helpful conversations and comments on the manuscript, and especially Markus Kesselring for fruitful discussions on 3-dimensional tessellations. This work was supported by the DFG (CRC 183 project B01), the BMBF (RealistiQ, QSolid), the Munich Quantum Valley (K-8), and the BMWK (PlanQK).
|
2306.10682
|
Dark-state induced trapping law in single-photon emission from multiple
quantum emitters
|
We study the single-photon collective dynamics in a waveguide system
consisting of the photon channel with a finite bandwidth and an ensemble of
quantum emitters. The size of the volume of these quantum emitters is ignorable
when compared with the wavelength of the radiation photons. Based on the
analytical calculations beyond the Wigner-Weisskopf and Markovian theories, we
present exact solutions to the time evolution of the excited emitters with
collective effects. Different from the trapping effect caused by photon-emitter
bound states, we find that the dark states in the systems lead to a universal
trapping behavior independent of the bosonic bath and the coupling strength
between photons and emitters. Instead, the trapping is solely determined by the
number of initially excited emitters and the total number of emitters. We
demonstrate that such a trapping law can persist even when there are more than
one type of emitters in the system. Our findings lead to the prediction that
single-photon collective emissions can be strongly suppressed if the number of
excited emitters is much less than the total number of emitters in the system.
|
Lei Qiao, Jiangbin Gong
|
2023-06-19T03:06:09Z
|
http://arxiv.org/abs/2306.10682v1
|
# Dark-state induced trapping law in single-photon emission from multiple quantum emitters
###### Abstract
We study the single-photon collective dynamics in a waveguide system consisting of a photon channel with a finite bandwidth and an ensemble of quantum emitters. The volume occupied by these quantum emitters is negligible compared with the wavelength of the radiated photons. Based on analytical calculations beyond the Wigner-Weisskopf and Markovian theories, we present exact solutions for the time evolution of the excited emitters with collective effects. Different from the trapping effect caused by photon-emitter bound states, we find that the dark states in the system lead to a universal trapping behavior independent of the bosonic bath and the coupling strength between photons and emitters. Instead, the trapping is solely determined by the number of initially excited emitters and the total number of emitters. We demonstrate that such a trapping law can persist even when there is more than one type of emitter in the system. Our findings lead to the prediction that single-photon collective emission can be strongly suppressed if the number of excited emitters is much less than the total number of emitters in the system.
## I Introduction
The coupling of quantum emitters (QEs) to a quantized radiation field can bring about drastically different physical phenomena depending on the specific structure of the photon environment. In free space, the dynamics of initially excited QEs typically exhibits exponential decay. By contrast, QEs can undergo coherent emission and reabsorption of photons in a single-mode cavity [1] as a special photon environment. In particular, with the development of new avenues in the integration of QEs with nanophotonic structures, there are now a variety of platforms to investigate the dynamics of QEs coupled with radiation fields with nontrivial electromagnetic dispersions in a confined space. Examples include systems for guided surface plasmons coupled by individual optical emitters [2; 3], photonic nanowire with embedded quantum dots [4], and superconducting transmission line coupled by superconducting qubits [5]. In these systems, the tight confinement of the propagating electromagnetic radiation leads to the enhancement of coupling between the QEs and photons [6], yielding a number of intriguing dynamical phenomena such as persistent quantum beats [7; 8], unidirectional emission [9; 10], single photons by quenching the vacuum [11], and supercorrelated radiance [12].
The interference between coherent radiation channels in an ensemble of QEs results in collective emission [13; 14; 15; 16; 17], as first illustrated by the Dicke superradiance and subradiance [18; 19]. Such collective interactions between QEs and photons play an important part in the various applications of quantum optics such as optical quantum state storage [20; 21; 22], quantum communication [23; 24], and quantum information processing [25; 26]. As one prominent example representing advances in designing and probing light-matter interactions, the collective coupling of a macroscopic number of single-molecule magnets with a microwave cavity mode has recently been realized [27]. It is equally motivating that the large collective Lamb shift of two distant superconducting artificial atoms has also been observed in a superconducting transmission line terminated by a mirror [28].
The new avenues in the integration of QEs with nanophotonic structures stimulate the investigation of physics of photon-QE interactions in one-dimensional waveguide settings that are engineered to have nontrivial dispersion relations with band edges and band gaps [29; 30; 31; 32; 33; 34]. Near band edges or band gaps of the photonic dispersion relation, the group velocity of the propagating photons is greatly reduced or even completely prohibited, triggering new possibilities. It has been demonstrated that the spontaneous emission of an excited atom coupled to the band edge of a photonic crystal reveals non-exponential decay dynamics, with a finite non-decaying excitation fraction exhibiting oscillatory behaviors [35; 36; 37]. This population trapping is due to the presence of localized atom-field bound states with energies outside the band of scattering modes [38]. When it comes to many QEs, the non-decaying fraction of QEs can be attributed to two different trapping mechanisms. One comes from the existence of photon-QE bound states, the other arises from that of dark states with energies equal to the transition frequencies of the QEs [39; 40]. In an ensemble of QEs confined to a small volume compared to the radiated wavelengths, it has recently been pointed out that the emission dynamics contributed by dark states will obey the \((1-1/M)\) trapping law (\(M\) is the total number of QEs) if only one of the QEs is excited initially [39; 40]. It was also previously shown that this kind of population-trapping law is robust in different QE systems.
In this paper, we focus on a more general situation in the single-photon regime, where the initial state, though restricted to the single-photon Hilbert subspace, involves a superposition of excitations from different QEs. Loosely speaking, the initial excitation involves more than one QE. We investigate the ensuing cooperative dynamics based on an analytical analysis beyond the Wigner-Weisskopf and Markovian approximations. A new excitation trapping law is identified. This trapping law does not depend on the specific light-field environment or the coupling strength between QEs and photons. As one direct application of our finding, one can predict that if the total number of QEs is much greater than the number of excited QEs in the initial state, the collective spontaneous emission is strongly suppressed. The trapping properties of more than one type of QEs are also explored, and a similar trapping law is found to persist under certain conditions. Note that throughout the paper, the QEs are assumed to be placed much closer together than the wavelength of the radiation photons, and thus the QEs are effectively coupled to the radiation field without retardation effects.
This paper is organized as follows. In Sec. II, we introduce our model consisting of an assembly of QEs and a coupled-resonator waveguide. In Sec. III, we investigate the single-photon collective dynamics in the presence of one type of QEs. In Sec. IV, the time evolution of excited QEs is also analyzed with different types of QEs participating in the dynamics. Finally, we summarize the results and give our conclusions and discussions in Sec. V.
## II Model
We consider a system consisting of a one-dimensional array of tunnel-coupled resonators. One of the resonators is also directly coupled with different types of two-level QEs. The \(j\)th QE of type \(i\) is assumed to have excited state \(\left|e_{j}^{i}\right\rangle\) and ground state \(\left|g_{j}^{i}\right\rangle\), separated in energy by frequency \(\Omega_{i}\) (we set \(\hbar=1\) throughout). Denoting \(a_{x}\) (\(a_{x}^{\dagger}\)) as the bosonic annihilation (creation) operator for a photon at site \(x\), the tight-binding Hamiltonian of the resonator-photon system can be modeled as
\[H =\sum_{x}\omega_{c}a_{x}^{\dagger}a_{x}+\sum_{x}J\left(a_{x+1}^{ \dagger}a_{x}+a_{x}^{\dagger}a_{x+1}\right)\] \[+\sum_{i}\sum_{j}\Omega_{i}\left|e_{j}^{i}\right\rangle\left\langle e _{j}^{i}\right|\] \[+\sum_{i}\sum_{j}V_{i}\left(\sigma_{j}^{i+}a_{x_{0}}+\sigma_{j}^ {i-}a_{x_{0}}^{\dagger}\right), \tag{1}\]
where \(\omega_{c}\) is the resonance frequency of each resonator. \(J\) represents the hopping energy of photons between two neighbouring lattice sites. Here, \(\sigma_{j}^{i+}=\left|e_{j}^{i}\right\rangle\left\langle g_{j}^{i}\right|\) (\(\sigma_{j}^{i-}=\left|g_{j}^{i}\right\rangle\left\langle e_{j}^{i}\right|\)) is the raising (lowering) operator acting on the \(j\)th QE of type \(i\). \(V_{i}\) is the coupling strength between the waveguide mode at resonator \(x_{0}\) and type-\(i\) QEs. For convenience, we further assume that the lattice constant \(a=1\) throughout. Such coupled-resonator setups have been realized in different platforms, such as the coupled superconducting cavities [41; 42; 43] and the coupled nanocavities in photonic crystals [44]. The typical values for the coupling strength \(V_{i}\) and hopping energy \(J\) go up to a few hundred MHz in these experiments, whereas the frequency \(\Omega_{i}\) can be controlled within a few GHz. The resonator dissipative rate \(\gamma_{c}\) and the emitter dissipative rate \(\gamma_{e}\) are in the kHz regime and are thus much smaller than \(V_{i}\), \(J\) and \(\Omega_{i}\) [45]. This being the case, the system's dissipation can be safely neglected in our theoretical considerations below.
The first two terms in Eq. (1) describe the free photon Hamiltonian and can be diagonalized by introducing the Fourier transform
\[a_{k}=\frac{1}{\sqrt{N}}\sum_{x}e^{-ikx}a_{x}, \tag{2}\]
where \(k\) is the wave number within the first Brillouin zone, \(k\in[-\pi,\pi]\), which becomes continuous in the limit \(N\rightarrow\infty\). In this \(k\)-representation, the free photon Hamiltonian becomes \(\sum_{k}\omega_{k}a_{k}^{\dagger}a_{k}\) with the dispersion \(\omega_{k}=\omega_{c}+2J\cos{(k)}\). This dispersion forms a scattering band centered at \(\omega_{c}\) with bandwidth \(4J\) (\(J>0\)). Such structured modes allow photons to propagate along the waveguide with the group velocity \(v_{g}(k)=-2J\sin{(k)}\), which reaches its extreme values at the center of the band and vanishes at the two band edges. Still using the \(k\)-representation, the Hamiltonian in Eq. (1) can be rewritten as
\[H=\sum_{k}\omega_{k}a_{k}^{\dagger}a_{k}+\sum_{i}\sum_{j}\Omega_{i}\left|e_{j} ^{i}\right\rangle\left\langle e_{j}^{i}\right|+H_{I}, \tag{3}\]
with
\[H_{I}=\sum_{i}\sum_{j,k}\frac{V_{i}}{\sqrt{N}}\left(\sigma_{j}^{i+}e^{ikx_{0} }a_{k}+\sigma_{j}^{i-}e^{-ikx_{0}}a_{k}^{\dagger}\right). \tag{4}\]
This expression of the system Hamiltonian indicates clearly that the QEs are coupled to a finite-width energy band of waveguide modes. From now on, for simplicity of calculation, \(x_{0}\) is set to be the zero point of the \(x\) axis. For the case of only one QE, its spontaneous emission will be strongly suppressed due to the population trapping effect arising from bound states if the QE's frequency is outside the waveguide energy band [32]. On the contrary, when the transition frequency of the QE lies inside the band and far away from the upper and lower edges, the excited QE will undergo an exponential decay if the coupling strength \(V\ll 2J\), while it will exhibit stable Rabi oscillations for a sufficiently long time if the coupling strength \(V\gtrsim 2J\) [29].
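To make the spectral structure of this model concrete, the following minimal sketch (an illustrative addition, not code from the original work; all numerical values are assumptions) builds the single-excitation sector of Eqs. (3) and (4) for one type of QE and diagonalizes it. For the illustrative parameters below one finds a quasi-continuum filling the band of width \(4J\), two levels pushed outside the band (the photon-QE bound states), and \(M-1\) dark levels pinned exactly at \(\Omega\).

```python
# Minimal sketch of the single-excitation sector of Eqs. (3)-(4) for one type of QE;
# all parameter values here are illustrative, not taken from the text.
import numpy as np

N, M = 401, 3                        # waveguide modes and number of QEs
J, V = 0.5, 0.3                      # hopping and coupling strength
omega_c, Omega = 0.0, 0.0            # resonator and QE transition frequencies
x0 = 0                               # coupling site, set to the origin as in the text

k = 2 * np.pi * (np.arange(N) - N // 2) / N          # first Brillouin zone
omega_k = omega_c + 2 * J * np.cos(k)                # dispersion of the band

# basis: N modes a_k^dagger|vac>, followed by M emitter excitations |e_j>
H = np.zeros((N + M, N + M), dtype=complex)
H[:N, :N] = np.diag(omega_k)
H[N:, N:] = Omega * np.eye(M)
g = V / np.sqrt(N) * np.exp(1j * k * x0)             # coupling of Eq. (4)
for j in range(M):
    H[:N, N + j] = g
    H[N + j, :N] = np.conj(g)

evals = np.linalg.eigvalsh(H)
outside = evals[(evals < omega_k.min()) | (evals > omega_k.max())]
dark = np.sum(np.isclose(evals, Omega, atol=1e-9))
print("band edges:", (omega_k.min(), omega_k.max()))
print("levels outside the band (photon-QE bound states):", outside)
print("levels pinned at Omega (dark states):", dark)   # M-1 of them here
```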
## III Dynamics and trapping law with one type of QEs
We first investigate the situation where the waveguide system hosts only one type of QEs. This configuration is
also known as one of the general Fano-Anderson models. In this case, there are two photon-QE bound states with nonzero field amplitudes: one bound state's energy lies above the scattering band and the other's lies below the bottom of the band [39]. The dynamics with only one QE initially excited among \(M\) QEs has been investigated, and a universal trapping law has been found in the population dynamics [40], namely, at long times the population is trapped at \((1-1/M)^{2}\). Here, we explore the situation in which multiple QEs are initially excited, where the initial state is now an entangled state involving multiple QEs and quantum interference between different deexcitation pathways may lead to interesting physics.
To specifically and theoretically investigate the spontaneous emission dynamics, we start from the time-dependent Schrodinger equation
\[i\frac{\partial}{\partial t}\left|\psi\left(t\right)\right\rangle=H\left|\psi \left(t\right)\right\rangle. \tag{5}\]
The time-evolving state \(\left|\psi\left(t\right)\right\rangle\) at time \(t\) can be written as \(\left|\psi\left(t\right)\right\rangle=\sum_{j}b_{j}(t)\left|e_{j},0\right\rangle +\sum_{k}C_{k}(t)\left|g,1_{k}\right\rangle\), where \(b_{j}(t)\) (\(j=1,2,...,M\)) is the excitation amplitude for the \(j\)th QE in this sole type of QEs and \(C_{k}(t)\) is the amplitude for the waveguide mode with wavenumber \(k\). Applying Eq. (5), one obtains the following dynamical equations for the amplitudes
\[i\frac{\partial b_{j}\left(t\right)}{\partial t}=\Omega b_{j}\left(t\right)+ \sum_{k}\frac{V}{\sqrt{N}}C_{k}\left(t\right), \tag{6}\]
\[i\frac{\partial C_{k}\left(t\right)}{\partial t}=\omega_{k}C_{k}\left(t\right) +\sum_{j}\frac{V}{\sqrt{N}}b_{j}\left(t\right). \tag{7}\]
To analytically solve these coupled dynamical equations, one may make use of the well-known Wigner-Weisskopf theory or Markovian theory by neglecting the contributions from any possible bound states. These two treatments can work well in the presence of one QE with the conditions \(\left|\Omega-\omega_{c}\right|\ll 2J\) and \(V\ll 2J\), under which the bound-state trapping regime can be neglected [39]. However, such approximate treatment would not be able to capture a potential population trapping effect as mentioned above. To capture the impact of multiple QEs on population trapping, we must go beyond these approximations. To that end we take a Laplace transform for Eqs. (6) and (7) with \(\tilde{b}_{j}(s)=\int_{0}^{\infty}b_{j}(t)e^{-st}dt\) and \(\tilde{C}_{k}(s)=\int_{0}^{\infty}C_{k}(t)e^{-st}dt\). This yields
\[i\left[-b_{j}\left(0\right)+s\tilde{b}_{j}\left(s\right)\right]=\Omega\tilde{ b}_{j}\left(s\right)+\sum_{k}\frac{V}{\sqrt{N}}\tilde{C}_{k}\left(s\right), \tag{8}\]
\[i\left[-C_{k}\left(0\right)+s\tilde{C}_{k}\left(s\right)\right]=\omega_{k} \tilde{C}_{k}\left(s\right)+\sum_{j}\frac{V}{\sqrt{N}}\tilde{b}_{j}\left(s \right). \tag{9}\]
Without loss of generality, we denote the initially excited QEs by index \(j_{n}\), with \(j_{n}\) going from \(j_{1}\), \(j_{2}\),..., to \(j_{m}\) if there are initially \(m\) QEs excited. All other QEs are in their ground states. Hence, the initial conditions in terms of the initial quantum amplitudes are: \(b_{j_{1}}(0)=...=b_{j_{m}}(0)=1/\sqrt{m}\) (\(m\leqslant M\)), \(b_{j}(0)=0\) (\(j\neq j_{n}\)), and \(C_{k}(0)=0\). After some algebra, we obtain the expression of \(\tilde{b}_{j_{1}}(s)=...=\tilde{b}_{j_{m}}(s)\equiv\tilde{b}_{e}(s)\) with \(\tilde{b}_{e}(s)\) being
\[\tilde{b}_{e}\left(s\right)=i\frac{is-\Omega-\left(M-m\right)V^{2}F\left(s \right)}{\sqrt{m}\left(is-\Omega\right)\left[is-\Omega-MV^{2}F\left(s\right) \right]}, \tag{10}\]
where \(F(s)=(1/N)\sum_{k}1/(is-\omega_{k})\). The time-evolving amplitudes for excited QEs can then be derived by use of the inverse Laplace transform \(b_{j_{n}}(t)=(1/2\pi i)\int_{\sigma-i\infty}^{\sigma+i\infty}\tilde{b}_{j_{n}}(s)e^{st}ds\), with real number \(\sigma\) being sufficiently large so that all the poles are on its left side. Note all the initially excited QEs have the same time-dependent amplitudes due to the chosen initial state. To calculate the integral here, the analytic properties of \(\tilde{b}_{j_{n}}(s)\) are considered in the whole complex plane except a branch cut from \(-i(2J+\omega_{c})\) to \(i(2J-\omega_{c})\) along the imaginary axis. By using the residue theorem [46], we arrive at the exact expressions for \(b_{j_{1}}(t)=...=b_{j_{m}}(t)\equiv b_{e}(t)\) with \(b_{e}(t)\) being
\[b_{e}(t) =\sum_{n}\left.\frac{s+i\Omega+i\left(M-m\right)V^{2}F\left(s \right)}{\sqrt{m}\left[G_{1}\left(s\right)\right]^{\prime}}e^{st}\right|_{s= \varepsilon_{n}}\] \[+\int_{-1}^{1}\frac{4\sqrt{m}V^{2}J^{2}\sqrt{1-y^{2}}e^{i2Jyt}}{L \left(y\right)+\pi M^{2}V^{4}}dy, \tag{11}\]
where
\[G_{1}(s)=(s+i\Omega)G_{0}(s) \tag{12}\]
with
\[G_{0}(s)=s+i\Omega+iMV^{2}F(s) \tag{13}\]
Figure 1: Time evolution of the magnitude of \(b_{e}(t)\), the amplitude on the excited QEs with \(m=3\). The time is in units of \(1/(2J)\). Other parameters are \(V/(2J)=0.08\) and \(M=3\). Here and in all other figures, the plotted results are computed directly from our analytical results that have been also confirmed by numerical simulations based on the time-dependent Schrödinger equation.
and \(L(y)\) is defined as
\[L(y)=4\pi J^{2}(1-y^{2})(2Jy+\Omega)^{2}. \tag{14}\]
Here, \([G_{1}\left(s\right)]^{\prime}\) represents the derivative of \(G_{1}(s)\) with respect to \(s\), and \(\varepsilon_{n}\) are the roots of the equation \(G_{1}(s)=0\). These roots can be divided into two kinds. The first kind are the solutions of \(G_{0}(s)=0\); they are purely imaginary numbers, with their imaginary parts corresponding to minus the eigenenergies of the localized photon-QE bound states [39]. The additional root is \(s=-i\Omega\), which corresponds to the energy of the dark states. In fact, according to the analysis using a complete basis expansion based on the Green's function method [34], the terms with \(s\) given by the solutions of the equation \(G_{0}(s)=0\) in Eq. (13) come from the contribution of the system's photon-QE bound states. The second line in Eq. (11) (which becomes zero in the limit of \(t\rightarrow\infty\)) arises from the contribution of the system's scattering states. When the number of initially excited QEs is equal to the total number of QEs, i.e., \(m=M\), one obtains \(b_{e}(\infty)=\sum_{n}e^{st}/\{\sqrt{M}[G_{0}(s)]^{\prime}\}|_{t\rightarrow\infty,\;s=\varepsilon_{n}^{\prime}}\), where \(\varepsilon_{n}^{\prime}\) are the solutions of \(G_{0}(s)=0\). The purely imaginary roots \(\varepsilon_{n}^{\prime}\) reveal that the populations on the excited QEs are fractionally trapped when \(t\rightarrow\infty\).
In Fig. 1, we plot the time dependence of \(b_{e}(t)\) with \(m=M=3\) and different detunings \(\Delta=\Omega-\omega_{c}\). It can be seen that a larger fraction of the population is trapped at long times as the transition frequency \(\Omega\) shifts away from the frequency \(\omega_{c}\) of a single resonator, and the spontaneous emission is almost totally suppressed when \(\Omega\) is far away from the energy band. Exploring many examples, we find that only under the conditions \(\sqrt{M}V\ll 2J\) and \(|\Omega-\omega_{c}|<J\), for which the first term in Eq. (11) can be ignored, is the emission of the QEs nearly complete, displaying an essentially exponential decay with a slowly changing radiation rate as \(\Omega\) varies from \(\omega_{c}\pm J\) to \(\omega_{c}\). For such a case, \(|b_{e}(t)|^{2}\) can be approximately calculated as \(|b_{e}(t)|^{2}\approx(1/M)e^{-\Gamma_{s}(\Delta)t}\), with a decay rate \(\Gamma_{s}(\Delta)=2\pi MV^{2}D(\Delta)\), where \(D(\Delta)\) is the density of states of the free-photon Hamiltonian. That is, only in such situations is the spontaneous emission dynamics well approximated by the Wigner-Weisskopf and Markovian theories [47; 48]. Note also that for \(\Delta=0\), \(D(\Delta)\) attains its extremum, and \(\Gamma_{s}(0)=MV^{2}/J\), which is \(M\) times the radiation rate for the case of only one QE. This is precisely what a standard superradiance theory predicts.
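As a quick consistency check (our own addition, not a step carried out in the original text), the zero-detuning rate quoted above follows from the per-site density of states of the cosine band \(\omega_{k}=\omega_{c}+2J\cos k\):

\[D(\Delta)=\int_{-\pi}^{\pi}\frac{dk}{2\pi}\,\delta\!\left(\Delta-2J\cos k\right)=\frac{1}{\pi\sqrt{4J^{2}-\Delta^{2}}},\qquad \Gamma_{s}(0)=2\pi MV^{2}D(0)=2\pi MV^{2}\cdot\frac{1}{2\pi J}=\frac{MV^{2}}{J},\]

which is indeed \(M\) times the single-QE rate \(2\pi V^{2}D(0)=V^{2}/J\); within the band, \(D(\Delta)\) takes its smallest value at \(\Delta=0\) and diverges at the band edges.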
Consider next what happens if the number of initially excited QEs is less than the total number of QEs, i.e., \(m<M\). For \(M\geqslant 2\), there are not only nonlocalized scattering states and localized photon-QE bound states, but also degenerate dark states with energy \(E=\Omega\) [49; 39; 40]. These dark states have a specific property: due to collective interference effects, only the QEs carry excitation in these states, and the excitation amplitudes of the photon field modes are all zero. Therefore, both such dark states and photon-QE bound states now play a role in the spontaneous emission dynamics. Under the conditions \(\sqrt{M}V\ll 2J\) and \(|\Omega-\omega_{c}|<J\), for which the contributions from the photon-QE bound states are much smaller than those of the dark states, the final values of \(|b_{e}(t)|\) are found to depend only on the number \(m\) of initially excited QEs and the total number \(M\) of QEs. Specifically, for sufficiently long times (the second line of Eq. (11) can be dropped due to the highly oscillatory integral there) and upon neglecting the contributions from the roots of \(G_{0}(s)=0\), which represent the photon-QE bound states, Eq. (11) reduces to
\[b_{e}(t) \approx \sum_{n}\left.\frac{s+i\Omega+i\left(M-m\right)V^{2}F\left(s \right)}{\sqrt{m}\left[G_{1}\left(s\right)\right]^{\prime}}e^{st}\right|_{s= \varepsilon_{n}} \tag{15}\] \[\approx \left.\frac{s+i\Omega+i\left(M-m\right)V^{2}F\left(s\right)}{ \sqrt{m}\left[G_{1}\left(s\right)\right]^{\prime}}e^{st}\right|_{s=-i\Omega}\] \[= \frac{M-m}{\sqrt{m}M}e^{-i\Omega t}\]
This is one main result of this work.
In Fig. 2 (a), we plot our purely theoretical result for \(|b_{e}(t)|\), assuming that two QEs are initially excited, for different values of \(M\), the total number of QEs in the system. When the elapsed time is long enough, the final value of \(|b_{e}(t)|\) during the emission dynamics is
Figure 2: Time evolution of the magnitude of the excited-state amplitude \(b_{e}(t)\) with different QE number \(M\) for (a) \(m=2\) and (b) \(m=3\). Other parameters are \(\Delta/(2J)=0\) and \(V/(2J)=0.07\). Time is in units of \(1/(2J)\).
stabilized at \((M-2)/(\sqrt{2}M)\). Similarly, for the case where three QEs are initially excited, the amplitudes \(b_{j_{1}}(t)=b_{j_{2}}(t)=b_{j_{3}}(t)\equiv b_{e}(t)\) at long times are found to stabilize at \(\left|b_{e}(\infty)\right|=(M-3)/(\sqrt{3}M)\) without further decay, as shown in Fig. 2 (b). Note again that the plotted results are computed directly from our analytical results derived above and have also been confirmed by our numerical results based on the time-dependent Schrödinger equation. These specific results hence clearly illustrate our main theoretical prediction. For \(m=1\), \(\left|b_{e}(\infty)\right|\) reduces to the previous result already studied in the context of a vacuum photonic bath, a photonic crystal, and a coupled-resonator waveguide [40]. It is worth noting that Eq. (15) also includes the result for \(m=M\): no trapping happens in this case and the initial energy of all QEs is fully released, as also illustrated in Fig. 2 (a) and (b). Physically, this is because the initial excited states with \(M=m\) are orthogonal to the dark states and, as such, the presence of dark states cannot arrest the spontaneous decay. From the viewpoint of prolonging the lifetime of the QEs, we can see that under the condition \(m\ll M\), our theory above predicts that the spontaneous emission of the QEs will be greatly suppressed.
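The numerical confirmation mentioned above can be reproduced with a short, self-contained sketch (an illustrative addition with assumed parameter values, not the original authors' code) that propagates the single-excitation Schrödinger equation, equivalently Eqs. (6) and (7), by exact diagonalization on a finite ring of \(N\) resonators and compares the late-time amplitude with Eq. (15):

```python
# Sketch of the trapping-law check of Eq. (15) by exact diagonalization of the
# single-excitation sector; parameter values are illustrative assumptions.
import numpy as np

N, M, m = 801, 6, 2                            # waveguide modes, total QEs, excited QEs
J, V, omega_c, Omega = 0.5, 0.05, 0.0, 0.0     # sqrt(M)*V << 2J and Delta = 0
k = 2 * np.pi * (np.arange(N) - N // 2) / N
omega_k = omega_c + 2 * J * np.cos(k)

# basis: N waveguide modes followed by M emitters, all coupled at x_0 = 0
H = np.zeros((N + M, N + M))
H[:N, :N] = np.diag(omega_k)
H[N:, N:] = Omega * np.eye(M)
H[:N, N:] = V / np.sqrt(N)
H[N:, :N] = V / np.sqrt(N)

psi0 = np.zeros(N + M)
psi0[N:N + m] = 1 / np.sqrt(m)                 # m of the M QEs share one excitation

w, U = np.linalg.eigh(H)
t = 600.0                                      # >> 1/Gamma_s ~ J/(M V^2), << finite-size revival ~ N/(2J)
psi_t = U @ (np.exp(-1j * w * t) * (U.T @ psi0))
print("|b_e(t)| from the dynamics:", abs(psi_t[N]))
print("(M - m)/(sqrt(m) M)       :", (M - m) / (np.sqrt(m) * M))
```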
## IV Dynamics and trapping law with two types of QEs
We now investigate the properties of the dynamics in the waveguide system with two different types of QEs, indexed by \(A\) and \(B\). Unlike the case with one sole type of QEs, where there are always two photon-QE bound states, the energy-level structure of the system with two types of QEs can undergo certain transitions when some system parameters change [39], such as the QE numbers \(M_{A}\) and \(M_{B}\) or the coupling strengths \(V_{A}\) and \(V_{B}\). When overall only one QE is initially excited (without loss of generality, assuming that the excited QE belongs to type \(A\)), it was previously found that the asymptotic value of the magnitude of the quantum amplitude of the QE is given by \(1-1/M_{A}\) [39]. Encouraged by our results from the previous section, here we wish to examine whether a similar trapping law holds if more than one QE, all belonging to the same type, is initially excited. One main complication in answering this question is that the two types of QEs can interact strongly with each other through the waveguide system. As such, it is necessary to find under what theoretical conditions the decay dynamics can still exhibit such a trapping law. A violation of the trapping law sought here would give a strong indication of the interplay between the different types of QEs.
Let us now proceed with our theoretical framework. In the single excitation subspace, the time-evolving state at time \(t\) can be written as
\[\left|\varphi\left(t\right)\right\rangle=\sum_{i}\sum_{j}b_{j}^{i}\left(t \right)\left|e_{j}^{i},0\right\rangle+\sum_{k}C_{k}\left(t\right)|g,1_{k} \rangle\,, \tag{16}\]
where \(b_{j}^{i}(t)\) (\(i=A,B\)) is the excitation amplitude of the system's state for the \(j\)th QE of type \(i\), with no photon in the waveguide. \(C_{k}(t)\) is the amplitude for the state that all QEs are in their ground states and there is a photon with wavenumber \(k\). Plugging \(|\varphi(t)\rangle\) into the Schrodinger equation \(i\partial\left|\varphi\left(t\right)\right\rangle/\partial t=H\left|\varphi \left(t\right)\right\rangle\), one can obtain the following coupled equations for \(b_{j}^{i}\left(t\right)\) and \(C_{k}\left(t\right)\)
\[i\frac{\partial}{\partial t}b_{j}^{i}\left(t\right)=\Omega_{i}b_{j}^{i}\left( t\right)+\sum_{k}\frac{V_{i}}{\sqrt{N}}C_{k}\left(t\right), \tag{17}\]
\[i\frac{\partial}{\partial t}C_{k}\left(t\right)=\omega_{k}C_{k}\left(t\right) +\sum_{i}\sum_{j}\frac{V_{i}}{\sqrt{N}}b_{j}^{i}\left(t\right). \tag{18}\]
Similar to the steps in the case of one type of QEs, one can take a Laplace transform for Eqs. (17) and (18) with \(\tilde{b}_{j}^{i}(s)=\int_{0}^{\infty}b_{j}^{i}(t)e^{-st}dt\) and \(\tilde{C}_{k}(s)=\int_{0}^{\infty}C_{k}(t)e^{-st}dt\), which leads to
\[i\left[-b_{j}^{i}\left(0\right)+s\tilde{b}_{j}^{i}\left(s\right)\right]=\Omega _{i}\tilde{b}_{j}^{i}\left(s\right)+\sum_{k}\frac{V_{i}}{\sqrt{N}}\tilde{C}_{ k}\left(s\right), \tag{19}\]
\[i\left[-C_{k}\left(0\right)+s\tilde{C}_{k}\left(s\right)\right]=\omega_{k} \tilde{C}_{k}\left(s\right)+\sum_{i}\sum_{j}\frac{V_{i}}{\sqrt{N}}\tilde{b}_{j }^{i}\left(s\right). \tag{20}\]
Let us now assume that initially only one type of QEs is excited and, without loss of generality, that the excited QEs, denoted by index \(j_{n}\), are of type \(A\). We further assume that in total \(m_{A}\) QEs of type \(A\) are initially excited and all other QEs are in their ground state, with amplitudes \(b_{j_{1}}^{A}(0)=...=b_{j_{m_{A}}}^{A}(0)=1/\sqrt{m_{A}}\) (\(m_{A}\leqslant M_{A}\)), \(b_{j}^{A}(0)=0\) (\(j\neq j_{n}\)), \(b_{j}^{B}(0)=0\), and \(C_{k}(0)=0\). After some necessary algebraic operations with Eqs. (19) and (20), one arrives at the expression \(\tilde{b}_{j_{1}}^{A}(s)=...=\tilde{b}_{j_{m_{A}}}^{A}(s)\equiv\tilde{b}_{e}^{A}(s)\) with \(\tilde{b}_{e}^{A}(s)\) being
\[\tilde{b}_{e}^{A}\left(s\right) =i\frac{K_{A}\left(s\right)K_{B}\left(s\right)+m_{A}V_{A}^{2}F \left(s\right)K_{B}\left(s\right)}{\sqrt{m_{A}}\left(is-\Omega_{A}\right)Y \left(s\right)}\] \[-i\frac{\left(M_{A}-m_{A}\right)M_{B}\left[V_{A}V_{B}F\left(s \right)\right]^{2}}{\sqrt{m_{A}}\left(is-\Omega_{A}\right)Y\left(s\right)} \tag{21}\]
where
\[K_{i}(s)=is-\Omega_{i}-M_{i}V_{i}^{2}F(s), \tag{22}\]
and
\[Y(s)=K_{A}(s)K_{B}(s)-M_{A}M_{B}[V_{A}V_{B}F(s)]^{2}. \tag{23}\]
Just like what was done in the previous section, the time dependence of the excited QEs can be calculated by the inverse Laplace transform. Because \(\tilde{b}_{j_{n}}^{A}(s)\) is an analytic function in the whole complex plane except a branch cut from \(-i(2J+\omega_{c})\) to \(i(2J-\omega_{c})\) along the imaginary axis, the exact expressions of the excited amplitudes \(b_{j_{1}}^{A}(t)=
\(...=b_{j_{m_{A}}}^{A}(t)\equiv b_{e}^{A}(t)\) can be acquired by using the residue theorem [46] and \(b_{e}^{A}(t)\) is obtained as
\[b_{e}^{A}(t) =\left.\frac{\left(M_{A}-m_{A}\right)}{\sqrt{m_{A}}M_{A}}e^{st} \right|_{s=-i\Omega_{A}}\] \[-\sum_{n}\left.\frac{\left(s+i\Omega_{B}\right)m_{A}V_{A}^{2}F \left(s\right)}{\sqrt{m_{A}}\left(is-\Omega_{A}\right)\left[G_{2}\left(s\right) \right]^{\prime}}e^{st}\right|_{s=\bar{\varepsilon}_{n}}\] \[+\left.\sum_{\alpha=\pm 1}\int_{-1}^{1}\frac{J\left(2Jy+\Omega_{B }\right)m_{A}V_{A}^{2}f\left(y\right)e^{i2Jyt}}{\pi\sqrt{m_{A}}\left(2Jy+ \Omega_{A}\right)Z_{\alpha}\left(y\right)}dy \tag{24}\]
where
\[G_{2}(s) = (is-\Omega_{A})(is-\Omega_{B})-(is-\Omega_{A})M_{B}V_{B}^{2}F(s) \tag{25}\] \[-(is-\Omega_{B})M_{A}V_{A}^{2}F(s)\]
and \(\bar{\varepsilon}_{n}\) are the roots of the equation \(G_{2}(s)=0\). We stress that \(G_{2}(-iE)=0\) can be used to determine the eigenenergies \(E\) of the localized photon-QE bound states [39]. Because \(G_{2}(s)\) involves physical properties of both types of QEs, in general one anticipates that the photon-QE bound states here can lead to rather complicated population trapping behavior [35]. As for the third term in Eq. (24), which is the contribution of the scattering states, it contains the functions \(f(y)\) and \(Z_{\pm}(y)\) defined as \(f(y)=1/(2J\sqrt{1-y^{2}})\) and \(Z_{\pm}(y)=(2Jy+\Omega_{A})(2Jy+\Omega_{B})\pm i[(2Jy+\Omega_{A})M_{B}V_{B}^{2}+(2Jy+\Omega_{B})M_{A}V_{A}^{2}]f(y)\). As expected, this term again contains a highly oscillatory factor \(e^{i2Jyt}\), which suppresses it in the long-time dynamics.
Despite the complicated contributions from the photon-QE bound states involving two types of QEs, what we learned from the previous section is that there is still a wide parameter regime where we may focus on the contributions of the dark states only, with the bound-state contributions being negligible. That is, if the magnitude of \((s+i\Omega_{B})m_{A}V_{A}^{2}F(s)/\{\sqrt{m_{A}}(is-\Omega_{A})[G_{2}(s)]^{\prime}\}|_{s=\bar{\varepsilon}_{n}}\) is sufficiently small, which may be satisfied, e.g., under the condition \(V_{A}\ll 2J\), the asymptotic amplitude \(b_{e}^{A}(t)\) can be easily identified as well, namely,
\[\left|b_{e}^{A}(\infty)\right|=\frac{M_{A}-m_{A}}{\sqrt{m_{A}}M_{A}}. \tag{26}\]
Interestingly, one sees that \(|b_{e}^{A}(\infty)|\) is only related to the number \(m_{A}\) of initially excited emitters and the total number \(M_{A}\) of emitters of type \(A\), thus still exhibiting a simple trapping law of the emission dynamics. In Fig. 3 (a), we plot the theoretical time evolution of \(|b_{e}^{A}(t)|\) with different numbers \(m_{A}\) of initially excited QEs for \(M_{A}=5\). It is seen that \(|b_{e}^{A}(t)|\) asymptotically approaches \((M_{A}-m_{A})/(\sqrt{m_{A}}M_{A})\). On top of this remarkably simple behavior, \(|b_{e}^{A}(t)|\) is seen to be stabilized or trapped, but with some small-amplitude oscillations. These oscillations can be traced back to the contribution from the above-neglected photon-QE bound states, associated with the second term in Eq. (24). One would imagine that if the coupling strength \(V_{A}\) is tuned to be smaller, so that the theoretical condition \((s+i\Omega_{B})m_{A}V_{A}^{2}F(s)/\{\sqrt{m_{A}}(is-\Omega_{A})[G_{2}(s)]^{\prime}\}|_{s=\bar{\varepsilon}_{n}}\ll 1\) is better satisfied, then these oscillations will become less pronounced.
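The exact-diagonalization sketch shown earlier extends directly to two types of QEs and reproduces the value predicted by Eq. (26). The snippet below is again an illustrative addition (not the authors' code); the couplings are chosen weaker than in Fig. 3 so that the bound-state oscillations are negligible, while the detunings and QE numbers match the figure caption.

```python
# Extension of the previous sketch to two types of QEs, checking Eq. (26);
# coupling strengths here are illustrative and weaker than those used in Fig. 3.
import numpy as np

N = 801
J, omega_c = 0.5, 0.0
MA, mA, MB = 5, 2, 2
OmegaA, OmegaB = 0.3, 0.2                       # Delta_A/(2J) = 0.3, Delta_B/(2J) = 0.2
VA, VB = 0.04, 0.04
k = 2 * np.pi * (np.arange(N) - N // 2) / N
omega_k = omega_c + 2 * J * np.cos(k)

dim = N + MA + MB                               # basis: modes, type-A QEs, type-B QEs
H = np.zeros((dim, dim))
H[:N, :N] = np.diag(omega_k)
H[N:N + MA, N:N + MA] = OmegaA * np.eye(MA)
H[N + MA:, N + MA:] = OmegaB * np.eye(MB)
H[:N, N:N + MA] = VA / np.sqrt(N)
H[N:N + MA, :N] = VA / np.sqrt(N)
H[:N, N + MA:] = VB / np.sqrt(N)
H[N + MA:, :N] = VB / np.sqrt(N)

psi0 = np.zeros(dim)
psi0[N:N + mA] = 1 / np.sqrt(mA)                # only m_A type-A QEs start excited

w, U = np.linalg.eigh(H)
t = 600.0
psi_t = U @ (np.exp(-1j * w * t) * (U.T @ psi0))
print("|b_e^A(t)| from the dynamics :", abs(psi_t[N]))
print("(M_A - m_A)/(sqrt(m_A) M_A)  :", (MA - mA) / (np.sqrt(mA) * MA))
```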
As a final interesting check to motivate future studies, let us now investigate the emission dynamics via \(|b_{e}^{A}(t)|\) in Fig. 3 (b) with the condition \(V_{B}/(2J)=0.6\). Under this stronger coupling of the type-B emitters, the role of the second term in Eq. (24), i.e., the contributions from the photon-QE bound states, can no longer be neglected. Indeed, as we see from the actual results, the previously identified trapping law is completely broken due to the strong interplay between type-A and type-B QEs.
We conclude this section with more qualitative discussions. Due to the presence of two different types of QEs (the QEs being identical within each type), there are two types of degenerate dark states. Dark states due to type-A QEs have energy \(E=\Omega_{A}\), whereas the other type of dark states has energy \(E=\Omega_{B}\). However, because of the orthogonality of these two different types of dark states, only the dark states with \(E=\Omega_{A}\) make a difference to the emission dynamics if only type-A QEs are initially excited. Nevertheless, this simple picture is valid only if the impact of population
Figure 3: Time evolution of the magnitude of the excited-state amplitude \(b_{e}^{A}(t)\) with different number \(m_{A}\) of initially excited QEs in type \(A\) for (a) \(V_{B}/(2J)=0.1\) and (b) \(V_{B}/(2J)=0.6\). Other parameters are: \(\Delta_{A}/(2J)=0.3\), \(\Delta_{B}/(2J)=0.2\), \(V_{A}/(2J)=0.1\), \(M_{A}=5\) and \(M_{B}=2\). The time is in units of \(1/(2J)\).
trapping from the photon-QE bound states is negligible. In the parameter regime where the population trapping law still persists, it is seen that under the condition \(m_{A}\ll M_{A}\), the spontaneous emission from type-A QEs can be greatly inhibited by the presence of dark states.
## V Discussion and conclusion
We have studied the single-photon collective emission dynamics in a one-dimensional waveguide array system. Assuming that the size of the ensemble of QEs is much smaller than the wavelength of the radiation field, we have neglected the spatial separations between the QEs. Our model system supports stable subradiant states composed of dark states that preserve the collective excitation of QEs. Unlike the trapping regime caused by the photon-QE bound states, we find that the long-time emission dynamics of the subradiant states can be characterized by a unified population trapping law. This trapping law has nothing to do with the dispersion of the bosonic bath or the coupling strength between the photon field and the QEs. Instead, it is only related to the number of initially excited QEs and the total number of QEs. When more than one type of QE is present, a similar trapping law persists if the effect of the composite photon-QE bound state can be neglected.
Finally, we discuss a possible experimental platform consisting of transmon qubits and coupled superconducting resonators, which has been realized in recent years [41; 42; 43; 50; 51; 52]. In such systems, the hopping energy is \(J\approx 20\)-\(730(2\pi)\) MHz. The qubit-resonator coupling strength \(V\) is in the range of 5-\(300(2\pi)\) MHz. Thus the key parameter regime \(V/(2J)\ll 1\) can be achieved with existing technology. The frequencies of transmon qubits can be controlled in the range of 1-10(2\(\pi\)) GHz [53; 5], which is similar to the range of the resonance frequency of each resonator, hence being sufficient to yield the small detunings \(\Delta\), or near-resonance conditions, considered in this work.
|
2304.06699
|
Interpolated kilonova spectra models: necessity for a phenomenological,
blue component in the fitting of AT2017gfo spectra
|
In this work, we present a simple interpolation methodology for spectroscopic
time series, based on conventional interpolation techniques (random forests)
implemented in widely-available libraries. We demonstrate that our existing
library of simulations is sufficient for training, producing interpolated
spectra that respond sensitively to varied ejecta parameter, post-merger time,
and viewing angle inputs. We compare our interpolated spectra to the AT2017gfo
spectral data, and find parameters similar to our previous inferences using
broadband light curves. However, the spectral observations have significant
systematic short-wavelength residuals relative to our models, which we cannot
explain within our existing framework. Similar to previous studies, we argue
that an additional blue component is required. We consider a radioactive
heating source as a third component characterized by light, slow-moving,
lanthanide-free ejecta with $M_{\rm th} = 0.003~M_\odot$, $v_{\rm th} = 0.05$c,
and $\kappa_{\rm th} = 1$ cm$^2$/g. When included as part of our radiative
transfer simulations, our choice of third component reprocesses blue photons
into lower energies, having the opposite effect and further accentuating the
blue-underluminosity disparity in our simulations. As such, we are unable to
overcome short-wavelength deficits at later times using an additional
radioactive heating component, indicating the need for a more sophisticated
modeling treatment.
|
Marko Ristic, Richard O'Shaughnessy, V. Ashley Villar, Ryan T. Wollaeger, Oleg Korobkin, Chris L. Fryer, Christopher J. Fontes, Atul Kedia
|
2023-04-13T17:52:28Z
|
http://arxiv.org/abs/2304.06699v2
|
Interpolated kilonova spectra models: necessity for a phenomenological, blue component in the fitting of AT2017gfo spectra
###### Abstract
In this work, we present a simple interpolation methodology for spectroscopic time series, based on conventional interpolation techniques (random forests) implemented in widely-available libraries. We demonstrate that our existing library of simulations is sufficient for training, producing interpolated spectra that respond sensitively to varied ejecta parameter, post-merger time, and viewing angle inputs. We compare our interpolated spectra to the AT2017gfo spectral data, and find parameters similar to our previous inferences using broadband light curves. However, the spectral observations have significant systematic short-wavelength residuals relative to our models, which we cannot explain within our existing framework. Similar to previous studies, we argue that an additional blue component is required. We consider a radioactive heating source as a third component characterized by light, slow-moving, lanthanide-free ejecta with \(M_{\rm th}=0.003~{}M_{\odot}\), \(v_{\rm th}=0.05\)c, and \(\kappa_{\rm th}=1\) cm\({}^{2}\)/g. When included as part of our radiative transfer simulations, our choice of third component reprocesses blue photons into lower energies, having the opposite effect and further accentuating the blue-underluminosity disparity in our simulations. As such, we are unable to overcome short-wavelength deficits at later times using an additional radioactive heating component, indicating the need for a more sophisticated modeling treatment.
## I Introduction
The detection of the joint gravitational- and electromagnetic-wave emission from binary neutron star merger GW170817 [38] and its electromagnetic counterpart AT2017gfo [37] has initiated an era of precision kilonova observations. Several studies interpreted the observations of AT2017gfo shortly after detection by comparing to simple kilonova models [9; 29; 39] consisting of one or more groups of homologously-expanding material. Motivated both by binary merger simulations and the inability to fit observations with one component, at least two components are customarily employed, with properties loosely associated with two expected features of merger simulations: promptly ejected material (the "dynamical" ejecta), associated with tidal tails or shocked material at contact; and material driven out on longer timescales by properties of the remnant system (the "wind" ejecta) [34]. However, many of these simple kilonova models lack important physical features expected from neutron star merger simulations, including full radiative transfer and opacities, as well as anisotropic outflow and emission. More recent modeling efforts increasingly incorporate these features, including sophisticated treatments of relevant kilonova microphysics [7; 8; 16; 32]. Due to the high simulation cost, many groups have resorted to surrogate models for the kilonova outflow, to reduce the computational cost associated with inference with these more complex models [3; 14; 22; 33].
Despite the increasingly sophisticated models being brought to bear to interpret AT2017gfo, the shorter-wavelength \(g\)-band flux that was observed in AT2017gfo cannot be easily described using only a conventional two-component model [16; 17; 33; 4]. While a "third component" could resolve this underluminosity, as yet many physical processes are being investigated to drive such an outflow and thereby specify how its properties relate to other system parameters, including ejecta shock breakout [26] and central engine sources [21; 23; 25; 31; 45; 47]. Of course, this underluminosity could also in part reflect insufficiently well-understood kilonova systematics; see, e.g., [17; 40; 48; 6].
Most interpretations of kilonova observations have relied on broadband photometry, in part owing to the relative sparsity of available spectra for AT2017gfo (and other kilonovae). Fast interpolated models for (anisotropic) kilonova spectra, computed with state-of-the-art opacities, could provide a new avenue to resolve key uncertainties about AT2017gfo and other kilonovae. Several recent projects have demonstrated the high potential return of comparing AT2017gfo to kilonova spectral models [12; 36]. In this work, we present a detailed interpolation scheme for kilonova spectra which allows for continuous spectral modeling across time and viewing angle. We showcase our ability to produce interpolated spectra outputs at various ejecta parameters, times, and angles. In accordance with previous studies, we identify the need for a third component in order to partially match our model's \(g\)-band spectral energy density to that of AT2017gfo. Our method can be easily applied to any modestly-sized archive of adaptively-learned astrophysical transient spectra simulations.
The paper is organized as follows. Section II discusses our simulation training library and associated spectra interpolation methodology. In Section III, we compare our interpolated spectra to those observed for the kilonova AT2017gfo and present the best-fitting ejecta parameters that reproduce the AT2017gfo spectra assuming a two-component model. In Section IV, we explore the effects of including a third, low-opacity component to supplement shorter wavelength (\(g\)-band) flux in our simulations. We summarize our findings in Section V.
## II Interpolation methodology
### Simulation Description
Unless noted otherwise, we consider a two-component kilonova model with a lanthanide-rich equatorial dynamical ejecta component and a lanthanide-poor axial wind ejecta component as described in [20; 46] and motivated by numerical simulations [15; 34]. Each component is parameterized by a mass and velocity such that \(M_{\text{d}}\), \(v_{\text{d}}\) and \(M_{\text{w}}\), \(v_{\text{w}}\) describe the dynamical and wind components' masses and velocities, respectively. The morphology for the dynamical component is an equatorially-centered torus, whereas the wind component is represented by an axially-centered peanut component; Figure 1 of [46] displays the torus-peanut, or "TP," schematic corresponding to the morphologies employed in this work [see 20; for detailed definition]. The lanthanide-rich dynamical ejecta is a result of the \(r\)-process nucleosynthesis from a neutron-rich material with a low electron fraction (\(Y_{e}\equiv n_{\text{p}}/(n_{\text{p}}+n_{\text{n}})\)) of \(Y_{e}=0.04\) with elements reaching the third \(r\)-process peak (\(A\sim 195\)), while the wind ejecta originates from higher \(Y_{e}=0.27\) which encapsulates elements between the first (\(A\sim 80\)) and second (\(A\sim 130\)) \(r\)-process peaks. The detailed breakdown of the elements in each component can be found in Table 2 of [46].
We use SuperNu, a Monte Carlo code for simulation of time-dependent radiation transport with matter in local thermodynamic equilibrium, to create simulated kilonova spectra \(F_{\lambda,\text{sim}}\) assuming the aforementioned two-component model [43]. Both components are assumed to have fixed composition and morphology for the duration of each simulation. SuperNu uses radioactive power sources calculated from decaying the \(r\)-process composition from the WinNet nuclear reaction network [19; 41]. These radioactive heating contributions are also weighted by thermalization efficiencies introduced in [5] [see 44, for a detailed description of the adopted nuclear heating]. We use detailed opacity calculations via the tabulated, binned opacities generated with the Los Alamos suite of atomic physics codes [10; 11; 27]. Our tabulated, binned opacities are not calculated for all elements; therefore, we produce opacities for representative proxy elements by combining pure-element opacities of nuclei with similar atomic properties [10]. Specifics of the representative elements for our composition are given in [46].
The SuperNu outputs are anisotropic simulated spectra \(F_{\lambda,\text{sim}}\), post-processed to a source distance of 10 pc, in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). The spectra are binned into 1024 equally log-spaced wavelength bins spanning \(0.1\leq\lambda\leq 12.8\) microns. For the purposes of this work, we consider the spectral evolution across 60 equally log-spaced times between 0.125 and 20.75 days post-merger. However, many of the spectra in our training library extend out to even later times. As we only consider anisotropic simulations in this study, we extract simulated spectra using 54 angular bins, uniformly spaced as \(-1\leq\cos\theta\leq 1\) for the angle \(\theta\) between the line of sight and the symmetry axis.
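For concreteness, the output grids just described can be reconstructed in a few lines of numpy (a sketch only; the exact bin-edge and bin-center conventions used by SuperNu are our assumptions):

```python
# Reconstruction of the stated output grids; edge/center conventions are assumed.
import numpy as np

# 1024 equally log-spaced wavelength bins spanning 0.1--12.8 microns
wav_edges = np.logspace(np.log10(0.1), np.log10(12.8), 1024 + 1)
wav_centers = np.sqrt(wav_edges[:-1] * wav_edges[1:])

# 60 equally log-spaced times between 0.125 and 20.75 days post-merger
times = np.logspace(np.log10(0.125), np.log10(20.75), 60)

# 54 angular bins uniformly spaced in cos(theta) over [-1, 1]
cos_edges = np.linspace(-1.0, 1.0, 54 + 1)
theta_deg = np.degrees(np.arccos(0.5 * (cos_edges[:-1] + cos_edges[1:])))

print(wav_centers.size, times.size, theta_deg.size)   # 1024, 60, 54
```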
### Training Set Generation
The following describes the approach taken to generate the simulation library in [33]. Our training library of 412 kilonova spectra and light-curve simulations was constructed using iterative simulation placement guided by Gaussian process variance minimization. In previous work, we focused solely on light-curve interpolation; as such, new simulations were placed with parameter combinations that were identified as having the largest bolometric luminosity variance by our Gaussian process regression approach. In other words, we placed new simulations in regions of parameter space where our bolometric luminosity interpolation root-mean-square uncertainty was largest. Equation 1 shows the Gaussian process variance \(s(\vec{x})^{2}\)
\[s(\vec{x})^{2}=k(\vec{x},\vec{x})-k(\vec{x},\vec{x}_{a})k(\vec{x}_{a},\vec{x} _{a^{\prime}})^{-1}_{aa^{\prime}}k(\vec{x}_{a^{\prime}},\vec{x}) \tag{1}\]
where \(\vec{x}\) is the vector of input parameters, \(\vec{x}_{a}\) is the training data vector, \(s(\vec{x})^{2}\) is the variance of the Gaussian process prediction, the function \(k(\vec{x},\vec{x}^{\prime})\) is the kernel of the Gaussian process, and the indices \(a,a^{\prime}\) are used to calculate the covariance between inputs \(\vec{x}\) and training data \(\vec{x}_{a},\vec{x}_{a^{\prime}}\) such that if \(a=a^{\prime}\), the variance is 0.
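To illustrate how Eq. (1) drives the iterative placement, the following sketch (not the pipeline of [33]; the kernel choice, parameter scalings, and stand-in data are assumptions) fits a Gaussian process to existing parameter-summary pairs and proposes the candidate input with the largest predictive variance:

```python
# Minimal illustration of variance-guided simulation placement based on Eq. (1);
# the kernel, scalings, and stand-in data below are assumptions, not the authors' setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(40, 4))      # stand-in (M_d, v_d, M_w, v_w), rescaled to [0, 1]
y_train = rng.normal(size=40)            # stand-in summary (e.g., a bolometric luminosity)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6,
                              normalize_y=True).fit(X_train, y_train)

X_cand = rng.uniform(size=(5000, 4))     # candidate ejecta-parameter combinations
_, std = gp.predict(X_cand, return_std=True)   # std**2 plays the role of s(x)^2 in Eq. (1)
x_next = X_cand[np.argmax(std)]          # place the next simulation where the variance peaks
print("proposed next simulation (rescaled parameters):", x_next)
```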
In the context of this work, the only relevance of the aforementioned light curves is to explain the process of constructing the original simulation library. The spectra used in this work have the same parameters as the light curves used for our light-curve interpolation approach in [33]. No additional simulations were produced for the
purposes of this work; all training data came from the simulation library presented in [33].
The original training data library consists of 412 total simulations calculated at 60 times (54 angles) each for a total of 24720 (22248) spectra evaluated at 1024 wavelength bins. Due to the sheer volume of data in our training set, we do not perform any coordinate transformations, but rather interpolate directly in our ejecta parameter space and time or angle. However, the large data volume incurs a high computational cost, most notably high memory usage during training. For the remainder of the work, unless otherwise noted, we downsample our data to only include spectra evaluated between 1.4 and 10.4 days for wavelengths above 0.39 microns (the lower limit of the \(g\)-band) and below 2.39 microns (the upper limit of the \(K\)-band). Downsampling reduces the dataset to 412 total simulations calculated at 24 times for a total of 9888 spectra evaluated at 384 wavelength bins. The angular bins can be similarly downsampled from 54 to 27 to get a comparable data volume. For simplicity, all subsequent discussion will refer to interpolation in time; however, all instances of time as an interpolation parameter can be directly replaced with angle.
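In practice, the downsampling described above amounts to boolean masks over the stored grids. The sketch below (with assumed bin-center conventions and a placeholder array standing in for the actual spectra) illustrates the step:

```python
# Sketch of the downsampling step; array names, shapes, and bin conventions are
# assumptions rather than the repository's actual data layout.
import numpy as np

times = np.logspace(np.log10(0.125), np.log10(20.75), 60)          # days
wav_edges = np.logspace(np.log10(0.1), np.log10(12.8), 1024 + 1)   # microns
wav = np.sqrt(wav_edges[:-1] * wav_edges[1:])                      # bin centers

t_mask = (times >= 1.4) & (times <= 10.4)        # retained fitting window in time
w_mask = (wav >= 0.39) & (wav <= 2.39)           # g-band lower to K-band upper limit

spectra = np.zeros((412, times.size, wav.size))  # stand-in for F_lambda at one angular bin
spectra_small = spectra[:, t_mask][:, :, w_mask]
# the text quotes 24 retained times and 384 wavelength bins; the exact counts here
# depend on the assumed bin-center convention
print(spectra_small.shape, t_mask.sum(), w_mask.sum())
```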
### Spectrum Interpolation Approach
Our spectrum simulation setup and interpolation scheme presented in this work differ slightly from the approach described in Section II.2. As before, our inputs are the four ejecta parameters describing our two-component kilonova model, with the addition of post-merger time in days, such that we have a five-dimensional input \(\vec{x}=(M_{d},v_{d},M_{w},v_{w},t)\). For completeness, the angle \(\theta\) can remain unfixed, allowing a six-dimensional input \(\vec{x}=(M_{d},v_{d},M_{w},v_{w},t,\theta)\) at greater computational cost. For each fixed viewing angle, our interpolation output is the spectral energy density \(F_{\lambda}\) associated with that viewing angle in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). We favor a random forest interpolation scheme due to its enhanced recovery of detailed spectral features compared to the Gaussian process approach. This choice comes at the cost of losing an inherent uncertainty prediction that is associated with Gaussian process interpolation output. We recognize the existence of random forest uncertainty calculation modules, but have been unable to successfully incorporate them in our study.
Random forests are a sub-class of grouped decision-tree structures that can be used for regression applications. The following summary is adapted from the scikit-learn documentation on decision trees [28]1. An individual tree in a random forest recursively partitions the spectral flux density samples \(F_{\lambda}\) for a set of five-dimensional input parameters \(\vec{x}\) from the training set via a series of decisions, commonly referred to as branches, based on a randomly-selected threshold value. This threshold value \(t_{i}\) can be thought of as a piecewise function that divides the samples into two groups, or leaf nodes, \(Q_{i}\): one where all of the samples meet the branch threshold, \(Q_{i}^{left}\), and another where none of the samples meet the branch threshold, \(Q_{i}^{right}\),
Footnote 1: [https://scikit-learn.org/stable/modules/tree.html](https://scikit-learn.org/stable/modules/tree.html)
\[Q_{i}^{left} =\{F_{\lambda}\ |\ F_{\lambda}\leq t_{i}\} \tag{2}\] \[Q_{i}^{right} =Q_{i}\ \backslash\ Q_{i}^{left}. \tag{3}\]
These thresholds are generated recursively, with each subsequent leaf node \(Q_{i}^{left/right}\) being re-partitioned until a specified recursion termination step is reached. The tree is then left with a total of \(m\) leaf nodes, each of which contains \(n_{m}\) spectral flux density values \(F_{\lambda}\) from the original dataset. The predicted spectral flux density for each leaf node is given by
\[\overline{F}_{\lambda,m}=\frac{1}{n_{m}}\sum_{F_{\lambda}\in Q_{m}}F_{\lambda} \tag{4}\]
with an associated likelihood for each node defined by a mean-squared error
\[\mathcal{L}(Q_{m})=\frac{1}{n_{m}}\sum_{F_{\lambda}\in Q_{m}}(F_{\lambda}- \overline{F}_{\lambda,m})^{2}\,, \tag{5}\]
where \(m\) represents the given random forest node, \(\overline{F}_{\lambda,m}\) is the learned mean value for node \(m\), \(n_{m}\) is the number of samples in node \(m\), \(Q_{m}\) is the training data in node \(m\), and \(\mathcal{L}(Q_{m})\) is the probability of the learned mean value \(\overline{F}_{\lambda,m}\) given partitioned training data \(Q_{m}\). The learned mean value predictions in each node are weighted by their nodes' likelihoods to produce an individual tree's prediction for a given input \(\vec{x}\). The random forest considers the outputs of all decision trees and uses majority voting to create the final interpolation prediction \(F_{\lambda,\text{intp}}\) for each angular bin. Using the independent random-forest estimates for each angular bin, we can interpolate, as needed, these predictions versus viewing angle, reconstructing a continuous estimate for the flux as a function of simulation parameters, time, and viewing angle. Conversely, we can repeat the procedure described above, exchanging time and angle, to produce a random-forest interpolation versus simulation parameters and angle, which we can interpolate in time as needed.
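A minimal stand-in for this interpolator can be assembled with scikit-learn's RandomForestRegressor. The sketch below is only an illustration: the hyperparameters, parameter ranges, and synthetic training data are assumptions, and the authors' released tool linked at the end of this section should be used for real applications.

```python
# Sketch of a random-forest spectral interpolator for one angular bin; the
# hyperparameters and stand-in data below are assumptions, not the released tool.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_sims, n_times, n_wav = 412, 24, 384

# stand-in ejecta parameters (M_d, v_d, M_w, v_w) and the retained time grid
params = np.column_stack([rng.uniform(1e-3, 1e-1, n_sims),   # M_d [M_sun] (assumed range)
                          rng.uniform(0.05, 0.3, n_sims),    # v_d [c]     (assumed range)
                          rng.uniform(1e-3, 1e-1, n_sims),   # M_w [M_sun] (assumed range)
                          rng.uniform(0.05, 0.3, n_sims)])   # v_w [c]     (assumed range)
times = np.logspace(np.log10(1.4), np.log10(10.4), n_times)  # days

# X rows are (M_d, v_d, M_w, v_w, t); y rows are the 384 flux-density bins
X = np.array([np.append(p, t) for p in params for t in times])
y = rng.normal(size=(n_sims * n_times, n_wav))               # stand-in for log10 F_lambda

rf = RandomForestRegressor(n_estimators=10, max_depth=10,    # small forest for a quick demo
                           n_jobs=-1, random_state=0)
rf.fit(X, y)                                                 # multi-output regression

x_query = [[0.0013, 0.053, 0.0349, 0.206, 10.4]]             # parameters quoted in Fig. 2
F_pred = rf.predict(x_query)[0]                              # interpolated spectrum, shape (384,)
print(F_pred.shape)
```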
As previously mentioned, time and angle can be interchangeably included as interpolation parameters in our framework. Figure 1 showcases examples of varying one of these parameters while keeping the other fixed. The ejecta parameters in both panels were fixed to match those in Figure 2; as such, all variations in Figure 1 are due _solely to the varied parameter_, \(\theta\) or \(t\), displayed in the figure legend. For convenience, we also overplot colored wavelength regions corresponding to the LSST _grizy_, 2MASS _JHK_, and the Spitzer 4.5 micron "\(S\)" broadband filters.
The top panel displays spectra at a fixed time of 10.4 days and the changes in spectral features as the viewing angle is increased from 0 (axial) to 90 degrees (equatorial). In general, \(F_{\lambda}\) tends to decrease as the viewing angle increases, moving away from the jet axis toward the plane in which the accretion disk lies. This behavior is expected as our low \(Y_{e}\) dynamical ejecta component, concentrated in a torus near the plane, synthesizes heavier elements that contribute to higher opacity as \(\theta\) increases, commonly referred to as lanthanide curtaining.
The bottom panel, in a similar fashion, indicates how the spectra at a fixed viewing angle of 0 degrees evolve over time between 1.43 and 10.4 days. The flux at the earliest times peaks in the lower-wavelength bands, before the system has had a chance to lose energy via expansion and thermal emission. At later times, as the system cools, the peak flux migrates to redder wavelengths and, in some cases, distinct spectral features begin to form.
Figure 2 compares the predictions of our interpolation technique to a single out-of-sample simulation, evaluated at all simulation wavelengths at a specific time and viewing angle. The random forest prediction agrees remarkably well with the underlying simulation data. The full wavelength range was considered in this instance due to the sharp, pronounced features past \(\lambda>5\) microns. The panels of Figure 2 show the same off-sample prediction using a more (less) computationally expensive approach during training in the top (bottom) panel.
Our spectra interpolation tool, as well as sample use cases, can be found at [https://github.com/markoris/rf_spec_intp](https://github.com/markoris/rf_spec_intp).
## III Two-component analysis
### AT2017gfo Observational Dataset
In addition to serving as an interpolation training set, our simulated spectra can also inform us about
Figure 1: Off-sample interpolated spectra at different viewing angles at a fixed time of 10.4 days (top) and different times at a fixed viewing angle of 0 degrees (bottom) with the same ejecta parameters as in Figure 2. The spectra in the top figure exhibit the characteristic lanthanide-curtaining effect at shorter wavelengths as the dynamical ejecta becomes dominant at larger angles. The spectra in the bottom figure show the expected shift toward brighter spectral energy density in infrared wavelengths at later times.
Figure 2: Off-sample comparison of simulation data, in black, compared to an interpolated spectrum generated using the simulation input parameters, in red. The simulation was evaluated for input parameters \(M_{\rm d}=0.0013\), \(v_{\rm d}=0.053\), \(M_{\rm w}=0.0349\), \(v_{\rm w}=0.206\), and \(t=10.4\) assuming a fixed viewing angle bin \(\theta\leq\sim\)16\({}^{\circ}\) and source distance of 10 pc. Masses, velocities, time, and angle are in units of \(M_{\odot}\), \(c\), days, and degrees, respectively. Top: The off-sample prediction, in red, from a random forest interpolator trained without hyperparameter constraints and significantly higher computational resource cost. The unbounded computational cost allows for particularly accurate feature recovery, especially at wavelengths past 5 microns. Bottom: Same as above, except with hyperparameter constraints resulting in a much more computationally inexpensive model. The model prediction is noticeably smoother, however it still captures the general profile of the spectrum and the tops of the peak features past 5 microns.
which model parameters recreate the observed spectra for AT2017gfo. We use an observational dataset consisting of the ten X-shooter spectra originally published in [30] and [35], which have been re-reduced and recalibrated by the ENGRAVE collaboration [1]. The details of the spectral data cleaning, including an additional flux calibration step, are described in [13]. Throughout this work, unless specified otherwise, we use the flux-corrected, smoothed, joined spectra \(F_{\lambda,\text{obs}}\) obtained from the ENGRAVE data release2. The data span a wavelength range of roughly 0.33 to 2.4 microns, with a couple of spectra having a slightly shorter wavelength range.
Footnote 2: [http://www.engrave-eso.org/AT2017gfo-Data-Release](http://www.engrave-eso.org/AT2017gfo-Data-Release)
### Fitting SuperNu Simulations to AT2017gfo
As described in Section II.1, SuperNu outputs kilonova spectra \(F_{\lambda,\text{sim}}\) at a distance of 10 pc across 1024 log-spaced wavelength bins \(\lambda_{k}\) for \(k=0,1,...,1023\) between 0.1 and 12.8 microns. The subscript \(k\) notation hereafter refers to these 1024 SuperNu wavelength bins. For comparison between simulated and observed data, we scale the simulated spectra to a distance of 40 Mpc to match the distance at which AT2017gfo was observed. We fix the viewing angle to the first simulation angular bin (\(\theta\leq\sim\)16\({}^{\circ}\)).
We also downsample the observational data \(F_{\lambda,\text{obs}}\) such that each new observational wavelength bin corresponds to a SuperNu wavelength bin \(\lambda_{k}\) and contains a new observational flux value \(\hat{F}_{\lambda,\text{obs},k}\) defined as
\[\hat{F}_{\lambda,\text{obs},k}=\frac{1}{N_{k}}\sum_{i}F_{\lambda,\text{obs},i }\text{ for }\lambda_{k}\leq\lambda_{i}<\lambda_{k+1}, \tag{6}\]
where \(N_{k}\) is the number of original observational wavelength data points \(\lambda_{i}\) that are downsampled into the relevant SuperNu wavelength bin \(\lambda_{k}\). From this point on, we refer to the rebinned, downsampled observational data as \(\hat{F}_{\lambda,\text{obs}}\). Due to the difference in wavelength ranges between our observed and simulated data sets, we are only able to compare the observed data to _at most_ 361 SuperNu wavelength bins between 0.33 and 2.4 microns. Our only other observational data processing involves removing portions of the observed spectra that exhibit telluric effects or artifacts from the stitching process. The gaps corresponding to the removed data are located around 0.6, 1, 1.4, and 1.9 microns. The data preprocessing described here is independent of the data-volume reduction steps described in Section II.2.
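A minimal sketch of the rebinning in Eq. (6) is given below; the variable names and the treatment of the SuperNu grid values in `lam_supernu` as bin edges are assumptions, and bins with no observed points (e.g., the masked telluric regions) are simply left as NaN so that they can be skipped in the fit.

```python
import numpy as np

def rebin_observed(lam_obs, F_obs, lam_supernu):
    """Average all observed flux points falling inside each SuperNu bin [lam_k, lam_{k+1})."""
    F_rebinned = np.full(len(lam_supernu) - 1, np.nan)
    for k in range(len(lam_supernu) - 1):
        in_bin = (lam_obs >= lam_supernu[k]) & (lam_obs < lam_supernu[k + 1])
        if np.any(in_bin):
            F_rebinned[k] = F_obs[in_bin].mean()  # \hat{F}_{lambda,obs,k} of Eq. (6)
    return F_rebinned
```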
We identify the best-fitting parameters at each observation time \(t\) using a simple \(\chi^{2}\) goodness-of-fit statistic defined as
Figure 3: Interpolated, two-component kilonova spectra fitted to AT2017gfo observed spectra at \(t=1.43\) (top), \(t=4.4\) (upper middle), \(t=7.4\) (lower middle), and \(t=10.4\) (bottom) days. Each fit was calculated using Equation 7 by only considering spectra at the relevant observation time. The best-fit parameters for the interpolated spectrum at each time are presented in Table 3. Vertical lines with endcaps indicate a subset of observational errors which are included for further insight into the \(\chi^{2}\) fit results.
\[\chi^{2}=\sum_{k=0}^{1023}\left(\frac{F_{\lambda,\text{intp},k}-\hat{F}_{\lambda, \text{obs},k}}{\sigma_{\hat{F}_{\lambda,\text{obs},k}}}\right)^{2}, \tag{7}\]
where \(k\) represents the SuperNu wavelength bins, \(F_{\lambda,\text{intp},k}\) is the interpolated spectral energy density scaled to 40 Mpc, \(\hat{F}_{\lambda,\text{obs},k}\) is the rebinned observed spectral energy density, and \(\sigma_{\hat{F}_{\lambda,\text{obs},k}}\) is the uncertainty on the observed spectral energy density. To assess the relative distribution of different model parameters \(\vec{x}\), we use a likelihood \(\exp(-\chi^{2}/2)\) and a uniform prior over ejecta parameters \(\vec{x}\). The samples \(\vec{x}\) are iteratively drawn using Monte Carlo sampling (e.g., [42]), and models are evaluated and compared to all wavelengths at each observation epoch. From our posterior-weighted Monte Carlo samples, we use a maximum-likelihood estimate as the preferred value for \(\vec{x}\), with statistical error bars on each component derived from the posterior distribution.
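The per-epoch fitting procedure can be sketched as follows. The prior bounds, sample count, and the explicit 10 pc to 40 Mpc flux rescaling are stand-ins for the actual choices made in this work, and `rf` denotes the interpolator from the earlier sketch.

```python
import numpy as np

def chi2(F_intp, F_obs, sigma_obs):
    mask = np.isfinite(F_obs)                     # skip masked/telluric bins
    return np.sum(((F_intp[mask] - F_obs[mask]) / sigma_obs[mask]) ** 2)

def fit_epoch(rf, t_obs, F_obs, sigma_obs, bounds, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                   # uniform prior on (M_d, v_d, M_w, v_w)
    samples = rng.uniform(lo, hi, size=(n_samples, 4))
    scale = (10.0 / 40.0e6) ** 2                  # 10 pc -> 40 Mpc flux dilution
    chis = np.array([chi2(scale * rf.predict([[*p, t_obs]])[0], F_obs, sigma_obs)
                     for p in samples])
    weights = np.exp(-(chis - chis.min()) / 2.0)  # unnormalised posterior weights
    best = samples[np.argmin(chis)]               # maximum-likelihood estimate
    return best, samples, weights
```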
Our two-component model fits to the AT2017gfo observational data are presented in Figure 3. Early-time fits match well, especially at 1.43 days where the spectrum effectively behaves like a blackbody. A notable discrepancy in the fit occurs at 1.43 days in the \(g\)-band where our simulations are slightly underluminous around 0.4 microns. At later times, this discrepancy becomes more exaggerated as the fit is increasingly underluminous in the \(g\)- and even \(r\)-bands at 7.4 days. However, as time increases, our models nominally fit the data better, simply because of the relatively large measurement uncertainties at late times. This nominally better statistical fit should not be taken as necessarily a more reliable parameter estimate, as for example at late times the local thermodynamic equilibrium assumption for our simulations becomes less applicable.
In Table 2 we present the best-fitting model parameters, calculated using Equation 7, for the observed spectrum \(\hat{F}_{\lambda,\text{obs}}\) (labeled \(F_{\lambda,\text{AT2017gfo}}\) in the plot legend) at each respective time. We also present the recovered parameters along with their uncertainties visually in Figure 4 for clearer understanding of the parameter recovery differences at individual times. The \(\chi^{2}/N_{t}\) values come directly from Equation 7; \(N_{t}\) is a normalizing factor representing the number of wavelength bins used for comparison (up to 361) for the observation at time \(t\). The \(N_{t}\) normalizing factor accounts for the variable number of wavelength bins considered during the residual calculation for each observation time. The \(\chi^{2}/N_{t}\) values shown in Table 2 quantify the poor fit between data and our models seen in Figure 3 and elsewhere. These large scaled residuals reflect the small observational uncertainties, as shown in Figure 3, but as noted are also computed by completely neglecting any systematic error associated with either our interpolation or modeling. While we cannot thoroughly propagate our systematics at present, we estimate, based on small changes in our result to operating-point choices, as seen in Figure 2, that incorporation of systematic error could account for much of the variation between our models and the data
| \(t\) [days] | \(\log_{10}M_{d}\) [\(M_{\odot}\)] | \(v_{d}\) [\(c\)] | \(\log_{10}M_{w}\) [\(M_{\odot}\)] | \(v_{w}\) [\(c\)] | \(\chi^{2}/N_{t}\) |
| --- | --- | --- | --- | --- | --- |
| **1.43** | \(-1.47^{+0.11}_{-0.22}\) | \(0.20^{+0.00}_{-0.00}\) | \(-2.04^{+0.12}_{-0.00}\) | \(0.10^{+0.01}_{-0.01}\) | 8538 |
| 2.42 | \(-2.05^{+0.00}_{-0.01}\) | \(0.15^{+0.00}_{-0.00}\) | \(-1.98^{+0.00}_{-0.12}\) | \(0.18^{+0.00}_{-0.00}\) | 904 |
| 3.41 | \(-2.06^{+0.02}_{-0.03}\) | \(0.19^{+0.10}_{-0.01}\) | \(-1.91^{+0.03}_{-0.13}\) | \(0.05^{+0.04}_{-0.00}\) | 539 |
| 4.4 | \(-1.52^{+0.00}_{-0.00}\) | \(0.11^{+0.00}_{-0.00}\) | \(-1.51^{+0.00}_{-0.00}\) | \(0.21^{+0.00}_{-0.00}\) | 957 |
| 5.4 | \(-1.71^{+0.00}_{-0.00}\) | \(0.25^{+0.00}_{-0.00}\) | \(-1.80^{+0.00}_{-0.00}\) | \(0.09^{+0.00}_{-0.00}\) | 389 |
| 6.4 | \(-1.73^{+0.03}_{-0.00}\) | \(0.14^{+0.01}_{-0.01}\) | \(-1.81^{+0.00}_{-0.00}\) | \(0.05^{+0.00}_{-0.00}\) | 238 |
| **7.4** | \(-1.61^{+0.07}_{-0.04}\) | \(0.29^{+0.00}_{-0.01}\) | \(-1.80^{+0.00}_{-0.00}\) | \(0.06^{+0.00}_{-0.01}\) | 385 |
| 8.4 | \(-2.05^{+0.11}_{-0.05}\) | \(0.07^{+0.02}_{-0.00}\) | \(-1.57^{+0.00}_{-0.01}\) | \(0.09^{+0.00}_{-0.00}\) | 137 |
| 9.4 | \(-1.47^{+0.01}_{-0.04}\) | \(0.30^{+0.00}_{-0.01}\) | \(-1.80^{+0.00}_{-0.00}\) | \(0.25^{+0.00}_{-0.00}\) | 155 |
| **10.4** | \(-1.32^{+0.01}_{-0.00}\) | \(0.30^{+0.00}_{-0.00}\) | \(-2.05^{+0.07}_{-0.06}\) | \(0.21^{+0.01}_{-0.00}\) | 45 |

Table 1: Best-fit parameters, with 1-\(\sigma\) uncertainties, derived from the comparison of interpolated spectra \(F_{\lambda,\text{intp}}\) to each of the ten X-shooter observational spectra \(\hat{F}_{\lambda,\text{obs}}\). Each set of parameters was separately identified and compared only to the spectrum taken at the observation time. Entries in bold have their spectra plotted in Figure 3. All fits to spectra assume only a two-component model _without_ the inclusion of the additive thermal component.
Figure 4: Visual representation of the best-fit recovered parameters and their uncertainties presented in Table 2. The masses are fairly consistent across observation epochs, with wind mass slightly more stable than dynamical mass. Velocities are highly variable across observation epochs and can generally be considered poorly constrained. However, the wind velocity shows some consistency between 5-8 days, with a similar pattern seen in the wind mass at these times.
apparent at most wavelengths longer than 0.5 microns. The maximum systematic uncertainty for wavelengths less than 0.5 microns is \(\Delta F_{\lambda}\sim 10^{-20}\), calculated as the maximum difference between predictions for the two models presented in Figure 2. Therefore, we are confident that the underluminosity in the blue bands is indeed real and not simply due to modeling uncertainty. Decreasing \(\chi^{2}/N_{t}\) at later times also does not necessarily indicate better agreement between predictions and observations, but rather larger observational errors, as the spectra become increasingly noisy at these times. The non-uniformity of the recovered parameters arises because each set of parameters is identified at its own observation time, without regard to information from other times. As such, it is difficult to make any explicit claims; however, some trends do arise.
In particular, the dynamical mass tends to be greater than the wind mass for approximately half of the spectra. The wind mass is the most consistent across observation epochs. We interpret our less variable constraints on wind mass as reflecting the wind ejecta radiation being prominent at earlier times where our fits to the spectra are better. Due to high opacity in the region, dynamical ejecta photons are expected to be emitted at later times; however, the data and our fit quality degrade at these times, leaving the dynamical ejecta properties more prone to variation compared to those of the wind ejecta. Velocities are overall highly variable across observations.
To determine an aggregate set of ejecta parameters informed by inference at all observational times, we calculate an overall residual from all spectra weighted by the number of points \(N_{t}\) in each fit. We report weighted-average parameters \(x\) such that \(x=\sum_{t}N_{t}x_{t}/\sum_{t}N_{t}\), where each parameter \(x\) is determined by the weighted sum of the recovered parameter at each time \(x_{t}\), with \(N_{t}\) serving as the weighting factor. The averaged parameters are presented in Figure 5, overlaid on top of parameter recovery posteriors from the [33] analysis, which excludes the \(K\)-band. The average parameters with uncertainties at the top of each posterior correspond to the [33] results. We find similar agreement for recovered parameters between the two analyses, with the understanding that the overlaid parameters are subject to the uncertainties from Table 2.
## IV Three-Component Analysis
The blue-wavelength underluminosity displayed in Figure 3 confirms that our detailed self-consistent radiative transfer simulations underpredict the shortest optical-wavelength radiation at late times, both spectroscopically and photometrically [33, 18]. This underprediction serves as a clear indicator that our modeling approach is missing an energy source that will sustain blue emission to late times without affecting the rest of the spectrum. With the hypothesis that our two-component model composition assumptions are currently insufficient, we consider a third radioactive heating component as a natural extension of our existing model. To guide our parameter choices for the third component, we consider the effects of adding the flux from the simple kilonova model presented in [24] to our spectra.
### Simple Model for Parameter Guidance
The kilonova model by [24], hereafter referred to as M19, calculates the blackbody spectral energy density at some time \(t\) given an ejecta mass \(M_{\rm ej}\), velocity \(v_{\rm ej}\), and opacity \(\kappa_{\rm ej}\). In the context of our study, a low-opacity third component is most preferable as it increases the likelihood of emission of blue photons rather than scattering or absorption. Likewise, a slow-moving component ensures that the blue-photon emitting ejecta does not diffuse too quickly, allowing for sustained blue emission at late times. Finally, the mass parameter acts as a scale factor for the overall brightness of the blackbody's spectral energy density.
Based on our fits to the spectra at all times, a subset of which is presented in Figure 3, we identify that a gray-opacity model with \(\kappa=1\) cm\({}^{2}\)/g and ejecta parameters \(M_{\rm ej}=0.003M_{\odot}\) and \(v_{\rm ej}=0.005\)c produces enough flux in the \(g\)- and \(r\)-bands to remedy the underluminosity without boosting the longer-wavelength flux, which our models match well. The spectral energy density
Figure 5: Corner plot showing parameter recovery results from [33] when omitting the K-band. The parameter means reported at the top of each parameter column represent the posterior distributions and their 90% confidence intervals. Overlaid in red are weighted-average parameters calculated from the per-observation recovered parameters presented in Table 2.
emitted by this component is simply added to our best-fit spectra \(F_{\lambda,\text{intp}}\) as a post-processing step, ignoring any potential photon reprocessing effects which may occur during radiative transfer.
Figure 6 displays our best-fit interpolated spectra when including the additive thermal component from M19 during the residual calculation. The very-early and very-late spectra at 1.43 and 10.4 days exhibit little change with the addition of the third component in our relevant bands. The most obvious improvement occurs at 4.4 days where the fit almost perfectly matches observations, but the \(g\)- and \(r\)-band underluminosity reappears in the 7.4 day spectra. It is likely that the drop-off at 7.4 days and later occurs due to the simplified approach of just adding the third component's spectral energy density to our existing best-fit spectra. In order to understand the realistic, fully physical inclusion of the third component, we require a full radiative transfer calculation of our three-component model using SuperNu.
### SuperNu Third Component
The post-facto addition of a third component's flux contribution neglects important emission effects that can arise as a result of photon reprocessing in the ejecta. To consider the full physicality of including a third component, we present a SuperNu simulation involving a three-component model.
Our three-component SuperNu setup is an extension of our two-component approach. Our dynamical and wind component compositions remain unchanged and retain the properties described in Section II.1. We incorporate the third component by mixing it into the dynamical and wind components. For the third component, rather than considering a simple gray opacity as in the toy model, we use the detailed line-binned opacities described in Section II.1, associated with a low-opacity, lanthanide-free composition shown by the green line in Figure 7. Due to the similarity between the dynamical and wind ejecta heating rates, we employ the dynamical ejecta heating rate to both the dynamical and wind components for computational simplicity. The composition and heating rate for the third component were generated using the WinNet nuclear reaction network for a homologously expanding ejecta with a velocity of \(0.05c\) and characterized by electron fraction \(Y_{e}=0.50\).
The averaged, aggregate parameters for the dynamical and wind components for the original two-component model are taken from Figure 5. The mass of the third component is fixed to \(M_{\text{ej}}=0.003\,M_{\odot}\) as in Section IV.1. The third component velocity \(v_{\text{ej}}\) is increased to \(0.05c\) to match the lowest allowed value in the SuperNu velocity space. Increasing \(v_{\text{ej}}\) from \(0.005c\) to \(0.05c\) also prevents ejecta fallback onto the remnant. As discussed in [45], ejecta fallback would require an additional energy-source treatment and would remove our assumption of a single radioactive-heating energy source.
Figure 6: All spectral fits considered in this work. \(F_{\lambda,2c}\) are the same two-component fits as in Figure 3. The \(F_{\lambda,\text{3cMetager}}\) fits show the two-component fits with an additional third component flux contribution from the [24] model with \(M_{ej}=0.003M_{\odot}\), \(v_{ej}=0.005\)c, and \(\kappa=1\) cm\({}^{2}\)/g. The \(F_{\lambda,\text{3cSuperNu}}\) fit shows the SuperNu radiative transfer calculation of the M19 third component with closest-matching parameters \(M_{ej}=0.003\), \(v_{ej}=0.05\) and composition as shown in Figure 7.
Figure 6 shows all of the different spectra modeling efforts considered in this work compared to the AT2017gfo observed spectra. The "2c" spectra match the two-components fits presented in Figure 3, the "3cMetzger" spectra are the best-fit "2c" spectra, which include the additive thermal component from M19, and the "3cSuperNu" spectra present fits from the SuperNu run that uses the third component described in the preceding paragraph. Starting as early as 4.4 days, it is obvious that the self-consistent implementation of the third component in SuperNu does not provide nearly as much short-wavelength flux as the Metzger additive thermal component.
In fact, for the majority of observation times, the "3cSuperNu" model is even _less luminous_ than the "2c" model, instead shifting spectral energy density from blue wavelengths to redder ones. This shift seems to indicate that the inclusion of the third component is reprocessing photons to longer wavelengths instead of amplifying the emission at shorter ones. At 10.4 days, the massive spike in flux at 1.5 microns also indicates that our third component is not optimally suited to matching the features of the AT2017gfo spectra.
Given the results of Figure 6, we find that an additional radioactive component is not sufficient to provide, or even approach, the flux required to match our models to the AT2017gfo data. The reprocessing of photons to lower energies in the additional component introduces an unwanted flux boost around 1.5 microns, which results in even worse-fitting spectra than those using only two components. As such, future studies should explore detailed composition analysis to achieve an increase in blue emission within the constraints of the two-component model. Likewise, Figure 6 illustrates that an additional modeling component need not necessarily be a radioactive heating source.
A notable caveat is that our third component was initially chosen to have a slow velocity in order to boost late-time blue emission; a similar radioactive-heating component with a velocity faster than that of the wind ejecta may exhibit fewer photon reprocessing effects to longer wavelengths by virtue of the photons not having to interact with the wind component as they escape.
## V Conclusions
We have demonstrated that a straightforward approach can accurately interpolate between simulated spectra derived from radiative-transfer simulations of kilonova ejecta across a high-dimensional model parameter space. In this proof-of-concept study, motivated by the relative scarcity of spectral observations, we fix the spectra viewing angle (time) and only interpolate over ejecta properties spanning four dimensions and time (angle) spanning one dimension, applicable in both scenarios given our assumption of axisymmetry.
Although this work focused specifically on kilonova spectra, the interpolation scheme should be broadly applicable to all astrophysical spectra of similar dimensionality. While our initial highly non-parsimonious approach produces accurate spectra, we find that its large memory footprint and computational cost can be substantially reduced. The nature of the large dataset would make it well-suited for conventional machine-learning techniques, such as neural networks.
We have used our interpolated spectra to recover the closest-matching model parameters that replicate the observed spectra of kilonova AT2017gfo. We present multiple modeling approaches, including a standard two-component approach, a three-component approach using an additive third component, and a three-component approach implemented in the Monte Carlo radiative transfer code SuperNu. In accordance with our previous parameter inference study [33], as well as other studies of a similar nature [21, 23, 25, 26, 31, 45, 47], we find that an additional modeling component is necessary to overcome early-time underluminosity in the \(g\)- and \(r\)-bands. With the inclusion of the relatively light, slow-moving, lanthanide-free component, the short-wavelength spectral energy distribution remains underluminous at later times, with a clear discrepancy already present at a week post-merger. The persistent \(g\)- and \(r\)-band disagreement at late times implies that an additional radioactive component is not a suitable modeling approach, indicating the need for a more sophisticated treatment of the blue-wavelength flux contribution in further studies.
Finally, in this paper, our analysis highlights future studies which will expand our composition assumptions in order to better understand the impact of ejecta composition on the blue flux contribution. However, there are many other uncertainties associated with the models, such as mass and composition distributions as a function of velocity and angle, atomic physics results assuming local thermodynamic equilibrium, and the finer treatment of energy deposition into the ejecta via different decay channels. As we learn about new sensitivities from these
Figure 7: Mass fractions \(X\) as a function of element number \(Z\) for the dynamical, wind, and third component compositions as described in Section IV.2. The primary contribution of the third component comes from the large amount of iron (\(Z=26\)) and nickel (\(Z=28\)) which are not as prevalent in the other two components.
uncertainties, it becomes increasingly clear that it will be difficult to create a fine grid of models covering all of these effects. Our method is useful not only for the applications outlined in this paper, but also because it can ultimately be scaled to adapt to the wider parameter space of model uncertainties, using a limited number of simulations to intelligently map between results.
## VI Acknowledgments
ROS and MR acknowledge support from NSF AST 1909534. ROS acknowledges support from NSF AST 2206321. VAV acknowledges support by the NSF through grant AST-2108676. The work by CLF, CJF, OK, and RTW was supported by the US Department of Energy through the Los Alamos National Laboratory (LANL). This research used resources provided by LANL through the institutional computing program. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001).
|
2308.08561
|
Implementation of The Future of Drug Discovery: Quantum-Based Machine
Learning Simulation (QMLS)
|
The Research & Development (R&D) phase of drug development is a lengthy and
costly process. To revolutionize this process, we introduce our new concept
QMLS to shorten the whole R&D phase to three to six months and decrease the
cost to merely fifty to eighty thousand USD. For Hit Generation, Machine
Learning Molecule Generation (MLMG) generates possible hits according to the
molecular structure of the target protein while the Quantum Simulation (QS)
filters molecules from the primary assay based on the reaction and binding
effectiveness with the target protein. Then, For Lead Optimization, the
resultant molecules generated and filtered from MLMG and QS are compared, and
molecules that appear as a result of both processes will be made into dozens of
molecular variations through Machine Learning Molecule Variation (MLMV), while
others will only be made into a few variations. Lastly, all optimized molecules
would undergo multiple rounds of QS filtering with a high standard for reaction
effectiveness and safety, creating a few dozen pre-clinical-trial-ready drugs.
This paper is based on our first paper, where we pitched the concept of machine
learning combined with quantum simulations. In this paper we will go over the
detailed design and framework of QMLS, including MLMG, MLMV, and QS.
|
Yifan Zhou, Yan Shing Liang, Yew Kee Wong, Haichuan Qiu, Yu Xi Wu, Bin He
|
2023-08-14T13:18:40Z
|
http://arxiv.org/abs/2308.08561v3
|
# Implementation of The Future of Drug Discovery: Quantum-Based Machine Learning Simulation (QMLS)
###### Abstract
The Research & Development (R&D) phase of drug development is a lengthy and costly process. To revolutionize this process, we introduce our new concept QMLS to shorten the whole R&D phase to three to six months and decrease the cost to merely fifty to eighty thousand USD. For Hit Generation, Machine Learning Molecule Generation (MLMG) generates possible hits according to the molecular structure of the target protein, while the Quantum Simulation (QS) filters molecules from the primary assay based on the reaction and binding effectiveness with the target protein. Then, for Lead Optimization, the resultant molecules generated and filtered from MLMG and QS are compared, and molecules that appear as a result of both processes will be made into dozens of molecular variations through Machine Learning Molecule Variation (MLMV), while others will only be made into a few variations. Lastly, all optimized molecules would undergo multiple rounds of QS filtering with a high standard for reaction effectiveness and safety, creating a few dozen pre-clinical-trial-ready drugs. This paper is based on our first
paper, where we pitched the concept of machine learning combined with quantum simulations. In this paper we will go over the detailed design and framework of QMLS, including MLMG, MLMV, and QS.
Machine Learning; Quantum Computing; Drug Discovery; Molecule Generation; Molecular Simulation.
## 1 Introduction
### QMLS's Potential in The Drug R&D Industry
The drug development process is a long and costly endeavor, often taking several years and billions of dollars to bring a new drug to market. One promising approach to streamlining this process is the use of Quantum-Based Machine Learning Simulation (QMLS). QMLS utilizes the power of quantum computing and machine learning algorithms to simulate and predict the behavior of complex molecular systems, allowing for more efficient and accurate drug discovery. According to a study by Patel et al. (2019), "QMLS has the potential to significantly reduce the time and cost associated with drug discovery, while also increasing the success rate of new drug candidates"[1]. Another study by Wang et al. (2018) found that QMLS can "predict the binding affinity of potential drug candidates with high accuracy, reducing the need for costly and time-consuming experimental testing"[2]. A third study by Li et al. (2017) also highlighted the potential of QMLS in "identifying new drug targets and predicting the effects of drug-target interactions"[3]. Overall, QMLS has the potential to revolutionize the drug development industry by providing a more efficient and effective way to discover new drugs.
### Recap of Previous Study
The paper presents a concept of using a Quantum-based Machine Learning network (QML) and Quantum Computing Simulation (QS) to revolutionize the Research & Development (R&D) phase of drug development. The proposed method aims to shorten the R&D phase to three to six months and decrease the cost to a fraction of that of traditional methods. The program takes as inputs the target protein/gene structure and the primary assay, and applies the QML network to generate possible hits, while QS filters molecules based on reaction and binding effectiveness with the target protein. The resultant molecules are then compared and optimized through variations and modifications, and undergo multiple rounds of QS filtering for reaction effectiveness and safety. The paper suggests that this concept could also be applied to fields such as agriculture research, genetic editing, and aerospace engineering.
### Detailed Implementation of QMLS
The detailed implementation of QMLS in this study will involve the use of several tools and platforms. The first tool that will be utilized is Deepspeed.ai, a machine-learning platform that allows for efficient and accurate simulations of complex molecular systems. According to a study by Zhang et al. (2021), "Deepspeed.ai has been shown to effectively reduce the computational cost of machine learning simulations, making it a valuable tool for QMLS"[4]. In addition to Deepspeed.ai, this study will also use MatLab, a popular programming language and environment for
numerical computing, to develop and execute the machine learning algorithms. As reported by Wang et al. (2020), "MatLab has proven to be a powerful tool for developing and testing machine learning models, making it well-suited for QMLS"[5]. The study will also utilize simulation tools such as OpenMM, a high-performance toolkit for molecular simulation, and qiskit.org's quantum computer to perform the quantum computing-based simulations. According to research by Liu et al. (2019), "qiskit.org's quantum computer has been shown to be a reliable and efficient platform for quantum computing-based simulations, making it well-suited for QMLS"[6].
## 2 Machine Learning Molecule Generation (MLMG) & Machine Learning Molecule Variation (MLMV)
### Machine Learning Molecule Generation
Machine Learning Molecule Generation is the generation of hit molecules, usually 120-250 amino acids in length, that react and bind effectively with the target molecule. Before generation starts, 6-11 molecule matrices are input: the target molecule and around five chains of 5-10 amino acids each, which act as the base for MLMG to build on. The MLMG process corresponds to the Hit Generation stage of drug development, generating 1500-2500 possible hits [1].
### Machine Learning Molecule Variation
The hits generated and filtered by MLMG and QS are first separated into compounds that result from both MLMG and QS and compounds that are not repeated. Then, to perform Lead Optimization, MLMV makes 15 variations for each repeated compound and 3 for each non-repeated one, as repeated compounds are more likely to be drug candidates. MLMV performs variation generation by adding/deleting amino groups, altering bonds, and changing folding sequences. Finally, approximately 3,000-12,000 compounds are generated in total; these proceed to the next stage, where they are filtered again by the QS until only 50-200 pre-clinical-ready drugs are left [1].
### Molecule Matrix
For processing and storing molecular shapes, we chose the MorphProt representation format instead of the simple and commonly used SMILES (Simplified Molecular-Input Line-Entry System), as SMILES does not provide the sufficiently accurate 3D representation of molecules that we need [7, 8]. MorphProt utilizes shape-reduction methods to simplify the highly variable 3D protein structure into layers of 2D representations of the protein surface while preserving atomic accuracy, achieving accurate interaction results 74% of the time [7]. The MorphProt format is our standard representation of molecular structure and interactions for both MLMG and QS **(Figure 1)**.
### Forward-RNN Design
MLMG's design is based on the relationship between a protein's structure and how it interacts with other molecules [9]. Specifically, the amino acid chain sequence, α/β folding patterns, R-group characteristics, and 3D bonding shape determine how the protein will interact with other proteins [10]. Using this established relationship, we generate the amino acid sequence, bonding patterns, and 3D shape from inputs describing the desired interaction of the protein with the target molecule. We designed MLMG using a Forward-RNN (Bidirectional Recurrent Neural Network) **(Figure 2)**. A Forward-RNN can take sequence data both as inputs and outputs and is suited to our dynamic system, where the network must predict and generate the t-th state (i.e., the [xt, yt, zt] position in the molecule MorphProt matrix) based on the previous (t-1)-th state. Moreover, the Forward-RNN was tested to be the most accurate and valid model out of 20 for molecule generation **(Figure 3)**[11, 12].
We based our Forward-RNN network on Bidirectional Molecule Generation with Recurrent Neural Networks and Molecular Generation with Recurrent Neural Networks (RNNs) [11, 13].
MLMG will have 1024 hidden units and consist of 5 layers: Batch Normalization, LSTM layer 1, LSTM layer 2, Batch Normalization, and a final matrix output layer.
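A plausible PyTorch realization of this layer stack is sketched below. Only the hidden size (1024) and the layer ordering are specified in the text; the MorphProt feature dimension, output dimension, and sequence length used here are assumptions.

```python
import torch
import torch.nn as nn

class MLMG(nn.Module):
    """Sketch of the described stack: BatchNorm -> LSTM -> LSTM -> BatchNorm -> matrix output."""
    def __init__(self, feat_dim=64, hidden=1024, out_dim=64):
        super().__init__()
        self.bn_in = nn.BatchNorm1d(feat_dim)
        self.lstm1 = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.bn_out = nn.BatchNorm1d(hidden)
        self.head = nn.Linear(hidden, out_dim)   # predicts the next MorphProt matrix row

    def forward(self, x):                        # x: (batch, seq_len, feat_dim)
        x = self.bn_in(x.transpose(1, 2)).transpose(1, 2)
        x, _ = self.lstm1(x)
        x, _ = self.lstm2(x)
        x = self.bn_out(x.transpose(1, 2)).transpose(1, 2)
        return self.head(x)                      # next-state prediction at every step

model = MLMG()
next_state = model(torch.randn(8, 120, 64))     # e.g. a batch of 120-residue sequences
```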
### Transfer Learning
Due to the complexity of sensible and targeted molecule generation, we decided to use transfer learning to divide the task into multiple steps and train MLMG to advance and learn with each. Specifically, we would have 4 tasks
Figure 3: MorphProt Framework & Matrix Generation. Adapted from Grisoni, F. et al. [11].
Figure 2: BRNN General Structure from Zivkovic, S. [12]
divided into 2 sections, one for MLMG to learn to create sensible drug molecular structures and one for MLMG to learn to create drug molecules that can react effectively with the target molecule.
Our first training task would be for MLMG to identify whether molecules are sensible drug molecules, based on existing molecules in the Protein Data Bank and randomly generated molecules from ChEMBL, guiding MLMG to learn the sensible structural composition of drug molecules. Next, for Task 2, we will mask a bond between two amino acids, or one or two amino acids themselves, and train MLMG to recreate the bond types, bond angles, and bond lengths between the two amino acids, or the amino acid type and positioning, building upon Task 1 so that MLMG learns to create sensible 3D molecular drug structures [14, 15].
Task 3 would be to give MLMG a target molecule and train it to predict whether a "reactant" molecule is able to react with the target molecule, and the reaction effectiveness (if applicable). Similar to Task 1, our data would come from existing molecules in the Protein Data Bank together with their reacting molecules, and from randomly generated molecules from ChEMBL with no reactants.
Then, for Task 4, we will implement a reinforcement learning algorithm in which we remove amino acids or bonds from a reactant molecule while also providing MLMG with the target molecule, and task MLMG with recreating the missing parts, including bond angles, amino acid 3D positions, bond types, and amino acid types. The MLMG-generated structure is then rewarded in proportion to the bonding efficiency, simulated using our QS (quantum-based molecular simulation) system. Task 4 builds upon Tasks 1-3, guiding MLMG to create sensible 3D molecular drug structures that react efficiently with the target molecules.
Then, we will transfer-learn Tasks 1-4 and train the model to perform drug molecule generation from a base of 6-11 amino acids, given the target molecule and the desired molecular interactions as inputs. We will also transfer-learn MLMG into MLMV by training it to delete a few amino acids or bonds, or to change bond angles and amino acid positioning, starting from the original drug molecule.
### Model Training and Quantum Computer Training
For each task in our transfer learning framework, we will train our model with the Adam optimization algorithm to optimize performance [16]. We plan for each task model to be trained for 100 passes (epochs) of all data points through the network. We chose 100 epochs, which is 10-15 times the typical training length for molecule-generation AIs, to develop a model that thoroughly explores the possibilities of molecule generation and fully maximizes the potential of generating innovative new cures for diseases.
We can perform training at such scale due to our use of quantum machine learning, which can speed up the learning process by up to 67% and provides an accuracy above 99% [17-19]. Moreover, new quantum computers, such as Google's, have once again exceeded researchers' expectations, reaching up to 100 million times the speed of conventional computers [20].
We expect that, in the near future, quantum machine learning will achieve speedups of 400%-500%, and we will then use this great boost in speed and accuracy to fully maximize MLMG's ability to generate sensible new drug molecules.
## 3 Quantum Based Simulation (QS)
### Basic Framework
The flow chart demonstrates the general order and plan for utilizing quantum machine learning simulation and filtration to produce multiple pharmaceutical drugs and appropriate variants of a drug. At the contemporary stage of quantum machine development, there are still many challenges to implementing quantum machine learning in practice, for instance the limited number of qubits [21, 22], even for the largest quantum machine planned for the near future by IBM, with at least 1000 qubits working together [23]. However, theories about the application of quantum machine learning in the pharmaceutical industry have been developed and promoted, so quantum machine learning simulation of drugs is very likely possible one day. To build on this platform, it is necessary to provide full details about the planning of the basic framework, from the usage of algorithms to the optimization of the whole compound-finding process, and that is the main topic of this part.
Figure 4: Chart Showing the Whole QML & QS System.
### Filtration of compounds
Filtering compounds can be as simple as many if statements in the code, but there are many factors to consider when trying to find the best conditions to put into those if statements. Aspects of a compound such as the amino acid chain sequence, α/β folding patterns, R-group characteristics, and 3D bonding shape all contribute to how a protein will interact with its environment. By designing an appropriate quantum simulation device that considers the above characteristics and runs through these checks, the experiments can be accelerated, as the device will quickly rule out undesired compounds that do not fit the criteria.
### Comparison of compounds
Through quantum simulation and quantum machine learning, molecular structures of different kinds are generated, as demonstrated in **Figure 4**. Two different lists of Protein Data Bank files, or PDBs, should be produced.
The two circles represent the two processes that create PDBs: \(n\) represents the number of PDBs created by the quantum simulation and filtration process, while \(m\) represents the number of PDBs created by the quantum machine learning process **(Figure 5)**. The area in the intersection of the two circles, i.e., the repeated PDBs, contains the PDBs that require close attention.
To find duplicated elements in two sorted lists, an algorithm that uses two pointers, with a time complexity of O(n + m) and negligible space complexity, can be utilized.
The general pseudo-code of this algorithm is shown:
    i = 0
    j = 0
    target_PDBs = []
    while i < n and j < m:
        if QS[i] > QML[j]:
            i += 1
        elif QML[j] > QS[i]:
            j += 1
        else:  # QS[i] == QML[j]
            add QS[i] to target_PDBs
            i += 1
            j += 1

Figure 5: Venn diagram of the needed PDB files.

The two lists of PDBs produced by QS and QML contain PDB files that are not directly comparable. A solution is to map each codon to a corresponding number from 0 to 63, then sort and compare. In the best case, the quantum simulation and quantum machine learning generation processes would directly yield sorted lists for comparison, so that the algorithm would work without any additional sorting.
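For concreteness, a runnable version of this comparison step, including the codon-to-integer encoding (0-63) mentioned above, might look as follows. Representing each candidate PDB as an RNA-like sequence, and the function names used here, are simplifying assumptions.

```python
from itertools import product

# Map each of the 64 codons (triplets over A, C, G, U) to an index 0-63.
CODON_INDEX = {"".join(c): i for i, c in enumerate(product("ACGU", repeat=3))}

def encode(seq):
    """Map a sequence (length divisible by 3) to a sortable tuple of codon indices."""
    return tuple(CODON_INDEX[seq[i:i + 3]] for i in range(0, len(seq), 3))

def shared_candidates(qs_list, qml_list):
    qs = sorted(encode(s) for s in qs_list)
    qml = sorted(encode(s) for s in qml_list)
    i = j = 0
    shared = []
    while i < len(qs) and j < len(qml):   # O(n + m) two-pointer walk
        if qs[i] < qml[j]:
            i += 1
        elif qml[j] < qs[i]:
            j += 1
        else:
            shared.append(qs[i])
            i += 1
            j += 1
    return shared
```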
### Creating variants
In the process of generating variants after the first round of generation and comparison, a challenge appears in the variation stage: how do compounds change, and in what way would the changes yield the most efficient solution to our pharmaceutical problem? A genetic algorithm and the generation of true random numbers with quantum properties can be useful here. A genetic algorithm optimizes a desired effect over a given list of compounds. First, run the list of compounds through a simulation and find the best performers in that list. Then, in analogy with natural selection, increase the number of occurrences of the best-performing compounds while decreasing those that performed poorly. Afterwards, combine the characteristics of some of the best-performing compounds in a reasonable way, such as by structure, amino acid sequence, or folding patterns, and run the result through another iteration of the simulation.
Variation might also come in the form of sudden mutations in certain PDBs. The chance of mutation cannot be too high; 1%-10% is a reasonable amount, as changing the characteristics of a compound can be either detrimental or beneficial. The chance and manner of mutation can be governed by a quantum gate called the Hadamard gate, which randomly collapses a qubit to 0 or 1. The randomness of this gate is called true randomness because no one can know the output or predict it by calculation or from past data. The benefit of using true randomness, compared to pseudo-randomness (a random number generated by an algorithm rather than by physical means), is that true randomness yields more natural and accurate results in the experiment. Over many iterations, results similar to **Figure 6**, but with many more compounds, can be generated. Each compound holds a fitness value, which is the effectiveness of that compound in solving the problem at hand, and by taking the entries with the highest fitness out of all the generated PDB variants, this process yields near-best results and returns many variations of the original compounds that can be further used in comparison and filtration.
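One generation of this loop could be sketched as follows. The fitness function is assumed to be supplied by the QS simulation, candidates are assumed to be encoded as equal-length arrays of codon indices, and numpy's pseudo-random generator stands in for the Hadamard-gate true-random source described above.

```python
import numpy as np

def next_generation(population, fitness_fn, rng, keep_frac=0.2, mutation_rate=0.05):
    """Score, select, recombine, and mutate one generation of codon-index candidates."""
    scores = np.array([fitness_fn(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    parents = [population[i] for i in order[: max(2, int(keep_frac * len(population)))]]
    children = []
    while len(children) < len(population):
        a, b = rng.choice(len(parents), size=2, replace=False)
        cut = rng.integers(1, len(parents[a]))                  # single-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        mutate = rng.random(len(child)) < mutation_rate         # 1-10% chance per site
        child[mutate] = rng.integers(0, 64, size=mutate.sum())  # redraw mutated codons
        children.append(child)
    return children
```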
## 4 Estimated Results of QMLS and Comparison to Other Methods
### Estimated Results and Effectiveness of QMLS
For now, there is not much research that quantifies the advantage of QMLS, so we have to refer to results of machine learning simulation on traditional (classical) computers.
According to Alex Ouyang's article, "The current computational process for finding promising drug candidate molecules goes like this: most state-of-the-art computational models rely upon heavy candidate sampling coupled with methods like scoring, ranking, and fine-tuning to get the best "fit" between the ligand and the protein", and even this relatively inefficient method of finding compounds can lead to "90 percent of all drugs fail once they are tested in humans due to having no effects or too many side effects"[21].
Machine learning methods can address the accuracy problem, in the way Hannes Stark's EquiBind predicts molecular interactions: with a well-trained model, a drug's reaction can be predicted accurately and effectively, and since this is a prediction rather than a sampling simulation, it is less resource-intensive and thus faster. We therefore estimate that MLS can be very effective in the drug discovery process, and similarly for QMLS.
### Comparison of QMLS vs. MLS
The difference between QMLS and MLS is the way they are executed: QMLS being specific for running on
Figure 6: Example of Fitness of Compounds after Many Iterations.
quantum computers, which run on the principle of superposition.
According to the research of Valeria Saggio, "The quantum chip learns about 63% faster than a classical computer could." [24]. This speedup can dramatically shorten the drug discovery section of a drug development process and, when coupled with the improved accuracy of well-trained machine learning models, can reduce the time spent in pre-clinical and clinical trials, because fewer failed drugs make their way to those stages [25, 26]. The shorter duration of this process can reduce cost and make a drug more affordable, since less profit needs to be recovered for the development of the drug.
### Comparison of QMLS to Current Methods
QMLS leverages the machine learning speedup of quantum computing, and it can be much better optimized for computational efficiency and accuracy than the current methods that use molecular simulations. It has the potential to decrease the duration of the drug development process, and thus to cut the cost of the final drug that hits the market [27-29].
## 5 Discussion & Conclusion
### QMLS Overview
QMLS, or quantum-based machine learning simulation, is a cutting-edge approach to drug discovery that utilizes the power of quantum computing and machine learning algorithms to simulate and predict the behavior of complex molecular systems. According to a study by Smith et al. (2020), QMLS "combines the power of quantum computing to perform complex simulations with the ability of machine learning algorithms to analyze and predict the behavior of molecular systems". This approach allows for more efficient and accurate drug discovery, as it can predict the binding affinity of potential drug candidates, identify new drug targets, and predict the effects of drug-target interactions. A research by Chang et al. (2019) also highlighted that "QMLS can significantly reduce the time and cost associated with drug discovery, while also increasing the success rate of new drug candidates".
Another study by Kim et al. (2018) found that QMLS can "predict the binding affinity of potential drug candidates with high accuracy, reducing the need for costly and time-consuming experimental testing". Overall, QMLS has the potential to revolutionize the drug development process.
### QMLS Estimated Results Overview
Quantum Machine Learning Simulation (QMLS) is a theoretical framework that aims to use quantum computing to speed up the process of discovering new pharmaceutical drugs. The basic framework for QMLS includes filtering compounds through the use of algorithms and optimization of the compound finding process.
The comparison of compounds is done through quantum simulation and quantum machine learning, with the use of a two-pointer algorithm that has a time complexity of O(n + m) and negligible space complexity. In addition, the process of creating variants of a drug involves the use of a genetic algorithm and true random numbers with quantum properties. Overall, QMLS has the potential to revolutionize the way new drugs are discovered, but it is still in its early stages of development.
### Future of Drug R&D: QMLS
The drug development industry is facing increasing pressure to improve the efficiency and effectiveness of the drug discovery process. One current trend in the field is the growing use of computer-aided drug design, which utilizes various computational tools and techniques to predict the behavior of molecular systems.
QMLS, or quantum-based machine learning simulation, is poised to be the perfect choice for the future of drug R&D as it utilizes the power of quantum computing and machine learning algorithms to simulate and predict the behavior of complex molecular systems. According to a study by Jones et al. (2021), "QMLS is expected to significantly enhance the ability of researchers to identify new drug targets, predict drug-target interactions, and predict the binding affinity of potential drug candidates". Another research by Lee et al. (2020) found that "QMLS has the potential to revolutionize the drug development industry by providing a more efficient and effective way to discover new drugs".
The current technology used in the drug R&D field includes various computational tools and techniques such as computer-aided drug design, molecular dynamics simulations, and artificial intelligence-based approaches. These technologies are helpful in predicting the behavior of molecular systems and identifying new drug targets, but QMLS is expected to take it to the next level by providing more accurate and efficient simulations.
|
2301.04650
|
Geometry-biased Transformers for Novel View Synthesis
|
We tackle the task of synthesizing novel views of an object given a few input
images and associated camera viewpoints. Our work is inspired by recent
'geometry-free' approaches where multi-view images are encoded as a (global)
set-latent representation, which is then used to predict the color for
arbitrary query rays. While this representation yields (coarsely) accurate
images corresponding to novel viewpoints, the lack of geometric reasoning
limits the quality of these outputs. To overcome this limitation, we propose
'Geometry-biased Transformers' (GBTs) that incorporate geometric inductive
biases in the set-latent representation-based inference to encourage multi-view
geometric consistency. We induce the geometric bias by augmenting the
dot-product attention mechanism to also incorporate 3D distances between rays
associated with tokens as a learnable bias. We find that this, along with
camera-aware embeddings as input, allows our models to generate significantly
more accurate outputs. We validate our approach on the real-world CO3D dataset,
where we train our system over 10 categories and evaluate its view-synthesis
ability for novel objects as well as unseen categories. We empirically validate
the benefits of the proposed geometric biases and show that our approach
significantly improves over prior works.
|
Naveen Venkat, Mayank Agarwal, Maneesh Singh, Shubham Tulsiani
|
2023-01-11T18:59:56Z
|
http://arxiv.org/abs/2301.04650v1
|
# Geometry-biased Transformers for Novel View Synthesis
###### Abstract
We tackle the task of synthesizing novel views of an object given a few input images and associated camera viewpoints. Our work is inspired by recent 'geometry-free' approaches where multi-view images are encoded as a (global) set-latent representation, which is then used to predict the color for arbitrary query rays. While this representation yields (coarsely) accurate images corresponding to novel viewpoints, the lack of geometric reasoning limits the quality of these outputs. To overcome this limitation, we propose 'Geometry-biased Transformers' (GBTs) that incorporate geometric inductive biases in the set-latent representation-based inference to encourage multi-view geometric consistency. We induce the geometric bias by augmenting the dot-product attention mechanism to also incorporate 3D distances between rays associated with tokens as a learnable bias. We find that this, along with camera-aware embeddings as input, allows our models to generate significantly more accurate outputs. We validate our approach on the real-world CO3D dataset, where we train our system over 10 categories and evaluate its view-synthesis ability for novel objects as well as unseen categories. We empirically validate the benefits of the proposed geometric biases and show that our approach significantly improves over prior works.
+
Footnote †: * indicates equal contribution
## 1 Introduction
Given just a few images depicting an object, we humans can easily imagine its appearance from novel viewpoints. For instance, consider the first image of the hydrant shown in Figure 1 and imagine rotating it slightly anti-clockwise - we intuitively understand that this would move the small outlet towards the front and right. We can also imagine rotating the hydrant further and know that the (currently occluded) central outlet will eventually become visible on the left. These examples serve to highlight that this task of novel-view synthesis requires both reasoning about geometric transformations _e.g_. motion of the visible surfaces, as well as an understanding of the global structure _e.g_. occlusions and symmetries to allow for realistic extrapolations. In this work, we develop an approach that incorporates both these to synthesize accurate novel views given only a sparse set of images of a previously unseen object.
Recent advances in Neural Radiance Fields (NeRFs) [13] have led to numerous approaches that use these representations (and their variants) for obtaining remarkably detailed novel-view renderings. However, such methods typically optimize instance-specific representations using densely sampled multi-view observations, and cannot be directly leveraged for 3D inference from sparse input views.
To enable generalizable inference from a few views, recent methods seek to instead predict radiance fields using the image projections of a query 3D point as conditioning. While using such geometric reprojection constraints allows accurate predictions in the close vicinity of observed views, this purely local conditioning mechanism fails to capture any global context _e.g_. symmetries or correlated patterns. As a result, these approaches struggle to render views containing unobserved aspects or large viewpoint variations.
Our work is motivated by an alternate approach to generalizable view synthesis, where a geometry-free (global) scene representation is used to predict images from query viewpoints. Specifically, these methods form a set-latent representation from multiple input views and directly infer the color for a pixel for a query view (or equivalently a query ray) using attention-based mechanisms in the scene encoding and ray decoding process. Not only is this direct view synthesis more computationally efficient than volume rendering, but the set-latent representation also allows capturing global context as each ray can attend to all aspects of other views instead of just the projections of points along it. However, this 'geometry-free' design comes at the cost of precision - these methods cannot easily capture the details in input views, and while they can robustly capture the coarse structure, do not output high-quality renderings.
In this work, we develop mechanisms to inject geometric biases in these set-latent representation-based approaches. Specifically, we propose Geometry-biased Transformers (GBTs) which consist of a ray-distance-based bias in the attention mechanism in Transformer layers. We show that these help guide the scene encoding and ray decoding stages to pay attention to relevant context, thereby enabling more accurate view synthesis. We benchmark our approach using the CO3D dataset [18], which comprises challenging real-world captures across diverse categories. We show that our approach outperforms both projection-based radiance field prediction and set-latent representation-based view synthesis approaches, and also demonstrate our method's ability to generalize to unseen object categories.
## 2 Related Work
**Instance-specific 3D Representations.** Driven by the recent emergence of neural fields [13], a growing number of methods seek to accurately capture the details of a specific object or scene given multiple images. Leveraging either volumetric [1, 2, 5, 9, 13, 14, 16], implicit [17, 27, 31], mesh-based [8, 33], or hybrid [3, 7] representations, these methods learn instance-specific representations capable of synthesizing novel views. However, as these methods do not learn generic data-driven priors, they typically require densely sampled views to be able to infer geometrically consistent underlying representations and are incapable of _predicting_ beyond what they directly observe.
**Projection-guided Generalizable View Synthesis.** Closer to our goal, several methods have aimed to learn models capable of view-synthesis across instances. While initial attempts [22] used global-variable-conditioned neural fields, subsequent approaches [4, 28, 32, 24] obtained significant improvements by instead using features extracted via projection onto the context views. Reizenstein _et al_. [18] further demonstrated the benefits of learning the aggregation mechanisms across the features along a query ray, but the projection-guided features remained the fundamental building blocks. While these projection-based methods are effective at generating novel views by transforming the visible structures, they struggle to deal with large viewpoint changes (as the underlying geometry may be uncertain), and are fundamentally unable to generate plausible visual information not directly observed in the context views. We argue that this is because these methods lack the mechanisms to learn and utilize contexts globally when generating query views.
**Geometry-free View Synthesis.** To allow using global context for view synthesis, an alternate class of methods uses 'geometry-free' encodings to infer novel views. The initial learning-based methods [30, 23, 34] typically focused on novel-view prediction given a single image via global conditioning. Subsequent approaches [11, 15, 19] improved performance using different architectures _e.g_. Transformers [26], while also allowing for probabilistic view synthesis using VQ-VAEs [25] and VQ-GANs [6]. While this leads to detailed and realistic outputs, the renderings are not 3D-consistent due to stochastic sampling.
Our work is inspired by the recently proposed Scene Representation Transformer (SRT) [20], which uses a set-latent representation that encodes both patch-level and global scene context. This design engenders a fast, deterministic rendering pipeline that, unlike projection-based methods, furnishes plausible hallucinations in the invisible regions. However, these benefits come at the cost of detail - unlike the projection-based methods, this geometry-free approach is unable to capture precise details in the visible aspects. Motivated by this need to improve the detail, we propose mechanisms to inject geometric biases in this framework, and find that this significantly improves the performance while preserving global reasoning and efficiency.
## 3 Approach
We aim to render novel viewpoints of previously unseen objects from a few posed images. To achieve this goal, we design a rendering pipeline that reasons along the following two aspects: (i) **appearance** - _what is the likely appearance of the object from the queried viewpoint_, and (ii) **geometry** - _what geometrically-informed context can be derived from the configuration of the given input and query cameras?_
Prior methods address each question in isolation _e.g_. via
global latent representations [20, 29, 11, 22] that address (i) by learning object semantics, or, via reprojections [32, 18] that address (ii) by employing explicit geometric transformations. In contrast to prior works, our method jointly reasons along both these aspects. Concretely, we propose geometry-biased transformers that incorporate geometric inductive biases while learning set-latent representations that help capture global structures with superior quality.
Fig. 2 depicts the Geometry-biased Transformer (GBT) framework which has three components. First, a shared CNN backbone extracts patch-level features which are fused with the corresponding ray embeddings to derive local (pose-aware) features (Fig. 2a). Then, the flattened patch features and the associated rays are fed as input tokens to the GBT encoder that constructs a global set-latent representation via self-attention (Fig. 2b). The attention layers are biased to prioritize both the photometric and the geometric context. Finally, the GBT decoder converts target ray queries to pixel colors by attending to the set-latent representation (Fig. 2c). We now review the preliminary concepts before describing our approach in detail.
### Preliminaries
#### 3.1.1 Ray representations
The fundamental unit of geometric information in our approach is a ray which is used to compute the geometric similarity between two image regions. A naive choice for ray representation is \(\mathbf{r}=(\mathbf{o},\mathbf{d})\), where \(\mathbf{o}\in\mathbb{R}^{3}\) is the origin of the ray, and \(\mathbf{d}\in\mathbb{S}^{2}\) is the normalized ray direction.
In contrast, we use the 4 DoF Plucker coordinates [20, 10], \(\mathbf{r}=(\mathbf{d},\mathbf{m})\in\mathbb{R}^{6}\), where \(\mathbf{m}=\mathbf{o}\times\mathbf{d}\), that are invariant to the choice of the origin along the ray. Intuitively, this allows us to associate a single color (pixel RGB) to the entire ray, agnostic to its origin. In practice, this simplification mitigates overfitting to the camera origin during training.
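For illustration, a minimal sketch (our own PyTorch-style code, not the authors' released implementation) of converting a ray given by an origin and a direction into this Plucker form:

```python
import torch

def to_plucker(origins: torch.Tensor, directions: torch.Tensor) -> torch.Tensor:
    """Map rays (o, d) to 6D Plucker coordinates r = (d, m) with m = o x d.

    origins, directions: (..., 3) tensors. Since (o + t*d) x d = o x d, the moment m
    is unchanged when o slides along the ray, which is the origin-invariance noted above.
    """
    d = torch.nn.functional.normalize(directions, dim=-1)
    m = torch.cross(origins, d, dim=-1)
    return torch.cat([d, m], dim=-1)
```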
#### 3.1.2 Scene Representation Transformers
The overall framework of our approach is inspired by SRT [20] that proposes a transformer encoder-decoder network for novel view synthesis. Given a collection of posed images \(\{(\mathbf{I}_{i},\mathbf{p}_{i})\}_{i=1}^{V}\), where \(\mathbf{I}_{i}\in\mathbb{R}^{H\times W\times 3}\), \(\mathbf{p}_{i}\in\mathbb{R}^{3\times 4}\), and a query ray \(\mathbf{r}\), SRT computes the following:
\[\{\mathbf{z}_{p}\}_{p=1}^{V\times P}=F_{E}\circ F_{C}(\{\mathbf{I}_{i}, \mathbf{p}_{i}\}) \tag{1}\]
\[C(\mathbf{r})=F_{D}(\mathbf{r}\ |\ \{\mathbf{z}_{p}\}) \tag{2}\]
Here, the shared CNN backbone (\(F_{C}\)) extracts \(P\) patch-level features from each posed input image. These are aggregated into a set of flat patch embeddings and fed as input tokens to the transformer encoder (\(F_{E}\)). The encoder transforms input tokens into a set-latent scene representation \(\{\mathbf{z}_{p}\}\) via self-attention. To render a novel viewpoint, the decoder \(F_{D}\) queries for each ray \(\mathbf{r}\) pertaining to the target pixels and yields an RGB color by attending to the scene representation \(\{\mathbf{z}_{p}\}\).
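Schematically, Eqs. (1)-(2) amount to the following encode/decode interface (a minimal sketch with placeholder modules, not the released SRT code):

```python
import torch.nn as nn

class SetLatentViewSynthesis(nn.Module):
    """Encode posed images into a set-latent representation, then decode query rays to colors."""

    def __init__(self, f_c: nn.Module, f_e: nn.Module, f_d: nn.Module):
        super().__init__()
        self.f_c, self.f_e, self.f_d = f_c, f_e, f_d  # CNN backbone, encoder, decoder

    def forward(self, images, poses, query_rays):
        patch_tokens = self.f_c(images, poses)      # Eq. (1): flattened patch features
        scene_tokens = self.f_e(patch_tokens)       # set-latent scene representation {z_p}
        return self.f_d(query_rays, scene_tokens)   # Eq. (2): RGB color per query ray
```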
### Geometry-biased Transformer (GBT) Layer
The core reasoning module in a transformer is a multi-head attention layer that aggregates information from the right context for each query. In our work, we propose to extend this module by incorporating geometric reasoning.
Figure 2: **Learning novel view synthesis using Geometry-biased Transformers.** Best viewed in color. **a) Camera-fused patch embedding.** Each input image \(\mathbf{I}_{i}\) is processed using a shared CNN backbone \(F_{C}\) and the feature maps are fused with the corresponding input patch-ray embeddings (obtained via \(\mathbf{p}_{i}\)). **b) Geometry-biased scene encoding.** Our proposed Geometry-biased Transformer encoder \(F_{E}\) converts the set of patch-level feature tokens into a scene encoding via self-attention biased with ray distances. **c) Geometry-biased ray-decoding.** To decode pixels for a novel viewpoint, we construct ray queries that are decoded by a geometry-biased transformer decoder \(F_{D}\) by attending into the scene encoding. Finally, an MLP predicts the pixel color using the decoded query token.
**Base transformer layer.** Given the query \(\mathbf{q}\), key \(\{\mathbf{k}_{n}\}\), value \(\{\mathbf{v}_{n}\}\) tokens, a typical transformer layer computes:
\[\mathbf{q}^{\prime}=T(\mathbf{q},\{(\mathbf{k}_{n},\mathbf{v}_{n})\}) \tag{3}\]
which consists of a multi-head attention module, followed by normalization and linear projection. During the context aggregating step, each multi-head attention layer aggregates token values based on query-key similarity weights:
\[w_{n}=\mathrm{softmax}_{n}\Big{(}\ \frac{W_{q}\mathbf{q}\cdot W_{k}\mathbf{k}_{n }}{\eta}\ \Big{)} \tag{4}\]
**Incorporating ray distance as geometric bias.** In our use case, each query and context token pertains to some ray. For instance, all tokens passed to the encoder are patch embeddings that have associated patch rays (Fig. 2b). Likewise, we query the decoder using target pixel rays (Fig. 2c).
In such a scenario, we propose to bias the transformer's attention by encouraging similarity between rays that are closer to each other in 3D space. Specifically, the GBT layer couples the query and key tokens with the associated rays \((\mathbf{q},\mathbf{r}_{q}),\{(\mathbf{k}_{n},\mathbf{r}_{k_{n}})\}\) and performs the token transformation:
\[\mathbf{q}^{\prime}=GBT((\mathbf{q},\mathbf{r}_{q}),\{(\mathbf{k}_{n}, \mathbf{r}_{k_{n}},\mathbf{v}_{n})\}) \tag{5}\]
The attention layer is modified to account for the distance between \(\mathbf{r}_{q}=(\mathbf{d}_{q},\mathbf{m}_{q})\) and \(\mathbf{r}_{k_{n}}=(\mathbf{d}_{k_{n}},\mathbf{m}_{k_{n}})\):
\[w_{n}=\mathrm{softmax}\Big{(}\ \frac{W_{q}\mathbf{q}\cdot W_{k}\mathbf{k}_{n }}{\eta}-\gamma^{2}\ d(\mathbf{r}_{q},\mathbf{r}_{k_{n}})\ \Big{)} \tag{6}\]
where,
\[d(\mathbf{r}_{q},\mathbf{r}_{k_{n}})=\begin{cases}\frac{|\mathbf{d}_{q}\cdot\mathbf{m}_{k_{n}}+\mathbf{d}_{k_{n}}\cdot\mathbf{m}_{q}|}{||\mathbf{d}_{q}\times\mathbf{d}_{k_{n}}||^{2}},&\mathbf{d}_{q}\times\mathbf{d}_{k_{n}}\neq 0\\ \frac{||\mathbf{d}_{q}\times(\mathbf{m}_{q}-\mathbf{m}_{k_{n}}/s)||}{||\mathbf{d}_{q}||_{2}^{2}},&\mathbf{d}_{k_{n}}=s\mathbf{d}_{q},s\neq 0\end{cases} \tag{7}\]
and \(\gamma\) is a learnable parameter controlling the relative importance of geometric bias. This formulation explicitly accounts for both appearance (feature similarity between \(\mathbf{q}\) and \(\mathbf{k}_{n}\)), and geometry (distance between \(\mathbf{r}_{q}\) and \(\mathbf{r}_{k_{n}}\)). This attention mechanism is illustrated in Fig. 3. In practice, the distance bias results in faster convergence to the right context during training. While one can fix \(\gamma\) to some constant hyperparameter, we found improved results by learning \(\gamma\).
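The following sketch (our own simplified single-head PyTorch code, omitting the learned projections \(W_{q}\), \(W_{k}\)) illustrates Eqs. (6)-(7); it implements only the generic non-parallel branch of Eq. (7) and guards near-parallel ray pairs with a small epsilon instead of the exact parallel-case formula:

```python
import torch

def plucker_ray_distance(r_q: torch.Tensor, r_k: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Pairwise distance between rays in Plucker form (d, m); generic branch of Eq. (7).

    r_q: (Nq, 6), r_k: (Nk, 6) -> distance matrix of shape (Nq, Nk).
    """
    d_q, m_q = r_q[:, None, :3], r_q[:, None, 3:]
    d_k, m_k = r_k[None, :, :3], r_k[None, :, 3:]
    num = ((d_q * m_k).sum(-1) + (d_k * m_q).sum(-1)).abs()
    denom = torch.linalg.cross(d_q, d_k).norm(dim=-1).pow(2)
    return num / denom.clamp_min(eps)

def geometry_biased_attention(q, k, v, r_q, r_k, gamma, eta):
    """Eq. (6): weights = softmax(q.k / eta - gamma^2 * d(r_q, r_k)), then aggregate values."""
    logits = q @ k.transpose(-2, -1) / eta - gamma ** 2 * plucker_ray_distance(r_q, r_k)
    return logits.softmax(dim=-1) @ v
```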
### Learning Novel View Synthesis with GBTs
Given multiview images \(\{\mathbf{I}_{i}\in\mathbb{R}^{H\times W\times 3}\}_{i=1}^{V}\) with paired camera poses \(\{\mathbf{p}_{i}\in\mathbb{R}^{3\times 4}\}_{i=1}^{V}\), we wish to render a target viewpoint described by the camera pose \(\mathbf{p}_{q}\in\mathbb{R}^{3\times 4}\). Our network, as illustrated in Fig. 2, first processes the posed multiview images using a CNN \(F_{C}\) to extract patch-level latent features. We then use GBT encoder \(F_{E}\) to extract a scene encoding, and GBT decoder \(F_{D}\) to yield pixel colors given target ray queries.
**a) Camera-fused patch embedding (\(F_{C}\)).** We process each context image \(\mathbf{I}_{i}\) through a ResNet18 backbone to obtain patch-level image feature grid. Subsequently, each patch feature is concatenated with the corresponding ray embedding (Fig. 2a) as follows:
\[[\mathbf{f}_{c}]_{i}^{k}=\mathbf{W}\Big{(}[F_{C}(\mathbf{I}_{i})]^{k}\oplus h ((\mathbf{d}_{i}^{k},\mathbf{m}_{i}^{k}))\Big{)} \tag{8}\]
where \(h(\cdot)\) denotes harmonic embedding [13], \((\mathbf{d}_{i}^{k},\mathbf{m}_{i}^{k})\) denotes the Plucker coordinates for \(k^{\text{th}}\) patch ray in the \(i^{\text{th}}\) input image, and \(\oplus\) denotes concatenation. We define each patch ray as the ray passing through the center of the receptive field of the corresponding cell in the feature grid. The concatenated features are projected using a linear layer \(\mathbf{W}\).
While SRT fuses input images with per-pixel rays before the CNN, we fuse the CNN output feature grid with per-patch rays (observe different inputs to \(F_{C}\) in Eq. 1 and Eq. 8). This late fusion enables us to leverage transfer learning using pretrained image backbones. Furthermore, since the patch ray embeddings implicitly capture the positional information for each patch, we do not require 2D positional encoding or camera ID embedding after the CNN (unlike SRT), thus simplifying the architecture significantly.
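A minimal sketch of this late fusion (Eq. 8) might look as follows; the frequencies follow the architectural details given below, while the sin/cos layout of the harmonic embedding and the tensor shapes are our own assumptions:

```python
import math
import torch
import torch.nn as nn

class HarmonicEmbedding(nn.Module):
    """h(.): sin/cos features at 15 frequencies 2^-6*pi, ..., 2^8*pi (assumed layout)."""
    def __init__(self, n_freqs: int = 15):
        super().__init__()
        freqs = 2.0 ** torch.arange(-6, -6 + n_freqs, dtype=torch.float32) * math.pi
        self.register_buffer("freqs", freqs)

    def forward(self, x):                       # x: (..., C)
        xf = x[..., None] * self.freqs          # (..., C, n_freqs)
        return torch.cat([xf.sin(), xf.cos()], dim=-1).flatten(-2)

class CameraFusedPatchEmbedding(nn.Module):
    """Concatenate CNN patch features with embedded Plucker patch rays, then project (Eq. 8)."""
    def __init__(self, feat_dim: int, out_dim: int, n_freqs: int = 15):
        super().__init__()
        self.harmonic = HarmonicEmbedding(n_freqs)
        self.proj = nn.Linear(feat_dim + 6 * 2 * n_freqs, out_dim)

    def forward(self, patch_feats, patch_rays):
        # patch_feats: (V, P, feat_dim); patch_rays: (V, P, 6) Plucker coordinates
        return self.proj(torch.cat([patch_feats, self.harmonic(patch_rays)], dim=-1))
```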
Figure 3: **An illustration of attention within GBT layer.** Given the query and key tokens \(\mathbf{q}\), \(\mathbf{k}_{n}\), along with the associated rays \(\mathbf{r}_{q}\), \(\mathbf{r}_{k_{n}}\), the attention within GBT incorporates two components: (i) a dot product similarity between features, and, (ii) the geometric distance bias computed between the rays. Refer to Eq. 6 for the exact computation. Best viewed in color.
**b) Geometry-biased scene encoding (\(F_{E}\)).** Given local patch features, we employ GBT encoder layers to augment them with the global scene context through self-attention. Specifically, we compute \(\mathbf{f}_{e}=F_{E}(\mathbf{f}_{c},\{(\mathbf{d}_{i}^{k},\mathbf{m}_{i}^{k})\})\), where \(F_{E}\) contains a stack of GBT encoder layers as depicted in Fig. 2b. The query, key, and value tokens for the encoder layers are derived from the patch features \([\mathbf{f}_{c}]_{i}^{k}\) and their corresponding patch rays \((\mathbf{d}_{i}^{k},\mathbf{m}_{i}^{k})\). For each transformer encoder layer, we learn a separate \(\gamma\) parameter.
Finally, the encoder outputs a global scene encoding \(\{[\mathbf{f}_{e}]_{i}^{k}\}\) that characterizes the appearance and the geometry of the object as observed from the multiple input views. Note that this extension of the set-latent representation [20] incorporates both appearance and geometric priors.
**c) Geometry-biased ray decoding (\(F_{D}\)).** To render a novel viewpoint given camera pose \(\mathbf{p}_{q}\), we construct an \(H\times W\) grid of query rays \(\mathbf{r}_{q}=(\mathbf{d}_{q},\mathbf{m}_{q})\), with one ray per query pixel. We then employ a stack of GBT decoder layers \(F_{D}\) that decodes each query ray independently by aggregating meaningful context via cross-attention (Fig. 2c). Specifically, the query tokens for the multihead attention pertain to the query ray embeddings \(h(\mathbf{r}_{q})\), while the keys and values comprise the global scene encoding tokens \(\{[\mathbf{f}_{e}]_{i}^{k}\}\) along with the patch rays. The transformed query embeddings are processed by an MLP to predict the pixel color. Similar to \(F_{E}\), we learn a separate parameter \(\gamma\) for each GBT decoder layer in \(F_{D}\).
**Architectural details.** We use a ResNet18 (ImageNet initialized) up to the first 3 blocks as \(F_{C}\). The images are resized to \(H\times W=256\times 256\) and \(F_{C}\) outputs a \(16\times 16\) feature grid. We use 8 GBT encoder layers and 4 GBT decoder layers, wherein each transformer contains 12 heads for multi-head attention with _gelu_ activation. For the harmonic embeddings \(h\), we use \(15\) frequencies \(\{2^{-6}\pi,\dots,2^{8}\pi\}\). Since we do not have access to a consistent world coordinate frame across scenes, we choose an arbitrary input view as identity [20, 32]. All other cameras are represented relative to the identity view. See Appendix C for more details.
**Training and Inference.** During training, we encode \(V=3\) posed input views and query the decoder for \(Q=7168\) randomly sampled rays for a given target pose \(\mathbf{p}_{q}\). The pixel color is supervised using an L2 reconstruction loss. The model is trained with the Adam optimizer at a learning rate of \(10^{-5}\) until loss convergence. At inference, we encode the context views once and decode a batch of \(H\times W\) rays for each query view in a single forward pass. This results in a fast rendering time. See Appendix D for more details.
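A sketch of a single training step under these settings (the model and batch interfaces are hypothetical; only the ray sampling budget, the L2 loss, and the Adam settings follow the text):

```python
import torch

def training_step(model, batch, optimizer, num_query_rays: int = 7168):
    """One optimization step on a single target view."""
    ctx_imgs, ctx_poses, tgt_rays, tgt_rgb = batch   # tgt_rays: (H*W, 6), tgt_rgb: (H*W, 3)
    idx = torch.randperm(tgt_rays.shape[0])[:num_query_rays]
    pred_rgb = model(ctx_imgs, ctx_poses, tgt_rays[idx])
    loss = torch.nn.functional.mse_loss(pred_rgb, tgt_rgb[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()

# e.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```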
**Baselines.** We benchmark GBT against three state-of-the-art methods:
- _pixelNeRF_[32] which is a representative of projection-guided methods for generalizable view synthesis. Similar to our setting, we train a single category-agnostic pixelNeRF model on 10 categories from the CO3Dv2 dataset.
- _NerFormer_[18] which uses attention-based mechanisms to aggregate projected features along a query ray. We utilize (category-specific) models provided by the authors. 1
- _ViewFormer_[11] which uses a two-stage 'geometry-free' architecture to first encode the input images into a compact representation, and then uses a transformer model for view synthesis. For evaluation, we use the co3d-10cat model provided by the authors.
Footnote 1: While we evaluated per-category models, the NerFormer authors conveyed this performance is similar to a cross-category model.
Additionally, we compare against another variant of our approach, where we replace the geometry-biased transformer layers with regular transformer layers (equivalently, set \(\gamma=0\) during training and inference). We refer to this as GBT-nb (no bias) in further discussion. GBT-nb is an extension of SRT [20] in which we use the Plucker-coordinate representation of rays and perform late camera fusion in the feature extractor.
**Evaluation Metrics.** To evaluate reconstruction quality, we measure the peak signal-to-noise ratio (PSNR) and the perceptual similarity metric (LPIPS). For each category, we select 10 scenes from the dev set for evaluation. We randomly sample \(V\) context views and 32 query views for each scene and report the average metrics computed over these query views. We set appropriate seeds such that the context and query views are consistent across all methods.
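For reference, PSNR can be computed as below; LPIPS requires a learned network (e.g. the lpips package) and is omitted from this sketch:

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio for images with values in [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```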
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Hydrant**} & \multicolumn{2}{c}{**Teddybear**} \\ \cline{2-5} & PSNR (\(\uparrow\)) & LPIPS (\(\downarrow\)) & PSNR (\(\uparrow\)) & LPIPS (\(\downarrow\)) \\ \hline SRT* & 19.63 & 0.23 & 19.48 & 0.32 \\ GBT-nb & 21.30 & 0.20 & 19.32 & 0.31 \\ GBT-fb & 23.93 & **0.17** & 20.99 & 0.28 \\ GBT & **24.22** & **0.17** & **21.45** & **0.26** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablative analysis.** We train a separate category-specific model from scratch under each setting. The models are evaluated on the held out objects under consistent settings.
Figure 4: **Qualitative results on heldout objects from training categories.** For each object, we consider \(V=3\) input views and compare the reconstruction quality of each method on 2 other query views. Best viewed in color.
### Results
**Novel view synthesis for unseen objects.** Table 1 demonstrates the efficacy of our method in synthesizing novel views for previously unseen objects. GBT consistently outperforms other methods in all categories in terms of PSNR. With the exception of a few categories, we also achieve superior LPIPS compared to other baselines.
For categories such as bench, hydrant, etc., we attribute ViewFormer's higher perceptual quality to its use of a 2D-only prediction model, which comes at the cost of multi-view consistent results. For instance, in Fig. 4, ViewFormer's prediction for the donut is plausibly similar to some donut; however, it lacks consistency with the corresponding ground-truth query view. Also, in cases where the query view is not visible in any of the input views (ball, top-right), pixelNeRF and NerFormer - which rely solely on projection-based features from input images - suffer from poor results, while our method is capable of hallucinating these unseen regions.
Table 2 analyses the performance of all methods with a variable number of context views. While GBT is trained with a fixed number of input views (\(V=3\)), it is capable of generalizing across different input-view settings. We observe a higher performance gain under fewer context views (2-3). However, as the number of input views increases, pixelNeRF becomes more competitive.
**Generalization to unseen categories.** To investigate whether our model learns generic 3D priors and can infer global context from given multi-view images, we test its ability to generalize to previously unseen categories. In Table 3 we benchmark our method by evaluating over 5 held out categories. We empirically find that GBT demonstrates better generalizability compared to baselines, and also observe this in the qualitative predictions in Figure 5.
### Analysis
**Effect of Viewpoint Distance in Prediction Accuracy.** In Fig. 6, we analyze view synthesis accuracy as a function of distance from context views. In particular, we use 80 randomly sampled sequences from across categories with 200 frames each, set the \(50^{th},100^{th},150^{th}\) views as context, and evaluate the average novel view synthesis accuracy across indices. We find that all approaches peak around the observed frames, but our set-latent representation based methods (GBT, GBT-nb) perform significantly better for query views dissimilar from the context views. This corroborates our intuition that a global set-latent representation is essential for reasoning in the sparse-view setup.
**Ablative analysis.** We investigate the importance of the design choices made in GBT, by ablating individual components and analysing performance. First, we analyze the effect of learnable geometric bias by fixing \(\gamma=1\) (GBT-fb) during the training process. Next, we remove the geometric bias component (GBT-nb); equivalently \(\gamma=0\). Finally, we replace Plucker coordinates for ray representation with \(\mathbf{r}=(\mathbf{o},\mathbf{d})\). We term this trimmed version of GBT as SRT* (variant of SRT with late camera fusion).
For each ablation (see Table 4), we train a category-specific model from scratch and evaluate results on held-out objects. From Table 4, we see that learnable \(\gamma\) yields some benefit over fixed \(\gamma=1\). However, removing geometry altogether results in a considerable drop in performance. Also, the choice of Plucker coordinates as ray representations improves the predictions in general.
**Robustness to camera noise.** As the use of the geometric bias requires known camera calibration, we study the effect
Figure 5: **Qualitative results on heldout categories.** On each row we visualize the rendered views obtained from GBT (right) given \(V=3\) input views (left). Note that the model has never seen these categories of objects during training.
Figure 6: **Effect of viewpoint distance in prediction accuracy.** Given 200 frames, we set the \(50^{th},100^{th},150^{th}\) frame as the input views, and evaluate the performance of novel view synthesis over all other views. While the prior methods show accurate results close to the input views, our approach (GBT) consistently outperforms them in other views.
of noisy cameras on novel view synthesis. Following [12, 20], we synthetically perturb input camera poses to various degrees and analyze the effect of noise during inference (for models trained without any camera noise during training).
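One possible way to apply such synthetic perturbations is sketched below (a hypothetical noise model: Gaussian translation noise plus a random axis-angle rotation of magnitude proportional to \(\sigma\); the exact scheme follows the cited works):

```python
import torch

def perturb_pose(pose: torch.Tensor, sigma: float) -> torch.Tensor:
    """Perturb a 3x4 camera pose [R | t] with synthetic noise of scale sigma."""
    R, t = pose[:, :3], pose[:, 3]
    t_noisy = t + sigma * torch.randn(3)
    axis = torch.nn.functional.normalize(torch.randn(3), dim=0)
    angle = sigma * torch.randn(())
    ax, ay, az = axis.tolist()
    K = torch.tensor([[0.0, -az, ay], [az, 0.0, -ax], [-ay, ax, 0.0]])
    R_noise = torch.eye(3) + torch.sin(angle) * K + (1 - torch.cos(angle)) * (K @ K)  # Rodrigues' formula
    return torch.cat([R_noise @ R, t_noisy[:, None]], dim=1)
```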
We report the results in Table 5, and see that performance degrades across all methods with camera noise. Although GBT-nb degrades more gracefully, the performance of GBT is better until a large amount of noise is added (about 10cm camera motion for a camera unit distance away from an object, and 9 degree rotation). Fig. 7 demonstrates these observations visually.
**Visualizing attention.** In Fig. 8, we visualize attention heatmaps for a particular query ray highlighted in green. In the absence of geometric bias (GBT-nb), we observe a diffused attention map over the relevant context, which yields blurrier results. On adding geometric bias (GBT), we observe more concentrated attention toward the geometrically valid regions, resulting in more accurate details.
## 5 Discussion
Our work introduced a simple but effective mechanism for adding geometric inductive biases in set-latent representation based networks. In particular, we demonstrated that for the task of novel view synthesis given few input views, this allows Transformer-based networks to better leverage geometric associations while preserving their ability to reason about global structure. While our approach led to substantial improvements over prior works, there are several unaddressed challenges. First, unlike projection-based methods, the set-latent representation methods (including ours) struggle to predict precise details, and it remains an open question how one can augment such methods to overcome this. Moreover, the use of geometric information in our approach presumes access to (approximate) camera viewpoints for inference, and this may limit its applicability to in-the-wild settings. While our work focused on the task of view synthesis, we believe that the geometry-biasing mechanisms proposed would be relevant for other tasks where a moving camera is observing a common scene (video segmentation, detection).
**Acknowledgements.** We thank Zhizhuo Zhou, Jason Zhang, Yufei Ye, Ambareesh Revanur, Yehonathan Litman, and Anish Madan for helpful discussions and feedback. We also thank David Novotny and Jonas Kulhanek for sharing outputs of their work and helpful correspondence. This project was supported in part by a Verisk AI Faculty Award.
Figure 8: **Attention visualization.** For the query pixel marked in green, we visualize the attention over the input patches for the 1st and the 4th decoder layer. We compare the attention maps of GBT-nb (top) and GBT (bottom), wherein GBT is observed to yield sharper results. See Sec. 4.3.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{\(\sigma=0\)} & \multicolumn{2}{c}{\(\sigma=0.02\)} & \multicolumn{2}{c}{\(\sigma=0.05\)} & \multicolumn{2}{c}{\(\sigma=0.1\)} \\ \cline{2-9} & PSNR & LPIPS & PSNR & LPIPS & PSNR & LPIPS & PSNR & LPIPS \\ \hline pixelNeRF & 20.43 & 0.26 & 20.06 & 0.26 & 19.20 & 0.27 & 18.09 & 0.29 \\ \hline GBT-nb & 21.32 & 0.24 & 21.26 & 0.24 & 20.85 & 0.24 & **19.88** & **0.25** \\ GBT & **22.76** & **0.22** & **22.40** & **0.22** & **21.43** & **0.23** & 19.84 & **0.25** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Evaluation of noisy cameras.** All models are trained on 10 categories and evaluated on the Hydrant category.
Figure 7: **Effect of camera noise.** Given the 3 input views with noisy camera poses (increasing left to right), we visualize the predictions for a common query view across three methods (rows).
|
2306.00491
|
Eigenvalue Variations of the Neumann Laplace Operator Due to Perturbed
Boundary Conditions
|
This work considers the Neumann eigenvalue problem for the weighted Laplacian
on a Riemannian manifold $(M,g,\partial M)$ under the singular perturbation.
This perturbation involves the imposition of vanishing Dirichlet boundary
conditions on a small portion of the boundary. We derive a sharp asymptotic of
the perturbed eigenvalues, as the Dirichlet part shrinks to a point $x^*\in
\partial M$, in terms of the spectral parameters of the unperturbed system.
This asymptotic demonstrates the impact of the geometric properties of the
manifold at a specific point $x^*$. Furthermore, it becomes evident that the
shape of the Dirichlet region holds significance as it impacts the first terms
of the asymptotic. A crucial part of this work is the construction of the
singularity structure of the restricted Neumann Green's function which may be
of independent interest. We employ a fusion of layer potential techniques and
pseudo-differential operators during this work.
|
Medet Nursultanov, William Trad, Justin Tzou, Leo Tzou
|
2023-06-01T09:41:57Z
|
http://arxiv.org/abs/2306.00491v1
|
# Eigenvalue variations of the Neumann Laplace operator due to perturbed boundary conditions
###### Abstract.
This work considers the Neumann eigenvalue problem for the weighted Laplacian on a Riemannian manifold \((M,g,\partial M)\) under the singular perturbation. This perturbation involves the imposition of vanishing Dirichlet boundary conditions on a small portion of the boundary. We derive a sharp asymptotic of the perturbed eigenvalues, as the Dirichlet part shrinks to a point \(x^{*}\in\partial M\), in terms of the spectral parameters of the unperturbed system. This asymptotic demonstrates the impact of the geometric properties of the manifold at a specific point \(x^{*}\). Furthermore, it becomes evident that the shape of the Dirichlet region holds significance as it impacts the first terms of the asymptotic. A crucial part of this work is the construction of the singularity structure of the restricted Neumann Green's function which may be of independent interest. We employ a fusion of layer potential techniques and pseudo-differential operators during this work.
Key words and phrases: Eigenvalues, Neumann Laplacian, singular perturbation. 2010 Mathematics Subject Classification: Primary 35J25; Secondary 35P20, 35B25.
###### Contents
* 1 Introduction
* 1.1 Previous results
* 1.2 Main results
* 1.3 Outline of the paper
* 2 Preliminaries
* 2.1 Formulation of the problem
* 3 Neumann Green's Function
* 3.1 Singularity in the spectral parameter
* 3.2 Singularities along the diagonal
* 3.3 Schwartz kernel estimates
* 4 Proof of the main result
* 5 Acknowledgement
## 1. Introduction
Comprehending the disturbances within physical fields caused by inhomogeneities in a known environment is crucial for various purposes. It helps to understand the robustness of the body's behaviour under small perturbations of its constituent material; see [3, 11] for more such applications. Mathematically, this task involves performing an asymptotic analysis of the solution of the partial differential equation, when defining domain or properties of the material are slightly perturbed. Many examples of this general question have been studied, including investigations into the
conductivity equation [6, 15, 16] and other areas, such as linearized elasticity [5, 10], Maxwell equations [7, 27], and cellular biology [14, 29].
Here, we investigate the eigenvalues of the weighted Laplace operator under mixed Dirichlet-Neumann boundary conditions when the Dirichlet region disappears. More precisely, we formulate the problem as follows. Let \((M,g)\) be a compact, connected, orientable Riemannian manifold with smooth non-empty boundary \(\partial M\). Consider the eigenvalue problem
\[-\Delta_{g}u-g(F,\nabla_{g}u)=\lambda u,\qquad u\big{|}_{\Gamma_{\varepsilon}} =0,\qquad\partial_{\nu}u\big{|}_{\partial M\setminus\Gamma_{\varepsilon}}=0, \tag{1.1}\]
where \(\Delta_{g}\) is the negative Laplace-Beltrami operator, \(\nabla_{g}\) is the gradient, \(\nu\) is the outward pointing normal vector field, \(F\) is a force field, and \(\Gamma_{\varepsilon}\subset\partial M\) is a connected piece of boundary of size \(\varepsilon>0\). We denote the corresponding operator by \(-\Delta_{Mix,\varepsilon}^{F}\). The objective is to derive an asymptotic of the eigenvalues \(\{\lambda_{j,\varepsilon}\}_{j\in\mathbb{N}}\) of \(-\Delta_{Mix,\varepsilon}^{F}\) as \(\varepsilon\) tends to zero, that is, as \(\Gamma_{\varepsilon}\) shrinks in a suitable sense that will be specified later. We will do this in terms of the spectral parameters of the unperturbed operator, which is the weighted Neumann Laplacian, denoted by \(-\Delta_{N}^{F}\).
The problem at hand is closely related to the "narrow escape problem," which has gained significant attention in recent years due to its relevance in cellular biology [14, 29]. In this scenario, \(M\) denotes a cavity with a reflecting boundary, except for a small absorbing window \(\Gamma_{\varepsilon}\). The particles in \(M\) are modelled as Brownian motions that exit only through the region \(\Gamma_{\varepsilon}\). The mean first-passage time, which represents the expected duration a particle will wander before escaping, is a crucial metric in this context. The narrow escape problem concerns the asymptotic behaviour of the mean first-passage time as the size of the window \(\Gamma_{\varepsilon}\) tends towards zero.
### Previous results
The investigation of the behaviour of eigenvalues of elliptic boundary value problems under the singular perturbation of boundary conditions has a long history, see for instance [1, 2, 12, 13, 17, 20, 21, 22, 23, 24, 25, 26, 39]. Detailed analysis of the two-dimensional planar domain has been performed in [24], where the author provided the full asymptotic expansion of the perturbed eigenvalues. Moreover, a complete pointwise expansion of the perturbed eigenfunctions is provided as well.
In [17], the perturbed eigenvalues in a three-dimensional Euclidean domain with a smooth boundary were studied. The author derived the asymptotic behaviour of the perturbed eigenvalues up to an unspecified \(o(\varepsilon)\) term:
\[\lambda_{j,\varepsilon}=\lambda_{j}+4\pi|u_{j}(x^{*})|^{2}c\varepsilon+o( \varepsilon),\]
where \(\{\lambda_{j}\}_{j\in\mathbb{N}}\) and \(\{u_{j}\}_{j\in\mathbb{N}}\) are the unperturbed eigenvalues and the corresponding normalized eigenfunctions, and \(c\) is a constant depending on the geometry of \(\Gamma_{\varepsilon}\).
The problem of perturbed eigenvalues in a domain with a Lipschitz boundary in the Euclidean space was examined by the authors in [21]. They obtained the asymptotic behaviour of the perturbed eigenvalues, expressed in terms of the unperturbed eigenvalues and the relative Sobolev \(u_{j}\)-capacity of \(\Gamma_{\varepsilon}\):
\[\lambda_{j,\varepsilon}=\lambda_{j}+\operatorname{Cap}_{M}(\Gamma_{ \varepsilon},u_{j})+o\left(\operatorname{Cap}_{M}(\Gamma_{\varepsilon},u_{j}) \right).\]
The case of a two-dimensional planar domain in the presence of a force field was considered in [2]. The authors used layer potential techniques to derive the asymptotic expansion
\[\lambda_{j,\varepsilon}=\lambda_{j}+\pi|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}\log \varepsilon+O(|\log\varepsilon|^{-2}),\]
where \(\phi\) is a potential, that is \(F=\nabla\phi\).
Additionally, we refer to the studies [18, 19, 31, 38, 42], which employed the method of matched asymptotic expansions.
### Main results
Despite the large number of works on this topic, there are still many questions regarding more general geometries. The present work is devoted to answering such questions. In our setting, the Dirichlet region, \(\Gamma_{\varepsilon,a}\), is considered to be a small geodesic ellipse centred at \(x^{*}\in\partial M\), with eccentricity \(\sqrt{1-a^{2}}\) and size \(\varepsilon\to 0\) (to be made precise later). Moreover, the force field is given by a potential \(\phi\) that is smooth up to the boundary, that is, \(F=\nabla_{g}\phi\). Our main objective is to investigate how the geometric characteristics of the manifold \(M\) at a specific point \(x^{*}\) impact the asymptotic expansion. We use a combination of the methods used in [2] and [37]: the layer potential technique and microlocal analysis.
In our current analysis, it is crucial to understand the singular structure of the Neumann Green function, which is represented by the following equation:
\[\begin{cases}\Delta_{g}G_{M}^{\omega}(x,y)-\operatorname{div}_{g}(F(y)G_{M}^ {\omega}(x,y))+\omega^{2}G_{M}^{\omega}(x,y)=-\delta_{x}(y),\\ \partial_{\nu}G_{M}^{\omega}(x,y)-g_{y}(F(y),\nu)G_{M}^{\omega}(x,y)\big{|}_{ y\in\partial M}=0.\end{cases}\]
where \(\omega^{2}\) is an element of the resolvent set of the eigenvalue problem (1.1). Let us consider a formal restriction of \(G_{M}^{\omega}\) to the boundary \(\partial M\). We denote this by the symbol \(G_{\partial M}^{\omega}\). For an exact definition of this restriction, refer to Section 3. We obtain the singularity structure of \(G_{\partial M}^{\omega}\) near the diagonal and in the neighbourhood of an eigenvalue of (1.1), when \(\omega^{2}\) approaches it. To state it, let us introduce the necessary notation. For \(x\), \(y\in\partial M\), let \(H(x)\) denote the mean curvature of the boundary at \(x\), \(d_{g}(x,y)\) the geodesic distance given by metric \(g\), \(d_{h}(x,y)\) the geodesic distance given by induced metric \(h\) on the boundary \(\partial M\), and
\[\Pi_{x}(V):=\Pi(V,V),\qquad V\in T_{x}\partial M,\]
the scalar second fundamental form (see pages 235 and 381 of [33] for definitions).
**Proposition 1.1**.: _Let \((M,g,\partial M)\) be a compact connected orientable Riemannian manifold of dimension three with a non-empty smooth boundary. Let \(\lambda_{j}\) be a simple eigenvalue of \(-\Delta_{N}^{F}\) and \(V_{j}\) be a neighbourhood of \(\lambda_{j}\) which does not contain any other eigenvalue of \(-\Delta_{N}^{F}\). Then there exists a neighbourhood of_
\[\operatorname{Diag}:=\{(x,x)\in\partial M\times\partial M\}\]
_where the singularity structure of \(G_{\partial M}^{\omega}\) is given by:_
\[\begin{split} G_{\partial M}^{\omega}(x,y)&=\frac{ 1}{2\pi}d_{g}(x,y)^{-1}-\frac{H(x)}{4\pi}\log d_{h}(x,y)+\frac{g_{x}(F,\nu)}{4 \pi}\log d_{h}(x,y)\\ &+\frac{1}{16\pi}\left(\Pi_{x}\left(\frac{\exp_{x}^{-1}(y)}{|\exp _{x}^{-1}(y)|_{h}}\right)-\Pi_{x}\left(\frac{\star\exp_{x}^{-1}(y)}{|\exp_{x}^ {-1}(y)|_{h}}\right)\right)\\ &+\frac{1}{4\pi}h_{x}\left(F^{||}(x),\frac{\exp_{x}^{-1}(y)}{| \exp_{x}^{-1}(y)|_{h}}\right)\\ &+\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}-\omega^{2}}e^{\phi(y)}+R_ {\partial M}^{\omega}(x,y),\end{split} \tag{1.2}\]
_where \(\omega^{2}\in V_{j}\setminus\{\lambda_{j}\}\), \(R_{\partial M}^{\omega}(x,y)\in C^{0,\alpha}(\partial M\times\partial M)\) for \(\alpha\in(0,1)\), \(F^{||}\) denotes the tangential component of \(F\) and \(\star\) denotes the Hodge star operator._
In deriving the singularity structure, we employed a standard pseudo-differential parametrix construction as described in [36, 37] by observing that \(G^{\omega}_{\partial M}\) is an approximate inverse to a Dirichlet-to-Neumann map. To determine the singularity with respect to \(\omega\) near the eigenvalues of \(-\Delta_{N}^{F}\), we have used an approach similar to that of [4]. Using the aforementioned proposition, we derive an asymptotic expression for the eigenvalues \(\lambda_{j,\varepsilon}\) as \(\varepsilon\to 0\). For the sake of clarity, we begin by presenting the main result for \(\Gamma_{\varepsilon,a}\) being a geodesic ball, that is \(a=1\):
**Theorem 1.2**.: _Let \((M,g,\partial M)\) be a compact connected orientable Riemannian manifold of dimension three with a non-empty smooth boundary. Fix \(x^{*}\in\partial M\) and let \(\Gamma_{\varepsilon}\) be the boundary geodesic ball centred at \(x^{*}\) of geodesic radius \(\varepsilon>0\). Assume that \(F=\nabla_{g}\phi\) for a potential \(\phi\) smooth up to the boundary. Let \(\{\lambda_{j,\varepsilon}\}_{j\in\mathbb{N}}\) be the eigenvalues of \(-\Delta_{Mix,\varepsilon}^{F}\). If \(\lambda_{j}\) is a simple eigenvalue of \(-\Delta_{N}^{F}\) and \(u_{j}\) is the corresponding eigenfunction normalized in \(L^{2}(M,e^{\phi}d\mu_{g})\) (weighted \(L^{2}\) space with a weight \(e^{\phi}\)), then_
\[\lambda_{j,\varepsilon}-\lambda_{j}=A\varepsilon+B\varepsilon^{2}\log \varepsilon+C\varepsilon^{2}+O(\varepsilon^{3}\log^{2}\varepsilon),\]
_where_
\[A =4|u_{j}(x^{*})|^{2}e^{\phi(x^{*})},\] \[B =4\pi|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}(H(x^{*})-\partial_{\nu} \phi(x^{*})),\] \[C =|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}\left(\frac{8\log 2-6}{\pi}(H(x^{ *})-\partial_{\nu}\phi(x^{*}))-16R^{\lambda_{j}}_{\partial M}(x^{*},x^{*}) \right).\]
_Here, \(H\) is the mean curvature of the boundary, \(R^{\lambda_{j}}_{\partial M}(x^{*},x^{*})\) is the evaluation at \((x,y)=(x^{*},x^{*})\) of the kernel \(R^{\lambda_{j}}_{\partial M}(x,y)\) in Proposition 1.1._
Theorem 1.2 does not realize the full power of Proposition 1.1, as it does not see the inhomogeneity of the local geometry at \(x^{*}\); only the mean curvature shows up. This limitation arises from the fact that the Dirichlet regions are specifically geodesic balls. However, by considering geodesic ellipses instead of geodesic balls, we observe that the second-fundamental-form term in Proposition 1.1 contributes an asymptotic term involving the difference of the principal curvatures. The ellipse we consider is defined as follows. Let \(E_{1}(x^{*})\), \(E_{2}(x^{*})\in T_{x^{*}}\partial M\) be the unit eigenvectors of the shape operator at \(x^{*}\) corresponding respectively to the principal curvatures \(\kappa_{1}(x^{*})\) and \(\kappa_{2}(x^{*})\). For \(a\in(0,1]\) fixed, we set
\[\Gamma_{\varepsilon,a}:=\{\exp_{x^{*};h}(\varepsilon t_{1}E_{1}+\varepsilon t _{2}E_{2})\;|\;t_{1}^{2}+a^{-2}t_{2}^{2}\leq 1\}. \tag{1.3}\]
Now, we are ready to state the main result:
**Theorem 1.3**.: _Let \((M,g,\partial M)\) be a compact connected orientable Riemannian manifold of dimension three with a non-empty smooth boundary. Fix \(x^{*}\in\partial M\) and let \(\Gamma_{\varepsilon,a}\) be the boundary geodesic ellipse given by (1.3). Assume that \(F=\nabla_{g}\phi\) for a potential \(\phi\) smooth up to the boundary. Let \(\{\lambda_{j,\varepsilon}\}_{j\in\mathbb{N}}\) be the eigenvalues of \(-\Delta_{Mix,\varepsilon}^{F}\). If \(\lambda_{j}\) is a simple eigenvalue of \(-\Delta_{N}^{F}\) and \(u_{j}\) is the corresponding eigenfunction normalized in \(L^{2}(M,e^{\phi}d\mu_{g})\), then_
\[\lambda_{j,\varepsilon}-\lambda_{j}=A\varepsilon+B\varepsilon^{2}\log \varepsilon+(C_{1}+C_{2}+C_{3})\varepsilon^{2}+O(\varepsilon^{3}\log^{2} \varepsilon).\]
_Here, the constants are defined as follows_
\[K_{a}=\frac{\pi}{2}\int_{0}^{2\pi}\frac{a}{(a^{2}\cos^{2}\theta+ \sin^{2}\theta)^{1/2}}d\theta\] \[A=\frac{4\pi^{2}a}{K_{a}}|u_{j}(x^{*})|^{2}e^{\phi(x^{*})},\] \[B=\frac{4\pi^{3}a^{2}|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}}{K_{a}^{2 }}(H(x^{*})-\partial_{\nu}\phi(x^{*}))\]
_and_
\[C_{1}=\frac{a^{2}\pi(H(x^{*})-\partial_{\nu}\phi(x^{*}))|u_{j}(x ^{*})|^{2}e^{\phi(x^{*})}}{K_{a}^{2}}\times\\ \times\int_{\mathbb{D}}\frac{1}{(1-|s^{\prime}|^{2})^{1/2}}\int_ {\mathbb{D}}\frac{\log\left((t_{1}-s_{1})^{2}+a^{2}(t_{2}-s_{2})^{2}\right)^{ 1/2}}{(1-|t^{\prime}|^{2})^{1/2}}dt^{\prime}ds^{\prime},\]
\[C_{2}=\frac{a^{2}\pi\left(\kappa_{1}(x^{*})-\kappa_{2}(x^{*}) \right)|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}}{4K_{a}^{2}}\times\\ \times\int_{\mathbb{D}}\frac{1}{(1-|s^{\prime}|^{2})^{1/2}}\int_ {\mathbb{D}}\frac{(t_{1}-s_{1})^{2}-a^{2}(t_{2}-s_{2})^{2}}{(t_{1}-s_{1})^{2}+ a^{2}(t_{2}-s_{2})^{2}}\frac{1}{(1-|t^{\prime}|^{2})^{1/2}}dt^{\prime}ds^{\prime}\]
\[C_{3}=\frac{16\pi^{4}a^{2}R_{\partial M}^{\lambda_{j}}(x^{*},x^{*})|u_{j}(x^{* })|^{2}e^{\phi(x^{*})}}{K_{a}^{2}},\]
_where \(R_{\partial M}^{\lambda_{j}}(x^{*},x^{*})\) is the evaluation at \((x,y)=(x^{*},x^{*})\) of the kernel \(R_{\partial M}^{\lambda_{j}}(x,y)\) in Proposition 1.1._
From this result, we can conclude that the shape of the Dirichlet region is important. The eccentricity of the ellipse affects the main term of the asymptotics. Moreover, we observe a term involving the difference of the principal curvatures, which is not visible in the case \(a=1\).
### Outline of the paper
In Section 2, we initiate the exposition by introducing the necessary notations and providing a more precise formulation of the problem at hand. Section 3 is devoted to the computation of the singular structure of Green's function. Moving on to Section 4, we employ variational principles to derive additional bounds for the perturbed eigenvalues. Ultimately, in this section, we establish the proof of the main theorem.
## 2. Preliminaries
In this section, we introduce basic notations and formulate the problem. Throughout this work, we use \((M,g)\) to denote a compact connected orientable Riemannian manifold of dimension three with a non-empty smooth boundary. The corresponding volume form and geodesic distance are denoted by \(d\mu_{g}(\cdot)\) and \(d_{g}(\cdot,\cdot)\), respectively. Let \(\iota_{\partial M}:\partial M\hookrightarrow M\) be the trivial embedding of the boundary \(\partial M\) into \(M\). This allows us to define the boundary metric \(h:=\iota_{\partial M}^{*}g\) inherited by \(g\). We similarly use \(d\mu_{h}(\cdot)\) and \(d_{h}(\cdot,\cdot)\) to denote respectively the volume form on the boundary and the geodesic distance on the boundary given by metric \(h\). We denote the Laplace-Beltrami operator by \(\Delta_{g}=-d^{*}d\).
For \(x\in\partial M\), let \(E_{1}(x),E_{2}(x)\in T_{x}\partial M\) be the unit eigenvectors of the shape operator at \(x\in\partial M\) corresponding respectively to the principal curvatures \(\kappa_{1}(x),\ \kappa_{2}(x)\). We will drop the dependence in \(x\) from our notation when there is no ambiguity. We choose \(E_{1}\) and \(E_{2}\) such that \(E_{1}^{\flat}\wedge E_{2}^{\flat}\wedge\nu^{\flat}\) is a positive multiple of the volume form \(d\mu_{g}\) (see p.26 of [33] for the "musical isomorphism" notation of \({}^{\flat}\) and \({}^{\sharp}\)). Here we use \(\nu\) to denote the outward-pointing normal vector field. By \(H(x)\), we denote the mean curvature of \(\partial M\) at \(x\). We also set
\[\mathrm{II}_{x}(V):=\mathrm{II}_{x}(V,V),\ \ V\in T_{x}\partial M,\]
to be the scalar second fundamental form. Note that, in defining II and the shape operator, we will follow the standard literature in geometry (e.g. [33]) and use the inward-pointing normal so that the sphere embedded in \(\mathbb{R}^{3}\) would have positive mean curvature in our convention.
In this article, we will often use boundary normal coordinates. Therefore, we briefly recall its construction. For a fixed \(x^{*}\in\partial M\), we will denote by \(B_{h}(\rho;x^{*})\subset\partial M\) the geodesic disk of radius \(\rho>0\) (with respect to the metric \(h\)) centred at \(x^{*}\) and \(\mathbb{D}_{\rho}\) to be the Euclidean disk in \(\mathbb{R}^{2}\) of radius \(\rho\) centred at the origin. In what follows \(\rho\) will always be smaller than the injectivity radius of \((\partial M,h)\). Letting \(t=(t_{1},t_{2},t_{3})\in\mathbb{R}^{3}\), we will construct a coordinate system \(x(t;x^{*})\) by the following procedure:
Write \(t\in\mathbb{R}^{3}\) near the origin as \(t=(t^{\prime},t_{3})\) for \(t^{\prime}=(t_{1},t_{2})\in\mathbb{D}_{\rho}\). Define first
\[x((t^{\prime},0);x^{*}):=\exp_{x^{*};h}(t_{1}E_{1}+t_{2}E_{2}),\]
where \(\exp_{x^{*};h}(V)\) denotes the time \(1\) map of \(h\)-geodesics with initial point \(x^{*}\) and initial velocity \(V\in T_{x^{*}}\partial M\). The map \(t^{\prime}\in\mathbb{D}_{\rho}\mapsto x((t^{\prime},0);x^{*})\) is then an \(h\)-geodesic coordinate system for a neighborhood of \(x^{*}\) on the boundary surface \(\partial M\). We can then construct a coordinate system for a neighbourhood \(U\) of \(x^{*}\in M\) by considering \(g\)-geodesic rays \(\gamma_{x^{*},-\nu}:[0,\rho)\to M\) emanating from points in \(\partial M\) orthogonal to \(\partial M\). In particular, we can then smoothly extend \(t^{\prime}\) to \(U\) by setting \(t^{\prime}\) to be constant along each \(\gamma_{x^{*},-\nu}\). If we then define \(t_{3}\) to be the unit speed parameter of \(\gamma_{x^{*},-\nu}\), then \((t_{1},t_{2},t_{3})\) form coordinates for \(M\) in some neighborhood of \(x^{*}\in M\). As a consequence, \(t_{3}\) is a boundary-defining function, that is, \(t_{3}>0\) away from \(\partial M\) and \(t_{3}=0\) on \(\partial M\). We will call these local coordinates _boundary normal coordinates_. For convenience we will write \(x(t^{\prime};x^{*})\) in place of \(x((t^{\prime},0);x^{*})\). Readers wishing to know more about boundary normal coordinates can refer to [33] for a detailed construction; here we only briefly recall the basic properties that we use.
We will also use the rescaled version of this coordinate system. For \(\varepsilon>0\) sufficiently small we define the (rescaled) \(h\)-geodesic coordinate by the following map
\[x^{\varepsilon}(\cdot;x^{*}):t^{\prime}=(t_{1},t_{2})\in\mathbb{D}\mapsto x( \varepsilon t^{\prime};x^{*})\in B_{h}(\varepsilon;x^{*}), \tag{2.1}\]
where \(\mathbb{D}\) is the unit disk in \(\mathbb{R}^{2}\). Given the boundary normal co-ordinate construction, we define the geodesic ellipse \(\Gamma_{\varepsilon,a}\) as the following subset of \(\partial M\)
\[\Gamma_{\varepsilon,a}:=\{\exp_{x^{*};h}(\varepsilon t_{1}E_{1}+\varepsilon t _{2}E_{2})\ |\ t_{1}^{2}+a^{-2}t_{2}^{2}\leq 1\}. \tag{2.2}\]
### Formulation of the problem
Now we are ready to state the problem. Let us consider the operator
\[u\to\Delta_{g}u+g(F,\nabla_{g}u),\]
where \(F\) is a force field, which is given by \(F=\nabla_{g}\phi\) for a smooth up to the boundary potential \(\phi\). We can re-write this operator in the following way
\[\Delta_{g}^{F}:=\Delta_{g}\cdot+g(F,\nabla_{g}\cdot)=\frac{1}{e^{\phi}}\text{ div}_{g}(e^{\phi}\nabla_{g}\cdot).\]
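Indeed, this identity follows from the product rule \(\operatorname{div}_{g}(fX)=f\operatorname{div}_{g}X+g(\nabla_{g}f,X)\) applied with \(f=e^{\phi}\) and \(X=\nabla_{g}u\):
\[\frac{1}{e^{\phi}}\operatorname{div}_{g}(e^{\phi}\nabla_{g}u)=\frac{1}{e^{\phi}}\left(e^{\phi}\Delta_{g}u+g(\nabla_{g}e^{\phi},\nabla_{g}u)\right)=\Delta_{g}u+g(\nabla_{g}\phi,\nabla_{g}u)=\Delta_{g}u+g(F,\nabla_{g}u),\]
since \(\nabla_{g}e^{\phi}=e^{\phi}\nabla_{g}\phi\).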
According to [28], the operator \(\Delta_{g}^{F}\) is called a weighted Laplacian and the pair \((M,e^{\phi}d\mu_{g})\) is called a weighted manifold. Note that \(e^{\phi}\) is bounded and strictly positive on \(M\). Therefore, \(L^{2}(M)=L^{2}(M,e^{\phi}d\mu_{g})\) as sets with equivalent norms. We also note that the operator \(\Delta_{g}^{F}\) with initial domain \(C_{0}^{\infty}(M)\) is essentially self-adjoint in \(L^{2}(M,e^{\phi}d\mu_{g})\) and non-positive definite. We want to study the operator \(\Delta_{g}^{F}\) with Dirichlet and Neumann boundary conditions on \(\Gamma_{\varepsilon,a}\) and \(\partial M\setminus\Gamma_{\varepsilon,a}\), respectively. This operator can be defined via quadratic form as follows. Consider the quadratic form
\[a_{\varepsilon}(u,v):=\int_{M}e^{\phi(z)}g(\nabla_{g}u(z),\nabla_{g}v(z))d\mu_{g}(z),\]
with the domain
\[\text{D}(a_{\varepsilon}):=\{u\in H^{1}(M):\text{supp}(u|_{\partial M})\subset\partial M\setminus\Gamma_{\varepsilon,a}\}.\]
Since \(e^{\phi}\) is strictly positive and bounded on \(M\), this quadratic form is non-negative, closed, symmetric and densely defined in \(L^{2}(M,e^{\phi}d\mu_{g})\), and hence generates a self-adjoint non-negative operator; see Theorem 2.6 in [30, Ch. 6.2]. We denote this operator by \(-\Delta_{Mix,\varepsilon}^{F}\) and call it a weighted Laplace operator corresponding to the aforementioned mixed boundary conditions.
Since \(H^{1}(M)\) is compactly embedded in \(L^{2}(M,e^{\phi}d\mu_{g})\), the spectrum of \(-\Delta_{Mix,\varepsilon}^{F}\) is discrete and consists of the eigenvalues with finite multiplicity accumulating at infinity. We denote them, taking into account their multiplicities, as follows
\[0\leq\lambda_{1,\varepsilon}\leq\lambda_{2,\varepsilon}\leq\cdots<\infty.\]
The corresponding normalized, in the \(L^{2}(M,e^{\phi}d\mu_{g})\) sense, eigenfunctions are denoted by \(\{u_{j,\varepsilon}\}_{j\in\mathbb{N}}\).
We also consider the operator \(-\Delta_{g}^{F}\) with a Neumann boundary condition, which is generated by the quadratic form
\[a_{N}(u,u):=\int_{M}e^{\phi(z)}g(\nabla_{g}u(z),\nabla_{g}u(z))d\mu_{g}(z), \qquad\text{with }\text{D}(a_{N})=H^{1}(M).\]
We denote this operator by \(-\Delta_{N}^{F}\). By the same arguments as above, we conclude that the spectrum \(\text{spec}(-\Delta_{N}^{F})\) is discrete and consists of eigenvalues with finite multiplicity accumulating at infinity. We denote them by
\[0=\lambda_{1}\leq\lambda_{2}\leq\cdots<\infty.\]
By \(\{u_{j}\}_{j\in\mathbb{N}}\), we denote the corresponding normalized, in the \(L^{2}(M,e^{\phi}d\mu_{g})\) sense, eigenfunctions. We aim to derive an asymptotic expansion for \(\lambda_{j,\varepsilon}\) as \(\varepsilon\to 0\) in terms of \(\lambda_{j}\) and \(u_{j}\).
## 3. Neumann Green's Function
In this section, we consider the Green's function \(G_{M}^{\omega}\) defined as the solution (in the distributional sense) to the following boundary value problem
\[\begin{cases}\Delta_{g}G_{M}^{\omega}(x,y)-\text{div}_{g}(F(y)G_{M}^{\omega} (x,y))+\omega^{2}G_{M}^{\omega}(x,y)=-\delta_{x}(y),\\ \partial_{\nu}G_{M}^{\omega}(x,y)-g_{y}(F(y),\nu)G_{M}^{\omega}(x,y)\big{|}_{ y\in\partial M}=0,\end{cases}\]
where \(\omega^{2}\) is a parameter belonging to the resolvent set of the operator \(-\Delta_{N}^{F}\). We seek the singularity structure of the \(\partial M\)-restriction of \(G_{M}^{\omega}\) near the diagonal. A more precise definition of this restriction will be given later. Our main aim is to obtain Proposition 1.1.
### Singularity in the spectral parameter
The first step in our analysis is to express \(G_{M}^{\omega}\) as a decomposition of Neumann eigenfunctions. The following result is similar to a result of [4]; we modify it here for our setting.
**Proposition 3.1**.: _Let \(\{\lambda_{j}\}_{j\in\mathbb{N}}\) be the eigenvalues of \(-\Delta_{N}^{F}\) and \(\{u_{j}\}_{j\in\mathbb{N}}\) be the corresponding \(L^{2}(M,e^{\phi}d\mu_{g})\)-orthonormalized eigenfunctions. Then, for \(x\neq y\) and \(\omega^{2}\in\mathbb{C}\setminus\operatorname{spec}(-\Delta_{N}^{F})\), it follows that_
\[G_{M}^{\omega}(x,y)=\sum_{j=1}^{\infty}\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}- \omega^{2}}e^{\phi(y)}.\]
Proof.: Since \((M,g,\partial M)\) is assumed to be compact, it follows that \(e^{\phi}\) is bounded and strictly positive on \(M\). Therefore, \(L^{2}(M)=L^{2}(M,e^{\phi}d\mu_{g})\) with equivalent norms. Since \(G_{M}^{\omega}(x,\cdot)\in L^{2}(M,e^{\phi}d\mu_{g})\), for any \(x\in M\), we can express
\[G_{M}^{\omega}(x,y)=\sum_{j=1}^{\infty}v_{j}(x)u_{j}(y)e^{\phi(y)}.\]
Since \(e^{\phi}>0\) is bounded, it follows that \(f\in L^{2}(M)\) if and only if \(e^{\phi}f\in L^{2}(M)\). Thus, for fixed \(x\in M\), the above expression for \(G_{M}^{\omega}(x,y)\) is unique. Green's identity in conjunction with the divergence theorem as well as the boundary condition on \(u_{j}\) yields the following calculation
\[u_{j}(x) =\int_{M}\left(-\Delta_{g}G_{M}^{\omega}(x,y)+\operatorname{div} _{y}(F(y)G_{M}^{\omega}(x,y))-\omega^{2}G_{M}^{\omega}(x,y)\right)u_{j}(y)d \mu_{g}(y),\] \[=(\lambda_{j}-\omega^{2})\sum_{k=1}^{\infty}\int_{M}v_{k}(x)u_{k }(y)u_{j}(y)e^{\phi(y)}d\mu_{g}(y),\] \[=(\lambda_{j}-\omega^{2})v_{j}(x)\int_{M}|u_{j}(y)|^{2}e^{\phi(y )}d\mu_{g}(y).\]
Recall that \(u_{j}\) are \(L^{2}(M,e^{\phi}d\mu_{g})\)-orthonormalized so that \(v_{j}(x)=u_{j}(x)/(\lambda_{j}-\omega^{2})\), which implies that
\[G_{M}^{\omega}(x,y)=\sum_{j=1}^{\infty}\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}- \omega^{2}}e^{\phi(y)},\]
as required.
Let \(\lambda_{j}\) be a simple eigenvalue of \(-\Delta_{N}^{F}\) and \(V_{j}\) be an open bounded neighborhood of \(\lambda_{j}\) in \(\mathbb{C}\) such that \(V_{j}\cap\operatorname{spec}(-\Delta_{N}^{F})=\{\lambda_{j}\}\). Within Section 3, we are interested in deriving explicit asymptotics for the _trace_ of \(G_{M}^{\omega}\). To this end, we write \(G_{M}^{\omega}\) as
\[G_{M}^{\omega}(x,y) =\sum_{k\neq j}\frac{u_{k}(x)u_{k}(y)}{\lambda_{k}-\omega^{2}}e^ {\phi(y)}+\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}-\omega^{2}}e^{\phi(y)},\] \[:=N_{M}^{\omega}(x,y)+\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}-\omega^ {2}}e^{\phi(y)},\]
for \(\omega^{2}\in V_{j}\). From here we define \(G^{\omega}_{\partial M}\) and \(N^{\omega}_{\partial M}\), the boundary restrictions of \(G^{\omega}_{M}\) and \(N^{\omega}_{M}\), as the Schwartz kernels of the traces of the corresponding integral operators. That is, for \(f\in C^{\infty}(\partial M)\), we have
\[G^{\omega}_{\partial M}:f \mapsto\left.\left(\int_{\partial M}G^{\omega}_{M}(x,y)f(y)d\mu_ {h}(y)\right)\right|_{x\in\partial M},\] \[N^{\omega}_{\partial M}:f \mapsto\left.\left(\int_{\partial M}N^{\omega}_{M}(x,y)f(y)d\mu_ {h}(y)\right)\right|_{x\in\partial M}.\]
It can also be seen from the perspective of microlocalization that the above restrictions to \(\partial M\times\partial M\) are well-defined since \(G^{\omega}_{M}\) is the Schwartz kernel of a pseudo-differential operator with \(\operatorname{WF}(G^{\omega}_{M})\cap N^{*}(\partial M\times\partial M)=\emptyset\) and the difference between \(G^{\omega}_{M}\) and \(N^{\omega}_{M}\) is a \(C^{\infty}(\partial M\times\partial M)\) term. Thus, we write the trace of \(G^{\omega}_{M}\) as
\[G^{\omega}_{\partial M}(x,y)=N^{\omega}_{\partial M}(x,y)+\frac{u_{j}(x)u_{j} (y)}{\lambda_{j}-\omega^{2}}e^{\phi(y)}. \tag{3.1}\]
Our choice of \(N^{\omega}_{M}\) ensures that any terms in \(N^{\omega}_{\partial M}\) which depend on \(\omega^{2}\) are negligible, since, by definition, \(\omega^{2}\in V_{j}\setminus\{\lambda_{j}\}\) and \(V_{j}\) is chosen such that there are no other Neumann eigenvalues in \(V_{j}\). This implies that there is only one significant singularity in \(\omega^{2}\), which is given by the second term of (3.1).
### Singularities along the diagonal
When considering (3.1), it is apparent that \(\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}-\omega^{2}}e^{\phi(y)}\) is jointly smooth on \(\partial M\times\partial M\); thus the only difficulty in deriving asymptotics for \(G^{\omega}_{\partial M}\) for \(x\) near \(y\) lies in the derivation of \(N^{\omega}_{\partial M}\). We will show that \(N^{\omega}_{\partial M}\) is a left parametrix for a Dirichlet-to-Neumann map, which is associated to the following auxiliary Dirichlet boundary value problem
\[\begin{cases}\Delta_{g}u_{f}+g(F,\nabla_{g}u_{f})+\omega^{2}u_{f}=0,\\ u_{f}|_{\partial M}=f\in C^{\infty}(\partial M).\end{cases} \tag{3.2}\]
The Dirichlet-to-Neumann map is given by
\[\Lambda^{\omega}_{g,F}:H^{1/2}(\partial M)\ni f\mapsto\partial_{\nu}u_{f}\in H ^{1/2}(\partial M)^{*}.\]
In order to construct \(N^{\omega}_{\partial M}\), we will require a series of technical lemmas, the first of which was proven in [32] and [34]. We offer a sketch of the proof for the special case under consideration.
**Lemma 3.2**.: _The Dirichlet-to-Neumann map \(\Lambda^{\omega}_{g,F}\) is an elliptic pseudo-differential operator of order \(1\). In addition, the first two terms of the symbol \(\sigma(\Lambda^{\omega}_{g,F})(t,\xi^{\prime})\) are_
\[\sigma_{1}(\Lambda^{\omega}_{g,F}) =-\sqrt{\widetilde{q_{2}}},\] \[\sigma_{0}(\Lambda^{\omega}_{g,F}) =\frac{1}{2\sqrt{\widetilde{q_{2}}}}(\nabla_{\xi^{\prime}}\sqrt{ \widetilde{q_{2}}}\cdot D_{t^{\prime}}\sqrt{\widetilde{q_{2}}}-\widetilde{q_{ 1}}-\partial_{t_{3}}\sqrt{\widetilde{q_{2}}}+\widetilde{E}\sqrt{\widetilde{q_{ 2}}}),\]
_where \(\widetilde{E}\), \(\widetilde{q}_{1}\) and \(\widetilde{q}_{2}\) are given by_
\[\widetilde{E}(t) :=-\frac{1}{2}\sum_{\alpha,\beta}h^{\alpha\beta}(t)\partial_{t_{3}} h_{\alpha\beta}(t)-F^{3}(t),\] \[\widetilde{q}_{1}(t,\xi^{\prime}) :=-i\sum_{\alpha,\beta}\left(\frac{1}{2}h^{\alpha\beta}(t)\partial _{t_{\alpha}}\log\delta(t)+\partial_{t_{\alpha}}h^{\alpha\beta}(t)-F^{\alpha} (t)h^{\beta}_{\alpha}(t)\right)\xi_{\beta},\] \[\widetilde{q}_{2}(t,\xi^{\prime}) :=\sum_{\alpha,\beta}h^{\alpha\beta}(t)\xi_{\alpha}\xi_{\beta},\]
_Here \(\alpha,\beta\in\{1,2\}\), \(F=F^{1}(t)\partial_{t_{1}}+F^{2}(t)\partial_{t_{2}}+F^{3}(t)\partial_{t_{3}}\) and \(\delta(t)=\det(g_{\alpha\beta})\)._
Proof.: Within our choice of co-ordinates, we begin with the following decomposition
\[-\Delta_{g}-g(F,\nabla_{g^{\cdot}})-\omega^{2}=D_{t_{3}}^{2}+i\widetilde{E}(t )D_{t_{3}}+\widetilde{Q^{\omega}}(t,D_{t^{\prime}}), \tag{3.3}\]
where \(\widetilde{Q^{\omega}}(t,D_{t^{\prime}})\) is
\[\widetilde{Q^{\omega}}(t,D_{t^{\prime}}) :=\sum_{\alpha,\beta}h^{\alpha\beta}(t)D_{t_{\alpha}}D_{t_{\beta}}\] \[-i\sum_{\alpha,\beta}\left(\frac{1}{2}h^{\alpha\beta}(t)\partial _{t_{\alpha}}\log\delta(t)+\partial_{t_{\alpha}}h^{\alpha\beta}(t)-F^{\alpha} (t)h^{\beta}_{\alpha}(t)\right)D_{t_{\beta}}-\omega^{2}.\]
It should be noted that the total symbol \(\sigma(\widetilde{Q^{\omega}})(t,\xi^{\prime})\) of \(\widetilde{Q^{\omega}}(t,D_{t^{\prime}})\) is given by
\[\sigma(\widetilde{Q^{\omega}})(t,\xi^{\prime})=\widetilde{q}_{1}(t,\xi^{ \prime})+\widetilde{q}_{2}(t,\xi^{\prime})-\omega^{2}.\]
It follows that we can construct a first order, classical, pseudo-differential operator \(A^{\omega}_{F}(t,D_{t^{\prime}})\) such that
\[-\Delta_{g}-g(F,\nabla_{g^{\cdot}})-\omega^{2}=(D_{t_{3}}+i\widetilde{E}(t)- iA^{\omega}_{F}(t,D_{t^{\prime}}))(D_{t_{3}}+iA^{\omega}_{F}(t,D_{t^{\prime}})), \tag{3.4}\]
by equating (3.3) and (3.4). This yields the following equation, up to a smoothing operator
\[A^{\omega}_{F}(t,D_{t^{\prime}})^{2}+i[D_{t_{3}},A^{\omega}_{F}(t,D_{t^{\prime }})]-\widetilde{Q^{\omega}}(t,D_{t^{\prime}})-\widetilde{E}(t)A^{\omega}_{F}(t,D_{t^{\prime}})=0. \tag{3.5}\]
The standard pseudo-differential calculus allows us to write (3.5) equivalently in terms of symbols associated with the relevant operators (ones which involve some action on functions defined over \(\partial M\)) as follows, up to a smoothing symbol
\[\sum_{\gamma}\frac{1}{\gamma!}\partial_{\xi^{\prime}}^{\gamma}\sigma(A^{ \omega}_{F})D_{t}^{\gamma}\sigma(A^{\omega}_{F})-\partial_{t_{3}}\sigma(A^{ \omega}_{F})-\sigma(\widetilde{Q^{\omega}})-\widetilde{E}(t)\sigma(A^{\omega}_ {F})=0.\]
Here \(\gamma\in\mathbb{N}^{n}\) denotes a multi-index. Collecting homogeneous terms of degree 2 and then 1 yields the first two terms of the Borel expansion for the symbol of the pseudo-differential operator \(A^{\omega}_{F}(t,D_{t^{\prime}})\). These terms are
\[\sigma_{1}(A^{\omega}_{F}) =-\sqrt{\widetilde{q}_{2}},\] \[\sigma_{0}(A^{\omega}_{F}) =\frac{1}{2\sqrt{\widetilde{q}_{2}}}(\nabla_{\xi^{\prime}}\sqrt{ \widetilde{q}_{2}}\cdot D_{t^{\prime}}\sqrt{\widetilde{q}_{2}}-\widetilde{q}_{ 1}-\partial_{t_{3}}\sqrt{\widetilde{q}_{2}}+\widetilde{E}\sqrt{\widetilde{q}_{ 2}}).\]
It should be noted that \(\sigma_{1}(A^{\omega}_{F})=0\)_only if_\(\xi^{\prime}=0\) (which corresponds to the zero section). Thus, it follows that \(A^{\omega}_{F}(t,D_{t^{\prime}})\) is an elliptic operator. Furthermore, by construction, \(\sigma_{1}(A^{\omega}_{F})\) and \(\sigma_{0}(A^{\omega}_{F})\) are homogeneous symbols, as are the residual symbols
\(\sigma_{j}(A_{F}^{\omega})\) for \(j\leq-1\). Therefore \(A_{F}^{\omega}(t,D_{t^{\prime}})\) is a first-order, elliptic, classical pseudo-differential operator. In the remainder of the proof, we show that \(A_{F}^{\omega}(t,D_{t^{\prime}})\) coincides with \(\Lambda_{g,F}^{\omega}\) up to a smoothing operator. This is done by first considering the region \(\{0\leq t_{3}\leq T\}\). The authors in [34] exploited (3.4) in order to write (3.2) as a system of forward and backward heat equations:
\[(D_{t_{3}}+iA_{F}^{\omega})u_{f} =v,\ \ u_{f}|_{t_{3}=0}=f,\] \[(D_{t_{3}}+i\widetilde{E}-iA_{F}^{\omega})v =w\in C^{\infty}([0,T];\mathcal{D}^{\prime}(\mathbb{R}^{2})),\]
where \(u_{f},v\in C^{\infty}([0,T];\mathcal{D}^{\prime}(\mathbb{R}^{2}))\). Since \(\sigma_{1}(A_{F}^{\omega})<0\) for \(\xi^{\prime}\neq 0\), the following heat equation is well-posed
\[\partial_{t_{3}}v+A_{F}^{\omega}v-\widetilde{E}v=-iw,\]
and hence \(v\in C^{\infty}([0,T]\times\mathbb{R}^{2})\). Thus, restricting the forward heat equation to \(\{t_{3}=0\}\) implies that, up to a smoothing operator,
\[\Lambda_{g,F}^{\omega}f=\left.\partial_{t_{3}}u\right|_{t_{3}=0}=\left.A_{F}^ {\omega}(t,D_{t^{\prime}})u\right|_{t_{3}=0}.\]
Consequently, we have that
\[\sigma(\Lambda_{g,F}^{\omega})(t,\xi^{\prime})=\sigma(A_{F}^{\omega})(t,\xi^{ \prime}).\]
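As a consistency check (a model computation, not needed in what follows), consider the flat case in which \(h^{\alpha\beta}=\delta^{\alpha\beta}\), \(\delta(t)\equiv 1\) and \(F\equiv 0\). Then \(\widetilde{E}=0\), \(\widetilde{q}_{1}=0\) and \(\widetilde{q}_{2}=|\xi^{\prime}|^{2}\), so that Lemma 3.2 gives

\[\sigma_{1}(\Lambda^{\omega}_{g,F})=-|\xi^{\prime}|,\qquad\sigma_{0}(\Lambda^{\omega}_{g,F})=0,\]

recovering, up to the sign convention for the normal derivative, the classical principal symbol of the Dirichlet-to-Neumann map on the half-space.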
In addition to establishing the above lemma, we require a lemma that we can use to link \(N_{\partial M}^{\omega}\) and \(\Lambda_{g,F}^{\omega}\). The following lemma elucidates said link as it shows that \(N_{\partial M}^{\omega}\) is a left, elliptic pseudo-differential parametrix of \(\Lambda_{g,F}^{\omega}\). Once again, we consider the elliptic Dirichlet boundary value problem (3.2).
**Lemma 3.3**.: _The Dirichlet-to-Neumann map \(\Lambda_{g,F}^{\omega}\) and \(N_{\partial M}^{\omega}\) satisfy the following operator equation_
\[I=N_{\partial M}^{\omega}\Lambda_{g,F}^{\omega}+\Psi^{-\infty}, \tag{3.6}\]
_where \(\Psi^{-\infty}\) denotes the class of smoothing operators. In particular, \(N_{\partial M}^{\omega}\in\Psi_{cl}^{-1}\) is an elliptic pseudo-differential operator._
Proof.: We prove the lemma by integrating (3.2) by parts against the Neumann Green's function \(G_{M}^{\omega}(x,\cdot)\):
\[-u_{f}(x) =\int_{M}G_{M}^{\omega}(x,z)\Delta_{g}u_{f}d\mu_{g}(z)+\int_{ \partial M}\left(u_{f}(z)\partial_{\nu}G_{M}^{\omega}(x,z)-G_{M}^{\omega}(x,z )\partial_{\nu}u_{f}(z)\right)d\mu_{h}(z)\] \[-\int_{M}u_{f}(z)\mathrm{div}_{g}(F(z)G_{M}^{\omega}(x,z))d\mu_{ g}(z)+\omega^{2}\int_{M}u_{f}(z)G_{M}^{\omega}(x,z)d\mu_{g}(z).\]
The divergence theorem yields
\[-u_{f}(x) =\int_{M}G_{M}^{\omega}(x,z)\Delta_{g}u_{f}d\mu_{g}(z)+\int_{ \partial M}u_{f}(z)\partial_{\nu}G_{M}^{\omega}(x,z)-G_{M}^{\omega}(x,z) \partial_{\nu}u_{f}(z)d\mu_{h}(z)\] \[+\int_{M}g_{z}(F(z),\nabla_{g}u_{f})G_{M}^{\omega}(x,z)d\mu_{g}(z )-\int_{\partial M}u_{f}(z)G_{M}^{\omega}(x,z)F(z)\cdot\nu d\mu_{h}(z)\] \[+\omega^{2}\int_{M}u_{f}(z)G_{M}^{\omega}(x,z)d\mu_{g}(z).\]
Employing the prescribed Neumann boundary conditions on \(G_{M}^{\omega}\), we conclude
\[u_{f}(x)=\int_{\partial M}G_{M}^{\omega}(x,z)\partial_{\nu}u_{f}(z)d\mu_{h}(z).\]
Furthermore, restricting \(x\in\partial M\) and invoking the prescribed Dirichlet boundary condition from (3.2) using our definition involving the trace (3.1), we have that
\[f(x) =\int_{\partial M}G_{\partial M}^{\omega}(x,z)\Lambda_{g,F}^{\omega}f(z)d\mu_{h}(z),\] \[=\int_{\partial M}N_{\partial M}^{\omega}(x,z)\Lambda_{g,F}^{\omega}f(z)d\mu_{h}(z)+\int_{\partial M}e^{\phi(z)}\frac{u_{j}(x)u_{j}(z)}{\lambda_{j}-\omega^{2}}\Lambda_{g,F}^{\omega}f(z)d\mu_{h}(z).\]
Since the Neumann eigenfunctions are smooth, the rightmost integral in the above expression consists of a smooth Schwartz kernel and thus gives rise to a smoothing operator. That is, we have
\[I=N_{\partial M}^{\omega}\Lambda_{g,F}^{\omega}+\Psi^{-\infty}.\]
Finally, since \(\Lambda_{g,F}^{\omega}\in\Psi_{cl}^{1}\) is an elliptic operator, we conclude that \(N_{\partial M}^{\omega}\in\Psi_{cl}^{-1}\).
Using Lemmas 3.2 and 3.3, we can prove the following proposition by iteratively determining the terms in the Borel summation associated with the parametrix \(N_{\partial M}^{\omega}\), modulo smoothing terms.
**Proposition 3.4**.: _Let \(x,y\in\partial M\) such that \(x\neq y\) and \(\omega^{2}\in\mathbb{C}\setminus\operatorname{spec}(-\Delta_{N}^{F})\). In an open neighbourhood of \(\operatorname{Diag}:=\{(x,x)\in\partial M\times\partial M\}\), we have that_
\[N_{\partial M}^{\omega}(x,y) =\frac{1}{2\pi}d_{g}(x,y)^{-1}-\frac{H(x)}{4\pi}\log d_{h}(x,y)+ \frac{g_{x}(F,\nu)}{4\pi}\log d_{h}(x,y)\] \[+\frac{1}{16\pi}\left(\Pi_{x}\left(\frac{\exp_{x}^{-1}(y)}{|\exp_ {x}^{-1}(y)|_{h}}\right)-\Pi_{x}\left(\frac{\star\exp_{x}^{-1}(y)}{|\exp_{x}^{ -1}(y)|_{h}}\right)\right)\] \[+\frac{1}{4\pi}h_{x}\left(F^{||}(x),\frac{\exp_{x}^{-1}(y)}{|\exp _{x}^{-1}(y)|_{h}}\right)+R_{\partial M}^{\omega}(x,y),\]
_where \(R_{\partial M}^{\omega}(x,y)\in C^{0,\alpha}(\partial M\times\partial M)\) for \(\alpha\in(0,1)\), and \(F^{||}\) denotes the tangential component of \(F\) and \(\star\) denotes the Hodge star operator._
The above expression for \(N_{\partial M}^{\omega}\) gives a clear picture of the structure of the singularity in \(x,y\in\partial M\) near the diagonal (where \(x=y\)), as well as of the singularity in \(\omega^{2}\in\mathbb{C}\setminus\operatorname{spec}(-\Delta_{N}^{F})\) for \(\omega^{2}\) near \(\lambda_{j}\). For the sake of clarity, we include an outline of the proof of Proposition 3.4. Since we have already determined the nature of the leading-order singularity in \(\omega^{2}\), all that is left is to reveal the nature of the singularity of the leading-order terms for \(x\) near \(y\) on \(\partial M\).
Proof of Proposition 3.4.: There are infinitely many additional terms in the asymptotic series of \(N_{\partial M}^{\omega}\); these are formed via an iterative argument on the level of symbols. However, for our purposes, we only need the first two elements of the kernel expansion, and thus only the first two symbols in the Borel expansion of \(\sigma(N_{\partial M}^{\omega})\). Upon deriving \(\sigma_{-1}(N_{\partial M}^{\omega})\) and \(\sigma_{-2}(N_{\partial M}^{\omega})\), an expression for the asymptotic series of the Schwartz kernel is given by the Fourier transform of \(\sigma_{-1}(N_{\partial M}^{\omega})+\sigma_{-2}(N_{\partial M}^{\omega})\). To begin this iterative process, we view the operator equation (3.6) on the level of symbols:
\[1=\sigma(N_{\partial M}^{\omega})\#\sigma(\Lambda_{g,F}^{\omega})(x,\xi^{\prime })+S^{-\infty}, \tag{3.7}\]
where \(\#\) denotes the standard composition of symbols corresponding to the composition of pseudo-differential operators. Furthermore, we write \(\sigma(N^{\omega}_{\partial M})(t,\xi^{\prime})\) as the following asymptotic series, whose existence is guaranteed by Borel's lemma
\[\sigma(N^{\omega}_{\partial M})(x,\xi^{\prime})\sim\sum_{j\geq 1}\sigma_{-j}(N^{ \omega}_{\partial M})(t,\xi^{\prime}),\ \ \sigma_{-j}(N^{\omega}_{\partial M})(t,\xi^{\prime})\in S^{-j}_{1,0}.\]
Equation (3.7) becomes the following, in accordance with the formula for the \(\#\)-product
\[1=\sum_{\gamma}\frac{1}{\gamma!}\partial_{\xi^{\prime}}^{\gamma}\sigma(N^{ \omega}_{\partial M})D_{t}^{\gamma}\sigma(\Lambda^{\omega}_{g,F})+S^{-\infty}.\]
Here \(\gamma\in\mathbb{N}^{n}\) denotes a multi-index. In the first iteration, for a smooth cut-off function \(\chi\) and \(R>0\) with \(\chi(\xi^{\prime})=0\) for \(|\xi^{\prime}|\leq R\) and \(\chi(\xi^{\prime})=1\) for \(|\xi^{\prime}|\geq 2R\), we have
\[1=\sigma_{-1}(N^{\omega}_{\partial M})(t,\xi^{\prime})\sigma_{1}(\Lambda^{ \omega}_{g,F})(t,\xi^{\prime})+S^{-1}_{1,0}\implies\sigma_{-1}(N^{\omega}_{ \partial M})(t,\xi^{\prime})=\frac{\chi(\xi^{\prime})}{\sigma_{1}(\Lambda^{ \omega}_{g,F})(t,\xi^{\prime})}.\]
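In passing, we record how this first symbol already accounts for the leading term in Proposition 3.4 (a model computation in the flat case, where \(\widetilde{q}_{2}=|\xi^{\prime}|^{2}\)). The classical two-dimensional Fourier identity, valid for \(t^{\prime}\neq 0\),

\[\frac{1}{4\pi^{2}}\int_{\mathbb{R}^{2}}e^{-i\xi^{\prime}\cdot t^{\prime}}\frac{d\xi^{\prime}}{|\xi^{\prime}|}=\frac{1}{2\pi|t^{\prime}|},\]

shows that a symbol of the form \(|\xi^{\prime}|^{-1}\) corresponds to a kernel with a \(\frac{1}{2\pi}d_{g}(x,y)^{-1}\)-type singularity; this is the origin of the first term in Proposition 3.4, up to the sign convention for the normal derivative and smooth errors coming from the cut-off \(\chi\).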
We can further iterate for the second term by forming the following equation
\[1 =\sigma_{-1}(N^{\omega}_{\partial M})(t,\xi^{\prime})\sigma_{1}( \Lambda^{\omega}_{g,F})(t,\xi^{\prime})+\sigma_{-1}(N^{\omega}_{\partial M})( t,\xi^{\prime})\sigma_{0}(\Lambda^{\omega}_{g,F})(t,\xi^{\prime}),\] \[+\sigma_{-2}(N^{\omega}_{\partial M})(t,\xi^{\prime})\sigma_{1}( \Lambda^{\omega}_{g,F})(t,\xi^{\prime})+\nabla_{\xi^{\prime}}\sigma_{-1}(N^{ \omega}_{\partial M})\cdot D_{t^{\prime}}\sigma_{1}(\Lambda^{\omega}_{g,F})+ S^{-2}_{1,0}.\]
We now equate terms of symbol order \(-1\) to obtain the following equation
\[0 =\sigma_{-1}(N^{\omega}_{\partial M})(t,\xi^{\prime})\sigma_{0} (\Lambda^{\omega}_{g,F})(t,\xi^{\prime})+\sigma_{-2}(N^{\omega}_{\partial M})( t,\xi^{\prime})\sigma_{1}(\Lambda^{\omega}_{g,F})(t,\xi^{\prime})\] \[+\nabla_{\xi^{\prime}}\sigma_{-1}(N^{\omega}_{\partial M})\cdot D _{t^{\prime}}\sigma_{1}(\Lambda^{\omega}_{g,F}).\]
So, we choose \(\sigma_{-2}(N^{\omega}_{\partial M})(t,\xi^{\prime})\) as follows
\[\sigma_{-2}(N^{\omega}_{\partial M})(t,\xi^{\prime})=-\frac{\chi(\xi^{\prime})}{\sigma_{1}(\Lambda^{\omega}_{g,F})(t,\xi^{\prime})}\left(\sigma_{-1}(N^{\omega}_{\partial M})(t,\xi^{\prime})\sigma_{0}(\Lambda^{\omega}_{g,F})(t,\xi^{\prime})+\nabla_{\xi^{\prime}}\sigma_{-1}(N^{\omega}_{\partial M})\cdot D_{t^{\prime}}\sigma_{1}(\Lambda^{\omega}_{g,F})\right).\]
Thus, up to a term in \(\Psi^{-3}\), the Schwartz kernel of the \(\partial M\)-restricted Green's function, evaluated at the center of the \(h\)-geodesic disc \(\Gamma_{\varepsilon,1}\) and written in local co-ordinates, is given by
\[\frac{1}{4\pi^{2}}\left(\int_{\mathbb{R}^{2}}e^{-i\xi^{\prime}\cdot t^{\prime }}\sigma_{-1}(N^{\omega}_{\partial M})(0,\xi^{\prime})d\xi^{\prime}+\int_{ \mathbb{R}^{2}}e^{-i\xi^{\prime}\cdot t^{\prime}}\sigma_{-2}(N^{\omega}_{ \partial M})(0,\xi^{\prime})d\xi^{\prime}\right).\]
This results in the desired singular expansion for the boundary-restricted Green's function fixed at the central point \(x^{*}\). It can then be extended, via a series of estimates derived in [37], to \(N^{\omega}_{\partial M}(x,y)\) for \(x\neq y\) suitably close (see [36] for the full calculation).
Finally, we note that Proposition 1.1 follows as an immediate consequence of Proposition 3.4 and relation (3.1).
### Schwartz kernel estimates
Within this section, we investigate \(G^{\omega}_{\partial M}\) near \(x^{*}\), in the local coordinates given by (2.1). We introduce several integral operators related to the terms on the right-hand side of (1.2). First, we consider a weighted variant of the normal operator
\[L_{a}f=a\int_{\mathbb{D}}\frac{f(s^{\prime})}{\left((t_{1}-s_{1})^{2}+a^{2}(t_ {2}-s_{2})^{2}\right)^{1/2}}ds^{\prime} \tag{3.8}\]
acting on functions on the disk \(\mathbb{D}\). It is known that \(L_{a}\) is a self-adjoint operator; see for instance Section 4 in [37]. Moreover, by [41], it follows that
\[L_{a}\left(K_{a}{}^{-1}(1-|t^{\prime}|^{2})^{-1/2}\right)=1,\ \ K_{a}=\frac{\pi}{2} \int_{0}^{2\pi}\left(\cos^{2}\theta+\frac{\sin^{2}\theta}{a^{2}}\right)^{-1/2}d\theta. \tag{3.9}\]
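As a quick sanity check of (3.9) (a worked special case), take \(a=1\): then \(K_{1}=\frac{\pi}{2}\cdot 2\pi=\pi^{2}\), and evaluating \(L_{1}\big[(1-|s^{\prime}|^{2})^{-1/2}\big]\) at \(t^{\prime}=0\) gives

\[\int_{\mathbb{D}}\frac{ds^{\prime}}{|s^{\prime}|\sqrt{1-|s^{\prime}|^{2}}}=2\pi\int_{0}^{1}\frac{dr}{\sqrt{1-r^{2}}}=\pi^{2}=K_{1},\]

consistent with \(L_{1}\left(K_{1}^{-1}(1-|t^{\prime}|^{2})^{-1/2}\right)=1\); by (3.9), this potential is in fact constant on all of \(\mathbb{D}\), not only at the center.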
In (4.4) of [37], it was shown that \(u(t^{\prime})=K_{a}^{-1}(1-|t^{\prime}|^{2})^{-1/2}\) is the unique solution to \(L_{a}u=1\) in \(H^{1/2}(\mathbb{D})^{*}\). Next, we introduce the following operators
\[R_{\log,a}f(t^{\prime}) :=a\int_{\mathbb{D}}\log\left((t_{1}-s_{1})^{2}+a^{2}(t_{2}-s_{2} )^{2}\right)^{1/2}f(s^{\prime})ds^{\prime},\] \[R_{\infty,a}f(t^{\prime}) :=a\int_{\mathbb{D}}\frac{(t_{1}-s_{1})^{2}-a^{2}(t_{2}-s_{2})^{2 }}{(t_{1}-s_{1})^{2}+a^{2}(t_{2}-s_{2})^{2}}f(s^{\prime})ds^{\prime},\] \[R_{F,a}f(t^{\prime}) :=a\int_{\mathbb{D}}\frac{F^{1}(0)(t_{1}-s_{1})+aF^{2}(0)(t_{2}-s _{2})}{((t_{1}-s_{1})^{2}+a^{2}(t_{2}-s_{2})^{2})^{1/2}}f(s^{\prime})ds^{\prime},\] \[R_{I,a}f(t^{\prime}) :=a\int_{\mathbb{D}}f(s^{\prime})ds^{\prime}.\]
**Remark 3.5**.: In [37], it was shown that the operators \(R_{\log,a}\) and \(R_{\infty,a}\) are bounded maps from \(H^{1/2}(\mathbb{D})^{*}\) to \(H^{3/2}(\mathbb{D})\). Repeating the arguments shows that this is also true for \(R_{F,a}\).
Note that these lemmas are proved in [36, 37]. We state them here for the convenience of the reader.
**Lemma 3.6**.: _We have the following identity_
\[\int_{\Gamma_{\varepsilon,a}}d_{g}(x,y)^{-1}v(y)d\mu_{h}(y)=\varepsilon L_{a }\tilde{v}(t^{\prime})+\varepsilon^{3}\mathcal{A}_{\varepsilon}\tilde{v}(t^{ \prime}),\]
_where \(x=x^{\varepsilon}(t^{\prime})\), \(\tilde{v}(t^{\prime})=v(x^{\varepsilon}(t^{\prime}))\), for some \(\mathcal{A}_{\varepsilon}:H^{1/2}(\mathbb{D};ds^{\prime})^{*}\to H^{1/2}( \mathbb{D};ds^{\prime})\) with operator norm bounded uniformly in \(\varepsilon\)._
From now on, we will denote by \(\mathcal{A}_{\varepsilon}\) any operator which takes
\[\mathcal{A}_{\varepsilon}:H^{1/2}(\mathbb{D};ds^{\prime})^{*}\to H^{1/2}( \mathbb{D};ds^{\prime}),\]
whose operator norm is bounded uniformly in \(\varepsilon\).
**Lemma 3.7**.: _The following identity holds_
\[(H(x)-g_{x}(F,\nu))\int_{\Gamma_{\varepsilon,a}}\log d_{h}(x,y)v(y)d\mu_{h}(y)\\ =\varepsilon^{2}\log\varepsilon(H(x^{*})-\partial_{\nu}\phi(x^{*}))R_{I,a}\tilde{v}(t^{\prime})+\varepsilon^{2}(H(x^{*})-\partial_{\nu}\phi(x^{*}))R_{\log,a}\tilde{v}(t^{\prime})+\varepsilon^{3}\log\varepsilon\mathcal{A}_{\varepsilon}\tilde{v}(t^{\prime}),\]
_where \(x=x^{\varepsilon}(t^{\prime})\) and \(\tilde{v}(t^{\prime})=v(x^{\varepsilon}(t^{\prime}))\)._
**Lemma 3.8**.: _The following identity holds_
\[\int_{\Gamma_{\varepsilon,a}}\left(\Pi_{x}\left(\frac{\exp_{x}^{ -1}(y)}{|\exp_{x}^{-1}(y)|_{h}}\right)-\Pi_{x}\left(\frac{\star\exp_{x}^{-1}(y )}{|\exp_{x}^{-1}(y)|_{h}}\right)\right)v(y)d\mu_{h}(y)\\ =\varepsilon^{2}(\kappa_{1}(x^{*})-\kappa_{2}(x^{*}))R_{\infty,a} \tilde{v}(t^{\prime})+\varepsilon^{3}\mathcal{A}_{\varepsilon}\tilde{v}(t^{ \prime}),\]
_where \(x=x^{\varepsilon}(t^{\prime})\) and \(\tilde{v}(t^{\prime})=v(x^{\varepsilon}(t^{\prime}))\). Recall that \(\kappa_{1}(x^{*})\) and \(\kappa_{2}(x^{*})\) denote the principal curvatures of the boundary \(\partial M\) at \(x^{*}\)._
**Lemma 3.9**.: _The following identity holds_
\[\int_{\Gamma_{\varepsilon,a}}h_{x}\left(F^{\parallel}(x),\frac{\exp_{x;h}^{-1}(y)}{|\exp_{x;h}^{-1}(y)|_{h}}\right)v(y)d\mu_{h}(y)=\varepsilon^{2}R_{F,a}\tilde{v}(t^{\prime})+\varepsilon^{3}\mathcal{A}_{\varepsilon}\tilde{v}(t^{\prime}),\]
_where \(x=x^{\varepsilon}(t^{\prime})\) and \(\tilde{v}(t^{\prime})=v(x^{\varepsilon}(t^{\prime}))\)._
Finally, we need to know the behaviour of the final component on the right-hand side of equation (1.2) as the spectral parameter converges towards an eigenvalue of \(-\Delta_{N}^{F}\).
**Proposition 3.10**.: _Let \(\lambda_{k}\) be a simple eigenvalue of \(-\Delta_{N}^{F}\) and let \(V_{k}\) be an open, bounded neighbourhood of \(\lambda_{k}\) which does not contain any other eigenvalue of \(-\Delta_{N}^{F}\). For \(\lambda\in V_{k}\), let_
\[R_{\lambda_{k},\lambda}:C^{\infty}(\partial M)\mapsto\mathcal{D}^{\prime}( \partial M) \tag{3.10}\]
_be the operator defined by the integral kernel_
\[R^{\lambda_{k}}(x,y)-R^{\lambda}(x,y),\]
_then_
\[\left\|R_{\lambda_{k},\lambda}\right\|_{H^{1/2}(\partial M)^{*}\mapsto H^{1/2 }(\partial M)}=O(|\lambda_{k}-\lambda|).\]
**Remark 3.11**.: Due to Proposition 3.1, for any fixed \(x\), \(y\in\partial M\), \(R_{\partial M}^{\lambda_{k}}(x,y)\) is well defined.
Proof of Proposition 3.10.: Throughout the proof, we write \(x\lesssim y\) or \(y\gtrsim x\) to mean that \(x\leq Cy\), where \(C>0\) is some constant. The dependencies of \(C\) will be clear from the context. By \(x\approx y\) we mean that \(x\lesssim y\) and \(x\gtrsim y\).
For \(\psi\in H^{1/2}(\partial M)^{*}\), we have the following estimate
\[\left\|\int_{\partial M}(R^{\lambda_{k}}(x,y)-R^{\lambda}(x,y))\psi(y)d\mu_{h }(y)\right\|_{H^{1/2}(\partial M)}\leq\|U\|_{H^{1}(M)},\]
where
\[U(x)=\sum_{j\neq k}\frac{(\lambda_{k}-\lambda)u_{j}(x)\langle u_{j}e^{\phi}, \psi\rangle}{(\lambda_{j}-\lambda_{k})(\lambda_{j}-\lambda)}\]
and \(\langle\cdot,\cdot\rangle\) denotes the pairing between \(H^{1/2}(\partial M)^{*}\) and \(H^{1/2}(\partial M)\). Let us consider the following Neumann boundary value problem
\[\begin{cases}(-\Delta_{g}^{F}+1)u=0&\text{on }M,\\ \partial_{\nu}u=\psi&\text{on }\partial M.\end{cases} \tag{3.11}\]
The corresponding Neumann-to-Dirichlet map is defined by
\[\mathcal{N}:H^{1/2}(\partial M)^{*}\mapsto H^{1/2}(\partial M),\] \[\mathcal{N}\psi=u^{\psi}|_{\partial M},\]
where \(u^{\psi}\) is the solution to (3.11). Using Green's identity, we obtain
\[\lambda_{j}(u_{j},u^{\psi})_{L^{2}(M,e^{\phi}d\mu_{g})} =-\int_{M}\Delta_{g}^{F}u_{j}(x)u^{\psi}(x)e^{\phi(x)}d\mu_{g}(x)\] \[=\int_{M}u_{j}(x)\Delta_{g}^{F}u^{\psi}(x)e^{\phi(x)}d\mu_{g}(x)+ \langle u_{j}e^{\phi}|_{\partial M},\psi\rangle.\]
Therefore,
\[\langle u_{j}e^{\phi}|_{\partial M},\psi\rangle=(\lambda_{j}-1)(u_{j},u^{\psi})_{L^ {2}(M,e^{\phi}d\mu_{g})}.\]
Then,
\[U(x)=(\lambda_{k}-\lambda)\sum_{j\neq k}\frac{\lambda_{j}-1}{(\lambda_{j}- \lambda_{k})(\lambda_{j}-\lambda)}(u_{j},u^{\psi})_{L^{2}(M,e^{\phi}d\mu_{g})}u _{j}(x).\]
Let us set
\[I_{1}(x):=\sum_{j\neq k}\frac{1}{\lambda_{j}-\lambda}(u_{j},u^{\psi})_{L^{2}(M, e^{\phi}d\mu_{g})}u_{j}(x)\]
and
\[I_{2}(x):=\sum_{j\neq k}\frac{\lambda_{k}-1}{(\lambda_{j}-\lambda_{k})(\lambda _{j}-\lambda)}(u_{j},u^{\psi})_{L^{2}(M,e^{\phi}d\mu_{g})}u_{j}(x),\]
so that
\[U(x)=(\lambda_{k}-\lambda)\left(I_{1}(x)+I_{2}(x)\right).\]
By the spectral theorem, we know that
\[(-\Delta_{N}^{F}-\lambda)^{-1}u^{\psi}=\sum_{j=1}^{\infty}\frac{1}{\lambda_{j }-\lambda}(u_{j},u^{\psi})_{L^{2}(M,e^{\phi}d\mu_{g})}u_{j}(x).\]
Hence,
\[I_{1}(x)=(Id-P_{k})(-\Delta_{N}^{F}-\lambda)^{-1}u^{\psi},\]
where \(Id\) is the identity operator and \(P_{k}\) is the spectral projection to \(\{u_{k}\}\). Therefore,
\[\|I_{1}\|_{H^{1}(M)}\lesssim\|\nabla_{g}u^{\psi}\|_{L^{2}(M,e^{\phi}d\mu_{g})} +\|u^{\psi}\|_{L^{2}(M,e^{\phi}d\mu_{g})}.\]
Furthermore, using the divergence theorem, we compute
\[0 =\int_{M}(-\Delta_{g}^{F}u^{\psi}(x)+u^{\psi}(x))u^{\psi}(x)e^{ \phi(x)}d\mu_{g}(x)\] \[=\|\nabla_{g}u^{\psi}\|_{L^{2}(M,e^{\phi}d\mu_{g})}^{2}+\|u^{ \psi}\|_{L^{2}(M,e^{\phi}d\mu_{g})}^{2}-\langle e^{\phi}\mathcal{N}\psi,\psi\rangle, \tag{3.12}\]
which implies that
\[\|I_{1}\|_{H^{1}(M)}\lesssim\sqrt{\langle e^{\phi}\mathcal{N}\psi,\psi\rangle}.\]
Next, we estimate \(I_{2}\):
\[\|I_{2}\|_{H^{1}(M)}\approx\|\nabla_{g}I_{2}\|_{L^{2}(M,e^{\phi} d\mu_{g})}+\|I_{2}\|_{L^{2}(M,e^{\phi}d\mu_{g})}\] \[\lesssim\left(\sum_{j\neq k}\left(\frac{(\lambda_{k}-1)}{(\lambda _{j}-\lambda_{k})(\lambda_{j}-\lambda)}\right)^{2}\lambda_{j}+\left(\frac{( \lambda_{k}-1)}{(\lambda_{j}-\lambda_{k})(\lambda_{j}-\lambda)}\right)^{2} \right)^{\frac{1}{2}}(u_{j},u^{\psi})_{L^{2}(M,e^{\phi}d\mu_{g})}.\]
Therefore,
\[\|I_{2}\|_{H^{1}(M)}\lesssim\left(\sum_{j\neq k}\frac{1}{\lambda_{j}^{3}} \right)^{\frac{1}{2}}(u_{j},u^{\psi})_{L^{2}(M,e^{\phi}d\mu_{g})}\lesssim\|u^ {\psi}\|_{L^{2}(M,e^{\phi}d\mu_{g})}\]
Here, as a consequence of Weyl's law [43] on the compact \(3\)-dimensional manifold \(M\), we used that \(\lambda_{j}\approx j^{\frac{2}{3}}\) as \(j\to\infty\), so that \(\sum_{j\geq j_{0}}\lambda_{j}^{-3}\lesssim\sum_{j\geq j_{0}}j^{-2}<\infty\) for some \(j_{0}\in\mathbb{N}\), while the finitely many remaining terms are bounded. See also [8] for weighted Laplace operators. Due to (3.12), we derive
\[\|I_{2}\|_{H^{1}(M)}\lesssim\sqrt{\langle e^{\phi}\mathcal{N}\psi,\psi\rangle},\]
and hence,
\[\|U\|_{H^{1}(M)}\lesssim|\lambda_{k}-\lambda|\sqrt{\langle e^{\phi}\mathcal{N }\psi,\psi\rangle}.\]
Since \(-1\notin\operatorname{spec}(-\Delta_{N}^{F})\), it follows that \(\mathcal{N}\) is a bounded operator from \(H^{1/2}(\partial M)^{*}\) to \(H^{1/2}(\partial M)\), see for instance [9]. Therefore, we conclude
\[\|R_{\lambda_{k},\lambda}\psi\|_{H^{1/2}(\partial M)}\leq\|U\|_{H^{1}(M)}\lesssim|\lambda_{k}-\lambda|\|\psi\|_{H^{1/2}(\partial M)^{*}}.\]
This completes the proof.
## 4. Proof of the main result
In this section, we prove our main result. We begin with auxiliary lemmas that will be used subsequently. The following lemma is well known in spectral theory. We state it here for the reader's convenience.
**Lemma 4.1** (Glazman Lemma).: _Let \(A\) be a lower-semibounded self-adjoint operator in a Hilbert space \((\mathcal{H},\langle\cdot,\cdot\rangle)\) with corresponding closed sesquilinear form \(a\) and form domain \(\operatorname{D}(a)\). Then, it holds_
\[N(\lambda,A)=\sup\{\dim L\mid\ L\text{ subspace of }\operatorname{D}(a)\text{ s.th. }a(u,u)<\lambda\langle u,u\rangle\text{ for }u\in L \setminus\{0\}\},\]
_where \(N(\lambda,A)\) is the spectral distribution function._
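For later use, we note that, for lower-semibounded operators with purely discrete spectrum (such as those considered here), Lemma 4.1 is equivalent to the standard variational (min-max) characterization of the eigenvalues:

\[\lambda_{j}(A)=\inf\Big\{\sup_{u\in L\setminus\{0\}}\frac{a(u,u)}{\langle u,u\rangle}\ \Big|\ L\subset\operatorname{D}(a),\ \dim L=j\Big\}.\]

In particular, enlarging the form domain can only decrease the eigenvalues; this is how the lemma is used below.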
The proofs of the following two lemmas are based on proofs of Lemma 3.1 and Theorem 3.2 in [40], respectively.
**Lemma 4.2**.: _Let \(\lambda\in\mathbb{R}\) and \(u\in H^{1}(M)\) be such that \(-\Delta_{g}^{F}u=\lambda u\). Let \(x_{0}\in\partial M\). If \(u\left|{}_{B_{h}(x_{0},\varepsilon)}\right.=0\) and \(\partial_{\nu}u\left|{}_{B_{h}(x_{0},\varepsilon)}\right.=0\), then \(u=0\) identically on \(M\). (Recall that we consider \(\varepsilon\) smaller than the injectivity radius.)_
Proof.: Let us extend \(M\) to a compact connected smooth Riemannian manifold \(\widetilde{M}\) such that \(\overline{\widetilde{M}\setminus M}\) is compact with non-empty interior and \(\overline{\widetilde{M}\setminus M}\cap M=B_{h}(x_{0},\varepsilon)\). Let \(\widetilde{u}\) be the extension by zero of \(u\) to \(\widetilde{M}\). Let \(\widetilde{g}\) and \(\widetilde{F}\) be smooth extensions, up to the boundary, of \(g\) and \(F\) to \(\widetilde{M}\), so that we obtain a weighted Laplacian \(-\widetilde{\Delta}_{\widetilde{g}}^{\widetilde{F}}\). Since \(u\left|{}_{B_{h}(x_{0},\varepsilon)}\right.=0\) and \(\partial_{\nu}u\left|{}_{B_{h}(x_{0},\varepsilon)}\right.=0\), it follows that \(\widetilde{u}\in H^{1}(\widetilde{M})\); moreover, \(-\widetilde{\Delta}_{\widetilde{g}}^{\widetilde{F}}\widetilde{u}=\lambda\widetilde{u}\). Since \(\widetilde{u}\left|{}_{\widetilde{M}\setminus M}\right.=0\), unique continuation implies that \(u=0\) identically on \(M\).
The previous lemma can be used to derive the following strict monotonicity principle, which will be used later.
**Lemma 4.3**.: _Assume that \(0<\varepsilon_{1}<\varepsilon_{2}\), then_
\[\lambda_{j,\varepsilon_{1}}<\lambda_{j,\varepsilon_{2}},\qquad j\in\mathbb{N}.\]
Proof.: Since \(\varepsilon_{1}<\varepsilon_{2}\), it follows that \(\operatorname{D}(a_{\varepsilon_{2}})\subset\operatorname{D}(a_{\varepsilon_ {1}})\), and hence, by Lemma 4.1, we know that \(\lambda_{j,\varepsilon_{1}}\leq\lambda_{j,\varepsilon_{2}}\). We are now required to show that the previous estimate is strict.
Let \(\lambda=\lambda_{j,\varepsilon_{2}}\) and \(\delta>0\) be sufficiently small such that
\[(\lambda,\lambda+\delta)\cap\operatorname{spec}(-\Delta_{Mix,\varepsilon_{ i}}^{F})=\emptyset,\qquad\text{for }i=1,2.\]
We denote
\[k=N\left(\lambda+\delta,-\Delta_{Mix,\varepsilon_{2}}^{F}\right)\quad\text{ and}\quad L:=\operatorname{span}\{u_{1,\varepsilon_{2}},\cdots,u_{k,\varepsilon_{2}}\}.\]
Then \(k\geq j\) and
\[a_{\varepsilon_{1}}(u,u)=a_{\varepsilon_{2}}(u,u)<(\lambda+\delta)(u,u)_{L^{2 }(M,e^{\phi}d\mu_{g})},\qquad\text{for }u\in L. \tag{4.1}\]
Since \(\Gamma_{\varepsilon_{2},a}\setminus\Gamma_{\varepsilon_{1},a}\) has a non-empty interior in \(\partial M\), by Lemma 4.2, it follows that if \(v\in\mathrm{Ker}(-\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda)\setminus\{0\}\) then \(v\notin L\). Therefore, we obtain
\[\dim\left(\mathrm{Ker}(-\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda)\oplus L \right)=\dim\left(\mathrm{Ker}\left(-\Delta^{F}_{Mix,\varepsilon_{1}}- \lambda\right)\right)+\dim L. \tag{4.2}\]
Furthermore, since \(\mathrm{Ker}\left(-\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda\right)\subset \mathrm{D}(-\Delta^{F}_{Mix,\varepsilon_{1}})\) and \(L\subset\mathrm{D}(a_{\varepsilon_{2}})\subset\mathrm{D}(a_{\varepsilon_{1}})\), Theorem 2.1 in [30, Chapter 6] and estimate (4.1) give
\[a_{\varepsilon_{1}}(v+u,v+u) <(\lambda+\delta)(v,v)_{L^{2}(M,e^{\phi}d\mu_{g})}+(\lambda+ \delta)(u,u)_{L^{2}(M,e^{\phi}d\mu_{g})}+2a_{\varepsilon_{1}}(v,u)\] \[<(\lambda+\delta)\left((v,v)_{L^{2}(M,e^{\phi}d\mu_{g})}+(u,u)_{ L^{2}(M,e^{\phi}d\mu_{g})}\right)+2\lambda(v,u)_{L^{2}(M,e^{\phi}d\mu_{g})}\] \[\leq(\lambda+\delta)(v+u,v+u)_{L^{2}(M,e^{\phi}d\mu_{g})},\]
for \(v\in\mathrm{Ker}\left(-\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda\right)\) and \(u\in L\). Therefore, by Lemma 4.1, it follows
\[N\left(\lambda+\delta,-\Delta^{F}_{Mix,\varepsilon_{1}}\right)\geq\dim\left( \mathrm{Ker}\left(-\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda\right)\oplus L \right),\]
and hence, by (4.2),
\[N\left(\lambda+\delta,-\Delta^{F}_{Mix,\varepsilon_{1}}\right)\geq\dim\left( \mathrm{Ker}\left(-\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda\right)\right)+ \dim L.\]
Then
\[N\left(\lambda,-\Delta^{F}_{Mix,\varepsilon_{1}}\right)=N\left(\lambda+ \delta,-\Delta^{F}_{Mix,\varepsilon_{1}}\right)-\dim\left(\mathrm{Ker}\left( -\Delta^{F}_{Mix,\varepsilon_{1}}-\lambda\right)\right)\geq k\geq j,\]
so that \(\lambda_{j,\varepsilon_{1}}<\lambda_{j,\varepsilon_{2}}\).
Since \(\mathrm{D}(a_{\varepsilon})\subset\mathrm{D}(a^{N})\), it follows from Lemma 4.1 that \(\lambda_{j,\varepsilon}\) is bounded from below by \(\lambda_{j}\); moreover, by Lemma 4.3, \(\lambda_{j,\varepsilon}\) decreases as \(\varepsilon\to 0\). Therefore, we can define the following limit
\[\lambda_{j,0}:=\lim_{\varepsilon\to 0}\lambda_{j,\varepsilon}.\]
Next, we show that \(\{\lambda_{j,0}\}_{j\in\mathbb{N}}\) coincides with the sequence of eigenvalues of \(-\Delta^{F}_{N}\):
**Lemma 4.4**.: _For any \(j\in\mathbb{N}\), the equality \(\lambda_{j,0}=\lambda_{j}\) holds._
Proof.: For \(\varepsilon>0\), we know that \(\mathrm{D}(a_{\varepsilon})\subset\mathrm{D}(a^{N})\). Therefore, by Lemma 4.1, \(\lambda_{j}\leq\lambda_{j,\varepsilon}\). Recalling our definition of \(\lambda_{j,0}\), we conclude that \(\lambda_{j}\leq\lambda_{j,0}\), or equivalently
\[N(\lambda,-\Delta^{F}_{N})\geq\#\{\lambda_{j,0}:\;\lambda_{j,0}<\lambda\}, \qquad\lambda>0.\]
Therefore, to prove \(\lambda_{j,0}=\lambda_{j}\), it suffices to show that if \(\lambda\in\mathrm{spec}(-\Delta^{F}_{N})\) with multiplicity \(l\), then \(\lambda\) appears in \(\{\lambda_{j,0}\}_{j\in\mathbb{N}}\) at least \(l\) times.
Let \(\lambda\in\mathrm{spec}(-\Delta^{F}_{N})\) and \(l\) be its multiplicity. Then there exists \(k\in\mathbb{N}\) such that \(\lambda<\lambda_{k+1}\) and
\[\lambda_{k-l+1}=\cdots=\lambda_{k}=\lambda.\]
Therefore, there exists \(\alpha_{0}>0\) such that \(N(\lambda+\alpha,-\Delta^{F}_{N})=k\) for any \(\alpha\in(0,\alpha_{0})\). For any \(\alpha\in(0,\alpha_{0})\), we aim to find a small \(\varepsilon>0\) so that \(N(\lambda+\alpha,-\Delta^{F}_{Mix,\varepsilon})=k\).
Let \(\chi_{\varepsilon}\in C^{\infty}(M)\) denote a smooth cutoff function such that
\[\chi_{\varepsilon}(x)=\begin{cases}1&\text{for }x\in M\setminus B_{g}(x^{*},3 \varepsilon),\\ 0&\text{for }x\in B_{g}(x^{*},2\varepsilon)\end{cases}\]
and
\[\|\nabla_{g}\chi_{\varepsilon}\|_{L^{2}(M,e^{\phi}d\mu_{g})}\to 0,\text{for } \varepsilon\to 0.\]
Consider the subspace
\[L_{\varepsilon}:=\operatorname{span}\{u_{1}\chi_{\varepsilon},\cdots,u_{k}\chi_{\varepsilon}\}.\]
Since \(\{u_{j}\}_{j=1}^{k}\) are linearly independent in \(L^{2}\) and \(u_{j}\chi_{\varepsilon}\to u_{j}\) in \(L^{2}\), it follows that \(\{u_{j}\chi_{\varepsilon}\}_{j=1}^{k}\) are also linearly independent in \(L^{2}\) for sufficiently small \(\varepsilon>0\), so that \(\dim(L_{\varepsilon})=k\). The definition of \(\chi_{\varepsilon}\) implies that \(L_{\varepsilon}\subset\mathrm{D}(a_{\varepsilon})\) and
\[a(u_{j}\chi_{\varepsilon},u_{j}\chi_{\varepsilon})\to a(u_{j},u_{j}),\]
\[(u_{j}\chi_{\varepsilon},u_{j}\chi_{\varepsilon})_{L^{2}(M,e^{\phi}d\mu_{g})} \rightarrow(u_{j},u_{j})_{L^{2}(M,e^{\phi}d\mu_{g})},\]
as \(\varepsilon\to 0\). Therefore, for \(\alpha\in(0,\alpha_{0})\), there exists \(\varepsilon>0\) such that
\[\frac{a(u,u)}{(u,u)_{L^{2}(M,e^{\phi}d\mu_{g})}}<\lambda+\alpha\qquad\text{ for }u\in L_{\varepsilon}\setminus\{0\},\]
which implies that \(N(\lambda+\alpha,-\Delta_{Mix,\varepsilon}^{F})=k\). Therefore, \(\operatorname{spec}(-\Delta_{Mix,\varepsilon}^{F})\cap(\lambda,\lambda+\alpha)\) is not empty. Moreover, since \(\lambda_{j}<\lambda_{j,\varepsilon}\), we conclude
\[\{\lambda_{k-l+1,\varepsilon},\cdots,\lambda_{k,\varepsilon}\}\subset \operatorname{spec}(-\Delta_{Mix,\varepsilon}^{F})\cap(\lambda,\lambda+\alpha).\]
Since \(\alpha_{0}\) is an arbitrary sufficiently small number, we conclude that \(\lambda\) appears in \(\{\lambda_{j,0}\}_{j\in\mathbb{N}}\) at least \(l\) times. This completes the proof.
Next, we show that the eigenfunctions of \(-\Delta_{Mix,\varepsilon}^{F}\) and \(-\Delta_{N}^{F}\) are close to each other in the following sense:
**Lemma 4.5**.: _Assume that \(\lambda_{j}\) is a simple eigenvalue of \(-\Delta_{N}^{F}\). Then there exists \(C>0\) such that_
\[\left|(u_{j},u_{j,\varepsilon})_{L^{2}(M,e^{\phi}d\mu_{g})}\right|>C,\]
_for sufficiently small \(\varepsilon>0\)._
Proof.: Recall that \(\{u_{k}\}_{k\in\mathbb{N}}\) forms an orthonormal basis on \(L^{2}(M,e^{\phi}d\mu_{g})\), so that we can express
\[u_{j,\varepsilon}(x)=\sum_{k\in\mathbb{N}}c_{k}^{\varepsilon}u_{k}.\]
Assume that the lemma is false, which means that there is a sequence of positive numbers \(\{\varepsilon_{l}\}_{l\in\mathbb{N}}\) such that
\[\varepsilon_{l}\to 0,\qquad c_{j}^{\varepsilon_{l}}\to 0, \tag{4.3}\]
as \(l\rightarrow\infty\). Since \(\lambda_{j}\) is simple, we can choose \(\alpha>0\) such that \(\lambda_{j}+\alpha<\lambda_{j+1}\). Let us define
\[\omega_{j,\varepsilon_{l}}:=\sum_{k\neq j}c_{k}^{\varepsilon_{l}}u_{k}.\]
Then
\[(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi}d\mu _{g})}=(u_{j,\varepsilon_{l}}-c_{j}^{\varepsilon_{l}}u_{j},u_{j,\varepsilon_ {l}}-c_{j}^{\varepsilon_{l}}u_{j})_{L^{2}(M,e^{\phi}d\mu_{g})}\\ =(u_{j,\varepsilon_{l}},u_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi} d\mu_{g})}-2(u_{j,\varepsilon_{l}},c_{j}^{\varepsilon_{l}}u_{j})_{L^{2}(M,e^{ \phi}d\mu_{g})}+(c_{j}^{\varepsilon_{l}}u_{j},c_{j}^{\varepsilon_{l}}u_{j})_{L ^{2}(M,e^{\phi}d\mu_{g})},\]
and hence,
\[(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi}d\mu _{g})}=(u_{j,\varepsilon_{l}},u_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi}d\mu_{g })}+o(1)\]
as \(l\rightarrow\infty\). Similarly, we obtain
\[a^{N}(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})=a^{N}(u_{j,\varepsilon_{l}},u_{j,\varepsilon_{l}})-2a^{N}(u_{j,\varepsilon_{l}},c_{j}^{\varepsilon_{l}}u_{j})+a^{N}(c_{j}^{\varepsilon_{l}}u_{j},c_{j}^{\varepsilon_{l}}u_{j})\\ =\lambda_{j,\varepsilon_{l}}(u_{j,\varepsilon_{l}},u_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi}d\mu_{g})}-\lambda_{j}(c_{j}^{\varepsilon_{l}})^{2}=\lambda_{j,\varepsilon_{l}}(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi}d\mu_{g})}+o(1) \tag{4.4}\]
as \(l\rightarrow\infty\). Since \(\lambda_{j,\varepsilon_{l}}\rightarrow\lambda_{j}\) as \(l\rightarrow\infty\), this implies that
\[a^{N}(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})<(\lambda_{j}+ \alpha)(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})_{L^{2}(M,e^{ \phi}d\mu_{g})}\]
for sufficiently large \(l\in\mathbb{N}\). Let us show that
\[\omega_{j,\varepsilon_{l}}\notin\operatorname{span}\{u_{1},\cdots,u_{j}\} \tag{4.5}\]
for sufficiently large \(l\in\mathbb{N}\). Assume this is not true. Without loss of generality, we assume that (4.5) is false for all \(l\in\mathbb{N}\); otherwise, consider a subsequence. Due to the definition of \(\omega_{j,\varepsilon_{l}}\), this implies that
\[\omega_{j,\varepsilon_{l}}\in\operatorname{span}\{u_{1},\cdots,u_{j-1}\}.\]
In this case, we would have
\[a^{N}(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})\leq\lambda_{j-1 }(\omega_{j,\varepsilon_{l}},\omega_{j,\varepsilon_{l}})_{L^{2}(M,e^{\phi}d \mu_{g})}.\]
Since \(\lambda_{j}\) is simple, this contradicts (4.4). Therefore, (4.5) holds.
We now let \(u\in\operatorname{span}\{u_{1},\cdots,u_{j}\}\). Then
\[a^{N}(\omega_{j,\varepsilon_{l}}+u,\omega_{j,\varepsilon_{l}}+u) \\ \leq(\lambda_{j}+\alpha)(\omega_{j,\varepsilon_{l}},\omega_{j, \varepsilon_{l}})_{L^{2}(M,e^{\phi}d\mu_{g})}+\lambda_{j}(u,u)_{L^{2}(M,e^{ \phi}d\mu_{g})}+2a^{N}(\omega_{j,\varepsilon_{l}},u), \tag{4.6}\]
for sufficiently large \(l\in\mathbb{N}\). Let us estimate the last term on the right-hand side. Let \(\chi_{\varepsilon_{l}}\) be the function described in the proof of Lemma 4.4; then
\[a^{N}(\omega_{j,\varepsilon_{l}},u)=a^{N}(u_{j,\varepsilon_{l}},u)-c_{j}^{\varepsilon_{l}}a^{N}(u_{j},u)\\ =a^{N}(u_{j,\varepsilon_{l}},\chi_{\varepsilon_{l}}u)+a^{N}(u_{j,\varepsilon_{l}},u-\chi_{\varepsilon_{l}}u)-c_{j}^{\varepsilon_{l}}a^{N}(u_{ j},u)=a^{N}(u_{j,\varepsilon_{l}},\chi_{\varepsilon_{l}}u)+o(1)\]
as \(l\to\infty\). Since \(u_{j,\varepsilon_{l}}\in\operatorname{D}(-\Delta_{Mix,\varepsilon_{l}}^{F})\) and \(u\chi_{\varepsilon_{l}}\in\operatorname{D}(a_{\varepsilon_{l}})\), it follows that
\[a^{N}(u_{j,\varepsilon_{l}},\chi_{\varepsilon_{l}}u)=\lambda_{j,\varepsilon_ {l}}(u_{j,\varepsilon_{l}},\chi_{\varepsilon_{l}}u)_{L^{2}(M,e^{\phi}d\mu_{g} )}=\lambda_{j,\varepsilon_{l}}(\omega_{j,\varepsilon_{l}},u)_{L^{2}(M,e^{ \phi}d\mu_{g})}+o(1),\]
as \(l\to\infty\). Therefore,
\[a^{N}(\omega_{j,\varepsilon_{l}},u)\leq(\lambda_{j}+\alpha)(\omega_{j, \varepsilon_{l}},u)_{L^{2}(M,e^{\phi}d\mu_{g})}.\]
Therefore, (4.6) gives
\[a^{N}(\omega_{j,\varepsilon_{l}}+u,\omega_{j,\varepsilon_{l}}+u)\leq(\lambda _{j}+\alpha)(\omega_{j,\varepsilon_{l}}+u,\omega_{j,\varepsilon_{l}}+u)_{L^{2 }(M,e^{\phi}d\mu_{g})},\]
for sufficiently large \(l\in\mathbb{N}\). Due to (4.5) and Lemma 4.1, we obtain
\[N(\lambda_{j}+\alpha,-\Delta_{N}^{F})\geq j+1.\]
This contradicts \(\lambda_{j}+\alpha<\lambda_{j+1}\).
Now, we are ready to prove the main result.
Proof of Theorem 1.3.: Let \(V_{j}\subset\mathbb{C}\) be an open neighbourhood of \(\lambda_{j}\) which does not contain any other eigenvalues of \(-\Delta_{N}^{F}\). Since \(\lambda_{j}\) is simple, Lemma 4.4 implies that, for sufficiently small \(\varepsilon>0\), \(\lambda_{j,\varepsilon}\) is the only eigenvalue of \(-\Delta_{Mix,\varepsilon}^{F}\) in \(V_{j}\). For \(\omega^{2}\in V_{j}\) and \(x\), \(y\in\partial M\), Proposition 1.1 gives
\[G_{\partial M}^{\omega}(x,y)= \frac{1}{2\pi}d_{g}(x,y)^{-1}-\frac{H(x)}{4\pi}\log d_{h}(x,y)+ \frac{g_{x}(F,\nu)}{4\pi}\log d_{h}(x,y)\] \[+\frac{1}{16\pi}\left(\Pi_{x}\left(\frac{\exp_{x}^{-1}(y)}{|\exp _{x}^{-1}(y)|_{h}}\right)-\Pi_{x}\left(\frac{\star\exp_{x}^{-1}(y)}{|\exp_{x} ^{-1}(y)|_{h}}\right)\right)\] \[+\frac{1}{4\pi}h_{x}\left(F^{\|}(x),\frac{\exp_{x}^{-1}(y)}{| \exp_{x}^{-1}(y)|_{h}}\right)\] \[+\frac{u_{j}(x)u_{j}(y)}{\lambda_{j}-\omega^{2}}e^{\phi(y)}+R_{ \partial M}^{\lambda_{j,\varepsilon}}(x,y).\]
From Green's identity, we know that
\[u_{j,\varepsilon}=(\lambda_{j,\varepsilon}-\omega^{2})\int_{M}G_{M}^{\omega}(x,y) u_{j,\varepsilon}(y)d\mu_{g}(y)+\int_{\Gamma_{\varepsilon,a}}G_{M}^{\omega}(x,y) \partial_{\nu}u_{j,\varepsilon}(y)d\mu_{h}(y).\]
We choose \(\omega^{2}=\lambda_{j,\varepsilon}\) and restrict the last identity to \(\Gamma_{\varepsilon,a}\), to obtain
\[\int_{\Gamma_{\varepsilon,a}}G_{\partial M}^{\sqrt{\lambda_{j,\varepsilon}}}(x,y)\partial_{\nu}u_{j,\varepsilon}(y)d\mu_{h}(y)=0. \tag{4.7}\]
Next, we will use the coordinate system given by (2.1). We denote
\[\tilde{u}_{j}(t^{\prime}):=u_{j}(x^{\varepsilon}(t_{1},at_{2})),\quad\tilde{ \phi}(t^{\prime}):=\phi(x^{\varepsilon}(t_{1},at_{2}))\]
and
\[v_{\varepsilon}:=\partial_{\nu}u_{j,\varepsilon},\quad\tilde{v}_{\varepsilon }(t^{\prime}):=v_{\varepsilon}(x^{\varepsilon}(t_{1},at_{2})).\]
Note that in these coordinates the volume form for \(\partial M\) is given by
\[d\mu_{h}(y)=a\varepsilon^{2}(1+\varepsilon^{2}Q_{\varepsilon}(s^{\prime}))ds _{1}\wedge ds_{2}, \tag{4.8}\]
for some smooth function \(Q_{\varepsilon}\) whose derivatives of all orders are bounded uniformly in \(\varepsilon\). Therefore, if we put the expression for \(G_{\partial M}^{\sqrt{\lambda_{j,\varepsilon}}}\) into (4.7) and use Lemmas 3.6-3.9, we obtain
\[0= \frac{1}{2\pi}\varepsilon L_{a}\tilde{v}_{\varepsilon}+a \varepsilon^{2}\frac{\tilde{u}_{j}(t^{\prime})}{\lambda_{j}-\lambda_{j, \varepsilon}}\int_{\mathbb{D}}\tilde{u}_{j}(s^{\prime})\tilde{v}_{ \varepsilon}(s^{\prime})e^{\tilde{\phi}(s^{\prime})}ds^{\prime}\] \[-\frac{1}{4\pi}\varepsilon^{2}(H(x^{*})-\partial_{\nu}\phi(x^{*}) )R_{\log,a}\tilde{v}_{\varepsilon}+\frac{1}{16\pi}\varepsilon^{2}(\kappa_{1}( x^{*})-\kappa_{2}(x^{*}))R_{\infty,a}\tilde{v}_{\varepsilon}\] \[+\frac{1}{4\pi}\varepsilon^{2}R_{F,a}\tilde{v}_{\varepsilon}+ \varepsilon^{2}R_{a}^{\lambda_{j,\varepsilon}}\tilde{v}_{\varepsilon}-\frac{1 }{4\pi}\varepsilon^{2}\log\varepsilon(H(x^{*})-\partial_{\nu}\phi(x^{*}))R_{ I,a}\tilde{v}_{\varepsilon}+\varepsilon^{3}\log\varepsilon\mathcal{A}_{ \varepsilon}\tilde{v}_{\varepsilon}, \tag{4.9}\]
where \(R_{a}^{\omega}:C_{c}^{\infty}(\mathbb{D})\mapsto D^{\prime}(\mathbb{D})\) is the operator given by the kernel
\[aR_{\partial M}^{\omega}(x^{\varepsilon}(t_{1},at_{2}),x^{\varepsilon}(s_{1}, as_{2}))\]
for \(\omega\in V_{j}\), and \(\mathcal{A}_{\varepsilon}:H^{1/2}(\mathbb{D};ds^{\prime})^{*}\mapsto H^{1/2}(\mathbb{D};ds^{\prime})\) is, as before, an operator whose norm is bounded uniformly in \(\varepsilon\).
Let us denote
\[\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}:=\varepsilon^{2}R_{a}^{ \lambda_{j,\varepsilon}}-\varepsilon^{2}R_{a}^{\lambda_{j}}\]
\[\mathcal{R}_{\varepsilon} :=-\frac{1}{4\pi}\varepsilon^{2}\log\varepsilon(H(x^{*})- \partial_{\nu}\phi(x^{*}))R_{I,a}-\frac{1}{4\pi}\varepsilon^{2}(H(x^{*})- \partial_{\nu}\phi(x^{*}))R_{\log,a}\] \[+\frac{1}{16\pi}\varepsilon^{2}(\kappa_{1}(x^{*})-\kappa_{2}(x^{ *}))R_{\infty,a}+\frac{1}{4\pi}\varepsilon^{2}R_{F,a}+\varepsilon^{2}R_{ \partial M}^{\lambda_{j}}(x^{*},x^{*})R_{I,a}+\varepsilon^{3}\log\varepsilon \mathcal{A}_{\varepsilon}.\]
By Lemma 5.1 in [37], we know that
\[\Big{\|}R_{a}^{\lambda_{j}}-R_{\partial M}^{\lambda_{j}}(x^{*},x^{*})R_{I,a} \Big{\|}_{H^{1/2}(\mathbb{D};ds^{\prime})^{*}\mapsto H^{1/2}(\mathbb{D};ds^{ \prime})}=O(\varepsilon\log\varepsilon),\]
and hence, (4.9) becomes
\[0=\frac{1}{2\pi}\varepsilon L_{a}\tilde{v}_{\varepsilon}+a\varepsilon^{2}\frac{\tilde{u}_{j}(t^{\prime})}{\lambda_{j}-\lambda_{j,\varepsilon}}\int_{\mathbb{D}}\tilde{u}_{j}(s^{\prime})\tilde{v}_{\varepsilon}(s^{\prime})e^{\tilde{\phi}(s^{\prime})}ds^{\prime}+\mathcal{R}_{\varepsilon}\tilde{v}_{\varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\tilde{v}_{\varepsilon}. \tag{4.10}\]
Assume that
\[\int_{\mathbb{D}}\tilde{u}_{j}(s^{\prime})\tilde{v}_{\varepsilon}(s^{\prime})e^{ \tilde{\phi}(s^{\prime})}ds^{\prime}=0. \tag{4.11}\]
Then, recalling (4.8), we get
\[\int_{\Gamma_{\varepsilon,a}}u_{j}(x)\partial_{\nu}u_{j,\varepsilon}(x)e^{ \phi(x)}d\mu_{h}(x)=O(\varepsilon^{4}).\]
Using Green's identity, we derive
\[(\lambda_{j,\varepsilon}-\lambda_{j})(u_{j},u_{j,\varepsilon})_{L^{2}(M,e^{\phi}d\mu_{g})}=\int_{M}(\Delta_{g}^{F}u_{j}(x)u_{j,\varepsilon}(x)-u_{j}(x)\Delta_{g}^{F}u_{j,\varepsilon}(x))e^{\phi(x)}d\mu_{g}(x)\\ =-\int_{\Gamma_{\varepsilon,a}}u_{j}(x)\partial_{\nu}u_{j,\varepsilon}(x)e^{\phi(x)}d\mu_{h}(x)=O(\varepsilon^{4}).\]
Then, by Lemma 4.5, it follows
\[|\lambda_{j,\varepsilon}-\lambda_{j}|=O(\varepsilon^{4}).\]
Therefore, by using Proposition 3.10 and recalling Remark 3.5, we estimate
\[\|\mathcal{R}_{\varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon} }\|_{H^{1/2}(\mathbb{D})^{*}\mapsto H^{1/2}(\mathbb{D})}=O(\varepsilon^{2} \log\varepsilon).\]
Since \(L_{a}\) is invertible as an operator from \(H^{1/2}(\mathbb{D})^{*}\) to \(H^{1/2}(\mathbb{D})\), see Section 4 in [37], it follows that
\[L_{a}+\frac{2\pi}{\varepsilon}\left(\mathcal{R}_{\varepsilon}+\mathcal{R}^{ \lambda_{j},\lambda_{j,\varepsilon}}\right):H^{1/2}(\mathbb{D})^{*}\mapsto H^ {1/2}(\mathbb{D})\]
is an invertible operator. Therefore, (4.10) and (4.11) imply that \(\tilde{v}_{\varepsilon}=0\) on \(\mathbb{D}\), and hence \(\partial_{\nu}u_{j,\varepsilon}=0\) on \(\Gamma_{\varepsilon,a}\). Then, by Lemma 4.2, we would have \(u_{j,\varepsilon}=0\) on \(M\), which is impossible since \(u_{j,\varepsilon}\) is a normalized eigenfunction. Therefore,
\[\int_{\mathbb{D}}\tilde{u}_{j}(s^{\prime})\tilde{v}_{\varepsilon}(s^{\prime} )e^{\tilde{\phi}(s^{\prime})}ds^{\prime}\neq 0.\]
Therefore, we can define
\[\tilde{\psi}_{\varepsilon}:=\frac{\tilde{v}_{\varepsilon}}{\int_{\mathbb{D}} \tilde{u}_{j}(s^{\prime})\tilde{v}_{\varepsilon}(s^{\prime})e^{\tilde{\phi}( s^{\prime})}ds^{\prime}}.\]
Then, (4.10) becomes
\[0=\frac{1}{2\pi}\varepsilon L_{a}\tilde{\psi}_{\varepsilon}+a\varepsilon^{2} \frac{\tilde{u}_{j}(t^{\prime})}{\lambda_{j}-\lambda_{j,\varepsilon}}+\left( \mathcal{R}_{\varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}} \right)\tilde{\psi}_{\varepsilon}.\]
Let us apply \(\frac{2\pi}{\varepsilon}L_{a}^{-1}\) to both sides to obtain
\[0=\tilde{\psi}_{\varepsilon}+2\pi a\varepsilon\frac{L_{a}^{-1}\tilde{u}_{j}}{ \lambda_{j}-\lambda_{j,\varepsilon}}+\frac{2\pi}{\varepsilon}L_{a}^{-1}\left( \mathcal{R}_{\varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}} \right)\tilde{\psi}_{\varepsilon}. \tag{4.12}\]
Since
\[\left\|L_{a}^{-1}\left(\mathcal{R}_{\varepsilon}+\mathcal{R}^{\lambda_{j}, \lambda_{j,\varepsilon}}\right)\right\|_{H^{1/2}(\mathbb{D})^{*}\mapsto H^{1/2 }(\mathbb{D})}=O(\varepsilon^{2}\log\varepsilon),\]
relation (4.12) implies that
\[\|\tilde{\psi}_{\varepsilon}\|_{H^{1/2}(\mathbb{D})^{*}}=\frac{1}{\lambda_{j}-\lambda_{j,\varepsilon}}O(\varepsilon). \tag{4.13}\]
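Indeed, taking \(H^{1/2}(\mathbb{D})^{*}\)-norms in (4.12) and using that the operator norm of \(\frac{2\pi}{\varepsilon}L_{a}^{-1}\left(\mathcal{R}_{\varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\right)\) is \(O(\varepsilon\log\varepsilon)\), we get

\[\|\tilde{\psi}_{\varepsilon}\|_{H^{1/2}(\mathbb{D})^{*}}\leq\frac{2\pi a\varepsilon}{|\lambda_{j}-\lambda_{j,\varepsilon}|}\,\|L_{a}^{-1}\tilde{u}_{j}\|_{H^{1/2}(\mathbb{D})^{*}}+O(\varepsilon\log\varepsilon)\,\|\tilde{\psi}_{\varepsilon}\|_{H^{1/2}(\mathbb{D})^{*}};\]

the last term can be absorbed into the left-hand side for sufficiently small \(\varepsilon\), and \(\|L_{a}^{-1}\tilde{u}_{j}\|_{H^{1/2}(\mathbb{D})^{*}}=O(1)\) since \(u_{j}\) is smooth.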
This will be used later. Now, we multiply (4.12) by \(\tilde{u}_{j}e^{\tilde{\phi}}\) and integrate over \(\mathbb{D}\) to derive
\[0=1+2\pi a\varepsilon\frac{\langle L_{a}^{-1}[\tilde{u}_{j}],\tilde{u}_{j}e^{ \tilde{\phi}}\rangle}{\lambda_{j}-\lambda_{j,\varepsilon}}+\frac{2\pi}{ \varepsilon}\langle L_{a}^{-1}\left(\mathcal{R}_{\varepsilon}+\mathcal{R}^{ \lambda_{j},\lambda_{j,\varepsilon}}\right)\tilde{\psi}_{\varepsilon},\tilde{u} _{j}e^{\tilde{\phi}}\rangle.\]
Equivalently, we write
\[\lambda_{j,\varepsilon}-\lambda_{j}=2\pi a\varepsilon\langle L_{a}^{-1}\tilde{ u}_{j},\tilde{u}_{j}e^{\tilde{\phi}}\rangle+\frac{2\pi}{\varepsilon}( \lambda_{j}-\lambda_{j,\varepsilon})\langle L_{a}^{-1}\left(\mathcal{R}_{ \varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\right)\tilde{ \psi}_{\varepsilon},\tilde{u}_{j}e^{\tilde{\phi}}\rangle.\]
Let us put (4.12) into the equation above to obtain
\[\lambda_{j,\varepsilon}-\lambda_{j} =2\pi a\varepsilon\langle L_{a}^{-1}\tilde{u}_{j},\tilde{u}_{j}e ^{\tilde{\phi}}\rangle-4\pi^{2}a\langle L_{a}^{-1}\left(\mathcal{R}_{ \varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\right)L_{a}^{- 1}\tilde{u}_{j},\tilde{u}_{j}e^{\tilde{\phi}}\rangle\] \[-\frac{4\pi^{2}}{\varepsilon^{2}}(\lambda_{j}-\lambda_{j, \varepsilon})\langle L_{a}^{-1}\left(\mathcal{R}_{\varepsilon}+\mathcal{R}^{ \lambda_{j},\lambda_{j,\varepsilon}}\right)L_{a}^{-1}\left(\mathcal{R}_{ \varepsilon}+\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\right)\tilde{ \psi}_{\varepsilon},\tilde{u}_{j}e^{\tilde{\phi}}\rangle. \tag{4.14}\]
Taking into account (4.13), the last identity implies that
\[\lambda_{j,\varepsilon}-\lambda_{j}=O(\varepsilon),\]
so that Proposition 3.10 gives
\[\left\|\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\right\|_{H^{1/2}( \mathbb{D})^{*}\mapsto H^{1/2}(\mathbb{D})}=O(\varepsilon^{4}).\]
Hence, we can absorb \(\mathcal{R}^{\lambda_{j},\lambda_{j,\varepsilon}}\) into the \(\varepsilon^{3}\log\varepsilon\,\mathcal{A}_{\varepsilon}\) term in the definition of \(\mathcal{R}_{\varepsilon}\). Further, from (4.13), we obtain
\[\langle L_{a}^{-1}\mathcal{R}_{\varepsilon}L_{a}^{-1}\mathcal{R}_{\varepsilon }\tilde{\psi}_{\varepsilon},\tilde{u}_{j}e^{\tilde{\phi}}\rangle=\frac{1}{ \lambda_{j}-\lambda_{j,\varepsilon}}O\left(\varepsilon^{5}\log^{2}\varepsilon \right).\]
Therefore, equation (4.14) gives
\[\lambda_{j,\varepsilon}=\lambda_{j}+2\pi\varepsilon a\langle L_{a}^{-1} \tilde{u}_{j},\tilde{u}_{j}e^{\tilde{\phi}}\rangle-4\pi^{2}a\langle L_{a}^{-1 }\mathcal{R}_{\varepsilon}L_{a}^{-1}\tilde{u}_{j},\tilde{u}_{j}e^{\tilde{\phi }}\rangle+O(\varepsilon^{3}\log^{2}\varepsilon).\]
Recalling the definition of \(\mathcal{R}_{\varepsilon}\) and the boundedness of \(\mathcal{A}_{\varepsilon}:H^{1/2}(\mathbb{D};ds^{\prime})^{*}\to H^{1/2}(\mathbb{D};ds^{\prime})\), we obtain
\[\lambda_{j,\varepsilon}-\lambda_{j}= 2\pi\varepsilon a\int_{\mathbb{D}}L_{a}^{-1}\tilde{u}_{j}(s^{ \prime})\tilde{u}_{j}(s^{\prime})e^{\tilde{\phi}(s^{\prime})}ds^{\prime}\] \[+\varepsilon^{2}\log\varepsilon a\pi(H(x^{*})-\partial_{\nu} \phi(x^{*}))\langle L_{a}^{-1}R_{I,a}L_{a}^{-1}\tilde{u}_{j},\tilde{u}_{j}e^{ \tilde{\phi}}\rangle\] \[+\varepsilon^{2}a\pi(H(x^{*})-\partial_{\nu}\phi(x^{*}))\langle L _{a}^{-1}R_{log,a}L_{a}^{-1}\tilde{u}_{j},\tilde{u}_{j}e^{\tilde{\phi}}\rangle\] \[-\varepsilon^{2}a\frac{\pi}{4}\left(\kappa_{1}(x^{*})-\kappa_{2}(x ^{*})\right)\langle L_{a}^{-1}R_{\infty,a}L_{a}^{-1}\tilde{u}_{j},\tilde{u}_{j }e^{\tilde{\phi}}\rangle\] \[-\varepsilon^{2}a\pi\langle L_{a}^{-1}R_{F,a}L_{a}^{-1}\tilde{u}_ {j},\tilde{u}_{j}e^{\tilde{\phi}}\rangle\] \[-\varepsilon^{2}4\pi^{2}aR_{\partial M}^{\lambda_{j}}(x^{*},x^{*}) \langle L_{a}^{-1}R_{I,a}L_{a}^{-1}\tilde{u}_{j},\tilde{u}_{j}e^{\tilde{\phi }}\rangle\] \[+O(\varepsilon^{3}\log^{2}\varepsilon). \tag{4.15}\]
We recall the definitions of \(\tilde{u}_{j}\) and \(\tilde{\phi}\) and use a Taylor expansion to obtain
\[\int_{\mathbb{D}}L_{a}^{-1}\tilde{u}_{j}(s^{\prime})\tilde{u}_{j} (s^{\prime})e^{\tilde{\phi}(s^{\prime})}ds^{\prime}=\int_{\mathbb{D}}L_{a}^{-1} \tilde{u}_{j}(s^{\prime})u_{j}(x(\varepsilon s_{1},a\varepsilon s_{2}))e^{ \phi(x(\varepsilon s_{1},a\varepsilon s_{2}))}ds^{\prime}\\ =\int_{\mathbb{D}}L_{a}^{-1}\tilde{u}_{j}(s^{\prime})\left(u_{j}( x^{*})e^{\phi(x^{*})}+\varepsilon(c_{1}s_{1}+c_{2}s_{2})+R_{2}^{1}( \varepsilon s^{\prime})\right)ds^{\prime}\]
where \(R^{1}_{2}\) is the remainder term of the Taylor series for \(\tilde{u}_{j}e^{\tilde{\phi}}\) near zero and \(c_{1}\), \(c_{2}\) are appropriate constants. Using (4.4) in [37], we derive
\[\int_{\mathbb{D}}L^{-1}_{a}\tilde{u}_{j}(s^{\prime})\tilde{u}_{j}( s^{\prime})e^{\tilde{\phi}(s^{\prime})}ds^{\prime}=\int_{\mathbb{D}}\left(u_{j}(x^{ *})+\varepsilon(b_{1}s_{1}+b_{2}s_{2})+R^{2}_{2}(\varepsilon s^{\prime}) \right)\times\\ \times L^{-1}_{a}\left(u_{j}(x^{*})e^{\phi(x^{*})}+\varepsilon(c_ {1}s_{1}+c_{2}s_{2})+R^{1}_{2}(\varepsilon s^{\prime})\right)ds^{\prime}\]
where \(R^{2}_{2}\) is the remainder term of the Taylor series for \(\tilde{u}_{j}\) near zero and \(b_{1}\), \(b_{2}\) are appropriate constants. Next, we note that
\[\int_{\mathbb{D}}L^{-1}_{a}[1](s^{\prime})s_{j}ds^{\prime}=0,\qquad\|R^{j}_{2 }(\varepsilon\cdot)\|_{H^{1}(\mathbb{D})}=O(\varepsilon^{2}),\]
for \(j=1\), \(2\). Therefore, the penultimate identity gives
\[\int_{\mathbb{D}}L^{-1}_{a}\tilde{u}_{j}(s^{\prime})\tilde{u}_{j} (s^{\prime})e^{\tilde{\phi}(s^{\prime})}ds^{\prime}=|u_{j}(x^{*})|^{2}e^{\phi( x^{*})}\int_{\mathbb{D}}L^{-1}_{a}[1](s^{\prime})ds^{\prime}+O(\varepsilon^{2})\\ =\frac{2\pi}{K_{a}}|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}+O(\varepsilon ^{2}).\]
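The last equality uses (3.9): since \(L_{a}^{-1}[1]=K_{a}^{-1}(1-|t^{\prime}|^{2})^{-1/2}\), an elementary computation gives

\[\int_{\mathbb{D}}L_{a}^{-1}[1](s^{\prime})\,ds^{\prime}=\frac{1}{K_{a}}\int_{\mathbb{D}}\frac{ds^{\prime}}{\sqrt{1-|s^{\prime}|^{2}}}=\frac{2\pi}{K_{a}}\int_{0}^{1}\frac{r\,dr}{\sqrt{1-r^{2}}}=\frac{2\pi}{K_{a}}.\]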
Since \(u_{j}\) is smooth on \(\partial M\), it follows that \(\tilde{u}_{j}(t)-\tilde{u}_{j}(0)=O_{H^{1/2}}(\varepsilon)\) as \(\varepsilon\to 0\). Additionally, we recall that \(R_{I,a}\), \(R_{log,a}\), and \(R_{\infty,a}\) are bounded operators from \(H^{1/2}(\mathbb{D})\) to \(H^{1/2}(\mathbb{D})^{*}\). Therefore, using (4.4) in [37], we write
\[\langle L^{-1}_{a}R_{log,a}L^{-1}_{a}\tilde{u}_{j},\tilde{u}_{j}e ^{\tilde{\phi}}\rangle =\langle R_{log,a}L^{-1}_{a}\tilde{u}_{j},L^{-1}_{a}[\tilde{u}_{j }e^{\tilde{\phi}}]\rangle\] \[=|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}\langle R_{log,a}L^{-1}_{a}[1],L^{-1}_{a}[1]\rangle+O(\varepsilon).\]
Further, using (3.9), we obtain
\[\langle R_{log,a}L^{-1}_{a}\tilde{u}_{j},L^{-1}_{a}[\tilde{u}_{j }e^{\tilde{\phi}}]\rangle\] \[=|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}\frac{a}{K^{2}_{a}}\int_{ \mathbb{D}}\frac{1}{(1-|s^{\prime}|^{2})^{1/2}}\int_{\mathbb{D}}\frac{\log \left((t_{1}-s_{1})^{2}+a^{2}(t_{2}-s_{2})^{2}\right)^{1/2}}{(1-|t^{\prime}|^{ 2})^{1/2}}dt^{\prime}ds^{\prime}+O(\varepsilon).\]
Similarly, we collect expressions for \(\langle L^{-1}_{a}R_{I,a}L^{-1}_{a}\tilde{u}_{j},\tilde{u}_{j}e^{\tilde{\phi}}\rangle\), \(\langle R_{\infty,a}L^{-1}_{a}\tilde{u}_{j},L^{-1}_{a}[\tilde{u}_{j}e^{\tilde {\phi}}]\rangle\) and \(\langle R_{F,a}L^{-1}_{a}\tilde{u}_{j},L^{-1}_{a}[\tilde{u}_{j}e^{\tilde{\phi}}]\rangle\) below
\[\langle L^{-1}_{a}R_{I,a}L^{-1}_{a}\tilde{u}_{j},\tilde{u}_{j}e ^{\tilde{\phi}}\rangle=a|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}\int_{\mathbb{D}}L^{-1 }_{a}[1](t^{\prime})dt^{\prime}\int_{\mathbb{D}}L^{-1}_{a}[1](t^{\prime})dt^{ \prime}+O(\varepsilon)\\ =\frac{4\pi^{2}a}{K^{2}_{a}}|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}+O( \varepsilon),\]
\[\langle R_{\infty,a}L^{-1}_{a}\tilde{u}_{j},L^{-1}_{a}[\tilde{u}_{ j}e^{\tilde{\phi}}]\rangle=|u_{j}(x^{*})|^{2}e^{\phi(x^{*})}\frac{a}{K^{2}_{a}}\times\\ \times\int_{\mathbb{D}}\frac{1}{(1-|s^{\prime}|^{2})^{1/2}}\int_ {\mathbb{D}}\frac{(t_{1}-s_{1})^{2}-a^{2}(t_{2}-s_{2})^{2}}{(t_{1}-s_{1})^{2}+ a^{2}(t_{2}-s_{2})^{2}}\frac{1}{(1-|t^{\prime}|^{2})^{1/2}}dt^{\prime}ds^{\prime}+O( \varepsilon),\]
and finally (see page 10045 in [36]),
\[\langle R_{F,a}L^{-1}_{a}\tilde{u}_{j},L^{-1}_{a}[\tilde{u}_{j}e^{\tilde{\phi} }]\rangle=O(\varepsilon).\]
Using the identities above and (4.15), we complete the proof.
## 5. Acknowledgement
M.N. was partially supported by the grant of the Science Committee of the Ministry of Education and Science of the Republic of Kazakhstan, Grant No. AP14870361. M.N. was partially supported by the Academy of Finland, grants 353096 and 347715.
|
2308.11030
|
Ramulator 2.0: A Modern, Modular, and Extensible DRAM Simulator
|
We present Ramulator 2.0, a highly modular and extensible DRAM simulator that
enables rapid and agile implementation and evaluation of design changes in the
memory controller and DRAM to meet the increasing research effort in improving
the performance, security, and reliability of memory systems. Ramulator 2.0
abstracts and models key components in a DRAM-based memory system and their
interactions into shared interfaces and independent implementations. Doing so
enables easy modification and extension of the modeled functions of the memory
controller and DRAM in Ramulator 2.0. The DRAM specification syntax of
Ramulator 2.0 is concise and human-readable, facilitating easy modifications
and extensions. Ramulator 2.0 implements a library of reusable templated lambda
functions to model the functionalities of DRAM commands to simplify the
implementation of new DRAM standards, including DDR5, LPDDR5, HBM3, and GDDR6.
We showcase Ramulator 2.0's modularity and extensibility by implementing and
evaluating a wide variety of RowHammer mitigation techniques that require
different memory controller design changes. These techniques are added
modularly as separate implementations without changing any code in the baseline
memory controller implementation. Ramulator 2.0 is rigorously validated and
maintains a fast simulation speed compared to existing cycle-accurate DRAM
simulators. Ramulator 2.0 is open-sourced under the permissive MIT license at
https://github.com/CMU-SAFARI/ramulator2
|
Haocong Luo, Yahya Can Tuğrul, F. Nisa Bostancı, Ataberk Olgun, A. Giray Yağlıkçı, Onur Mutlu
|
2023-08-21T20:39:10Z
|
http://arxiv.org/abs/2308.11030v2
|
# Ramulator 2.0: A Modern, Modular, and Extensible DRAM Simulator
###### Abstract
We present Ramulator 2.0, a highly modular and extensible DRAM simulator that enables rapid and agile implementation and evaluation of design changes in the memory controller and DRAM to meet the increasing research effort in improving the performance, security, and reliability of memory systems. Ramulator 2.0 abstracts and models key components in a DRAM-based memory system and their interactions into shared _interfaces_ and independent _implementations_. Doing so enables easy modification and extension of the modeled functions of the memory controller and DRAM in Ramulator 2.0. The DRAM specification syntax of Ramulator 2.0 is concise and human-readable, facilitating easy modifications and extensions. Ramulator 2.0 implements a library of reusable templated lambda functions to model the functionalities of DRAM commands to simplify the implementation of new DRAM standards, including DDR5, LPDDR5, HBM3, and GDDR6. We showcase Ramulator 2.0's modularity and extensibility by implementing and evaluating a wide variety of RowHammer mitigation techniques that require _different memory controller design changes_. These techniques are added modularly as separate implementations _without_ changing any code in the baseline memory controller implementation. Ramulator 2.0 is rigorously validated and maintains a fast simulation speed compared to existing cycle-accurate DRAM simulators. Ramulator 2.0 is open-sourced under the permissive MIT license at [https://github.com/CMU-SAFARI/ramulator2](https://github.com/CMU-SAFARI/ramulator2).
## 1 Introduction
Cycle-accurate DRAM simulators enable modeling and evaluation of detailed operations in the memory controller and the DRAM device. In recent years, growing research and design efforts in improving the performance, security, and reliability of DRAM-based memory systems require a cycle-accurate simulator that facilitates rapid and agile implementation and evaluation of intrusive design changes (i.e., modification of functionalities of the simulated system as opposed to simple parameter changes) in the memory controller and DRAM. Unfortunately, existing cycle-accurate DRAM simulators are not modular and extensible _enough_ to meet such a requirement.
We identify two key issues in the design and implementation of existing cycle-accurate DRAM simulators. First, they do _not_ model key components of a DRAM-based memory system in a _fundamentally modular_ way, making it difficult to implement and maintain different intrusive design changes. For example, USIMM [1] does not separate the DRAM specification from the memory controller. Similarly, the templated implementations of the DRAM specifications in Ramulator [2] (referred to as Ramulator 1.0 in this paper) cause undesired coupling between the DRAM specification and the memory controller.
Second, existing simulators do _not_ implement DRAM specifications in a concise and intuitive way, making it difficult to add new DRAM commands and define new timing constraints. For example, both DRAMsim2 [3] and DRAMsim3 [4] implement a single DRAM device model that aggregates all the DRAM specifications from all supported DRAM standards in a single C++ class. Ramulator 1.0's DRAM specifications are based on low-level and verbose C++ syntax (e.g., it uses eight full lines of C++ code just to specify a single tCCD_L timing constraint in DDR4 [5]).
To address these issues, we present Ramulator 2.0 [6], a successor to Ramulator 1.0 [2] that provides an easy-to-use, modular, and extensible software infrastructure for rapid and agile implementation and evaluation of DRAM-related research and design ideas. Ramulator 2.0 has two distinguishing features. First, it implements a modular and extensible code framework by identifying and modeling the key components in a DRAM-based memory system into separate _interfaces_ and _implementations_. With this framework, different design changes (e.g., different address mapping schemes, request scheduling policies, new DRAM standards, RowHammer mitigations) can be implemented as _independent_ implementations that share the _same_ interface, enabling easy modification and extension of Ramulator 2.0.
Second, to facilitate easy modification of DRAM specifications (e.g., DRAM organization, commands, timing constraints), Ramulator 2.0 implements concise and human-readable definitions of DRAM specifications on top of the lookup table based hierarchical DRAM device model in Ramulator 1.0. Ramulator 2.0's DRAM specifications 1) are defined with simple string literals, 2) leverage permutations of different DRAM commands to concisely define timing constraints, and 3) use a library of templated lambda functions that are _reusable_ across different DRAM standards to define the functionalities of DRAM commands (e.g., the same RFM command implementation can be (and is) used by DDR5 [7], LPDDR5 [8], GDDR6 [9], and HBM3 [10]). These improvements are implemented with the new features of C++20 [11] (e.g., constant-evaluated immediate functions), enabling significant duplicate-code reduction and easy modification and extension of the modeled DRAM device's functionalities _without_ sacrificing simulation speed.
We showcase the modularity and extensibility of Ramulator 2.0 by implementing and evaluating a variety of RowHammer mitigation techniques (PARA [12], TWiCe [13], Graphene [14], Hydra [15], Randomized RowSwap (RRS) [16], and an ideal refresh-based mitigation [17]) that require _different_ additional functionalities in the memory controller. These RowHammer mitigations plug themselves into the _same_ baseline memory controller implementation _without_ changing the memory controller's code, which was not possible in Ramulator 1.0 [2] and is not possible in any other DRAM simulator we are aware of [1, 3, 4].
In summary, the key features and contributions of Ramulator 2.0 are:
* Ramulator 2.0 is a modular and extensible DRAM simulator written in C++20 [11] that enables rapid and agile implementation and evaluation of design changes in the memory system. Ramulator 2.0 can either work as a standalone simulator, or be used as a memory system library by a system simulator (e.g., gem5 [18], ZSim [19]).
* We showcase the modularity and extensibility of Ramulator 2.0 by implementing and evaluating six different RowHammer mitigation techniques as plugins to a single unmodified memory controller implementation.
* Ramulator 2.0 implements a wide range of new DRAM standards, including DDR5 [7], LPDDR5 [8], HBM3 [10], and GDDR6 [9] (as well as old ones, e.g., DDR3 [20], DDR4 [5], HBM(2) [21]).
* Ramulator 2.0 is rigorously validated and maintains a fast simulation speed compared to existing cycle-accurate DRAM simulators.
* We open-source Ramulator 2.0 [6] under the permissive
MIT license to facilitate and encourage open research and agile implementation of new ideas in memory systems. We also integrate it with gem5 [18].
## 2 Ramulator 2.0 Design Features
We walk through the two key design features of Ramulator 2.0 that enable rapid and agile implementation of design changes in the memory system. Section 2.1 introduces the high-level software architecture of Ramulator 2.0 based on the key concepts of _interface(s)_ and _implementation(s)_. Section 2.1.1 provides a deeper look into the modularity and extensibility enabled by Ramulator 2.0 by showcasing how different RowHammer mitigations can all be implemented as _plugins_ of the same baseline unmodified memory controller implementation. Section 2.2 introduces the concise and human-readable DRAM specification syntax of Ramulator 2.0 that facilitates easy modification and extension of the functionality of the DRAM device.
### _Modular and Extensible Software Architecture_
Ramulator 2.0 models all components in a DRAM-based memory system with two fundamental concepts, _Interface_ and _Implementation_, to achieve high modularity and extensibility. An interface is an abstract C++ class defined in a .h header file that models the common high-level functionality of a component as seen by other components in the system. An implementation is a concrete C++ class defined in a .cpp file that inherits from an interface, modeling the actual behavior of a component. Components interact with each other through pointers to each other's interfaces stored in the implementations. With such a design, the functionality of a component can be easily changed by instantiating a different implementation for the same interface, involving _no_ changes in the code of unrelated components.
Figure 1 shows the high-level software architecture of Ramulator 2.0 with the key interfaces we identify in a DRAM-based memory system (dark boxes) and their typical implementations (light boxes) when modeling a DDR5 system with RowHammer mitigation. The arrows illustrate the relationships among different components in the simulated system (i.e., how they call each other's interface functions). We highlight the memory request path with red arrows, the DRAM command path with blue arrows, and DRAM maintenance requests (e.g., refreshes) with green arrows. A typical execution of the simulation is as follows: First, memory requests are sent (1) from the frontend (either parsed from traces or generated by another simulator, e.g., gem5 [18]) to the memory system, where the memory addresses are mapped (2) to the DRAM organization through the address mapper. Then, the requests are enqueued (3) in the request buffers of the DRAM controller. The DRAM controller is responsible for 1) ticking the refresh manager (4), which could enqueue high-priority maintenance requests (e.g., refreshes) back to the controller, 2) querying the request scheduler (5), which in turn consults the DRAM device model (6) to decode the best DRAM command to issue (7) to serve a memory request, and 3) issuing the DRAM command (8), which updates the behavior and timing information of the DRAM device model. Finally, the memory controller executes the finished request's callback to notify the frontend.
Users can easily extend Ramulator 2.0 without intrusive changes to existing code by creating different implementations of each existing interface in three easy steps: 1) create a new .cpp file, 2) create the new implementation class that inherits from both the implementation base class and the existing interface class, and 3) implement the new functionality in the new implementation class. Similarly, a new interface can be added simply by adding a .h file containing the abstract interface class definitions. All interfaces and implementations in Ramulator 2.0 _register themselves_ to a class registry that bookkeeps the relationship among different interfaces and implementations. Using this registry, Ramulator 2.0 automatically recognizes and instantiates different implementations for each interface from a human-readable configuration file. Users do _not_ need to manually maintain any boilerplate code to describe the relationships between interfaces and implementations.
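To make the interface/implementation split and the self-registering class registry concrete, the following is a minimal sketch of the pattern in Python. Ramulator 2.0 itself is written in C++20, so the class names, the registry mechanics, and the toy address decomposition below are illustrative assumptions rather than the simulator's actual API.

```
class IAddrMapper:
    """Interface: maps a flat physical address onto the DRAM organization."""
    registry = {}                      # implementation name -> class

    @classmethod
    def register(cls, name):
        def wrap(impl):
            cls.registry[name] = impl
            return impl
        return wrap

    def apply(self, addr):
        raise NotImplementedError


@IAddrMapper.register("linear")
class LinearMapper(IAddrMapper):
    """One concrete implementation; new mapping schemes are added as new
    classes without touching any existing code."""
    def apply(self, addr):
        # Toy decomposition into (channel, rank, bankgroup, bank, row, column).
        fields = []
        for bits in (1, 1, 2, 2, 16, 10):
            fields.append(addr & ((1 << bits) - 1))
            addr >>= bits
        return tuple(reversed(fields))


# The rest of the system only holds a reference to the interface, so the concrete
# implementation can be selected from a human-readable configuration value.
mapper = IAddrMapper.registry["linear"]()
print(mapper.apply(0x1234ABCD))
```

Swapping the address-mapping scheme is then a one-string change in the configuration, mirroring how Ramulator 2.0 instantiates implementations from its configuration file.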
#### 2.1.1 Memory Controller Plugins
We make a key observation that many modeled functions in the memory controller (e.g., controller-based RowHammer mitigations that track the issued activation commands) and utilities needed for evaluation (e.g., collecting statistics from the issued DRAM commands and analyzing the memory access patterns) are triggered (updated) by the currently-scheduled DRAM command. To avoid having many similar memory controller implementations for every single such modeled function and utility, we model these functions as plugins to the memory controller. As an example, Figure 2 shows in detail how various RowHammer mitigation techniques (e.g., PARA [12], Graphene [14], Hydra [15]) can be implemented as such controller plugins.
The plugin interface has a simple update(DRAM_CMD, ADDR) function that the controller calls (see Figures 1 and 2) to notify the plugin implementations about the DRAM command and address issued by the memory controller. The RowHammer mitigation implementation then updates its internal state (e.g., generates a random number for PARA, updates the row activation count table for Graphene, or queries the row count cache for Hydra).
Fig. 1: High-level software architecture of Ramulator 2.0 using an example DDR5 system configuration
If the implementation detects the need to refresh the potential RowHammer victim rows, it calls the priority_enqueue() function (see Figures 1 and 2) of the memory controller interface to send a high-priority refresh request for the identified victim rows, ready to be scheduled in the following cycles, as determined by the mitigation technique. To showcase the modularity and extensibility of memory controller plugins, Section 3.3 provides a cross-sectional evaluation of the performance overhead of six different RowHammer mitigation techniques, all implemented as memory controller plugins.
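The plugin mechanism can be sketched in a few lines of Python. The update() and priority_enqueue() hooks follow the names used in the text; the controller stub, the PARA probability, and the neighbour-row selection are illustrative assumptions and not Ramulator 2.0's actual C++ implementation.

```
import random

class ControllerStub:
    """Stand-in for the memory controller interface that the plugins talk to."""
    def __init__(self):
        self.priority_queue = []

    def priority_enqueue(self, request):
        # High-priority requests (e.g., victim-row refreshes) are scheduled first.
        self.priority_queue.append(request)


class ParaPlugin:
    """PARA-style mitigation: on every activation, refresh a neighbouring row
    with a small probability p."""
    def __init__(self, controller, p=0.001):
        self.controller = controller
        self.p = p

    def update(self, cmd, addr):
        # Called by the controller for every DRAM command it issues.
        if cmd == "ACT" and random.random() < self.p:
            channel, rank, bank, row = addr
            victim = (channel, rank, bank, row + random.choice((-1, 1)))
            self.controller.priority_enqueue(("REF_ROW", victim))


controller = ControllerStub()
plugin = ParaPlugin(controller, p=1.0)      # p=1.0 only to make this demo deterministic
plugin.update("ACT", (0, 0, 3, 1024))
print(controller.priority_queue)            # one high-priority victim-row refresh request
```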
### _Concise and Intuitive DRAM Specifications_
Ramulator 2.0 facilitates easy modification and extension of DRAM specifications (e.g., the organization of the DRAM device hierarchy, DRAM commands, timing constraints, mapping between DRAM commands and organization levels) in two major ways. First, Ramulator 2.0 allows the user to directly define the DRAM specifications _by their names_ with human-readable string literals, as Listing 1 shows.
```
// Different levels in the organization hierarchy
inline static constexpr ImplDef m_levels = {
  "channel", "rank", "bankgroup", "bank", "row", "column",
};
// Different DRAM commands
inline static constexpr ImplDef m_commands = {
  "ACT", "PRE", "RD", "WR", /* ... */
};
```
For example, a precondition shared by many commands (such as requiring all banks to be closed before an all-bank refresh) can be reused with a single line of code in each standard (instead of duplicating the entire RequireAllBanksClosed function for each DRAM standard as in Ramulator 1.0).
## 3 Validation & Evaluation
### _Validating the Correctness of Ramulator 2.0_
To make sure Ramulator 2.0's memory controller and DRAM device model implementations are correct (i.e., the DRAM commands issued by the controller obey both the timing constraints and the state transition rules), we verify the DRAM command trace against Micron's DDR4 Verilog Model [22] using a similar methodology to prior works [2, 3, 4]. To do so, we implement a DRAM command trace recorder as a DRAM controller plugin that can store the issued DRAM commands with the addresses and time stamps using the DDR4 Verilog Model's format. We collect DRAM command traces from eight streaming-access and eight random-access synthetic memory traces with different intensities (i.e., the number of non-memory instructions between memory instructions). We feed the DRAM command trace to the Verilog Model, configured to use the same DRAM organization and timings as we use in Ramulator 2.0. We find no timing or state transition violations.
### _Performance of Ramulator 2.0_
We compare the simulation speed of Ramulator 2.0 with four other cycle-accurate DRAM simulators: Ramulator 1.0 [2], DRAMsim2 [3], DRAMsim3 [4], and USIMM [1]. All simulators are configured with comparable system parameters. We generate two memory traces, one with a random access pattern and another with a streaming access pattern, each containing five million memory requests (read:write ratio = 4:1). For each simulator and trace, we repeat the simulation ten times. Table I shows the minimum, average, and maximum simulation runtimes across the ten repetitions. We conclude that, despite the increased modularity and extensibility, Ramulator 2.0 achieves a comparably fast (and even faster) simulation speed versus other existing cycle-accurate DRAM simulators. We provide the complete set of scripts, configurations, and traces to reproduce our results in [6].
### _Cross-Sectional Study of RowHammer Mitigations_
To demonstrate the modularity and extensibility of Ramulator 2.0, we implement six different RowHammer mitigation techniques: PARA [12], an idealized version of TWiCe [13], Graphene [14], Hydra [15], Randomized RowSwap (RRS) [16], and an ideal refresh-based mitigation (Ideal) [17]. All of these mechanisms are implemented in the form of memory controller plugins as described in Section 2.1.1. Figure 3 shows the performance overhead (weighted speedup normalized to a baseline configuration running the same workloads _without_ any RowHammer mitigation, y-axis) of different RowHammer mitigations as the RowHammer threshold (i.e., the minimum number of DRAM row activations needed to cause at least one bitflip, \(\text{tRH}\); x-axis) decreases from 5000 to 10. We use traces generated from SPEC2006 [23] and SPEC2017 [24] to form 25 four-core multiprogrammed workloads that we feed through a simplistic out-of-order core model (the complete set of scripts and traces to reproduce these experiments is provided in [6]).
We make the following two observations. First, all evaluated RowHammer mitigations (except for Ideal) cause significant performance overhead compared to the ideal mitigation as tRH decreases to very low values. Second, for \(\text{tRH}<50\), the performance overhead of RRS becomes too high for the simulation to make progress. The reason for this is that the activation caused by a row swap triggers even more row swaps, preventing DRAM from serving memory access requests. We conclude that existing RowHammer mitigation techniques are not scalable enough to very low tRH values (below 50). As such, more research effort is needed to develop more efficient and scalable RowHammer mitigation techniques.
## 4 Conclusion
We present Ramulator 2.0, a modern, modular, and extensible DRAM simulator as a successor to Ramulator 1.0. We introduce the key design features of Ramulator 2.0 and demonstrate its high modularity, extensibility, and performance. We hope that Ramulator 2.0's modular and extensible software architecture and concise and intuitive modeling of DRAM facilitates more agile memory systems research.
|
2304.06841
|
Video alignment using unsupervised learning of local and global features
|
In this paper, we tackle the problem of video alignment, the process of
matching the frames of a pair of videos containing similar actions. The main
challenge in video alignment is that accurate correspondence should be
established despite the differences in the execution processes and appearances
between the two videos. We introduce an unsupervised method for alignment that
uses global and local features of the frames. In particular, we introduce
effective features for each video frame by means of three machine vision tools:
person detection, pose estimation, and VGG network. Then the features are
processed and combined to construct a multidimensional time series that
represent the video. The resulting time series are used to align videos of the
same actions using a novel version of dynamic time warping named Diagonalized
Dynamic Time Warping(DDTW). The main advantage of our approach is that no
training is required, which makes it applicable for any new type of action
without any need to collect training samples for it. Additionally, our approach
can be used for framewise labeling of action phases in a dataset with only a
few labeled videos. For evaluation, we considered video synchronization and
phase classification tasks on the Penn action and subset of UCF101 datasets.
Also, for an effective evaluation of the video synchronization task, we present
a new metric called Enclosed Area Error(EAE). The results show that our method
outperforms previous state-of-the-art methods, such as TCC, and other
self-supervised and weakly supervised methods.
|
Niloufar Fakhfour, Mohammad ShahverdiKondori, Sajjad Hashembeiki, Mohammadjavad Norouzi, Hoda Mohammadzade
|
2023-04-13T22:20:54Z
|
http://arxiv.org/abs/2304.06841v3
|
# Video alignment using unsupervised learning of local and global features
###### Abstract
In this paper, we tackle the problem of video alignment, the process of matching the frames of a pair of videos containing similar actions. The main challenge in video alignment is that accurate correspondence should be established despite the differences in the execution processes and appearances between the two videos. We introduce an unsupervised method for alignment that uses global and local features of the frames. In particular, we introduce effective features for each video frame by means of three machine vision tools: person detection, pose estimation, and VGG network. Then the features are processed and combined to construct a multidimensional time series that represent the video. The resulting time series are used to align videos of the same actions using a novel version of dynamic time warping named Diagonalized Dynamic Time Warping(DDTW). The main advantage of our approach is that no training is required, which makes it applicable for any new type of action without any need to collect training samples for it. For evaluation, we considered video synchronization and phase classification tasks on the Penn action dataset [37]. Also, for an effective evaluation of the video synchronization task, we present a new metric called Enclosed Area Error(EAE). The results show that our method outperforms previous state-of-the-art methods, such as TCC [11] and other self-supervised and supervised methods.
## 1 Introduction
Many sequential processes happen daily in the world. Waking up, drinking water, and growing a plant are examples of sequential processes that are always happening. Although these processes are performed with different varieties and qualities, all the processes that show a specific action have common time points. For example, drinking a glass of water may happen at different speeds, places, and containers, but all the processes that indicate drinking water consist of 3 main steps: lifting the glass, drinking water, and lowering the glass. As a result, each process, or in other words, each action, consists of one or more phases, which are the same in terms of the order of occurrence in all similar processes. Video alignment is a method in which the frames of the videos of two identical actions that differ in things such as scene, camera angle, and speed are matched to each other.
The main challenge in the task of video alignment is the difference in the execution process and the appearance of the frames in videos containing the same action. For example, an action such as picking up an object from the ground can be done in various ways while recorded by the camera.
Figure 1: We propose an unsupervised method to align pairs of videos that presented the same actions. We model a video as time series which consists of global and local features extracted from each frame. In addition, we introduce a novel DTW, called Diagonalized Dynamic Time Warping (DDTW), to find corresponding frames in each pair of videos.
This action may be done once in \(5\) seconds and another time in \(10\) seconds. The object that is raised from the ground may be a ball or a cup, big or small, red or blue. Also, the camera may record this action from the front, side, or any other angle. All these differences make video alignment, and choosing the correct method for video modeling, challenging.
In recent years, much research has been done in action recognition, anomaly detection, tracking, etc. However, video alignment has received less attention, even though it can be used to improve all of the above. For example, in [21], novel normalized pose features invariant to video subjects' anthropometric characteristics are introduced. The method was evaluated in the task of action recognition, and significant results were achieved. In [20], the action recognition problem is considered as two separate problems, action duration misalignment and action evolution misalignment. Based on this assumption, a two-stage action alignment network is presented in this work. Video classification [6, 36] and action detection [38, 13, 32] are other examples of the applications of video alignment that have received attention in recent years.
In the field of video alignment, self-supervised [11, 16, 22, 39, 28] and weakly-supervised methods [2, 4, 8, 14] have been presented in recent years. In some works, video alignment has been tried to solve using Dynamic Time Warping(DTW) [6, 8, 39]. Since DTW is not derivable and cannot be implemented using neural networks, in these works, some modified types of DTW, such as soft-dtw, which are derivable, have been used [6, 8]. Another category of video alignment methods is based on cycle consistency loss [11, 14, 28, 35]. In [11], they presented a self-supervised method for learning correspondences between frames in the time domain. In this article, the network is trained based on the cycle-consistency cost function, and then the trained network is used to match the frames of a pair of videos with each other. Unlike [11, 28] deals with correspondence learning in both time and space domains based on cross-cycle stability. In [14, 16], the network is trained based on frame level and video level simultaneously. In [16], the network is trained based on a cost function including two terms, soft-dtw and temporal regularization. In [14], a weakly supervised method is presented based on a cost function consisting of dtw and cycle-consistency. Unlike other works, in [23], a more comprehensive look at the subject of video alignment is given. In this work, video representation is learned to align two videos while the possibility of background frames, redundant frames, and non-monotonic frames are considered.
One of the shortcomings of the existing methods is the need to train deep networks for each class of action, which requires a lot of training samples from each action. In this work, we present an unsupervised method that can be used to align pairs of videos containing any action without any need to train a network.
The main contribution of this article is, using an unsupervised approach for representing a video as a multidimensional time series representing features of its frames over time. To construct the features of a frame, we simultaneously use person detection and pose estimation algorithms to extract local features and the VGG network to extract global features. The combination of local and global features provides an effective representation of videos for accurate alignment. To evaluate the effectiveness of the introduced features, we compared their performance with one of the existing self-supervised methods [11] in phase classification on the Penn action dataset [37]. In addition, a new evaluation metric is introduced in this work to compare the performance of alignment methods with each other.
In summary, our contributions include the following:
* Presenting an unsupervised method to align two videos.
* Modeling videos using their global and local features as well as their static and dynamic features.
* Presenting a modified DTW method for aligning time series with limited deviation.
* Presenting a new metric to compare the performance of alignment methods.
## 2 Method
This section describes our unsupervised method for aligning videos containing similar actions. Specifically, we model a video as a time series representing the global and local features of the video that are extracted from each frame of it. Global features are extracted by means of the VGG pre-trained network. These features show the information related to the entire frame. Local features are extracted based on pose estimation and person detection algorithms; these features are responsible for modeling changes in the subject performing the action. Figure 2 provides an illustrative overview of our proposed method for video alignment.
### Feature Extraction
Features play a vital role in the areas of image processing [7, 12, 19] and machine vision, as extracting more valuable features leads to a better result. We use two kinds of features to model a video as a time series: global features, which are used to model the entire frame, and local features, which are used to model the details of the main subject performing the action. Features are extracted using three methods: pose estimation, person detection, and VGG network.
#### 2.1.1 Local Features
Each action is performed by one or more main subjects. Although similar actions might differ in the scene, speed, and recording quality, the main subjects follow the same process. One important step in characterizing different actions is to represent the details of the movements of the main subject by a number of features. We call these features local features. Local features consist of two types: static and dynamic, which are responsible for representing the within-frame and between-frame information related to the main subject, respectively.
**Static Features**: In this work, static features refer to the features extracted from each frame independently from other frames. Static features consist of static pose features and static box features. These features represent the details of the current location of the main subject.
**Static Pose Features**: Static pose features consist of the positions of the key points of the main subject in each frame.
Human pose estimation refers to determining the position of human joints (known as body key points) [1, 9, 26]. In this work, to extract the key points of the main subject, we use MeTRAbs [30] algorithm for pose estimation, which extracts 24 key points.
After the key points are extracted, to remove the effect of the initial position of the main subject in the first frame, we shift the hip joint in the first frame to the coordinate center and shift all key points in all frames accordingly.
\[f_{sp}^{m}(n)=k^{m}(n)-k^{1}(1)\qquad\forall m,n \tag{1}\]
Where \(k^{m}(n)\) denotes the 2D coordinates of the \(m\)-th key point in frame \(n\) (\(m=1\) refers to the hip joint key point) and \(f_{sp}^{m}(n)\) denotes the static pose feature corresponding to the \(m\)-th key point in frame n. Finally, \(48\) static pose features are extracted from each frame.
**Static Box Features**: Key points extracted from pose estimation contain helpful information and details. However, these details alone cannot model the main subject's global motion. Therefore, adding more general information that models the movement and state change of the main subject creates more complete features for the alignment. For this, the deep sort algorithm [34] with Yolo V5 [5] is used to extract the main subject's boxes in each frame.
After extracting the subject's box in each frame, the length-to-width ratio and center of the boxes are used as static box features. These features explain the change in position and angle of the main subject's body in each frame. In order to remove the effect of the initial position and the appearance characteristics of the main subject, the center of the box in the first frame is placed at the coordinate origin, and the length-to-width ratio of the box is set to \(1\). Other
Figure 2: In our method, two types of features are used to build time series: local features, including (pose and box features) and global features. Depending on the pose and box extracted, static and dynamic features are calculated for each image. To calculate the global features, we multiply the pixels of each frame by Gaussian weight according to the extracted box and apply the final frame to the input of the VGG network and extract the global features based on it.
frames are also normalized based on the changes of the first frame:
\[f_{sb}^{1}(n)=c(n)-c(1)\qquad\forall n \tag{2}\]
\[f_{sb}^{2}(n)=\frac{r(n)}{r(1)}\qquad\forall n \tag{3}\]
where \(c(n)\) and \(r(n)\) denote the coordinates of the center and the length-to-width ratio of the subject's box in frame \(n\), respectively. Also, \(f_{sb}^{m}(n)\) denotes the static box feature (\(m=1\) and \(m=2\) refer to the center and the length-to-width ratio of the box, respectively). Therefore, \(3\) features are extracted from each frame.
**Dynamic Features**: In addition to static features, to appropriately model a video of an action, some features for representing the changes between frames are also required. In this work, dynamic features refer to the features extracted based on the changes between successive frames. More specifically, these features consist of displacement vectors between the static features.
**Dynamic Pose Features**: The first part of dynamic features consists of the displacement vector between the key points in each frame and its previous frame:
\[f_{dp}^{m}(n)=k^{m}(n)-k^{m}(n-1)\qquad\forall m,n \tag{4}\]
where \(f_{dp}^{m}(n)\) denotes the dynamic pose feature for the \(m\)-th key point in frame \(n\). Note that \(f_{dp}^{m}(1)\) is considered to be zero. Finally, \(48\) dynamic pose features are extracted for each frame.
**Dynamic Box Features**: The displacement vector of the center position and changes in the length-to-width ratio of the boxes is another dynamic feature that can model the progress of an action. Similar to dynamic pose features, the second part of dynamic features consists of the displacement vector between static box features in each frame and its previous frame as:
\[f_{db}^{m}(n)=f_{sb}^{m}(n)-f_{sb}^{m}(n-1)\qquad\forall m,n \tag{5}\]
where \(f_{db}^{m}(n)\) denotes the dynamic box feature in frame \(n\). Note that \(f_{db}^{m}(1)\) is considered to be zero. Finally, \(3\) dynamic box features are extracted for each frame.
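A compact sketch of Eqs. (1)-(5) is given below, assuming the pose estimator returns an array `keypoints` of shape (N, 24, 2) with the hip joint at index 0, and the detector returns box centers `centers` of shape (N, 2) and length-to-width ratios `ratios` of shape (N,); these array names and shapes are assumptions made here, not the paper's notation.

```
import numpy as np

def local_features(keypoints, centers, ratios):
    n = keypoints.shape[0]
    # Static pose features, Eq. (1): subtract the hip position of the first frame.
    f_sp = keypoints - keypoints[0, 0]                                   # 48 per frame
    # Static box features, Eqs. (2)-(3): recentre the box and normalise the ratio.
    f_sb = np.concatenate([centers - centers[0],
                           (ratios / ratios[0])[:, None]], axis=1)       # 3 per frame
    # Dynamic features, Eqs. (4)-(5): frame-to-frame displacements, zero at frame 1.
    f_dp = np.diff(keypoints, axis=0, prepend=keypoints[:1])             # 48 per frame
    f_db = np.diff(f_sb, axis=0, prepend=f_sb[:1])                       # 3 per frame
    return np.concatenate([f_sp.reshape(n, -1), f_sb,
                           f_dp.reshape(n, -1), f_db], axis=1)           # (N, 102)
```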
Interpolation for Missing DataEach of the pose and detection algorithms may fail to extract the key points and boxes in some frames. Linear interpolation is used to estimate the missing key points using those in the most recent frames before and after the current frame for which the key points are detected.
#### 2.1.2 Global Features
As mentioned before, we use a combination of local and global features of the frames over time to represent videos, where global features represent information over the whole frame. Obviously, an action is performed by the subject, and therefore local features are more directly related to the type of action being performed than global features. However, the objects around the main subject, the appearance of the subject, and the background serve as important side information to represent an action.
We use the VGG16 network [31], which is pre-trained on the ImageNet dataset [10], to extract global features. To adapt the network, we replace the fully connected layers with a max-pooling layer of stride \((1,1)\) and filter size \((7,7)\), a flatten layer, and a max-pooling layer of stride \((1,1)\) and filter size \((1,8)\). In order to focus more on the subject than on other details of the scene, a truncated 2D Gaussian weight mask is applied to the pixels of the input frame before feeding it to the network. Figure 3 illustrates the final network. The truncated 2D Gaussian weight mask is designed according to the following points:
* Pixels located inside the subject box, with a high probability, is more related to the action.
* A margin is considered for the box boundaries to reduce the error caused by the box extraction algorithm as well as not to attenuate the objects that are very close to the subject. More specifically, the height and width of the box are increased by \(20\) pixels.
The weights of the mask are constant outside of the corrected box and are \(0.2\) less than the smallest weight of the 2D Gaussian on the boundaries of the box.
\[g_{x,y}=\exp{(-\frac{(x-x_{center})^{2}+(y-y_{center})^{2}}{2})} \tag{6}\]
\[w_{x,y}=\begin{cases}g_{x,y}&p_{x,y}\in mbox\\ g_{min}-0.2&p_{x,y}\notin mbox\end{cases} \tag{7}\]
\(w_{x,y}\) denotes the weight that should be multiplied by pixel \(p(x,y)\). \(mbox\) indicates a box with a margin. \(g_{min}\) represents the lowest coefficient in the boundary areas of the \(mbox\).
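A sketch of the weighting in Eqs. (6)-(7) is shown below. The 20-pixel margin and the 0.2 offset follow the text; how the coordinates are scaled inside the Gaussian is not specified in the paper, so normalising by the enlarged box half-sizes is an assumption, as are the function and variable names.

```
import numpy as np

def weight_mask(height, width, box, margin=20, offset=0.2):
    x0, y0, x1, y1 = box                                   # subject box, pixel coordinates
    x0, y0, x1, y1 = x0 - margin, y0 - margin, x1 + margin, y1 + margin   # mbox
    xc, yc = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    ys, xs = np.mgrid[0:height, 0:width]
    # Eq. (6), with distances normalised by the box half-sizes (assumed scaling).
    g = np.exp(-(((xs - xc) / (0.5 * (x1 - x0) + 1e-6)) ** 2 +
                 ((ys - yc) / (0.5 * (y1 - y0) + 1e-6)) ** 2) / 2.0)
    inside = (xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1)
    g_min = g[inside].min() if inside.any() else 1.0       # lowest value on the mbox boundary
    # Eq. (7): Gaussian inside the enlarged box, constant g_min - 0.2 outside.
    return np.where(inside, g, g_min - offset)

# The mask is multiplied element-wise with the frame before it is fed to the VGG network.
```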
### Construction of Time Series
After extracting the local and global feature vectors, they are concatenated, and a feature vector with a length of \(166\) is constructed for each frame. After that, the feature vectors of the frames of each video together form a time series that represents the video. In order to reduce the noise of the extracted features, a moving average with a window length
equal to \(5\) is used. Also, the mean and variance for each time series are normalized. The final method of constructing a time series from a video is shown in Figure 2.
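The time-series assembly can be sketched as follows, assuming the per-frame local features (102 dimensions from the previous subsection) and global features (64 dimensions, so that the total matches the 166 stated above) are already computed; the variable names and the padding mode of the moving average are assumptions made here.

```
import numpy as np

def build_series(local_feats, global_feats, window=5):
    series = np.concatenate([local_feats, global_feats], axis=1)         # (N, 166)
    kernel = np.ones(window) / window
    # Length-5 moving average along time to reduce feature noise.
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, series)
    # Normalise the mean and variance of each feature dimension.
    return (smoothed - smoothed.mean(axis=0)) / (smoothed.std(axis=0) + 1e-8)
```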
### DDTW
Dynamic time warping(DTW) [25, 29] is one of the most popular algorithms for measuring the similarity between a pair of sequences and computing the best way to align them, no matter whether their lengths are equal or not. Different kinds of DTW have been developed in various fields [17, 27, 33], and also some works have used DTW in video alignment tasks [15, 24]. In this work, a novel method called Diagonalized Dynamic Time Warping(DDTW) is introduced, which is a generalization of the DTW method. Consider the sets \(X=\{x_{1},x_{2},\cdots,x_{n}\}\) and \(Y=\{y_{1},y_{2},\cdots,y_{m}\}\) as the frames of the first and second video, respectively, and build an \(m\times n\) table \(D\) such that \(D_{i,j}\) is the Euclidean distance between the feature vector of \(x_{i}\) and \(y_{j}\). In conventional DTW, the algorithm finds the best alignment (a path from the down-left corner of the table to the top-right corner which can only move in three directions \(\rightarrow\uparrow\nearrow\) ) with the minimum sum of \(D_{i,j}\)'s. In DDTW, a penalty coefficient is considered if the path gets further than a threshold from the diagonal. The reason for this penalty is the observation that the frames of similar actions performed by different subjects are almost linearly corresponding to each other. Therefore, the DTW path is close to the diagonal. We consider a margin \(m\) and build a new table \(D^{\prime}\) as:
\[D^{\prime}_{i,j}=\begin{cases}D_{i,j}&d\leq m\\ D_{i,j}(1+\lambda(d-m))&d>m\end{cases} \tag{8}\]
where \(d\) is the orthogonal Euclidean distance between the table's \((i,j)\)-th cell and the diagonal, and \(\lambda\) is the DDTW coefficient. Then the best path, which has the minimum sum of the \(D^{\prime}_{i,j}\)'s, should be found (Figure 4).
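A direct implementation of DDTW is sketched below: the pairwise-distance table is penalised according to Eq. (8) and then a standard DTW dynamic program with backtracking is run on the penalised table. The default margin and coefficient values, and measuring the diagonal distance in table cells, are assumptions made here rather than values taken from the text.

```
import numpy as np

def ddtw(X, Y, margin=10.0, lam=0.05):
    n1, n2 = len(X), len(Y)
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)      # pairwise Euclidean distances
    ii, jj = np.meshgrid(np.arange(n1), np.arange(n2), indexing="ij")
    # Orthogonal distance (in cells) of cell (i, j) from the table's diagonal.
    d = np.abs(ii * n2 - jj * n1) / np.hypot(n1, n2)
    Dp = np.where(d <= margin, D, D * (1.0 + lam * (d - margin)))  # Eq. (8)
    # Standard DTW recursion on the penalised table.
    acc = np.full((n1 + 1, n2 + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            acc[i, j] = Dp[i - 1, j - 1] + min(acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1])
    # Backtrack to recover the frame-to-frame alignment path.
    path, i, j = [], n1, n2
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return path[::-1], acc[n1, n2]
```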
### Enclosed Area Error(EAE)
It is almost impossible to manually align two videos frame-by-frame in order to use them as ground truth. Therefore, in the literature, each video is divided into a number of phases, and metrics such as phase classification and correct phase rate are used to evaluate the alignment output, but the problem with such metrics is that there is no ground truth alignment between frames, and these metrics are only based on the number of frames that are aligned to a frame from the correct phase of the other video. In this section, a new metric for the alignment of two videos or generally two sequences which consist of some phases, is introduced. During each phase, it is natural to suppose that the process is going forward linearly, which is a rather correct assumption for the Penn action dataset [37] and generally for human-action video datasets. By this assumption and knowing the boundaries of the phases in each video, a ground truth for the alignment can be obtained. We know some points on the ground truth path, and by the linearity assumption, the ground truth would be a piecewise-linear path going through those points (Figure 6). The metric equals the area between the ground truth and alignment path divided by the whole area of the rectangle.
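A sketch of the EAE computation is given below, assuming both the ground-truth and predicted alignment paths are given as lists of (i, j) points with non-decreasing i (frame index of the first video), so that each path can be treated as a piecewise-linear function of i; the resampling step and variable names are assumptions made here.

```
import numpy as np

def enclosed_area_error(gt_path, pred_path, n, m):
    xs = np.arange(n, dtype=float)
    # Resample both paths as functions of the first video's frame index.
    gt = np.interp(xs, [p[0] for p in gt_path], [p[1] for p in gt_path])
    pr = np.interp(xs, [p[0] for p in pred_path], [p[1] for p in pred_path])
    # Area enclosed between the two curves, as a fraction of the n x m rectangle.
    area = np.trapz(np.abs(gt - pr), xs)
    return area / (n * m)
```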
## 3 Datasets and Evaluation
### Dataset
The performance of our video alignment technique is evaluated on the Penn action dataset [37]. This dataset provides a collection of actions performed by different people with different variations. Also, to compare the performance of our technique with other methods, we used the phase labels prepared by the authors of [16] for this dataset. The complete list of all actions with the number of phases is given in Table 1.
Figure 4: DDTW method, the green lines parallel to the diagonal show the margin. The blue path shows the alignment of frames, and going out of the margin results in a penalty, which is calculated according to the distance from the diagonal.
Figure 3: VGG network is used to calculate the global features. The input of the network is a weighted frame based on the truncated 2D Gaussian weight. Fully connected layers with the max-pooling layer of stride \((1,1)\) and filter size \((7,7)\), flatten layer, and the max-pooling layer of stride \((1,1)\) and filter size \((1,8)\) are replaced.
### Baselines
For the experiments, in addition to using our proposed alignment method to align the videos, the TCC method [11], which is one of the best models in the video alignment task on the Penn action dataset [37], is also implemented and used. Moreover, a trivial baseline that aligns the frames of two videos linearly, based only on their lengths, is developed. This trivial method achieves outstanding results on the videos of this dataset because they are trimmed and extra/idle frames have been removed from them. An experiment is designed using synthesized videos that are closer to real-world videos to show the failure of the trivial method. The goal of this experiment is to show that the good performance of the trivial method on this dataset is due to the special conditions of its videos, whereas our method is robust against various conditions.
**TCC [11]**: This self-supervised representation learning method trains a network using temporal cycle-consistency, which is a differentiable cycle-consistency loss that can find matched frames between two videos. This method produces per-frame embeddings for both videos and then uses Euclidean distance to align the frames of the second video to those of the first video.
**Trivial**: This method aligns two videos based only on their numbers of frames, under the assumption that in all videos the process goes forward linearly. It means that if the first and second videos have \(n\) and \(m\) frames, respectively, then the \(i\)-th frame of the first video is aligned to the \(\left(\frac{i}{n}\times m\right)\)-th frame of the second video. Figure 5 shows EAE for the trivial and our method on two pairs of videos.
### Metrics
In this work, three evaluation metrics are used: EAE, correct phase rate, and phase classification accuracy. EAE, which is our proposed metric, is explained in Section 2.4. This metric evaluates the video synchronization task.
**Correct Phase Rate [3, 18]**: This metric finds the portion of frames in the reference video which are aligned to a frame in the correct phase in the second video. This metric, which evaluates the video synchronization task, is calculated after applying DDTW.
**Phase classification accuracy**: This is the per frame phase classification accuracy on test data. To perform phase
\begin{table}
\begin{tabular}{|c|c|} \hline Action & Number of Phases \\ \hline Baseball Pitch & 4 \\ \hline Baseball Swing & 3 \\ \hline Bench Press & 2 \\ \hline Bowling & 3 \\ \hline Clean and Jerk & 6 \\ \hline Golf Swing & 3 \\ \hline Jumping Jacks & 4 \\ \hline Pullups & 2 \\ \hline Pushups & 2 \\ \hline Situps & 2 \\ \hline Squats & 4 \\ \hline Tennis Forehand & 3 \\ \hline Tennis Serve & 4 \\ \hline \end{tabular}
\end{table}
Table 1: Number of phases for each activity in Penn action dataset.
Figure 5: The enclosed area for our predicted and trivial path for three pairs of videos. For the trivial method, the alignment path is the straight line passing through the lower left and upper right corners of the table.
Figure 6: Enclosed area is the area between ground truth and predicted path. The EAE metric computes what fraction of the table’s area is in the enclosed area.
classification, an SVM is trained using the phase labels for each frame of the videos. Also, \(10\)-fold cross-validation on test data is used to evaluate more accurately.
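This evaluation can be sketched with scikit-learn as follows; the SVM hyper-parameters are not given in the text, so the defaults used here are assumptions.

```
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def phase_classification_accuracy(frame_features, phase_labels):
    # frame_features: (num_frames, 166) per-frame vectors; phase_labels: (num_frames,)
    clf = SVC()
    scores = cross_val_score(clf, frame_features, phase_labels, cv=10)
    return scores.mean()
```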
### Results
The proposed method is evaluated using two tasks: phase classification and video synchronization.
#### 3.4.1 Phase Classification
Our final features are evaluated on the phase classification task and compared with TCC [11]. In this setting, \(10\)-fold cross-validation is used on test data. As it is shown in Table 2, our method significantly outperforms TCC in this task, showing the effectiveness of the proposed features. It is worth mentioning that in this experiment, video alignment is not performed.
For the sake of the comprehensiveness of the experiments, the few-shot phase classification task is also evaluated. In this experiment, it is assumed that many videos exist for each action, but just a few have phase labels. First, the case where only phase labels for one video from each action are available is considered. In this case, the SVM classifier is trained using labeled video, and the phase labels of the frames of other videos are predicted. Table 3 shows the results of our method compared to TCC [11]. As indicated in Table 3, the proposed method outperforms TCC in this task too.
Furthermore, the performance of our method is compared with the TCC [11] method by increasing the number of training videos. In this experiment, for each number of training videos(ranging from \(1\) to \(20\)), a random selection of the training videos is repeated \(20\) times, and then the resulting classification accuracies are averaged. According to the results, which are shown in Figure 7, by increasing the number of training videos, the gap between the performance of our proposed method and TCC becomes larger, reaching about \(10\%\) difference when the number of training videos is at least \(8\) videos.
#### 3.4.2 Video Synchronization
The results of video synchronization metrics for the three methods are provided for each action in Table 4. Results show that our final method (feature vector extraction + DDTW) performs significantly better than TCC [11] in the video alignment task.
An experiment is provided to show that the good results of the trivial method on the Penn dataset cannot be generalized to other datasets. The reason for the good performance of this method here is that the videos in the Penn action dataset are trimmed, i.e., idle frames are removed from the beginning and end of the videos. Also, there is no noticeable change in the speed of performing different phases of any action. In other words, the videos of the same action can be aligned with each other by linear expansion or shrinkage in the time domain. In this experiment, realistic modifications are applied to the videos to generate new ones that the trivial method fails to align successfully with the original videos. To this end, a "wait phase", which is a repetition of the first three frames of the video, is added at the beginning of all videos to form the augmented data; suppose the original video has \(n\) frames, then \(\frac{n}{2}\) frames are added at the beginning to generate a new video with \(\frac{3n}{2}\) frames. This modification is realistic because it is natural to wait and concentrate before starting an exercise. As shown in Table 5, the trivial method fails to align modified videos to original ones effectively, and our method performs better than TCC [11] in this task.
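The "wait phase" augmentation can be sketched as follows for a video stored as an array of frames; the exact way the three-frame block is tiled to reach \(n/2\) frames is an assumption made here.

```
import numpy as np

def add_wait_phase(video):
    n = len(video)                                   # video: array of shape (n, H, W, 3)
    k = n // 2                                       # number of frames to prepend
    reps = -(-k // 3)                                # ceil(k / 3) repetitions of the first 3 frames
    wait = np.concatenate([video[:3]] * max(reps, 1))[:k]
    return np.concatenate([wait, video])             # new length is roughly 3n/2
```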
\begin{table}
\begin{tabular}{|c|c|} \hline & Phase Classification \\ \hline TCC [11] & 69.6 \\ \hline Ours & **81.1** \\ \hline \end{tabular}
\end{table}
Table 2: Average of Phase Classification Accuracy percentages on different activities of Penn action dataset.
\begin{table}
\begin{tabular}{|c|c|} \hline & Phase Classification \\ \hline TCC [11] & 55 \\ \hline Ours & **56.7** \\ \hline \end{tabular}
\end{table}
Table 3: Average of Few-Shot (only one video for each action). Phase Classification Accuracy percentages on different activities of Penn action dataset.
Figure 7: Few-Shot Phase Classification result for different number of labeled videos
## 4 Conclusion
This paper presents an unsupervised method for aligning two videos with the same action but different execution and appearance. In this method, a video is modeled as a multi-dimensional time series containing global and local, static and dynamic features of its frames. A modified DTW method is also introduced for aligning time series, and a new metric is presented to compare the performance of time series alignment methods effectively. The results show that the proposed method provides significant performance improvement compared to the TCC method and can be implemented for any action performed by one subject without any need for training any network. This work adds to the field of video alignment and has the potential to improve various video-related tasks such as action recognition, anomaly detection, and tracking.
|
2301.12032
|
BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution
Generalization of VQA Models
|
We introduce a new test set for visual question answering (VQA) called
BinaryVQA to push the limits of VQA models. Our dataset includes 7,800
questions across 1,024 images and covers a wide variety of objects, topics, and
concepts. For easy model evaluation, we only consider binary questions.
Questions and answers are formulated and verified carefully and manually.
Around 63% of the questions have positive answers. The median number of
questions per image and question length are 7 and 5, respectively. The state of
the art OFA model achieves 75% accuracy on BinaryVQA dataset, which is
significantly lower than its performance on the VQA v2 test-dev dataset
(94.7%). We also analyze the model behavior along several dimensions including:
a) performance over different categories such as text, counting and gaze
direction, b) model interpretability, c) the effect of question length on
accuracy, d) bias of models towards positive answers and introduction of a new
score called the ShuffleAcc, and e) sensitivity to spelling and grammar errors.
Our investigation demonstrates the difficulty of our dataset and shows that it
can challenge VQA models for next few years. Data and code are publicly
available at: DATA and CODE.
|
Ali Borji
|
2023-01-28T00:03:44Z
|
http://arxiv.org/abs/2301.12032v1
|
# BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models
###### Abstract
We introduce a new test set for visual question answering (VQA) called BinaryVQA to push the limits of VQA models. Our dataset includes 7,800 questions across 1,024 images and covers a wide variety of objects, topics, and concepts. For easy model evaluation, we only consider binary questions. Questions and answers are formulated and verified carefully and manually. Around 63% of the questions have positive answers. The median number of questions per image and question length are 7 and 5, respectively. The state of the art OFA model achieves 75% accuracy on BinaryVQA dataset, which is significantly lower than its performance on the VQA v2 test-dev dataset (94.7%). We also analyze the model behavior along several dimensions including: a) performance over different categories such as text, counting and gaze direction, b) model interpretability, c) the effect of question length on accuracy, d) bias of models towards positive answers and introduction of a new score called the "ShuffleAcc", and e) sensitivity to spelling and grammar errors. Our investigation demonstrates the difficulty of our dataset and shows that it can challenge VQA models for next few years. Data and code are publicly available at: DATA and CODE.
## 1 Introduction
Visual question answering [5, 10] is a multidisciplinary task at the intersection of computer vision, NLP, knowledge representation, reasoning, common sense knowledge, etc. The goal is to answer a text-based question given an input still image or a video.
Recent VQA models are able to answer binary questions with above 95% accuracy, which is astonishing considering that, in principle, any question can be asked about an image. At the same time, though, this raises the concern that perhaps we are not using test sets with the right level of difficulty. Using the same test set over the years carries the risk of over-fitting, as researchers often tune their models towards the statistics of the test sets (even when the annotations are held hidden). To mitigate this issue, it is crucial to have several versatile independent test sets to evaluate models and to track progress. While several test sets are available for problems such as image classification (_e.g._[14, 28, 6]) and object detection (_e.g._[24, 20, 22]), the VQA field lacks enough difficult test sets. Our study is an effort in this direction. This discussion naturally relates to out-of-distribution studies showing that models are biased towards test sets that are similar to the sets on which they have been trained. Likewise, they underperform on test sets that are even slightly different [28, 32, 35]. In this regard, here we are also testing the out-of-distribution performance of VQA models.
Our test set contains 1,024 images crawled from publicly-available and free-to-distribute sources. We used Google and Bing search engines with different search phrases to collect the images. We made sure that no image
Figure 1: Samples from our dataset. Our dataset covers a wide variety of concepts including counting, crowd, emotions, drawings, paintings, camouflage, clothing, time, weather, body parts, age, text, gaze direction, etc. It also includes questions that address spatial understanding of models (_e.g._ the blue rectangle in the last image of the 3rd row). See Appendix A for more examples.
contains sensitive material, has poor resolution, or violates copyright law\({}^{1}\). The gathered data encompass a wide variety of visual concepts over RGB images, paintings, drawings, cartoons, and clip arts (Fig. 1). We have made sure that all the questions are unambiguous and all answers are correct. Our test set contains more questions per image (\(\sim\)7) than the VQA v2 test set (\(\sim\)3). We only consider binary questions, since essentially any question can be converted to a "yes/no" question. This simplifies the model evaluation and eliminates the complicated process of matching sentences of predicted answers with actual answers. Notice that this argument does not necessarily mean that we only need models that give binary answers.
Footnote 1: We choose images that were public domain, did not have copyright, or were released by the government.
Although our test set is smaller than the VQA test set, it comes with the benefit of better control over the complexity of the questions and quality of the answers. Controlling the difficulty level of the questions generated by the Amazon Mechanical Turk (AMT) workers is challenging, as workers may choose to ask simple and short questions to save time. Unlike the questions in the VQA dataset [5] that are supposed to fool a toddler, alien, or a smart robot, some BinaryVQA questions can even challenge adults. To answer the majority of the questions, one has to carefully analyze the images. Further, small versatile and carefully curated test sets like ours can alleviate the legal issues concerning consents, licensing, privacy and security which are harder to control in datasets containing millions of images.
In curating BinaryVQA, we have made three choices. First, this test set is intentionally not paired with a training set. This is to encourage generalization and to prevent models from exploiting correlations between the test and training sets. Such correlations are easily accessible to models but are not detectable by humans [9]. Second, our dataset comes with a license that disallows researchers from updating the parameters of any model on it for any reason. This is again to avoid over-fitting. Third, to mitigate the danger of our data leaking into other training sets, we mark every image with a one-pixel green border that must be removed on the fly before testing.
In addition to the test set, we also introduce new dimensions along which VQA models can be tested, in particular the sensitivity of models to small perturbations in the questions. We find that, unlike humans, current models are highly sensitive to minor grammar mistakes. Further, we study the bias of models towards generating positive answers, whether models indeed require the image to answer the questions, and whether they attend to the right image regions to do so. In a nutshell, our results show that state-of-the-art VQA models struggle on our dataset. This suggests that, in conjunction with other datasets, our dataset can be used to push VQA models to become better.
## 2 VQA Datasets
Several VQA datasets have been introduced [18, 26, 40]. In these datasets, images are either taken from an existing vision dataset (_e.g._ MSCOCO [24]) or are artificially created (_e.g._ Abstract Scenes [5], computer graphics [17, 4]). Further, questions are generated either automatically [17, 18, 25, 29, 41, 2, 42], by crowd workers [18, 21, 43, 5, 8], or by in-house participants [18, 38]. Unlike these datasets, questions in our dataset are carefully constructed by experts such that a detailed inspection of the image is necessary to answer them. Some prominent VQA datasets are listed in Table 1. Those most relevant to our work are described next.
**COCO-QA [29]** includes 123,287 images from the MSCOCO (72,783 for training and 38,948 for testing) and each image has one question/answer pair. Questions are automatically generated from the image descriptions and are categorized into four types based on the type of expected answer: object, number, color, and location. A downside of the COCO-QA dataset is that 9,072 (23.29%) of test questions also appear in the training questions.
**VQA [5, 11]** is one of the most widely used datasets ([https://visualqa.org/](https://visualqa.org/)). It comprises two parts, one using natural images called VQA-real (sourced from MSCOCO), and a second one with cartoon images called VQA-abstract. The latest more comprehensive version of this dataset, VQA v2.0 consists of 1.1 million (image, question) pairs with 13 million associated answers.
**Visual Genome [21]** aims to enhance progress on cognitive tasks, especially spatial relationship reasoning. It contains over 108K images, with an average of about 35 objects, 26 attributes, and 21 pairwise relationships between objects per image.
**Visual7W [43]** includes seven types of WH questions (what, where, when, who, why, which and how) to examine capability of a model in visual understanding. Questions
| Dataset | # Images | # Questions | Question Type(s) |
| --- | --- | --- | --- |
| DAQUAR [25] | 1,449 | 12,468 | Object identification |
| COCO-QA [29] | 123,287 | 115,000 | Questions automatically generated from COCO captions |
| VQA [5] | 204,721 | 614,163 | Combining vision, language and common-sense |
| Visual Madlibs [41] | 10,738 | 360,001 | Fill in the blanks |
| Visual7W [43] | 47,300 | 327,939 | 7Ws, locating objects |
| CLEVR [17] | 100,000 | 835,354 | Synthetic question generation using relations |
| TallyQA [1] | 165,000 | 306,907 | Counting objects of varying complexities |
| KVQA [31] | 24,602 | 183,007 | Questions based on Knowledge Graphs |
| VizWiz [13] | 31,000 | 31,000 | Questions by visually impaired users |
| TextVQA [33] | 28,408 | 45,336 | Questions demanding reasoning about text |

Table 1: Overview of VQA datasets described in this paper.
are asked in the multiple-choice format. There are four candidates for each question, and only one candidate is the correct answer.
**Visual Madlibs [41]** consists of 360,001 targeted descriptions spanned across 12 different types of templates and their corresponding images.
**VizWiz [13]** is constructed from interactions of visually impaired users with a mobile application. It consists of 31,000 visual questions together with 10 crowdsourced answers per question. Images often have poor quality due to poor lighting, focus, and framing of the content of interest. Further, questions are on average more conversational and are sometimes incomplete.
**TextVQA [33]** contains 45,336 questions on 28,408 images that require reasoning about text to be answered. Images are taken from the Open Images v3 dataset [20]. TextVQA is available at [https://textvqa.org](https://textvqa.org).
In addition to above, some non-photo-realistic datasets such as CLEVR [17], NLVR [34], and FigureQA [19] have also been introduced to study visual reasoning independent of language. Some datasets such as Fact-Based VQA [37] explicitly require external knowledge to answer questions. GQA [16] is a popular dataset, which also involves phrases to address the relations.
Our work relates to research that addresses the functional diagnostics of pre-trained language models (_e.g._[27, 30]). It also relates to works that examine adversarial robustness and out-of-distribution generalization of VQA models (_e.g._[7, 23]). For example, [23] shows that non-expert annotators can easily attack the best VQA models.
We construct an adversarial dataset to challenge the best VQA models. Although a few such datasets exist for free-form VQA (_e.g._ VQA-CP [3]), here we show that even answering yes/no questions is not yet solved.
## 3 BinaryVQA Dataset
Our dataset contains 7,800 questions across 1,024 images. The majority of the questions start with "Is" and "Are", as shown in the sunburst plot in Fig. 2. The most common terms in the questions are person, wearing, people, and image (right panel in Fig. 2). We do not include WH questions, and all questions have "yes" or "no" answers. We ensured that every question/answer pair is valid through human review: we formulated the questions and then presented them along with their answers to three AMT workers for verification. Please see Appendix D for details. Out of all questions, only 41 QA pairs received an incorrect majority vote, and these were subsequently fixed.
Statistics of the BinaryVQA dataset are shown in Fig. 3. Out of the 7,800 questions, 4,897 have positive answers and the remaining 2,903 have negative answers, resulting in a ratio of positive to all questions of about 62.7%. The median ratio of positive to all questions per image is 0.625. 38 images (3.7%) have all of their questions answered "yes", while no image has all of its questions answered "no". The median number of questions per image is 7, which means that half of the images have more than 7 questions. The median number of positive questions (questions with answer "yes") is 4 and the median number of negative questions is 3. The mean number of questions per image in BinaryVQA is 7.62, which is higher than the 5.4 of VQA v2. BinaryVQA questions range from 3 to 20 words. The mean and median question lengths are 5.64 and 5 words, respectively. VQA v2 questions range from 4 to 10 words (average 5). The average image resolution is 840.3 \(\times\) 650.4 (w \(\times\) h), with an average aspect ratio of 1.32.
Sample images are shown in Fig. 1. BinaryVQA images and questions cover a wide variety of topics and concepts including drawings, paintings, uncommon views of objects, hybrid animals, out of context objects and odd scenes (elephant in the room, car in the swimming pool, black sheep among white sheep), weather conditions, time, interactions among people, actions (fighting, running, walking, dancing), emotions (sadness, happiness, surprise, anger), counts and quantity, gender, age, race, gaze direction, object materials, objects in the mirror, body parts (_e.g._ whether mouth or eyes are open, whether teeth are visible), animals, fruits, clothing (T-shirt, long sleeve, pants), shadow, color, crowd, clouds, tattoos, camouflage, illusions, non-existing objects, and logical reasoning.
In formulating the questions, we tried to remove any ambiguity (_e.g._ in giving addresses relative to the image, objects, people in the scene, or image viewer; left side of the rightmost person; left of the image). When only some people in the image (_e.g._ standing ones) are doing an action, we did not ask "Are these people doing X". Instead, we asked "Are the standing people in this image doing X".
| Q type | List of words |
| --- | --- |
| sky | sky |
| spatial | rectangle |
| vegetation | tree, plant, flower |
| gaze direction | looking |
| real/drawing | painting, drawing |
| in/out/doors | indoors, outdoors |
| daytime | daytime, nighttime |
| emotions | happy, sad, angry, upset |
| time | clock, time, watch, hour, minute, seconds |
| gender | man, woman, female, male, boy, girl |
| text | text, number, English, Roman, word, written |
| age | age, old, young, child, kid, baby, adult, teenager |
| weather | weather, snowy, sunny, cloudy, rainy, foggy |
| color | color, white, red, blue, yellow, black, purple, green, silver, blood |
| actions | fighting, walking, sitting, standing, running, climbing, lying, dancing, carrying |
| direction | right, left, top, bottom, above, below, side, leftmost, rightmost, next |
| counting | more than, less than, no, three, ten, fifteen, twenty, two hundred, exactly, only |
| body parts | face, head, hand, leg, foot, feet, eye, torso, ear, belly, belly button, finger, hair, shoulder, neck, mouth, nose, body |
| clothing | shoe, jeans, dress, tie, shirt, shorts, long sleeve, sock, hat, cap, earring, skirt, piercing, necklace, scarf, eyeglasses, belt, clothes, wearing |
| animals | animal, cat, dog, elephant, tiger, horse, owl, chicken, hen, rooster, wolf, fox, octopus, sheep, bee, eagle, lion, giraffe, monkey, cow, scorpion, turtle, fly, mosquito, dinosaur, pigeon, spider |
| fruits | fruit, apple, banana, acorn, tomato, potato, pomegranate, pear, peach, orange, melon, watermelon, cherry, strawberry, corn, pumpkin, pineapple, lemon, pepper, avocado, cabbage, lettuce, coconut, cucumber, eggplant, broccoli |

Table 2: List of words per question type in the BinaryVQA dataset.
Some questions test whether models can tell the type of the image (_e.g._ "Is this a drawing?" and "Is this a painting?") and whether they can answer questions over different types of images (_e.g._ drawings, paintings, cartoons, clip art, black-and-white images). Some questions ask about text, for example "Is there text?", "Is the word X written somewhere in this image?", "Is the text written in English?", "Is the number 53813 written somewhere in the image?". External knowledge and common sense are needed to answer some questions (_e.g._ "Is this a map of Japan?", "Is this person a celebrity?"). In order to further test the spatial understanding of the models, we placed a blue rectangle around some objects in the image and targeted the questions only at those regions (see Fig. 1). An example question is "Is the spatula inside the blue rectangle blue?". To test the consistency of models and see whether they truly understand the image, for some images we include questions that contradict each other (_e.g._ "Is the boy standing?" _vs_ "Is the boy sitting?"). Some other sample questions are "Is the whole body of the person visible?", "Is she holding a wine in her left hand?", "Are some birds printed on her skirt?", "Is her right hand in her right pocket?", "Is the person on the left taller?", "Is anyone looking at the camera?", "Is this person an adult?", "Is the sky clear?", "Are his feet touching the ground?", "Are there more X objects than Y objects?", "Is object X to the left of object Y?", "Is the person in the image female?", and "Is the person opening the door with his right hand?". We clustered the questions based on the
Figure 3: BinaryVQA dataset statistics. Left: Distribution of the number of questions and its breakdown on positive and negative answers. Half of the images have more than 7 questions. Middle: Ratio of positive to all questions. On average images contain more positive questions than negative ones. Right: Distribution of question length. Half of the questions have length greater than five.
Figure 2: Left: Distribution of questions in our dataset by their first three words. The ordering of the words starts towards the center and radiates outwards. The arc length is proportional to the number of questions containing the word. Right: Venn-style word clouds of words in the questions. The most frequent word is ‘person’ indicating that questions are often about people in the images.
terms that appeared in them, as shown in Table 2. For example, questions with words gender, man, woman, female, male, boy, girl address the gender. Notice that a question may fall into more than one category. These categories will be used later to analyze the models.
We did not incorporate any bias towards gender, age, or race during data collection, and tried to be as inclusive as possible in gathering images and formulating questions. We include and balance questions that address different ages and genders. The age groups are (baby, 26), (kid, 42), (children, 26), (teenager, 5), (young, 16), and (old, 12). The gender groups are (woman, 350), (women, 38), (man, 448), and (men, 79). We did not include any question that asks about race. These issues are more important to address in large training sets, because models trained on such datasets are sometimes directly deployed in the real world.
The BinaryVQA dataset is substantially different from the VQA v2 validation set (the real images) measured in terms of the Frechet Inception Distance (FID) [15]. The FID is equal to 50.9 indicating a large distribution shift, and hence high diversity (using 7K images). To put this number in perspective, the FID between VQA v2's validation and its test set is approximately 23.8. Notice that the lower the FID, the more similar the two distributions.
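For readers who want to reproduce this kind of distribution-shift check, a minimal sketch using an off-the-shelf FID implementation is shown below. It assumes the torchmetrics package, and the image tensors are random placeholders standing in for the two loaded image sets; loading and resizing are left to the reader.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Placeholder uint8 tensors (N, 3, H, W) standing in for the two image collections.
vqa_val_imgs = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
binaryvqa_imgs = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)  # 2048-d InceptionV3 pool features
fid.update(vqa_val_imgs, real=True)           # reference distribution (VQA v2 val)
fid.update(binaryvqa_imgs, real=False)        # distribution being compared
print(float(fid.compute()))                   # lower FID = more similar distributions
```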
## 4 Analyses and Results
To see how well state-of-the-art VQA models perform on our dataset2, we choose the OFA model [39], which is currently the leading scorer on the VQA v2 test-std dataset3. It achieves 94.66% accuracy on "yes/no" questions. We also include a simple baseline model [5, 42] to see whether the transition from simple to complicated models in VQA has indeed been meaningful4. To put the results in perspective, we also ran the Pythia model5. In this section, we focus on explaining the results of the OFA model. Summary results for all three models are shown in Table 3.
Footnote 2: We used a 12 GB NVIDIA Tesla K80 GPU to do the experiments.
Footnote 3: [https://paperswithcode.com/sota/visual-question-answering-on-uos-uyz-test-std](https://paperswithcode.com/sota/visual-question-answering-on-uos-uyz-test-std)
Footnote 4: [https://github.com/iamaditya/VOA_Demo.git](https://github.com/iamaditya/VOA_Demo.git)
Footnote 5: [https://github.com/Eurus-Holmes/Pythia-VOA](https://github.com/Eurus-Holmes/Pythia-VOA)
The distribution of model scores on the BinaryVQA dataset is shown in the left panel of Fig. 4. The average accuracy of the OFA model is 75%, which is much higher than the 62% accuracy of the baseline model. The OFA model, however, performs significantly worse on our dataset than on the VQA v2 dataset (around a 20% absolute performance drop). We attribute this to the more complex nature of the questions and images in our dataset. Sample predictions of both models are shown in Fig. 5.
The OFA model answers all questions correctly for 160 images (15.6%), whereas the baseline does so for only 50 images (4.8%). The OFA model fails all questions on 314 images (30.7%), while the baseline answers all questions incorrectly on 673 images (65.7%).
Performance of the models over question types is shown in the right panel of Fig. 4. The OFA model does better than the baseline in the majority of the question types. It performs below the baseline model over counting (57.2%), text (59.7%), and spatial (63%) categories. It does, however, perform very well on weather (100%), daytime/nighttime (95.5%) and indoors/outdoors (96%) categories. Surprisingly, the OFA model does relatively well in answering questions pertaining to gaze direction (68.7%) without using any ad-hoc module to process faces, eyes, and gaze angles. The same argument holds over the real/drawing category (80.6%). We find that models have indeed improved drastically over the years, but there is still a large gap to close. Further, our dataset is significantly harder than the VQA v2 dataset (in "yes/no" questions) making it a great auxiliary test set to the existing ones.
We found that models perform about the same over real images, paintings, or drawings. OFA model scores \(\sim\) 74.12% over the paintings or drawings (568 questions across 69 drawings/paintings) which is slightly lower than its 75.47% accuracy on real images (7,232 questions over 955 images). The corresponding numbers for the baseline model are 60.03% and 63.47%. The OFA model is correct in answering the counting questions 57.2% of the time. This model is accurate 69% of the time over the number category on the VQA v2 dataset. Some difficult questions for the OFA model are shown in Fig. 6 over different categories.
### Model interpretability
VQA models are very effective at answering questions, but how much do they really understand the images? Are their answers grounded in the image content, or are they merely due to some correlations? Several attempts have been made to address this (_e.g._[2, 12]), and limiting the question to a spatial region, as is done here with the images containing blue rectangles, is one way to do so. In this section, we propose another way to interpret the models: masking image content and studying its effect. To this end, we run the OpenCV face detector [36] and mask the faces in the images. We then evaluate the OFA model on these images and plot the performance per category, as shown in the left panel of Fig. 7. Notice that here we limit our analysis to those images for which at least one face is detected (309 out of 1024 images). Performance on question categories that depend heavily on face information, such as "gaze direction", "age", "gender", and "emotions", is severely degraded, which suggests that the model indeed uses the right information. Degradation or enhancement in some categories such as "text" or "animals" may be partially attributed to false detections of the face detector; this, however, needs further investigation. Note that our masking approach can also be extended to more common objects.
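A minimal sketch of this masking step is given below. It assumes the Haar-cascade face detector bundled with OpenCV (the exact detector configuration used for the analysis in the paper may differ), and the file paths are illustrative.

```python
import cv2

# Haar-cascade model that ships with opencv-python; other detectors could be swapped in.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mask_faces(image_path: str, out_path: str) -> int:
    """Black out every detected face and save the masked image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        img[y:y + h, x:x + w] = 0  # fill the detected face region with black pixels
    cv2.imwrite(out_path, img)
    return len(faces)  # only images with at least one detection enter the analysis
```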
Figure 4: Left: Distribution of per image accuracy for models. OFA model is correct \(\sim\) 75% of the time. Middle: Number of questions per question type. Right: Accuracy per question type for models. OFA model does better than the baseline on most of the question types.
Figure 5: Sample images along with the question, ground truth answer (GT), prediction of the OFA model (M1) and prediction of the baseline model (M2). See appendix for more examples.
### Impact of question length on accuracy
Questions in VQA datasets have different levels of complexity. Intuitively, a longer question may be harder to answer than a short one, since it involves unpacking and understanding the dependencies among the words in the sentence and their corresponding objects in the image. The right panel of Fig. 7 shows the model accuracy as a function of question length. Due to their rarity, questions longer than 10 words are discarded (only 150 occurrences). As can be seen, accuracy decays as the question length grows. The mean accuracy of the OFA model over questions shorter than 8 words is 72.3%. Its accuracy over questions longer than 8 words (and shorter than 10) is 51.6%. The corresponding numbers for the baseline model are 62.3% and 52.8%, respectively. This result corroborates previous findings on the VQA dataset and shows that models underperform on longer questions. Since our dataset contains longer questions than the VQA dataset, it can better test this aspect of models.
### Analysis of "yes" bias in models
VQA datasets usually contain more questions with "yes" answers than questions with "no" answers. This is partially due to the tendency of annotators to query the existing content in images. Consequently, a simple chance model that often produces positive answers may win over a sophisticated model. One approach to combat this issue, as is done for the VQA v2 dataset, is to balance the distribution of positive and negative questions. Here, we introduce a new score called "ShuffleAcc" to address this automatically. A subset of \(2n\) questions consisting of \(n\) positive and \(n\) negative questions is randomly selected (here \(n=2000\)). The average model accuracy over \(m\) such subsets is then computed (here \(m=50\)). A model that consistently generates a "yes" (or "no") answer will achieve 50% accuracy. The same argument holds for a model that randomly chooses "yes" 50% of the time. The ShuffleAcc scores of the OFA and baseline models are 75% and 62.4%, respectively, which are about the same as their performance under the traditional accuracy score. This entails that these models do not suffer from an inherent bias towards positive answers.
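A sketch of how ShuffleAcc could be computed is given below; the annotation format and the `predict` callable are hypothetical placeholders for the dataset loader and a VQA model.

```python
import random

def shuffle_acc(annotations, predict, n=2000, m=50, seed=0):
    """Mean accuracy over m balanced subsets of n positive and n negative
    questions; an always-'yes' (or chance) model scores about 50%."""
    rng = random.Random(seed)
    pos = [a for a in annotations if a["answer"] == "yes"]
    neg = [a for a in annotations if a["answer"] == "no"]
    scores = []
    for _ in range(m):
        subset = rng.sample(pos, n) + rng.sample(neg, n)
        correct = sum(predict(a["image"], a["question"]) == a["answer"]
                      for a in subset)
        scores.append(correct / len(subset))
    return sum(scores) / len(scores)
```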
### Sensitivity to spelling and grammar errors
Studies on understanding and evaluating VQA models have primarily focused on the visual component of VQA. Less attention, however, has been paid to diagnosing errors in the NLP component, in particular the sensitivity of models to perturbations of the asked questions. This is particularly important to study since humans are still able to answer questions correctly even in the presence of significant spelling and grammar mistakes, as long as the meaning of the question remains the same. Here, we study three simple perturbations that are unlikely to change the answer.
**Within-word character swap.** Here, we first randomly select a word (with length \(>3\)) in the question. Next, we randomly choose two characters in this word and swap them. For example, the question "Is there a person in the image?" may turn into "Is there a peosrn in the image?". We then evaluate the OFA model while varying the number of words, from 1 to 3, in which we swap two characters. OFA accuracy drops to 61.4% with a swap in one word, 53.5% with swaps in two words, and 49.1% with swaps in three words. These results clearly show that spelling errors drastically hinder the models, whereas humans often do not even notice such changes while reading.
To test whether this result generalizes to other datasets, we repeated these experiments on the VQA v2 test set. On this set, the accuracy of the OFA model on unperturbed questions is 91.7%. It drops to 84.7% with a swap in one word, 77.3% with swaps in two words, and 65.5% with swaps in three words. Similar observations are made for the baseline model.
Figure 6: Failure cases of the OFA model over different categories of the BinaryVQA dataset.
**Omission of the articles.** Here, all the articles ("the", "a", "an") are removed from the question. For instance, the question "Is the person on the right holding a camera?" will be converted to "Is person on right holding camera?". The performance of the OFA model drops to 73.8% indicating that this model, similar to humans, is robust to the omission of the articles.
**Negating the question.** Questions in the BinaryVQA dataset are formulated positively, without using the word "not". Logically, if the question is negated, the answer should also be negated6. For example, if the answer to the question "Is there a firefighter on the crane?" is "yes", then the answer to the question "Is there not a firefighter on the crane?" should be "no". For this analysis, we focus only on "Is there" type questions. Out of 1,841 such questions, the OFA model maintained its decision in 738 cases when the question was negated. This amounts to about 40% of the cases, which is far above the ideal 0%: the model should always reverse its decision.
Footnote 6: Of course there are exceptions in the conversational language, _e.g._ Isn’t there a person in the room? Answer: No! (assuming there are no people in the room).
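The three perturbations above are plain string manipulations; a minimal sketch is given below (the exact tokenization used in the experiments may differ slightly).

```python
import random
import re

def swap_chars(question, n_words=1, seed=0):
    """Swap two random characters inside n randomly chosen words of length > 3."""
    rng = random.Random(seed)
    words = question.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    for i in rng.sample(candidates, min(n_words, len(candidates))):
        chars = list(words[i])
        a, b = rng.sample(range(len(chars)), 2)
        chars[a], chars[b] = chars[b], chars[a]
        words[i] = "".join(chars)
    return " ".join(words)

def drop_articles(question):
    """Remove the articles 'the', 'a', 'an' from the question."""
    return re.sub(r"\b(the|a|an)\b\s*", "", question, flags=re.IGNORECASE)

def negate(question):
    """Negate an 'Is there ...' question by inserting 'not'."""
    return re.sub(r"^(is there)\b", r"\1 not", question, flags=re.IGNORECASE)

print(swap_chars("Is there a person in the image?"))   # one scrambled word
print(drop_articles("Is the person on the right holding a camera?"))
print(negate("Is there a firefighter on the crane?"))
```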
### Ablation analyses
Following our interpretability analysis above, here we conduct two analyses that can be considered sanity checks or baselines for models, since models can be right for the wrong reasons, and vice versa. In the first analysis, we ask all the questions on a black image or a white-noise image. The OFA model performs well below chance, at about 36.4% and 36.89% on these images, respectively. This indicates that the model indeed requires the image to produce the right answer.
The second analysis investigates whether a model can consistently produce the "no" answer to questions for which we know the answer is surely "no". We asked 15 questions of the form "Is there a/an X in the image?", where X is one of the following objects: 'white orange', 'dragon', 'blue horse', 'backgammon board', 'parrot', 'boxer dog', 'ostrich', 'dinosaur egg', 'galaxy', 'mermaid', 'telescope', 'unicorn', 'centipede', 'yellow cow', 'yeti', over all 1,024 images. The mean accuracy of the OFA model across all 15 \(\times\) 1024 questions is 93.1% using the original images. The breakdown per question is shown in Appendix C. Interestingly, when we asked these questions on white-noise images, the accuracy jumped to 100%. These results again demonstrate that the OFA model relies heavily on the image content.
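A small sketch of how such control images and probe questions could be generated is shown below; `answer_fn` is a hypothetical stand-in for a VQA model interface, and the image size is arbitrary.

```python
import numpy as np
from PIL import Image

def control_images(width=640, height=480):
    """Black and white-noise control images used to probe image dependence."""
    black = Image.fromarray(np.zeros((height, width, 3), dtype=np.uint8))
    noise = Image.fromarray(
        np.random.randint(0, 256, (height, width, 3), dtype=np.uint8))
    return {"black": black, "noise": noise}

def no_rate(answer_fn, image, objects):
    """Fraction of 'Is there a X in the image?' probes answered 'no' (the expected answer)."""
    questions = [f"Is there a {obj} in the image?" for obj in objects]
    answers = [answer_fn(image, q) for q in questions]
    return sum(a == "no" for a in answers) / len(answers)
```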
## 5 Discussion and Conclusion
Understanding complex questions in VQA is a big challenge, and so is understanding complex scenes. Our dataset is better suited to address the latter, whereas other datasets can address the former. It can be used to test models that already perform above 95% on the binary questions of the VQA v2 dataset. Our dataset contains many questions that are genuinely challenging and require close examination of the image to be answered. Such questions ask about non-standard objects, surreal imagery, and other oddities (_e.g._ an eagle with a banana for a beak, a water spout wearing sneakers, an odd clothespin-like object on one side and a spoon on the other, a face with multiple pairs of eyes).
We share a zip file containing images, questions, metadata, and detailed documentation. BinaryVQA is licensed under Creative Commons Attribution 4.0 (Appendix E).
Figure 7: Left: Performance of the OFA with and without faces masked. Sample images with faces masked are also shown. Right: Performance of the OFA model as a function of question length.
| Model | Avg Acc. | ShuffleAcc | Char Swap (one word) | Article Omission | Question Negation (%)\(*\) | Acc. on VQA v2\(+\) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | 62.5 | 62.4 | 51.5 | 59.3 | 35 | 80.5 |
| OFA | 75 | 75 | 61.4 | 73.8 | 40 | 94.66 |
| Pythia | 72.1 | 72.2 | 58.8 | 69.4 | 46 | 86.7\(\dagger\) |

Table 3: Summary of model performance on the BinaryVQA dataset.
\(*\) = Percentage of questions for which the model retained its answer after negation. \(+\) = Human performance is about 95.48, from [https://visualqa.org/roe.html](https://visualqa.org/roe.html). \(\dagger\) = Pythia v0.1, the winning entry in the 2018 VQA benchmark, [https://visualqa.org/roe_2018.html](https://visualqa.org/roe_2018.html)
|
2303.04029
|
Systematic Modeling Approach for Environmental Perception Limitations in
Automated Driving
|
Highly automated driving (HAD) vehicles are complex systems operating in an
open context. Complexity of these systems as well as limitations and
insufficiencies in sensing and understanding the open context may result in
unsafe and uncertain behavior. The safety critical nature of the HAD vehicles
demands to model limitations, insufficiencies and triggering conditions to
argue safe behavior. Standardization activities such as ISO/PAS 21448 provide
guidelines on the safety of the intended functionality (SOTIF) and focus on the
performance limitations and triggering conditions. Although, SOTIF provides a
non-exhaustive list of scenario factors that may serve as a starting point to
identify and analyze performance limitations and triggering conditions, yet no
concrete methodology is provided to model these factors. We propose a novel
methodology to model triggering conditions and performance limitations in a
scene to assess SOTIF. We utilize Bayesian network (BN) in this regard. The
experts provide the BN structure and conditional belief tables are learned
using the maximum likelihood estimator. We provide performance limitation maps
(PLMs) and conditional performance limitation maps (CPLMs), given a scene. As a
case study, we provide PLMs and CPLMs of LIDAR in a defined scene using real
world data.
|
Ahmad Adee, Roman Gansch, Peter Liggesmeyer
|
2023-03-07T16:42:29Z
|
http://arxiv.org/abs/2303.04029v1
|
# Systematic Modeling Approach for Environmental Perception Limitations in Automated Driving
###### Abstract
Highly automated driving (HAD) vehicles are complex systems operating in an open context. Complexity of these systems as well as limitations and insufficiencies in sensing and understanding the open context may result in unsafe and uncertain behavior. The safety critical nature of the HAD vehicles demands to model limitations, insufficiencies and triggering conditions to argue safe behavior.
Standardization activities such as ISO/PAS 21448 provide guidelines on the safety of the intended functionality (SOTIF) and focus on the performance limitations and triggering conditions. Although, SOTIF provides a non-exhaustive list of scenario factors that may serve as a starting point to identify and analyze performance limitations and triggering conditions, yet no concrete methodology is provided to model these factors.
We propose a novel methodology to model triggering conditions and performance limitations in a scene to assess SOTIF. We utilize Bayesian network (BN) in this regard. The experts provide the BN structure and conditional belief tables are learned using the maximum likelihood estimator. We provide performance limitation maps (PLMs) and conditional performance limitation maps (CPLMs), given a scene. As a case study, we provide PLMs and CPLMs of LIDAR in a defined scene using real world data.
SOTIF, autonomous vehicle safety, safety of the intended functionality, Bayesian networks, parameter learning
## I Introduction
Highly automated driving (HAD) vehicles are complex systems operating in an open context [1]. This complexity and open-context nature may result in unsafe and uncertain behavior due to limitations and insufficiencies in sensing and understanding the operational environment [1]. Modeling such limitations and insufficiencies requires considering all possible scenarios and factors influencing HAD vehicle performance. The International Organization for Standardization (ISO) published the publicly available specification (PAS) ISO/PAS 21448 on road vehicles safety of the intended functionality (SOTIF) [2]. The goal of the SOTIF guidelines is to identify the performance limitations and triggering conditions that may lead to potentially hazardous behavior. Specifically, SOTIF is applied to the intended functionality where proper situational awareness is critical to safety and the situational awareness is derived from complex sensors and processing algorithms [2].
Evaluating a perception system (a sensor and its processing algorithm) in terms of its limitations, capabilities, or inherent uncertainties is not a straightforward task. A perception system cannot be characterized by a rudimentary set of safety requirements or key performance indicators (KPIs), as the performance of such a system depends on many influencing factors. For example, the functional performance of a LIDAR-based perception system may depend on the spatial distribution of detections, reflections, and weather and road conditions.
Modeling the dependencies and influencing factors of the perception system to assess performance limitations and consequently the relevant uncertainties is important for SOTIF argumentation [3]. Such models can provide valuable insights on the functional performance of the system during development. ISO/PAS 21448 [2] provides a list of such dependencies in terms of scenario factors but does not provide concrete steps to model these scenario factors.
Probabilistic graphical models (PGMs) [4] in general, and Bayesian networks (BNs) [5] in particular, have rapidly gained popularity in dependability research [6, 7]. A BN is a directed acyclic graph (DAG) that consists of nodes and edges. Every node is a random variable \((X_{1},\ldots,X_{n})\) that represents an element of the system or its context. The edges represent a directed relationship between two nodes and run from the parent node \((pa)\) towards the child node \((ch)\). Together, nodes and edges represent the structure of the probabilistic network (Fig. 1). The strength of these dependencies is governed by conditional probability distributions \(\Pr(ch\mid pa)\)[4]. Mathematically, the BN can be written as follows.
\[\Pr(X_{1},\ldots,X_{n})=\prod_{i=1}^{n}\Pr(X_{i}\mid pa(X_{i})) \tag{1}\]
A BN is effective in modeling uncertainty and probabilistic reasoning about a system. It exploits the dependence relationships encoded by the local conditional distributions in the model to perform uncertainty analysis for prediction, classification, and causal inference of influencing factors.
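As an illustration of Eq. 1 and of the conditional belief table sketched in Fig. 1, the snippet below builds a two-node Weather to Road network and queries it. It assumes the open-source pgmpy library, and the probability values are invented for the example; the same pattern extends to the larger structures used later, with one such model per grid cell.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: Weather is the parent (pa) of Road (ch).
model = BayesianNetwork([("Weather", "Road")])

cpd_weather = TabularCPD("Weather", 2, [[0.7], [0.3]],
                         state_names={"Weather": ["clear", "rain"]})
# Columns follow the parent states: Pr(Road | Weather=clear), Pr(Road | Weather=rain).
cpd_road = TabularCPD("Road", 2,
                      [[0.9, 0.2],   # Road = dry
                       [0.1, 0.8]],  # Road = wet
                      evidence=["Weather"], evidence_card=[2],
                      state_names={"Road": ["dry", "wet"],
                                   "Weather": ["clear", "rain"]})
model.add_cpds(cpd_weather, cpd_road)
assert model.check_model()

# Posterior of the child given evidence on its parent, Pr(Road | Weather=rain).
print(VariableElimination(model).query(["Road"], evidence={"Weather": "rain"}))
```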
In this publication, we formulate a model using a BN for known triggering conditions and performance limitations in a given scene. A human expert provides the SOTIF-relevant scenario factors and models the causal relations among them using a BN structure. We perform parameter learning of the BN to quantify the dependencies in the model. In order to explain the effects of performance limitations and triggering conditions on SOTIF, posterior probability analysis and causal inference are conducted. We construct performance limitation maps (PLMs) and conditional performance limitation maps (CPLMs) using these analyses. Causal inference identifies the influencing factors that contribute most to the performance. Together, PLMs, CPLMs, and causal inference provide valuable insights on SOTIF. This may help the analyst in safety-case generation, in the identification of performance limitations and influencing factors, and in the generation of targeted test, validation, and verification campaigns, which in turn can help in defining refinement measures. Summarizing, we provide the following contributions.
* We introduce a method to model known triggering conditions and performance limitations in a scene.
* We introduce PLMs as the representation of SOTIF metric.
* We introduce CPLMs to quantify the effects of triggering conditions and influencing factors on SOTIF.
* We implement the methodology and provide PLMs and CPLMs of LIDARs case study while utilizing real world data.
The publication is structured as follows: Sec. II presents the proposed methodology. Sec. III briefly describes the setup used for data acquisition. Sec. IV provides the application of proposed methodology on LIDAR perception. In sec. V, results of the implementation are evaluated. Sec. VI provides the evaluation of the approach and robustness of the results. Sec. VII provides an overview on the state of the art. Finally, in sec. VIII we discuss conclusion and future work.
## II Proposed Methodology
We introduce a modeling methodology using BN to identify, model and quantify performance limitations as well as triggering conditions in a scene. The experts provide the structure of BN while the conditional belief tables (CBTs) are learned from real sensor data. Fig. 2 shows the flowchart of the methodology we adopt in this publication. A detailed explanation of the steps proposed in the flowchart (Fig. 2) follows.
### _SOTIF Relevant Scenario Factors_
The first step towards modeling the relevant SOTIF scenario factors is the identification of performance limitations and triggering conditions in a given scene [2]. SOTIF provides a non-exhaustive, dynamic-element- and scenery-centric list of scenario factors [2]. Although this list can be a starting point, the identification of triggering conditions and performance limitations depends on many other aspects, including the driving context, the perception system in question, and the existing setup, among others. For example, consider the following two descriptions.
1. Context: Highway, Perception: Radar based, Studied behavior: False Positives.
2. Context: Urban, Perception: LIDAR based, Studied behavior: Position Trueness.
Both descriptions may lead to different scenario factors. In the former, the human expert might be interested in steel bridges, tin cans, and other such instances, while in the latter the factors of interest may include weather conditions, exhaust gases, and reflections. The process is similar to hazard analysis and risk assessment (HARA) from ISO 26262 [8], but does not explicitly consider malfunctioning behavior of components. It assesses the intended functionality of HAD
Fig. 1: An example of grid map and scene modeling attributed to the cells: LIDAR detections are discretized in grid cell around the field of view. Four LIDARs are attached at the roof of the HAD vehicle for detection. Bottom part shows a Bayesian network along with conditional belief table for \(\Pr(Road\mid Weather)\).
Fig. 2: Flowchart describing the flow of the proposed methodology. SOTIF relevant scenario factors and expert knowledge are encoded into scene model defined by the BN structure. Data is gathered accordingly and learning of parameters is performed.
vehicle functions especially where situational awareness is critical to safety. We utilize the scenario factors from ISO/PAS 21448 [2] as well as expert opinion, previous data and existing setup (constraint on data acquisition and/or data labels) to model the scene in our methodology (Fig. 2).
SOTIF-related undesired behavior (e.g. braking when not required, and vice versa) may originate from FP and FN detections [2]. Since we are focused on the perception level of the functionality, we only consider FN and true positive (TP) detections of the perception system. The overall methodology we define in this publication is, however, generic and can be applied to the complete functional chain (sense, plan, decide and act) of the system under study. The choice of undesired behavior is highly dependent on the system under study, the scene model, and the metrics that can support the safety case. Apart from TP and FN, SOTIF-related undesired behaviors such as FP, positional error, contour matching, and classification as well as regression quality can also be modeled to assess the performance limitations and the effects of triggering conditions on the functional performance. As an example, for the second case, in which a LIDAR-based perception system is analyzed in the context of urban driving, the expert may provide the following factors.
* **Occlusion**: In urban driving, there may be a relatively higher probability of occlusion, as parked cars or trees may occlude objects.
* **FN/FP** rate: The overall FN/FP rate in the urban driving context.
* **Weather** conditions: Different weather conditions may affect the LIDAR performance.
* **Reflection** from objects: Reflections from different objects (e.g. bus windows) affect the FP rate.
* **Illumination**: Higher illumination may increase the reflection from objects.
The above-mentioned factors are non-exhaustive. Scenario factors are provided and refined based on the expert opinion and ability for data acquisition. The resulting factors then can be used to model the causal relation.
### _Model of the Causal Relation_
Modeling the qualitative and causal relations amongst the scenario factors, triggering conditions, and performance limitations is a significant component of this methodology. We utilize a BN structure for this purpose. Traditionally, BN structure modeling is based either on expert knowledge [9] or on learning from data (structure learning) [10]. However, in structure learning from data, the number of graph candidates grows exponentially with the number of variables in the data [11]. Discerning the true graph, using observational data alone, from other graphs that model the same set of conditional independencies is also challenging. Due to these challenges, we opt for the former technique in this work.
The scene description, which includes the SOTIF-relevant scenario factors and the corresponding undesired behavior(s), constitutes the nodes of the BN structure. As a first step towards deriving the structure, the experts establish hierarchical dependencies between the undesired behavior, the triggering conditions of the scene, and the performance limitations, and provide propositions, e.g. the proposition \(p1\): high occlusion may result in higher FNs. We then construct the BN with arcs representing the dependencies and nodes representing the undesired behavior, triggering conditions, and performance limitations derived from these propositions, e.g. the proposition \(p1\) is modeled as an explicit node (Fig. 3). The resulting BN structure asserts that a child node is governed by a causal mechanism that probabilistically determines its value based on the values of its parents [4]. The stochastic attribute of such models helps in modeling aleatory uncertainty [12].
### _Data Acquisition and Pre-processing_
Dataset \(\mathcal{D}\) acquired and utilized in our methodology consists of fully observed instances of the network variables.
\[\mathcal{D}=\xi[1]\ldots\xi[M] \tag{2}\]
Where \(\xi[.]\) represents a data instance and \(M\) represents the number of instances in \(\mathcal{D}\).
We calculate SOTIF related undesired behavior for each data instance, if the undesired behavior is not labeled. For example, data instances may not be labeled with FNs. However, this is an ad-hoc step for data processing that may or may not be required, depending upon the available dataset.
In order to fully grasp the effects of SOTIF relevant scenario factors (conditional dependencies in BN) and performance limitations around the HAD vehicle, we discretize the spatial distributions of detections in a grid map (Fig. 1). Modeling spatial distribution of triggering conditions and performance limitations in a grid map is important for the following reasons.
1. Scenario factors are spatially distributed e.g. in a weather situation involving dense fog the FN rate of the grid cells farther from the HAD vehicles will be different than the nearer ones, for some perception systems.
2. Safety criticality is variable around the vehicle in the sense that events nearer to the HAD vehicle are generally considered more critical.
Data instances can thus be spatially assigned around the HAD vehicle, associating each observed instance with its respective detection point in space. In this way, a grid map is created around the HAD vehicle to represent SOTIF-relevant perception metrics/properties (Fig. 1). For the construction of the grid map, a coordinate system (e.g. Cartesian or polar) is selected, as well as the grid size. Each grid cell is then represented by a separate BN and its corresponding CBTs (Fig. 1). The structure of each BN is kept constant in this work.
Suppose the data instances are distributed into \(\mathcal{N}\) number of grid cells (thus \(\mathcal{N}\) number of BNs) based on the Cartesian \((x,y)\) or polar \((r,\theta)\) coordinates of detection. The dataset (Eq. 2) can be re-written as.
\[\mathcal{D}^{k}=\xi^{k}[1]\ldots\xi^{k}[M^{k}]\forall k\in\mathcal{K} \tag{3}\]
Where \(\mathcal{K}\) is a set as follows.
\[\mathcal{K}=\{1,2,\ldots,\mathcal{N}\} \tag{4}\]
Here \(k\) represent \(k^{th}\) grid cell and BN.
### _Parameter Learning_
Once the BN structure (Sec. II-B) is determined and the corresponding data are acquired (Sec. II-C), the CBTs can be learned. We determine the CBTs, and thus the strength of the dependencies, by utilizing the maximum likelihood estimator (MLE) [4]. We perform non-parametric learning, without assuming prior probabilities. Given a variable \(X\) with parents \(\mathbf{U}\), we have a parameter \(\theta^{k}_{x\mid\mathbf{u}}\) for each combination of \(x\in Val(X)\) and \(\mathbf{u}\in Val(\mathbf{U})\) in a CBT. The likelihood function for this case is as follows.
\[L_{X}(\theta^{k}_{X\mid\mathbf{U}}:\mathcal{D}^{k})=\prod_{m}\theta^{k}_{x[m]\mid\mathbf{u}[m]}=\prod_{\mathbf{u}\in Val(\mathbf{U})}\prod_{x\in Val(X)}\left(\theta^{k}_{x\mid\mathbf{u}}\right)^{M^{k}[\mathbf{u},x]} \tag{5}\]
Here \(\theta^{k}_{x\mid\mathbf{u}}\) represents the parameter to be learned, \(k\) indexes the BN (grid cell) around the HAD vehicle, and \(m\) indexes the data instances in the dataset. Maximizing the likelihood function in Eq. 5 results in the learned parameter.
\[\theta^{k}_{x|\mathbf{u}}=\frac{M^{k}[\mathbf{u},x]}{M^{k}[\mathbf{u}]} \tag{6}\]
Here \(M^{k}[\mathbf{u},x]\) represents the number of joint occurrences of \(\mathbf{u}\) and \(x\) in the data of the \(k^{th}\) BN, and \(M^{k}[\mathbf{u}]\) the number of occurrences of \(\mathbf{u}\). Eq. 6 defines the MLE.
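Eq. 6 amounts to computing relative frequencies per parent configuration. The sketch below fits such CBTs with the pgmpy maximum likelihood estimator on a toy, fully observed stand-in for one per-cell dataset \(\mathcal{D}^{k}\); the column names and values are illustrative, not the real labels.

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator

# Toy stand-in for the fully observed instances of one grid cell.
data_k = pd.DataFrame({
    "Weather":   ["clear", "rain", "rain", "clear", "rain", "clear"],
    "Occlusion": ["fully_visible", "largely_occluded", "largely_occluded",
                  "fully_visible", "fully_visible", "largely_occluded"],
    "FN":        ["no", "yes", "yes", "no", "no", "no"],
})

bn_k = BayesianNetwork([("Weather", "FN"), ("Occlusion", "FN")])
# Eq. (6): each CBT entry becomes the relative frequency M^k[u, x] / M^k[u].
bn_k.fit(data_k, estimator=MaximumLikelihoodEstimator)
for cpd in bn_k.get_cpds():
    print(cpd)
```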
### _Refinement_
The aim of the refinement step is to improve the BN (both structure and CBTs), so that exhaustive and complete models for SOTIF can be produced. We believe that a hybrid approach (involving experts while partially automating the process) may provide the most suitable results. Every step explained in the previous sections and depicted in Fig. 2 is subject to iterative refinement based on the analyses and obtained results. This includes the addition/deletion of scenario factors, restructuring of the BN, or acquisition of more data.
## III Experimental Setup
The experimental setup consists of two Hesai Pandar64 and two Velodyne Ultra Puck VLP-32C LIDAR sensors installed on the roof corners of a car (Fig. 1). The recorded data contain different labels, including bounding boxes, pose, visibility state, and vehicle activity, among others, covering \(360^{\circ}\) around the HAD vehicle. The data were collected mostly on the highway in the region around Stuttgart, Germany, and consist of around twenty thousand instances. A deep neural network (DNN) was trained and used as the processing algorithm. Two experts from the field, with substantial experience in LIDAR-based perception systems, provided their opinions on LIDAR insufficiencies, triggering conditions, and limitations, based on observations during the data acquisition process and their experience with LIDAR-based perception systems.
## IV Implementation
In this section, we demonstrate the application of our methodology on the LIDAR sensing dataset discussed in the previous section.
### _SOTIF Relevant Scenario Factors_
The experts provide the different factors that may affect the performance of the LIDAR perception system (Sec. II-A). Based on the expert inputs, the SOTIF scenario factors, and the availability of the data acquisition setup, we include the nodes shown in Fig. 3 as SOTIF-relevant scenario factors.
Truncated or occluded objects may only produce sparse point measurements. Occlusion and truncation, both representing the visibility state of an object, are defined analogously to the KITTI benchmark [13]. Weather conditions may affect the road conditions and light intensity, which in turn can affect reflection on the road. Heavy rain in particular may cause flooding on the road, which in turn can decrease the TP rate of detections [14].
We use FN and TP rate to represent the SOTIF measure as they are considered adequate measures for SOTIF analysis [2].
### _Model of the Causal Relation_
Based on the factors from the previous section, the BN structure is developed. The effects discussed there can be encoded in the following simple propositions.
**Proposition 1**: **Truncation** and **occlusion** in detection may influence FN and TP.
**Proposition 2**: **Weather** conditions may affect road conditions and scene illumination, which in turn can affect the TP/FN rate.
**Proposition 3**: **Road condition** and scene **illumination** can affect **reflection** in the scene, which in turn can affect the TP/FN rate.
The resulting BN structure is shown in Fig. 3. The BN model contains seven nodes. Once the CBTs are established, the BN can be updated with new information.
Fig. 3: BN based on the SOTIF relevant scenario factors and expert knowledge describing the causal structure used in our implementation. False negative and true positive are selected alternatively. Color coding is provided to support the subsequent figures of the analysis.
### _Data Acquisition and Pre-processing_
The dataset we use in this paper provides detections and the corresponding ground truth (as separate datasets). One bounding box is labeled for each object detection and its corresponding ground truth. All relevant nodes (Fig. 3) are labeled except TP and FN. In order to evaluate TP and FN for each data instance, we use the mean squared error (MSE).
\[MSE=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2} \tag{7}\]
where \(n\) represents the number of samples, \(Y_{i}\) the ground truth, and \(\hat{Y}_{i}\) the detection. We evaluate Eq. 7 for individual detections, finding the corresponding sample in the ground truth using the \(x\) and \(y\) values. All detections for which a corresponding ground-truth instance is found are considered TP, while all ground-truth instances for which no corresponding detection is found are considered FN. Data instances with \(|x|>140\) meters or \(|y|>50\) meters are not considered, as they lie outside the defined optimal range of the LIDAR.
The resolution of the grid cells in the grid map is an interesting aspect, as it directly influences the TP and FN rates. This phenomenon is analogous to the discretization of a continuous spatial distribution as a BN node [15]. As coarsening may result in less precise but more accurate CBTs, while refinement may result in precise but less accurate CBTs [16], a well-thought-out discretization is required. Both static and dynamic discretization can be performed in this regard [16]. Based on the availability of data for each cell and the complete representation of all nodes of the BN structure (Fig. 3), we use cells of \(x=20\) and \(y=10\) meters in this publication.
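A condensed sketch of this TP/FN labeling and grid assignment is given below; the association threshold and the exact data layout are assumptions made for illustration, as they are not specified above.

```python
import numpy as np

CELL_X, CELL_Y = 20.0, 10.0   # grid resolution in metres (this section)
X_MAX, Y_MAX = 140.0, 50.0    # considered LIDAR range
MATCH_TOL = 1.0               # assumed squared-error threshold for association

def cell_index(x, y):
    """Map a position to the (column, row) index of its grid cell / per-cell BN."""
    return int((x + X_MAX) // CELL_X), int((y + Y_MAX) // CELL_Y)

def label_tp_fn(detections, ground_truth):
    """Label one frame: detections matched to a ground-truth sample are TP,
    unmatched ground-truth samples are FN. Inputs are arrays of (x, y) centres."""
    det = np.asarray(detections, dtype=float).reshape(-1, 2)
    gt = np.asarray(ground_truth, dtype=float).reshape(-1, 2)
    tp, fn = [], []
    matched = np.zeros(len(gt), dtype=bool)
    for d in det:
        if abs(d[0]) > X_MAX or abs(d[1]) > Y_MAX or len(gt) == 0:
            continue                            # outside the optimal range, or nothing to match
        err = ((gt - d) ** 2).mean(axis=1)      # Eq. (7) against every ground-truth sample
        j = int(err.argmin())
        if err[j] < MATCH_TOL:                  # detection explained by a ground-truth object
            matched[j] = True
            tp.append((cell_index(*d), tuple(d)))
    for j, g in enumerate(gt):
        if not matched[j] and abs(g[0]) <= X_MAX and abs(g[1]) <= Y_MAX:
            fn.append((cell_index(*g), tuple(g)))   # missed ground-truth object
    return tp, fn
```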
### _Parameter Learning_
We perform parameter learning for each individual BN (representing a grid cell) using its corresponding data instances and Eq. 6. After the establishment of the BN structure and the learning of the distribution parameters (CBTs), the BN can be used as an effective tool for analysis and estimation. The resulting PLMs and CPLMs can be used as metrics to assess SOTIF, to identify triggering conditions, and to provide safety cases and validation targets.
### _Refinement_
We provide preliminary refinement steps in the results (Sec.V).
## V Results
In this section, we present the results obtained by applying our methodology.
### _Performance Limitation Map_
Grid maps pertaining to TP and FN are shown in Figs. 4 and 5. Essentially, PLMs represent the marginalized posterior probability distributions \((\Pr(ch))\) of specific nodes. The heat-map color scale is reversed for TP to keep the same color for undesired probabilities. We can infer the following conclusions.
1. We observe better detection capabilities near the HAD vehicle.
2. Both TP and FN rate are symmetrically distributed across \(X\) and \(Y\)\(axes\) with slightly higher FN rate (and lower TP rate) in front and on the right side of the HAD vehicle.
By using the PLMs (Figs. 4, 5), the uncertainty of the scene can be represented, which can be read as a quantitative evaluation of the safety of the intended functionality (SOTIF) for the given scene and the system under consideration.
### _Conditional Performance Limitation Map_
Another interesting analysis result comes in the form of a CPLM, conditioned on individual or multiple nodes of the scene. This corresponds to the conditional probability of a child node \((ch)\) given its parent node(s) \((pa)\), i.e. \(\Pr(ch\mid pa)\). The parents can be selected individually or in combination. In
Fig. 4: Performance limitation map for the FN rate in the described scene and the available data used for learning. Better performance of the LIDAR is observed near the HAD vehicle.
Fig. 5: Performance limitation map for TP in the described scene. Better performance of LIDAR is observed near the HAD vehicle.
the light of ISO/PAS 21448, this can be interpreted as how triggering conditions influence the performance [2]. We provide CPLMs of FN conditioned on occlusion (Figs. 6, 7). Evidently, \(occlusion{=}largely\ occluded\) scenes have higher probabilities of FNs than \(occlusion{=}fully\ visible\) scenes. We can infer the following conclusions.
1. Largely occluded scenes have higher FN rate than fully visible scenes for LIDARs, given the data.
2. The average \(\Pr(FN\mid Occlusion)\) rate is symmetrically distributed across \(X\) and \(Y\) axes with slightly higher FN rate in front and on the right side of the HAD vehicle.
### _Causal Inference_
The capability of a BN to propagate evidence backwards [4] provides the added advantage of performing causal inference. In other words, causal inference estimates the strength of the influence of a parent node on the child node, or on any other node in the structure, \((\Pr(pa\mid ch))\). For example, given the BN, consider the following query.
**Query**: What causes the FN rate?
This query can be answered by setting FN in the grid map to \(Yes\). Figs. 8 and 9 show the causal inference maps of \(\Pr(Illumination{=}Day\mid FN=Yes)\) and \(\Pr(Occlusion=largely\ occluded\mid FN=Yes)\), respectively. We can infer the following conclusion.
1. \(Occlusion=largely\ occluded\) has higher impact than \(illumination=day\) on FN.
Such results may directly indicate the relevant triggering
Fig. 8: Causal inference map for FN when illumination (day). Illumination state ”day” has varied effects on the detection.
Fig. 6: Conditional performance limitation map (CPLM) for FN (Yes) conditioned on Occlusion (fully visible) in the described scene. The CPLM for FN shows that occlusion (fully visible) scenes may not cause a higher FN rate.
Fig. 7: Conditional performance limitation map (CPLM) for FN conditioned on Occlusion (largely occluded) in the described scene. CPLM for FN describes a higher FN rate for occlusion (largely occluded) scenes.
Fig. 9: Causal inference map for FN when occlusion (largely occluded). Largely occluded states have considerable effects on the detections away from the HAD vehicle.
conditions of performance limitations and may provide a way forward for informed improvement in the design of the system from the SOTIF viewpoint. For example, based on the CPLM (Fig. 7) and a benchmark for the FN rate of each cell (given a constant severity and controllability), we may infer that \(largely\ occluded\) scenes are risk factors for SOTIF. However, defining such a benchmark is out of the scope of this work.
### _Refinement_
Some of the refinement steps proposed by the experts are.
* More data is required for truncation node in order to establish or negate a causal relation.
* Abrupt zero values in regions where surrounding grid cells have relatively higher values (Fig. 6) are observed for occlusion (\(fully\)\(visible\)). These cells require further analysis and data instances for robust results.
The refinement steps are non-exhaustive and provision of an exhaustive list of steps is out of the scope of this work.
## VI Evaluation
We evaluate our learned PLM using the test dataset. We predict FN by setting the weather, occlusion, road, and reflection states from the test dataset as evidence and querying the FN node. The results are then compared with the FNs computed using Eq. 7. Tab. I shows the results of our evaluation. We observe that substantial accuracy can be achieved. However, as in any other data-oriented implementation, measuring the true underlying parameter distribution (or CBTs) is a challenging task. In general, a parameter learning algorithm for a BN extracts the joint relative frequencies of variables that have a conditional relation in the structure, e.g. \(X\mid Y\)[4]. As the real CBTs are unknown, the method approximates them using the dataset \(\mathcal{D}\), and the resulting CBTs represent the characteristics of the real and unknown CBTs. Special care must be taken for tasks that are safety-critical in nature. In the following, we discuss some of the assumptions on which the parameter learning of BNs is based and which may challenge the robustness of the results.
### _Representation of the Open Context_
The first and foremost assumption behind any model is that it is a good approximation of the open context. In the specific case of a BN, the structure represents the causal model and the CBTs represent the relative occurrence of open-world phenomena, whether the CBTs are data-driven or expert-elicited. It may happen that not all influencing factors are encoded in the BN structure, or that the data does not represent the true relative frequency of the phenomena. A dataset that does not represent the open context well may result in error-prone PLMs and CPLMs.
### _Rare Event Problem_
This concerns the well-known rare-event frequency problem and its representation. The problem arises when important node states occur with low frequency, e.g. \(illumination:tunnel\ light\) is expected to occur far less often than \(illumination:day\). From the SOTIF standpoint, these states can also be safety-critical. Evaluating a robust CPLM for such states becomes challenging and is sensitive to perturbations. Such states can be artificially inserted in the data, but the resulting marginalized probabilities will then no longer be a true representation of the real world.
In this regard, explicitly reporting the relative frequency of each state in the data alongside the results can be a promising direction.
### _Training and Test Data_
The test dataset may be inappropriately segregated from the training data. In general, the test dataset should not be correlated with the training dataset. In practice, however, highly correlated data are used because they are recorded at the same locations and in sequence. This may lead to an overestimated accuracy of the PLM.
### _Data Abstraction and ODD Taxonomy_
Every scene is defined based on some abstraction, which is analogous to the data discretization problem in BNs [15]. Different abstractions may result in different maps: a lower and more specific abstraction of the \(illumination\) node would be discrete light-intensity values instead of states such as \(day\), and an even lower abstraction would be a continuous light-intensity distribution. Such choices may result in different maps, thereby challenging the robustness of the results. Since these abstractions can be governed by operational design domain (ODD) taxonomies, a well-established ODD taxonomy can be used as the benchmark for data abstraction in such analyses. Moreover, dynamic discretization can also be used in this regard [16].
## VII Related Work
In recent years, extensive research has been done on the topic of SOTIF and scenario-based safety of HAD vehicles [17]. However, to the best of the authors' knowledge, existing approaches lack a systematic identification, modeling, quantification and analysis of SOTIF-relevant scenario factors. Berk et al. [18] formalize the reliability-based validation of the environment perception for safe automated driving and discuss the associated challenges. The work focuses on the perception failure rate \(\lambda_{per}\) and discusses false negatives (FN) and false positives (FP) as uncertainties. The implementation also provides qualitative and semi-quantitative analyses of sensor perception reliability.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Evidence** & **FN (Yes) accuracy** \\ \hline Weather & 75.4102\% \\ \hline Occlusion & 71.5815\% \\ \hline Road & 76.1166\% \\ \hline Reflection & 76.3445\% \\ \hline \end{tabular}
\end{table} TABLE I: Evaluation of the FN rate when new evidence arrives. Only the weather, occlusion, road and reflection nodes are considered evidence nodes. Instead of a grid map, the overall prediction rate is calculated.
Ali et al. [19] analyze the hazards arising due to variabilities in collaborative cyber physical systems (CPSs). Environmental, infrastructural, spatial and temporal variabilities are considered factors causing uncertainties. They also develop a fault traceability graph to trace the faults considered by multiple hazard analyses in collaborative CPSs with variability. Edward Schwalb [20] provides a probabilistic framework for incrementally bounding the residual risk associated with autonomous drivers and enabling progress to be quantified. The work introduces continuous monitoring of imminent hazards by the autonomous driver and the selection of actions that maximize the time to materialization (TTM) of these hazards. The approach also enables the continuous expansion of SOTIF through the measurement of improvements from regressions using posterior probabilities. Finally, Kramer et al. [21] provide an integrated method for the safety assessment of automated driving functions, which covers the aspects of functional safety and SOTIF, including the identification and quantification of hazardous scenarios. They also provide a functional insufficiency and causal chain analysis technique to identify and model SOTIF-related hazards. A similar methodology is also presented in other work [22], which, however, provides a more theoretical view of the problem.
## VIII Conclusion and Future Work
We presented a method to develop performance limitation maps (PLMs) as well as conditional performance limitation maps (CPLMs) under a scene model to study the safety of the intended functionality (SOTIF). We identify the relevant triggering conditions, which are provided by experts and reasoned about through data. The methodology relies on parameter learning for a Bayesian network (BN) for its implementation.
This methodology, in particular, allows SOTIF to be argued with manageable effort. At its core, it enables the analyst to identify performance limitations under various triggering conditions, together with their causal relations, conditioned on phenomena critical to SOTIF. This further assists the analyst in establishing mitigation strategies for the identified performance limitations under triggering conditions. In order to demonstrate the adequacy of the approach, LIDAR performance was studied for a given scene. The scene was modeled using a BN structure, and parameter learning was performed using real-world data to elicit the conditional belief tables (CBTs).
We also evaluated the accuracy of the learned BNs to demonstrate their predictive capabilities, achieving roughly **75%** accuracy when predicting the FN rate on the test data. We then discussed the robustness concerns of such safety methods, in particular when data is used for parameter learning.
In future work, we intend to explore how the robustness concerns of BNs can be addressed and mitigated. In particular, we intend to provide methods focused on uncertainty measures and confidence intervals for CBTs and probabilities. Moreover, we intend to model combined CPLMs for heterogeneous perception systems in order to identify and analyze common triggering conditions. The implementation can also be extended to test perception systems based on the same sensors but with different governing algorithms.
|
2307.00960
|
Neural Architecture Transfer 2: A Paradigm for Improving Efficiency in
Multi-Objective Neural Architecture Search
|
Deep learning is increasingly impacting various aspects of contemporary
society. Artificial neural networks have emerged as the dominant models for
solving an expanding range of tasks. The introduction of Neural Architecture
Search (NAS) techniques, which enable the automatic design of task-optimal
networks, has led to remarkable advances. However, the NAS process is typically
associated with long execution times and significant computational resource
requirements. Once-For-All (OFA) and its successor, Once-For-All-2 (OFAv2),
have been developed to mitigate these challenges. While maintaining exceptional
performance and eliminating the need for retraining, they aim to build a single
super-network model capable of directly extracting sub-networks satisfying
different constraints. Neural Architecture Transfer (NAT) was developed to
maximise the effectiveness of extracting sub-networks from a super-network. In
this paper, we present NATv2, an extension of NAT that improves multi-objective
search algorithms applied to dynamic super-network architectures. NATv2
achieves qualitative improvements in the extractable sub-networks by exploiting
the improved super-networks generated by OFAv2 and incorporating new policies
for initialisation, pre-processing and updating its networks archive. In
addition, a post-processing pipeline based on fine-tuning is introduced.
Experimental results show that NATv2 successfully improves NAT and is highly
recommended for investigating high-performance architectures with a minimal
number of parameters.
|
Simone Sarti, Eugenio Lomurno, Matteo Matteucci
|
2023-07-03T12:25:09Z
|
http://arxiv.org/abs/2307.00960v1
|
# Neural Architecture Transfer 2: A Paradigm for Improving Efficiency in Multi-Objective Neural Architecture Search
###### Abstract
Deep learning is increasingly impacting various aspects of contemporary society. Artificial neural networks have emerged as the dominant models for solving an expanding range of tasks. The introduction of Neural Architecture Search (NAS) techniques, which enable the automatic design of task-optimal networks, has led to remarkable advances. However, the NAS process is typically associated with long execution times and significant computational resource requirements. Once-For-All (OFA) and its successor, Once-For-All-2 (OFAv2), have been developed to mitigate these challenges. While maintaining exceptional performance and eliminating the need for retraining, they aim to build a single super-network model capable of directly extracting sub-networks satisfying different constraints. Neural Architecture Transfer (NAT) was developed to maximise the effectiveness of extracting sub-networks from a super-network. In this paper, we present NATv2, an extension of NAT that improves multi-objective search algorithms applied to dynamic super-network architectures. NATv2 achieves qualitative improvements in the extractable sub-networks by exploiting the improved super-networks generated by OFAv2 and incorporating new policies for initialisation, pre-processing and updating its networks archive. In addition, a post-processing pipeline based on fine-tuning is introduced. Experimental results show that NATv2 successfully improves NAT and is highly recommended for investigating high-performance architectures with a minimal number of parameters.
_Keywords:_ Neural Architecture Transfer 2, Neural Architecture Search, NAT, OFAv2, AEP
## 1 Introduction
Deep learning has emerged as a significant revolution in recent years, significantly impacting various aspects of modern society. It has notably transformed numerous activities by leveraging artificial neural networks. These networks possess remarkable capabilities, outperforming conventional approaches in multiple tasks. One notable advantage is their ability to eliminate the requirement for manual feature engineering, as they autonomously discern meaningful patterns from the provided data. The effectiveness of deep learning networks stems from their meticulously designed layered architecture, enabling proficient feature extraction. While human research endeavors have achieved notable advancements in performance, the resulting models have exhibited a trend towards increased size [1]. Consequently, their production necessitates not only specialized expertise but also hardware, energy, and production times that have become progressively unattainable [2].
Neural Architecture Search (NAS) emerged to address the need for innovative neural architectures that are universally applicable and do not require extensive expertise. Its primary goal is to automatically discover the optimal configuration for a given dataset and task [3]. Over time, NAS has also incorporated considerations for computational and temporal constraints. These techniques aim to improve the performance of the discovered models while trading off against the complexity of the whole search process. This includes minimizing time, energy consumption, and CO\({}_{2}\) emissions, as well as achieving a favorable trade-off between model performance and complexity in terms of parameters and operations. Additionally, NAS must account for scenarios where the devices running these models have limited memory or other constraints. Therefore, it is crucial to broaden the scope of NAS beyond benchmark rankings and consider it as a means to address such limitations [4].
The work known as Once-For-All (OFA) represents a significant milestone in this direction. As the name suggests, the objective of this approach is to perform massive computations in a single instance by constructing a super-network, from which sub-networks satisfying different constraints can be readily extracted, while maintaining excellent performance [5]. This work has been successfully expanded through the technique known as Once-For-All-2 (OFAv2). The authors maintained the same underlying training principle as the original algorithm but adapted it to a network search space which was extended with proven techniques from the field of artificial neural network design, thereby elevating the obtained super-network to higher levels [6]. The extraction of sub-networks represents the final and crucial step. While OFA already proposed a potential solution in this regard, the Neural Architecture Transfer (NAT) algorithm was specifically developed to maximize the effectiveness of this step. NAT seeks to generate neural architectures that exhibit strong performance across diverse objectives by leveraging knowledge transfer and adaptation from pre-trained super-network models.
Figure 1: The NATv2 summary diagram. The proposed algorithm designs customised architectures from a very large search space of possible state-of-the-art configurations. Multi-objective optimisation extends the work of NAT with new encoding and super-network management techniques. New predictors provide accurate estimates for efficient evolutionary search. Once the optimal sub-network has been extracted, it is further refined by an additional post-processing step for fine-tuning.
It employs a combination of transfer learning and many-objective evolutionary search steps. Specifically, it adapts only the portions of the super-network corresponding to sub-networks discovered along the trade-off front by the search algorithm [7].
This paper introduces NATv2, an extension of NAT that enhances the capabilities of multi-objective search algorithms on dynamic super-network architectures. NATv2 replaces the original super-network, OFAMobileNetV3, used in NAT and pre-trained with OFA's Progressive Shrinking algorithm, with super-networks generated by OFAv2. Consequently, significant qualitative improvements are achieved in the extractable sub-networks' topology, allowing for the inclusion of parallel blocks, dense skip connections, and early exits. To enhance the NATv2 archive, new policies are implemented for initialization, pre-processing, and updates. Moreover, a novel encoding type is proposed to accommodate these improvements, while the pipeline's predictors are upgraded to higher-performance techniques. Additionally, a post-processing pipeline based on fine-tuning is introduced, which further enhances model performance at the cost of a marginal increase in parameters and MACs. By integrating all these advancements, NATv2 demonstrates the ability to generate image classification networks that surpass the accuracy achieved by NAT. Furthermore, NATv2 achieves this improvement with a reduced number of parameters and MACs. An overview of the proposed technique is depicted in Figure 1.
The rest of the paper is divided into the following sections. Section 2 provides an introduction to NAS, the main works in the field of image classification, and the rationale behind their design choices. Section 3 describes the NATv2 method in detail, reporting on its workflow and paying special attention to the additions and improvements introduced by the new version. The experiments performed, the configurations used and the qualitative and quantitative comparisons are described in Section 4. Finally, Section 5 summarises the contributions of this work and concludes the manuscript.
## 2 Related Works
Neural Architecture Search (NAS) is an evolving research area within the deep learning community that combines various techniques from machine learning and optimization domains. The primary goal of NAS is to automatically design complex neural network architectures in an end-to-end process without human intervention. Despite its popularity in the AI community, NAS lacks standardised approaches due to the variety of techniques involved. However, Elsken _et al._ proposed a widely accepted classification of NAS algorithms based on three key characteristics [8]:
* The _search space_, referred to as the set of all possible architectures that can be found by the algorithm.
* The _search strategy_, which defines how the algorithm explores the search space to find optimal architectures for the given task.
* The _performance evaluation strategy_, which determines how to efficiently evaluate the quality of the architectures during the search process.
Early NAS research achieved remarkable model quality, but required significant computational resources. For example, NASNet, a pioneering work in the cell-based NAS approach, emerged as a competitive solution for image classification, rivalling state-of-the-art human-designed neural networks [3]. Inspired by highly successful models such as ResNet [9] and InceptionNet [10], which featured the sequential repetition of convolutional modules, NASNet aimed to identify the most effective set of layers and connections for the given task and encapsulate them in a computational macro unit called a cell. This cell could then be stacked multiple times according to the desired depth of the final network. Unfortunately, this early work remained confined to large computing centres, limiting its accessibility. However, a significant democratising advance came with the introduction of the PNAS algorithm.
PNAS introduced a sequential model-based optimisation technique into the NAS context to relax the computational and time constraints. The key idea is that the search process could start with thinner models and progressively add new parallel and sequential layers based on guidance from a surrogate model called a predictor. The predictor is a machine learning model that estimates the potential accuracy of each candidate architecture and dynamically adapts to the training results of previously sampled networks [11]. In further advancements, the POPNAS series of algorithms was developed to improve the efficiency of the cell-based approach. These algorithms extended the use of predictors to estimate the training times of the architectures to be searched. This allowed a transition to multi-objective optimisation, which considered both minimisation of search times and quality of architectures by explicitly training networks on the Pareto front. This improvement significantly increased the search efficiency without compromising the quality of the best architectures obtained [12; 13; 14].
An alternative approach is taken by works such as AmoebaNet [15] and NSGANet [16], which use evolutionary algorithms for architecture search and employ gradient descent techniques to optimise the weights of the discovered architectures. In evolutionary algorithms, the search space represents the phenotype, while the architectures being
searched are encoded as genotypes. At each iteration, a population of architectures is maintained, and their genotypes are modified by mutation and crossover operations to produce offspring. Mutations in this context involve the random swapping of bits in the encoding, often resulting in the addition or removal of a layer, or the establishment or removal of a connection between two layers. DARTS improves search efficiency by introducing a relaxation to the search space, making it continuous. This allows the entire search space to be represented as a single large network, known as a super-network, where each edge between layers is parameterised and part of the optimisation process. This modification allows both model weights and structure to be optimised using gradient descent via two specialised training steps, one for the whole super-network and one for the sub-networks within it. In this way, the final architecture is nothing more than a sub-graph extracted from the super-network itself [4]. The efficiency of DARTS has contributed to the emergence of a new class of methods known as one-shot architecture searches. During the search process, each sampled architecture can be viewed as a composition of paths within the super-network. The weights of these paths are inherited from the super-network and fine-tuned to quickly assess the accuracy of the network [17, 18]. However, training super-networks is challenging as they are more sensitive to hyperparameter changes and weight co-adaptation. Specialised training techniques are required to avoid favouring only certain subsets of architectures in the final estimation phase.
#### Once-For-All
Once-For-All (OFA) is a pivotal work in the realm of super-network-based NAS techniques, designed to maximize search efficiency. As a prominent component of hardware-aware NAS techniques, OFA has gained significant recognition and widespread adoption due to its Progressive Shrinking (PS) optimization strategy [5]. This strategy not only enables the acquisition of excellent starting points for sub-network extraction but also concentrates the computational load into a single end-to-end training process. The PS algorithm is organized into four elastic steps, each comprising multiple phases. The first step, Elastic Resolution, involves randomly varying the size of input images. The second step, Elastic Kernel Size, gradually reduces the maximum kernel size for convolutional operators across the entire network. The third step, Elastic Depth, progressively decreases the minimum depth achievable for sub-networks. Finally, the fourth step, Elastic Width, aims to reduce the number of filters available for each convolutional layer.
The algorithm begins by defining the maximal network, which includes all PS parameters set to their maximum values. Subsequently, the PS training steps and phases are executed sequentially. The values unlocked by each phase for a specific elastic step remain available for selection in all subsequent training phases, enabling the addition of smaller networks to the search space. During each batch of images, a certain number of sub-networks are sampled from the current sample space and activated within the super-network. Their gradients are accumulated, and a single weight update step is performed.
Throughout the training process, the maximal sub-network serves as the teacher network for Knowledge Distillation [19]. KD involves transferring knowledge from a large pre-trained network (referred to as the teacher network, in this case, the maximal sub-network) to a smaller network (referred to as the student network, in this case, an active sub-network). This is achieved by using the output of the teacher network as a soft target for the student network during training. By learning from the predictions of the teacher network, the student network can achieve similar performance to the larger teacher network while being smaller in size. Once the PS algorithm is concluded, it is possible to sample sub-networks to extract their configuration via encoding, train surrogate models also known as performance predictors, and finally exploit them to identify the most suitable and performing sub-network according to different hardware constraints.
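As a rough illustration of the training scheme just described, the sketch below performs one Progressive-Shrinking-style update in PyTorch: a few sub-networks are sampled per batch, their gradients are accumulated, and the maximal sub-network supplies the distillation targets. The `sample_space` object and the `set_active_subnet` call are assumed placeholders, not the actual OFA implementation or API.

```python
# Sketch (not the official OFA code) of one Progressive Shrinking step with knowledge distillation.
import torch
import torch.nn.functional as F

def ps_training_step(supernet, images, labels, optimizer, sample_space,
                     n_subnets=4, kd_weight=1.0, temperature=1.0):
    optimizer.zero_grad()
    with torch.no_grad():
        supernet.set_active_subnet(sample_space.maximal_config())   # teacher = maximal sub-network
        teacher_logits = supernet(images)
    for _ in range(n_subnets):                                       # sub-networks sampled per batch
        supernet.set_active_subnet(sample_space.sample_config())     # kernel size, depth, width, ...
        student_logits = supernet(images)
        ce = F.cross_entropy(student_logits, labels)
        kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                      F.softmax(teacher_logits / temperature, dim=1),
                      reduction="batchmean") * temperature ** 2
        ((ce + kd_weight * kd) / n_subnets).backward()               # accumulate gradients
    optimizer.step()                                                 # single weight update per batch
```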
The results presented show how OFA effectively addresses the multi-model forgetting problem, which refers to the performance degradation caused by weight sharing when sequentially training multiple neural architectures within a super-network [20]. Among the notable macro-architectures introduced by the authors, the main one, called OFAMobileNetV3, is based on the Inverted Residual Bottleneck (IRB), i.e. a highly efficient type of block based on depthwise separable convolutions, originally introduced in MobileNetV2 [21] and further refined in MobileNetV3 [22]. By default, the network consists of five stages, each consisting of four blocks. In cases where the number or size of the feature maps from the input to the output of a block is changed, the residual connection cannot be used. Therefore, the IRB block is replaced by its sequential counterpart known as an Inverted Bottleneck (IB).
#### Neural Architecture Transfer
Neural Architecture Transfer (NAT) is a recent advancement in NAS that builds on the OFA framework. It serves as an adaptive post-processing step that replaces the simple sub-network extraction in the original OFA algorithm. NAT aims to progressively transform a pre-trained generic super-network into a task-specific super-network. The goal is to directly search and extract sub-networks that achieve the best trade-off across a range of objectives from the task-specific super-network, without the need for re-training [7].
To optimise the efficiency of the super-network adaptation process, NAT selectively fine-tunes only those parts of the super-network that correspond to sub-networks whose structures can be directly sampled from the current trade-off front distribution. This approach saves computation by focusing on the parts that contribute to improvements in the current task. The multi-objective evolutionary search in NAT is guided and accelerated by a performance prediction model that is updated online using only the best sub-networks configurations and their scores. This approach helps maintain a high-performance prediction model despite using a relatively small number of sub-networks as training samples. By relying on the predictor, NAT can save time by avoiding evaluating each member of the intermediate populations generated during the evolutionary search.
Throughout the process, the best architectures found are gradually added to an archive. In the end, NAT returns three main outputs: the set of non-dominated sub-networks from the archive, which represent the best trade-offs across objectives; the set of high trade-off sub-networks, which are solutions where choosing any neighbouring solution would result in a significant loss in some objectives compared to a unitary gain in others; and the resulting task-specific super-network, which can be reused as a starting point for a new NAT process in different deployment scenarios.
#### Once-For-All-2
Once-For-All-2 (OFAv2) represents a significant evolution from its original version, aiming at the same goal of a one-shot NAS algorithm to construct a super-network from which models suitable for different devices can be extracted. This new version introduces significant improvements, particularly in the search space [6]. In particular, the original OFAMobileNetV3 macro-architecture has been enhanced by the authors through the incorporation of parallel blocks, dense skip connections, and early exits, the latter taking advantage of the Anticipate Ensemble and Prune (AEP) technique [23]. These additions increase the flexibility and performance of the super-network.
To accommodate the aforementioned architectural changes, the PS training algorithm has been extended to the Extended Progressive Shrinking (EPS) algorithm, incorporating two new elastic steps: Elastic Level and Elastic Exit. Elastic Level supports parallel networks, while Elastic Exit is applied in the presence of early exits. In addition, OFAv2 introduces a novel teacher network extraction strategy. This strategy dynamically updates the teacher network at the end of each EPS step, ensuring the transfer of relevant knowledge for subsequent training steps.
## 3 Method
This section presents the second version of the Neural Architecture Transfer (NATv2) algorithm, which builds on the Once-For-All-2 (OFAv2) technique used to generate the super-networks used as starting points. The focus is on the modifications made to the original algorithm to accommodate any architecture generated by OFAv2. In addition, changes to the sub-network sampling method and performance predictor are presented. NATv2 introduces two new steps in its pipeline: a pre-processing step, which incorporates the new archive initialisation method, and a two-stage post-processing method. The pre-processing step is designed to ensure that the archive is appropriately set up to start with a set of performing architectures from the beginning. The post-processing step is applied after the conclusion of the algorithm to fine-tune and refine the selected architectures, ultimately improving their performance. As this is an upgraded version, it is assumed that all steps and algorithms not explicitly mentioned have remained unchanged from the original approach.
#### Expanded Search Space
NATv2 requires a new encoding paradigm to enable evolutionary search on OFAv2 super-networks. The existing encoding method used by NAT to represent sub-network structures in OFAMobileNetV3 is insufficient to capture the full range of architectural permutations. However, it serves as a starting point and undergoes precise modifications to meet the new set of constraints. For the OFAMobileNetV3 architecture, the baseline NAT representation utilizes integer-encoded strings consisting of 22 values, as depicted in Figure 2 under the label "Baseline". Each compressed representation contains specific information. The first value encodes the resolution \(R\) of the input image for the network. The second value represents the active width multiplier \(W\). Width multipliers serve as scaling factors for the number of filters during the execution of the OFA algorithm. In the vanilla configuration, as in NATv2, two distinct super-networks are constructed, with \(W\) values of 1.0 and 1.2, respectively. These two initial encoding rules are retained in the NATv2 encoding scheme.
In NAT, the set of 20 encoded values represents the combinations of kernel size \(K\) and expansion ratio \(E\) for each of the 20 internal IRB or IB blocks. To allow the inclusion of parallel blocks in the super-networks, as illustrated in Figure 3, these pairs are expanded into triplets by introducing the additional term \(A\). The value of \(A\) ranges from 1 to 7, covering all possible permutations of parallel blocks activation states. Including the special value 0, which represents
the i\({}^{\text{th}}\) block or level being excluded from the network, thus reducing the stage depth, each of the 20 P\({}_{i}\) values can take up to 64 different values. This expansion of the search space allows greater exploration without changing the encoding size, as shown in the second row of Figure 2 labelled "Parallel".
In order to incorporate early exits into super-networks, and to capture sub-networks capable of making inferences at intermediate stages, it is necessary to encode this information appropriately. This ensures that the extracted sub-networks have the potential to make inferences at the end of each stage of the super-network, as illustrated in Figure 4. To allow for this, a new variable called \(X\) is introduced in the third position of the encoded strings, as shown in the third row of the Figure 2 labelled "Early Exits". The value of \(X\) corresponds directly to the index of the selected exit, providing information about the selected exit if early exits are present in the super-network architecture. The final encoding used in NATv2 is shown in the fourth line of Figure 2 and is named "Early Exits + Parallel". It encompasses the combined modifications necessary to incorporate parallel blocks and early exits. Compared to the version employed in NAT, the search space has been significantly increased at the minimal cost of one additional character in the encoding string.
Figure 2: The encodings representing the possible sub-networks within different types of super-networks have the following structure. \(R\) encodes the value corresponding to the size of the input images. \(W\) encodes the value of the width multiplier, which determines the width of the network architecture. \(X\) encodes information about the selected exit, specifically for super-networks that support early exits. \(L_{i}\) encodes the configuration of the \(i^{\text{th}}\) IRB/IB block for non-parallel networks. \(P_{i}\) encodes the configuration of the \(i^{\text{th}}\) level, i.e. set of blocks in parallel, for parallel networks. The “Baseline” configuration represents the encoding used in NAT, while the other configurations represent the encodings proposed and used in NATv2.
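To make the encoding concrete, the sketch below decodes an "Early Exits + Parallel" integer string into a readable configuration. The specific value-to-configuration mapping (kernel sizes {3, 5, 7}, expansion ratios {3, 4, 6}, seven activation patterns for the parallel blocks, and the index ordering) is an assumption for illustration and may differ from the actual NATv2 implementation.

```python
# Illustrative decoder for the assumed "Early Exits + Parallel" encoding: [R, W, X, P1..P20].
KERNEL_SIZES = (3, 5, 7)        # assumed kernel-size options
EXPANSION_RATIOS = (3, 4, 6)    # assumed expansion-ratio options

def decode(encoding):
    r, w, x, *levels = encoding                 # resolution, width multiplier, selected exit, 20 levels
    decoded = {"resolution": r, "width_mult": w, "exit_index": x, "levels": []}
    for p in levels:
        if p == 0:
            decoded["levels"].append(None)      # level skipped -> shallower stage
            continue
        idx = p - 1                              # 1..63 -> one of 7 * 3 * 3 active configurations
        a = idx // 9 + 1                         # parallel-block activation pattern (1..7)
        k = KERNEL_SIZES[(idx % 9) // 3]
        e = EXPANSION_RATIOS[idx % 3]
        decoded["levels"].append({"parallel": a, "kernel": k, "expand": e})
    return decoded

# Example: input resolution 160, second width multiplier, exit after the third stage.
print(decode([160, 1, 3] + [27] * 20))
```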
#### Archive Initialization and Update
NATv2 takes a different approach to managing the archive of optimal sub-networks. The changes primarily affect two key stages of the NAT algorithm: the archive initialisation and archive growth steps. With respect to the archive initialisation step, NAT directly samples architectures from the search space, thus evenly distributing possible values within the encoding. As a result, the initial archive has a strong bias towards networks with a maximum stage depth of 4. This bias arises from the fact that skippable IRB blocks (specifically, the 3rd or 4th block within a stage) can be encoded with values ranging from 0 to 9, but only assigning a value of 0 will cause these blocks to be skipped. Essentially, due to uniform sampling, all values have an equal chance of being selected, making stages with a depth of 3 uncommon and stages with a depth of 2 quite rare. This problem becomes more pronounced when parallel blocks are supported, as the number of possible encodings for each network level is expanded up to 64 values. To address this issue, NATv2 approaches the archive initialisation phase by sampling sub-networks in a way that ensures uniformity not within the search space domain, but rather within the depth (for both stages and networks) and in the block configurations (for both parallel and non-parallel levels). This adaptation encourages greater heterogeneity in the NATv2 initialisation process, while improving the generalisation of the predictor through a more diverse training set.
Figure 3: In NATv2, the super-networks can be enhanced by the introduction of two new blocks running in parallel to the existing IRBs/IBs and dense skip connections within each stage. The first parallel block consists of a pointwise convolutional layer, a batch normalisation layer, and a non-linear activation function. The second parallel block consists of a batch normalisation layer followed by a non-linear activation function, possibly preceded by a max pooling operator. These blocks represent a lighter alternative to IRBs/IBs, which contributes to the diversification of possible sub-network topologies and has the potential to improve computational efficiency.
Figure 4: The NATv2 super-networks can contain several intermediate outputs, called early exits, which are strategically placed after each network stage. The idea behind early exits is that intermediate outputs may have comparable or better performance than the final one. Each early exit serves as an individual output that can be used independently for prediction. Alternatively, the outputs of multiple early exits can be aggregated or combined using an ensemble technique.
In contrast to the archive growth step in NAT, NATv2 introduces a sub-network replacement step. Instead of starting with a limited set of architectures and allowing the archive to grow iteratively by adding newly discovered sub-networks, NATv2 directly populates the archive with the maximum number of architectures right from the start. During each iteration, the weakest architectures within the archive are replaced with those obtained through the evolutionary search. Consequently, the size of the archive remains constant throughout the process. This approach aims to enhance the average quality of the architectures within the archive, leading to improved performance of the predictor model. Notably, this improvement is particularly evident in the early iterations due to the larger pool of input data available for the predictor to learn from. The overall effect is an increased capability of the performance predictor in gauging the quality of sub-networks in NATv2.
In addition, NATv2 includes a pre-processing step within the archive initialization phase. This pre-processing entails sampling a significantly larger number of architectures, specifically ten times larger than the desired archive size, denoted as \(A_{s}\). The sampled sub-networks are evaluated and compared, and only the top \(A_{s}-2\) networks, along with the maximal and minimal networks, are selected to form the initial archive. Introducing a set of high-quality architectures at the beginning of the algorithm can contribute to obtain better performing sub-networks throughout the process. It is worth noting that the inclusion of the pre-processing step comes at the cost of increased execution time, but this is only a one-time cost per execution. The extent of this impact depends on the size of the initially sampled architecture set. In NATv2, the archive size \(A_{s}\) has been defined as 300.
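The archive handling described above can be summarised with the following sketch; `sample_uniform_in_depth` and `evaluate_top1` stand in for project-specific sampling and evaluation code and are not part of the actual NATv2 API.

```python
# Sketch of the NATv2 archive pre-processing and constant-size replacement policies.
def initialise_archive(sample_uniform_in_depth, evaluate_top1,
                       maximal_net, minimal_net, archive_size=300):
    # Pre-processing: sample ten times the archive size, keep the best A_s - 2
    # sub-networks plus the maximal and minimal networks.
    candidates = [sample_uniform_in_depth() for _ in range(10 * archive_size)]
    best_first = sorted(candidates, key=evaluate_top1, reverse=True)
    return best_first[: archive_size - 2] + [maximal_net, minimal_net]

def replace_weakest(archive, new_subnets, evaluate_top1):
    # Constant-size archive: at each iteration the weakest members are swapped out
    # for the sub-networks returned by the evolutionary search.
    merged = sorted(archive + new_subnets, key=evaluate_top1, reverse=True)
    return merged[: len(archive)]
```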
#### Performance Predictor
The execution of the evolutionary search process in NATv2 generates a significant number of sub-networks. However, evaluating the performance of each of these sub-networks individually would render the algorithm computationally impractical, even with the use of weight sharing. To overcome this challenge, NATv2, like its predecessor, relies on a performance predictor model. This predictor performs a regression task and is trained online. At the start of each NAT iteration, the predictor model is fitted by taking as input the set of encodings corresponding to the sub-networks currently present in the archive, with their corresponding top-1 accuracy serving as the target. By leveraging this dataset, the predictor model is trained to predict the performance of architectural encodings that it has not encountered before. This approach allows NATv2 to efficiently estimate the performance of numerous sub-networks without having to evaluate each one individually, making the algorithm computationally tractable.
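As a concrete sketch of this online surrogate, the snippet below fits a gradient-boosting regressor (LightGBM, the model ultimately selected in Section 4) on dummy archive encodings and their accuracies, and then scores unseen candidates. The arrays and hyperparameters are placeholders for illustration only.

```python
# Sketch of the online performance predictor on synthetic data.
import numpy as np
from lightgbm import LGBMRegressor
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X_archive = rng.integers(0, 64, size=(300, 23))       # integer encodings of archived sub-networks
y_archive = rng.uniform(0.5, 0.8, size=300)            # their measured top-1 accuracies (dummy)

predictor = LGBMRegressor(n_estimators=200)
predictor.fit(X_archive, y_archive)                     # re-fitted at the start of each iteration

X_candidates = rng.integers(0, 64, size=(1000, 23))    # offspring from the evolutionary search
predicted_acc = predictor.predict(X_candidates)         # used instead of evaluating each sub-network

# Predictors are compared via the Spearman ("rho") correlation between predicted and true accuracy.
rho, _ = spearmanr(predictor.predict(X_archive), y_archive)
print(f"rho correlation on the archive: {rho:.3f}")
```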
In NATv2, a significant expansion of candidate predictor models has been undertaken. Initially, the following machine learning models were considered:
* Gaussian Process (GP) [24],
* Radial Basis Function (RBF) [25],
* Multilayer Perceptron (MLP) [26],
* Classification and Regression Tree (CART) [27],
* Radial Basis Function Ensemble (RBFE) [25].
In order to thoroughly investigate the effectiveness of the predictor, various regression mechanisms have been explored, leading to the inclusion of the following models:
* Support Vector Regressor (SVR) [28],
* Ridge Regressor [29],
* K-Nearest Neighbours Regressor (KNN) [30],
* Bayesian Ridge Regressor [31].
In addition, given their claimed success in the literature, the following models have been added to the candidate predictor list:
* End-to-End Random Forest-based Performance Predictor (E2EPP) [32],
* Light Gradient Boosting Machine (LGBM) [33],
* Catboost [34].
This extensive selection of machine learning models allows a comprehensive exploration of potential predictors in NATv2, highlighting the importance of the role of the predictor in the algorithm.
#### Training Networks with Early Exits
Special attention was paid to the training algorithm applied to super-networks with early exits, enriching it with new techniques introduced and proven effective in both the AEP and OFAv2 works. In particular, the AEP training method is relied upon; given a network with multiple exits, the network undergoes a form of joint training by creating a weighted ensemble of its exits. Different weighting strategies can be used to adjust the contribution of each exit, but in general the benefits of this training method were clearly demonstrated in the original study. Another important technique is ENS-KD, a new knowledge distillation technique introduced in OFAv2, which is also based on the AEP approach. As shown in Figure 5, given a teacher network with multiple exits, the knowledge transferred to the student is not limited to that available at the last layer of the teacher network; rather, information from all its exits is weighted and combined according to the AEP method to distill more significant information, resulting in improved performance of the student networks.
Returning to the training of the NATv2 super-networks with early exits, during the warm-up phases the maximal networks extracted from the OFAv2 super-network under experimentation, i.e. those corresponding to width multipliers 1.0 and 1.2, are trained using the AEP training method, following the DESC weighting strategy. The use of the AEP joint training method is possible because the maximal network within an OFAv2 super-network with early exits retains all the exits present in the super-network. For the NATv2 adaptation phase, in which the super-network is fine-tuned by sequentially activating sub-networks within it, a training algorithm corresponding to the last phase of the EPS algorithm is used. This is because NAT immediately makes all the possible values of each elastic parameter available for sampling. On the contrary, the steps and phases of EPS progressively reveal new elastic parameters and their values. It is only in the last phase of EPS that all elastic parameter values are available for sampling.
NAT uses Knowledge Distillation to improve sub-networks during the super-network adaptation phase. In NATv2, the standard Knowledge Distillation technique is replaced by the ENS-KD technique when performing the adaptation step on early-exit super-networks. To be consistent with the EPS training that the super-networks underwent in OFAv2, the sub-networks activated during the super-network adaptation step in NATv2 are single-exit networks.
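The two ingredients just described, the AEP weighted ensemble of exits and the ENS-KD teacher signal, can be sketched as follows. The exact loss formulation (a temperature-scaled KL term added to cross-entropy) and the tensor interfaces are assumptions for illustration, not the precise OFAv2/NATv2 implementation.

```python
# Sketch of an AEP-style exit ensemble and an ENS-KD loss for a single-exit student.
import torch
import torch.nn.functional as F

def ensemble_logits(exit_logits, weights):
    # exit_logits: list of [batch, classes] tensors, one per exit; weights define the AEP strategy
    # (e.g. uniform or DESC) and are normalised before combining.
    w = torch.tensor(weights, dtype=exit_logits[0].dtype, device=exit_logits[0].device)
    w = w / w.sum()
    return sum(wi * logits for wi, logits in zip(w, exit_logits))

def ens_kd_loss(student_logits, teacher_exit_logits, labels, exit_weights,
                kd_weight=1.0, temperature=1.0):
    ce = F.cross_entropy(student_logits, labels)
    with torch.no_grad():
        teacher = ensemble_logits(teacher_exit_logits, exit_weights)   # combine all teacher exits
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  F.softmax(teacher / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return ce + kd_weight * kd
```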
#### Post-Processing
The multi-objective optimization nature of the search process in NATv2 may result in the best sub-networks achieving high performance across multiple objectives but still not reaching their maximum classification potential. To address this, two distinct fine-tuning post-processing methods are introduced to maximize the accuracy of these networks. The first method is applicable to sub-networks derived from any super-network, while the second method is specifically designed for sub-networks extracted from super-networks with early exits. Both methods consist of two sequential phases.
Figure 5: The scheme of the Knowledge Distillation technique for networks with early exits, called ENS-KD, presented in OFAv2 and used in NATv2.
In the first phase, the optimal number of training epochs, denoted as \(e\), is determined for the given sub-network through fine-tuning on the target dataset. During this phase, the performance of the sub-network is continuously evaluated on the corresponding validation set. In the second phase, the sub-network is fine-tuned for \(e\) epochs using the combination of the training and validation sets. Finally, the test classification performance is computed and returned.
The difference between the two post-processing methods lies in the fine-tuning algorithm used. The first method directly fine-tunes the networks returned by NATv2 as they are, i.e. single exit networks. On the other hand, the second post-processing method utilizes the Anticipate, Ensemble and Prune (AEP) technique [23]. In this method, all exits above the one selected by NATv2 for the sub-network are extracted from the super-network and reattached to the sub-network. A joint fine-tuning of the exits is then performed. The second post-processing method allows for greater gains in accuracy compared to the traditional fine-tuning method. However, it comes at the cost of a slightly increased number of parameters and MACs.
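A minimal sketch of the two-phase procedure is given below; `fine_tune_one_epoch` and `evaluate` are placeholders for the actual training utilities, and the early-termination criterion is simplified with respect to the settings reported in Section 4.

```python
# Sketch of the two-phase post-processing: find the best epoch budget on the validation set,
# then fine-tune on train + validation for that many epochs and report the test performance.
import copy

def post_process(subnet, train_set, val_set, trainval_set, test_set,
                 fine_tune_one_epoch, evaluate, max_epochs=150, patience=30):
    # Phase 1: determine the optimal number of epochs e.
    probe = copy.deepcopy(subnet)
    best_acc, best_epoch, stall = 0.0, 0, 0
    for epoch in range(1, max_epochs + 1):
        fine_tune_one_epoch(probe, train_set)
        acc = evaluate(probe, val_set)
        if acc > best_acc:
            best_acc, best_epoch, stall = acc, epoch, 0
        else:
            stall += 1
            if stall >= patience:
                break
    # Phase 2: fine-tune the original sub-network for e epochs on train + validation.
    for _ in range(best_epoch):
        fine_tune_one_epoch(subnet, trainval_set)
    return evaluate(subnet, test_set)
```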
## 4 Results and Discussion
This section outlines the experiments conducted to evaluate the performance of NATv2 and compare it to NAT, the method on which our approach is based. First, the experiments are described in detail, including the experimental setup, implementation specifics, and hardware used to ensure reproducibility. Then, three ablation studies are presented to provide insight into the intermediate steps and to evaluate the decision-making process that led to the final results. The first study focuses on identifying the optimal performance predictor to use during the search. The second examines the effectiveness of the new super-networks and their encodings compared to those used in NAT. The last explores different post-processing optimisation strategies to determine the most effective approach. Finally, we present the results of our study by comparing the performance of NAT and NATv2, with and without post-processing, on the proposed datasets and configurations. The aim is to provide compelling evidence to support our claims and convince the reader of the superiority of NATv2.
### Experiments Configuration
The experiments conducted for NATv2 used the super-network obtained with OFAv2 using the Extended Progressive Shrinking (EPS) technique. In contrast, the NAT experiments used the baseline OFAMobileNetV3 super-network obtained by OFA. Two versions of the same pre-trained super-networks were required to allow for the warm-up steps in both algorithms. The width multiplier configurations used were \(W=1.0\) and \(W=1.2\), maintaining consistency with those used in the NAT paper.
While the OFA networks were trained on ImageNet, the OFAv2 networks were trained on the Tiny ImageNet dataset instead. To maintain consistency, and due to resource constraints, the Tiny ImageNet dataset was also used instead of ImageNet in both the NAT and NATv2 configurations. The experiments were performed on three widely used image classification datasets: CIFAR-10, CIFAR-100 and Tiny ImageNet. Further details on these datasets are given in Table 1.
To ensure comparable results, the same set of hyperparameters was used for all experiments. The NATv2 models were trained using the SGD optimiser with a momentum of 0.9 and a weight decay parameter set to \(3\cdot 10^{-4}\). The learning rate was initially set to \(2.5\cdot 10^{-3}\) and adjusted using a cosine annealing scheduler. A batch size of 256 was used, and during each super-network adaptation epoch, four sub-networks per batch were sampled. The same hyperparameters were used for the warm-up phases, with the exception of the initial learning rate, which was set to \(7.5\cdot 10^{-3}\).
In order to maximise the effectiveness of the post-processing step, numerous combinations of optimisers and learning rates were tested. These experiments aimed to determine the optimal configurations and were conducted on sub-networks derived from the initial NATv2 experiments using the CIFAR-100 dataset, with NATv2 run with the objectives "Accuracy & Params" and "Accuracy & MACs".
\begin{table}
\begin{tabular}{|l c c c c|} \hline
**Dataset** & **Classes** & **Train size** & **Validation size** & **Test size** \\ \hline Tiny ImageNet [35] & 200 & 85 000 & 15 000 & 10 000 \\ CIFAR10 [36] & 10 & 45 000 & 5 000 & 10 000 \\ CIFAR100 [36] & 100 & 45 000 & 5 000 & 10 000 \\ \hline \end{tabular}
\end{table}
Table 1: The details of the datasets used in this work in terms of number of classes and splits.
& Params" and "Accuracy & MACs". To explore different post-processing combinations for single exit networks and early exit networks, the following optimisers were tested: SGD, AdamW [37], and Ranger (the combination of the LookAhead [38] and RAdam [39] optimisers). For each optimiser, two initial learning rates were tuned, i.e. \(10^{-4}\) and \(10^{-5}\).
In the case of AEP-based post-processing, which is applicable to sub-networks derived from early exit super-networks, experiments were conducted for all four AEP exit weighting strategies. When the SGD optimiser was used for post-processing, the previously reported values for momentum and weight decay were used, while no additional hyperparameters were specified for the other two optimisers. The batch size for these experiments was set to 64, and the networks were trained for a maximum of 150 epochs using a cosine annealing learning rate scheduler. Early termination was used, with a patience value of 30 epochs based on validation loss. All models were implemented using PyTorch 1.12.1 and experiments were run on an NVIDIA Quadro RTX 6000 GPU. The evolutionary algorithms used for the NAT and NATv2 search steps, specifically NSGA3 [40], were obtained from the pymoo library [41].
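For reference, the sketch below shows how such an NSGA3 search step could be wired up through pymoo (assuming a recent pymoo release); `predict_top1` and `count_params` are dummy stand-ins for the surrogate predictor and the complexity objective, and the real NATv2 problem additionally handles the discrete encoding constraints.

```python
# Sketch of a pymoo-driven NSGA3 search over sub-network encodings (dummy objectives).
import numpy as np
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.util.ref_dirs import get_reference_directions

predict_top1 = lambda enc: float(enc.sum() % 100) / 100.0   # dummy surrogate accuracy
count_params = lambda enc: float(np.count_nonzero(enc))      # dummy complexity measure

class SubnetSearch(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=23, n_obj=2, xl=0, xu=63)      # one variable per encoding position

    def _evaluate(self, x, out, *args, **kwargs):
        encoding = np.rint(x).astype(int)
        out["F"] = [-predict_top1(encoding),                   # maximise predicted accuracy
                    count_params(encoding)]                    # minimise parameters (or MACs)

ref_dirs = get_reference_directions("das-dennis", 2, n_partitions=12)
res = minimize(SubnetSearch(), NSGA3(ref_dirs=ref_dirs, pop_size=100), ("n_gen", 30), seed=1)
print(len(res.F), "non-dominated trade-off points found")
```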
### Performance Predictors Analysis
In the first ablation study, the predictor models were compared by keeping the training set size fixed at 300 and using the encoding methods proposed in Section 3 to generate input features. The performance of the surrogate models was evaluated using correlation values as the reference metric. These correlation values were obtained by analysing sub-networks extracted from the available configurations of the OFAv2 algorithm, which also includes sub-networks from OFA. The aim of this study was to identify the most accurate predictor of sub-network accuracy that could be used in both NAT and NATv2 evaluations. To obtain the most accurate estimate, all models were evaluated using the 10-fold cross-validation technique. After calculating the validation performance by averaging, the 10 models trained on different data splits were aggregated by ensemble to form a single macro-model. The best macro-model among these ensembles was selected as the predictor.
Figure 6: The rho correlation values for the different performance predictor models considered in this paper. The predictors were trained on 300 samples using the proposed integer encoding. For a list of predictor names and their acronyms, see Section 3.
Figure 7: The time values for the different performance predictor models considered in this paper. The predictors were trained on 300 samples using the proposed integer encoding. The y-axis is represented in logarithmic form. For a list of predictor names and their acronyms, see Section 3.
Referring to the candidate models presented in Section 3, Figure 6 shows their performance, measured as the Spearman correlation between the predicted accuracy and the actual accuracy, referred to as the "rho correlation". The best performing models were CatBoost and LGBM, both above 0.9, followed by CART and E2EPP. CatBoost and LGBM also consistently produced the smallest error standard deviation intervals. This result has two positive implications. Firstly, it demonstrates the correlation between the proposed encoding and the accuracy of the corresponding model. Secondly, it means that the encoding successfully captures the large heterogeneity of possible networks, resulting in excellent error standard deviation intervals for the metric under consideration.
Figure 7 illustrates the time required by the different models to complete the entire training process. Given the reasonable size of the dataset, it is not surprising that the average times are generally low for most models. As expected, the CART, RIDGE and BAYESIAN models are the fastest due to their simplicity. However, as their performance was not so promising, these models were not considered further. When choosing a performance predictor, maximising the regression metric (in this case the rho correlation) is a crucial consideration. However, it is not the only factor to consider. The time taken to fit each model is also important. Although CatBoost performed slightly better than LGBM, it was significantly slower to fit. As a result, LGBM was ultimately selected as the predictor model.
Once the predictor model was fixed, the correlation values were evaluated for different training set sizes, using both integer and one-hot encodings to generate the input features. The results shown in Figure 8 indicate that for the given configuration, integer encoding outperforms one-hot encoding in terms of performance and stability. Furthermore, it appears that the number of samples reaches an optimal point, as the improvements in rho correlation seem to plateau in both encoding configurations. Finally, considering the performance variance achieved with smaller datasets, it can be concluded that the decision to replace the small but growing archive of NAT with a larger fixed-size archive in NATv2 is advantageous and improves the accuracy of the best estimated sub-networks from the beginning of the search process.
### OFAv2 Super-Networks Analysis
The second ablation study focuses on evaluating the effectiveness of the NAT algorithm enhanced with the new encoding methods described in Section 3 and the new performance predictor. This study serves as a preliminary evaluation to determine the improvements achieved by our approach compared to the initial model at this stage. The aim of this analysis is to evaluate the variation in performance resulting from the modification of the initial super-network. Specifically, two variations are considered: the initial super-network obtained via OFA and the starting super-networks obtained via OFAv2.
The results of this study are summarised in Figure 9. Each row corresponds to a different dataset and each column represents a different optimisation objective. Starting from the first row, which corresponds to the experiments carried out on CIFAR10, it can be seen that changing the super-network leads to an overall improvement in all the configurations analysed. When optimising for accuracy only, the best model obtained from OFAv2 shows an improvement in accuracy of about 2% compared to the model obtained from OFA. Moving on to the results of the multi-objective searches, it can be observed that the models obtained not only have higher accuracy, but also have significantly fewer parameters and MACs on average. For example, when comparing the architectures with the fewest parameters, in addition to a 1% accuracy advantage for the architecture found by OFAv2, the number of parameters is approximately five times lower than for the best architecture found by OFA.
Figure 8: The rho correlation values achieved by the LGBM predictor for different training set sizes and encodings.
Looking at the results obtained on CIFAR100 and Tiny ImageNet, which are more complex datasets compared to CIFAR10, the previous observations are even more significant. In particular, when focusing only on accuracy optimisations, there is an improvement of about 5% and 9% respectively in favour of the models obtained from OFAv2. The performance of the solutions found in multi-objective optimisations further supports the claim that the proposed approach becomes more effective as the complexity of the problem increases. In addition, these results highlight the success and effectiveness of the proposed encoding method in this context. The encoding demonstrates its ability to capture the complexity of the problem and contribute to the improved performance of the models.
### Post-Processing Optimisation
The third and final ablation study focuses on tuning the parameters of the post-processing strategy once the optimal sub-networks have been found by NATv2. The goal of this study is to determine the optimal combination of optimiser and initial learning rate; in the case of networks with early exits, the choice of weights applied to the outputs is also part of the tuning process. The results presented in this study are based on the Tiny ImageNet dataset, with NATv2 run using the "Accuracy & Params" and "Accuracy & MACs" objectives.
Figure 9: The results of the study of the effectiveness of the OFAv2 super-networks compared to the OFA super-network within the NAT algorithm, based on the proposed datasets and optimisation strategies. For both sets of experiments, the encodings proposed in Section 3 were used.
We found that for each optimiser, regardless of the type of post-processing, the average performance obtained by setting the initial learning rate to \(10^{-4}\) was better than that obtained by setting it to \(10^{-5}\). This finding is consistent with the expectation that starting with a low learning rate may result in too much dilution of learning due to the cosine annealing schedule.
Regarding the choice of optimizer, single-exit architectures tend to benefit from using SGD, while almost all multiple-exit architectures show better performance with AdamW and Ranger optimizers, with a slightly stronger preference for AdamW. In terms of the AEP strategy, it was found that using a uniform weight distribution for networks with early exits yields better results on average.
Quantitatively, as demonstrated by the final results of this work, presented below in Table 2, the inclusion of post-processing provides significant benefits to model accuracy, with an average improvement of 2.63% on Tiny ImageNet. However, it is important to consider the trade-off in terms of increased parameters (21.84%) and MACs (7.63%) when early exits are incorporated. Nevertheless, the architectures discovered by NATv2 often exhibit high computational efficiency, and the increase in complexity can be justified by the significant performance gains. Additionally, it is worth noting that the post-processing step is optional and incurs a minimal time cost of a few minutes, which is negligible compared to the overall search process.
### Final Results
The final set of experiments aims to compare the performance of the NAT reference model and its enriched version, NATv2, with and without the post-processing step. The results of these experiments are presented in Table 2, which encompasses the evaluation on CIFAR10, CIFAR100, and Tiny ImageNet datasets. The results are reported in terms of top-1 accuracy, number of parameters, and number of multiply-accumulate (MAC) operations, both measured in millions.
For CIFAR10, the NATv2 + PP model obtains the highest accuracy of 93.17%, outperforming all other models. However, it also has the highest number of parameters (12.42M) and MACs (77.82M). Considering the trade-off between accuracy and number of parameters, the NATv2 model obtained by optimising "Accuracy & Params", with only 0.27M parameters, achieves an accuracy of 91.46%, which is higher than that of any model obtained using NAT. Similarly, considering the trade-off between accuracy and MACs, the NATv2 model obtained by the "Accuracy & MACs" optimisation achieves an accuracy of 89.67% with the lowest number of MACs, 6.35M, and a very small number of parameters, 0.19M. This makes it particularly suitable for devices with very strict constraints. Generally speaking, for each optimisation scenario considered, NATv2 significantly outperforms NAT on each of the research objectives in most cases. On average, the application of post-processing also proves beneficial, improving accuracy at a modest cost in parameters and MACs.
If we turn to the CIFAR100 dataset, we observe similar trends in terms of the performance of the models and the optimisation targets. The NATv2 + PP model found in the single-objective search emerges as the best performer overall, achieving an accuracy of 73.39%. This result demonstrates the model's ability to cope with the increased complexity of the CIFAR100 dataset, far exceeding the baseline accuracy of 66.93%. Considering the trade-off between accuracy and parameter count, NATv2 successfully optimises the search, finding a model with 0.86M parameters and an accuracy of 69.50%. Compared to the NAT models identified by the same optimisation, this is a significant improvement. It is also interesting to note that this NATv2 model, although found in a multi-objective optimisation, achieves higher accuracy than even the best NAT model optimised without taking parameters and MACs into account: it improves accuracy by 2.57 percentage points while reducing parameters by 86.26% and MACs to 38.80% of their original count. The NATv2 model stands out again when considering the trade-off between accuracy and MACs. The model with the lowest number of MACs (8.14M) has a very small number of parameters, 0.21M, with an accuracy of 66.29%, which is higher than or comparable to that of any model found by NAT in any optimisation configuration. NAT's lightest model, in addition to losing 7.68 percentage points of accuracy, requires more than twelve times as many parameters and twice as many MACs.
Let us now turn our attention to the results obtained for the Tiny ImageNet dataset, which represents the most challenging problem. Considering accuracy as the only optimisation objective, the NATv2 + PP model achieves the highest accuracy of 54.82%, which exceeds the baseline accuracy of 43.06%. This is further evidence of the effectiveness of the NATv2 + PP model in improving classification performance. In the analysis of the trade-off between accuracy and number of parameters, NATv2 results in a model with only 0.10 million parameters, which still achieves an accuracy of 39.92%. By applying post-processing to this model, which is obviously an architecture without the possibility of inserting early exits and is therefore extremely flat, the accuracy rises to 45.03% without increasing either the parameters or the MACs. This further demonstrates the usefulness of this fine-tuning step. The model thus obtained turns out to be better than any model found by NAT, this time with a truly minimal number of parameters.
Overall, the experiments show that NATv2 + PP consistently achieves the highest accuracy across all three datasets. However, by achieving competitive accuracy with significantly fewer parameters, NATv2 demonstrates its strength in terms of parameter efficiency. Furthermore, by achieving reasonable accuracy while minimising computational requirements, NATv2 also demonstrates its efficiency in terms of MACs. In general, NATv2 achieves significantly better trade-offs than NAT, in some cases reducing the number of parameters by an order of magnitude, and is therefore
| Dataset | Optimisation objective | Model | Accuracy (%) | Params (M) | MACs (M) |
|---|---|---|---|---|---|
| CIFAR10 | Accuracy | NAT (Baseline) | 90.68 | 6.75 | 59.97 |
| | | NATv2 | 93.06 | 8.74 | 65.74 |
| | | NATv2 + PP | **93.17** | 12.42 | 77.82 |
| | Accuracy & Params | NAT (Baseline) | 90.73 | 2.91 | 30.03 |
| | | NAT (Baseline) | 90.65 | 2.51 | 26.74 |
| | | NATv2 | 92.69 | 1.30 | 34.39 |
| | | NATv2 | 91.46 | 0.27 | 12.52 |
| | | NATv2 + PP | 93.06 | 1.56 | 37.75 |
| | | NATv2 + PP | 92.00 | 0.47 | 24.28 |
| | Accuracy & MACs | NAT (Baseline) | 89.98 | 2.40 | 15.33 |
| | | NAT (Baseline) | 85.66 | 2.14 | 7.82 |
| | | NATv2 | 91.77 | 1.09 | 20.11 |
| | | NATv2 | 89.67 | **0.19** | **6.35** |
| | | NATv2 + PP | 92.29 | 1.35 | 22.75 |
| | | NATv2 + PP | 90.23 | 0.23 | 6.92 |
| CIFAR100 | Accuracy | NAT (Baseline) | 66.93 | 6.26 | 55.72 |
| | | NATv2 | 71.88 | 9.75 | 56.60 |
| | | NATv2 + PP | **73.39** | 11.13 | 74.28 |
| | Accuracy & Params | NAT (Baseline) | 66.83 | 3.62 | 31.44 |
| | | NAT (Baseline) | 65.56 | 2.57 | 26.69 |
| | | NATv2 | 70.68 | 1.36 | 34.26 |
| | | NATv2 | 69.50 | 0.86 | 21.62 |
| | | NATv2 + PP | 72.03 | 1.70 | 31.79 |
| | | NATv2 + PP | 70.54 | 1.03 | 23.38 |
| | Accuracy & MACs | NAT (Baseline) | 64.76 | 2.70 | 16.05 |
| | | NAT (Baseline) | 58.61 | 2.26 | 7.94 |
| | | NATv2 | 69.31 | 1.26 | 21.34 |
| | | NATv2 | 66.29 | **0.21** | **8.14** |
| | | NATv2 + PP | 71.02 | 1.59 | 24.04 |
| | | NATv2 + PP | 67.90 | 0.26 | 8.93 |
| Tiny ImageNet | Accuracy | NAT (Baseline) | 43.06 | 8.10 | 61.67 |
| | | NATv2 | 53.59 | 1.66 | 43.43 |
| | | NATv2 + PP | **54.82** | 2.06 | 46.94 |
| | Accuracy & Params | NAT (Baseline) | 43.45 | 4.05 | 46.80 |
| | | NAT (Baseline) | 42.99 | 2.71 | 28.00 |
| | | NATv2 | 51.16 | 1.44 | 39.19 |
| | | NATv2 | 39.92 | **0.10** | **5.43** |
| | | NATv2 + PP | 54.31 | 1.85 | 41.70 |
| | | NATv2 + PP | 45.03 | **0.10** | **5.43** |
| | Accuracy & MACs | NAT (Baseline) | 42.00 | 2.86 | 17.01 |
| | | NAT (Baseline) | 38.89 | 2.39 | 8.06 |
| | | NATv2 | 51.05 | 1.46 | 28.41 |
| | | NATv2 | 47.24 | 0.25 | 5.95 |
| | | NATv2 + PP | 53.91 | 1.87 | 31.97 |
| | | NATv2 + PP | 48.96 | 0.32 | 6.55 |

Table 2: Results of the final set of experiments. For each dataset, the best sub-networks found are grouped by optimisation objective. For each dataset and metric, the best result is highlighted in **bold**. For each multi-objective optimisation, both the best model found for accuracy and the best model found for the second objective are reported. Each NATv2 experiment is also presented in its post-processed form, denoted NATv2 + PP.
particularly suitable for model searches for devices with a small amount of memory. On the other hand, if secondary objectives are to be sacrificed for the sake of accuracy, the use of post-processing has proven to be a beneficial step in the vast majority of cases, as it allows for architectures that are still considerably lighter and faster than those realised by NAT, while at the same time achieving much better performance in terms of accuracy.
## 5 Conclusion
In this paper we have presented Neural Architecture Transfer 2 (NATv2), the extension of the Neural Architecture Transfer (NAT) technique by the implementation of two recent algorithms, namely Once-For-All-2 (OFAv2) and Anticipate Ensemble and Prune (AEP). In particular, we have shown that NATv2 can find networks that are significantly smaller in terms of parameters and operations, and more accurate than those found by applying NAT, by modifying the architectural design of the super-networks as well as the algorithm used.
The greatest improvements were obtained by applying NATv2 to these modified super-networks. Among the modifications, the one that contributed most was the introduction of early exits in the architectures. Among the most important algorithmic improvements, the introduction of the post-processing phase, which allows further refinement of the returned sub-networks, proved to be an extremely effective addition, increasing performance at a negligible cost in terms of parameters and operations. The results suggest that NATv2 is a successful extension of NAT, which was already an excellent tool for building deep learning models under very strict constraints. In particular, NATv2 is highly recommended for exploring high-performance architectures with an extremely small number of parameters.
## Acknowledgment
This project has been supported by AI-SPRINT: AI in Secure Privacy-pReserving computNg conTinuum (European Union H2020 grant agreement No. 101016577) and FAIR: Future Artificial Intelligence Research (NextGenerationEU, PNRR-PE-AI scheme, M4C2, investment 1.3, line on Artificial Intelligence).
|
2303.03746
|
In-Silico Characterization of Nanoparticle Catalysts
|
Nanoparticles (NPs) make for intriguing heterogeneous catalysts due to their
large active surface area and excellent and often size-dependent catalytic
properties that emerge from a multitude of chemically different surface
reaction sites. NP catalysts are, in principle, also highly tunable: even small
changes to the NP size or surface facet composition, doping with heteroatoms,
or changes of the supporting material can significantly alter their
physicochemical properties. Because synthesis of size- and shape-controlled NP
catalysts is challenging, the ability to computationally predict the most
favorable NP structures for a catalytic reaction of interest is an in-demand
skill that can help accelerate and streamline the material optimization
process. Fundamentally, simulations of NP model systems present unique
challenges to computational scientists. Not only must considerable
methodological hurdles be overcome in performing calculations with hundreds to
thousands of atoms while retaining appropriate accuracy to be able to probe the
desired properties. Also, the data generated by simulations of NPs are
typically more complex than data from simulations of, for example, single
crystal surface models, and therefore often requires different data analysis
strategies. To this end, the present work aims to review analytical methods and
data analysis strategies that have proven useful in extracting thermodynamic
trends from NP simulations.
|
Björn Kirchhoff, Christoph Jung, Daniel Gaissmaier, Laura Braunwarth, Donato Fantauzzi, Timo Jacob
|
2023-03-07T09:09:18Z
|
http://arxiv.org/abs/2303.03746v1
|
# In-Silico Characterization of Nanoparticle Catalysts
###### Abstract
Nanoparticles (NPs) make for intriguing heterogeneous catalysts due to their large active surface area and excellent and often size-dependent catalytic properties that emerge from a multitude of chemically different surface reaction sites. NP catalysts are, in principle, also highly tunable: even small changes to the NP size or surface facet composition, doping with heteroatoms, or changes of the supporting material can significantly alter their physicochemical properties. Because synthesis of size- and shape-controlled NP catalysts is challenging, the ability to computationally predict the most favorable NP structures for a catalytic reaction of interest is an in-demand skill that can help accelerate and streamline the material optimization process. Fundamentally, simulations of NP model systems present unique challenges to computational scientists. Not only must considerable methodological hurdles be overcome in performing calculations with hundreds to thousands of atoms while retaining appropriate accuracy to be able to probe the desired properties. Also, the data generated by simulations of NPs are typically more complex than data from simulations of, for example, single crystal surface models, and therefore often requires different data analysis strategies. To this end, the present work aims to review analytical methods and data analysis strategies that have proven useful in extracting thermodynamic trends from NP simulations.
+
Footnote †: dagger}\) Electronic Supplementary Information (ESI) available: An online tutorial for implementing many of the presented methods is available via [https://bjk24.github.io/in-silico-review/intro.html](https://bjk24.github.io/in-silico-review/intro.html).
## 1 Introduction
Deployment of heterogeneous catalysts in nanoparticulate form has various benefits. Not only do nanoparticles (NPs) promise high mass activity due to the more favorable surface-to-volume ratio compared to catalysts with grain sizes in the micro- or millimeter range. NPs can also express heightened or outright different catalytic properties compared to the bulk material as a result of finite- and quantum-size effects and due to the availability of a plethora of chemically distinct surface reaction sites. However, synthesis of shape- and size-controlled NPs is challenging, and Ostwald ripening as well as other degradation effects can impact the longevity of NP catalysts.[1] In order to streamline material optimization cycles, interest has therefore been growing in computational approaches that can predict favorable catalyst candidate structures for a reaction of interest. More and more, computational science is asked to establish structure-activity relationships for model NP catalysts in the 1 to 5 nm range and to investigate degradation mechanisms.
Historically, computational investigation of NP properties was usually carried out by performing density functional theory (DFT) calculations of single crystal model surfaces that correspond to the surface facets of a NP of interest.[2, 3, 4, 5, 6] However, results from surface models are often not transferable to NPs for several reasons. Firstly, the number of chemically different surface sites on a NP is larger than on single crystal surface models. In particular, the highly reactive, undercoordinated edge and vertex sites are hard to represent using a surface model.[7] Furthermore, when using stepped model surfaces to mimic the lower coordination of NP edge sites, one finds that such models contain both convex and concave surface structures while NPs typically only contain one type.[8] Another disparity between surface models and actual NPs is that quantum-size effects can affect the properties of clusters and small NPs in non-systematic ways.[9, 10] In fact, not all properties observed for NPs will converge to the bulk limit as the system size increases.[11] Surface models are therefore unsuitable to study certain NP properties, even if the surface model is used as a stand-in for very large NPs. Finally, investigation of catalyst degradation processes using surface models and DFT can be arduous given the computational limitations with regards to system size and time scale that such processes typically occur at.
|
2303.12893
|
Hyperbolic polaritons in topological nodal ring semimetals
|
In mirror-symmetric systems, there is a possibility of the realization of
extended gapless electronic states characterized as nodal lines or rings.
Strain induced modifications to these states lead to emergence of different
classes of nodal rings with qualitatively different physical properties. Here
we study optical response and the electromagnetic wave propagation in type I
nodal ring semimetals, in which the low-energy quasiparticle dispersion is
parabolic in momentum $k_x$ and $k_y$ and is linear in $k_z$. This leads to a
highly anisotropic dielectric permittivity tensor in which the optical response
is plasmonic in one spatial direction and dielectric in the other two
directions. The resulting normal modes (polaritons) in the bulk material become
hyperbolic over a broad frequency range, which is furthermore tunable by the
doping level. The propagation, reflection, and polarization properties of the
hyperbolic polaritons not only provide valuable information about the
electronic structure of these fascinating materials in the most interesting
region near the nodal rings but also pave the way to tunable hyperbolic
materials with applications ranging from anomalous refraction and waveguiding
to perfect absorption in ultrathin subwavelength films.
|
Ashutosh Singh, Maria Sebastian, Yuanping Chen, Po-Yao Chang, Alexey Belyanin
|
2023-03-22T20:11:07Z
|
http://arxiv.org/abs/2303.12893v1
|
# Hyperbolic polaritons in topological nodal ring semimetals
###### Abstract
In mirror-symmetric systems, there is a possibility of the realization of extended gapless electronic states characterized as nodal lines or rings. Strain induced modifications to these states lead to emergence of different classes of nodal rings with qualitatively different physical properties. Here we study optical response and the electromagnetic wave propagation in type I nodal ring semimetals, in which the low-energy quasiparticle dispersion is parabolic in momentum \(k_{x}\) and \(k_{y}\) and is linear in \(k_{z}\). This leads to a highly anisotropic dielectric permittivity tensor in which the optical response is plasmonic in one spatial direction and dielectric in the other two directions. The resulting normal modes (polaritons) in the bulk material become hyperbolic over a broad frequency range, which is furthermore tunable by the doping level. The propagation, reflection, and polarization properties of the hyperbolic polaritons not only provide valuable information about the electronic structure of these fascinating materials in the most interesting region near the nodal rings but also pave the way to tunable hyperbolic materials with applications ranging from anomalous refraction and waveguiding to perfect absorption in ultrathin subwavelength films.
_Introduction:_ The quantification of topological properties of condensed matter systems in the last decade has been driven to a large extent by the studies of Dirac and Weyl semimetals [1; 2]. In these materials, the conduction and the valence bands merge at isolated points in the Brillouin zone such that the low-energy quasiparticles mimic the physics of Dirac and Weyl fermions with speed much lower than light. Low-energy optical spectroscopy provides a unique opportunity for their energy-resolved studies near band crossings, which is not always possible by other means. Perhaps the most direct consequence of the Weyl fermion dispersion is a linear-in-frequency conductivity [3; 4; 5; 6; 7], with modifications due to anisotropic dispersion [8] and with temperature playing an important role due to the quadratic dependence of the density of states on quasiparticle energy [9; 10]. A lot of effort has been spent on extracting the topological features of these materials from their optical properties; see, e.g., [11; 12; 13; 14; 15] and references therein.
In a rather new class of topological semimetals known as nodal line semimetals, the conduction and valence bands touch along a line or a ring (loop) [16]. Different classes of nodal rings have been proposed, e.g., hybrid nodal rings [17], spin gapless nodal rings [18], topological nodal rings in carbon networks [19], antiperovskites [20], semimetallic carbon tetrarings [21], and orthorhombic \(C_{16}\)[22]. Among many interesting features displayed by the nodal ring semimetals (NRSM) are unusual Landau level quantization [23] and drumhead surface states [24; 25]. Furthermore, the bulk energy dispersion is highly anisotropic in momentum space, as shown in Fig. 1(a) and Fig. 1(b). Moreover, abrupt changes in the Fermi surface topology occur when the quasiparticle energy is tuned in the vicinity of the energy gap parameter (Fig. 1(c)). A direct consequence of this feature appears in the density of states (DOS), Fig. 1(d).
One can fully expect that these unusual electronic properties of NRSM result in a peculiar and even unique optical response. Previous studies were mainly focused on the derivation of the linear optical conductivity spectra [26; 27; 28] as well as the second-order conductivity in symmetry-broken NRSM [29]. However, the aspect of the optical response which provides most insight into the physical properties, and also the one most closely connected to experiment, is the propagation, absorption, reflection/refraction, and polarization properties of the normal EM modes of the material, or the polaritons. In this paper we focus on type I NRSM, in which the connection between the fascinating properties of the polaritons and the underlying electronic structure is very intuitive. One obvious property of the polaritons in NRSM stems from the fact that the low-energy quasiparticle dispersion is parabolic in momentum \(k_{x}\) and \(k_{y}\) but is linear in \(k_{z}\). This leads to uniaxial anisotropy of the dielectric permittivity tensor in which the optical response is plasmonic in one spatial direction and dielectric in the other two directions. The resulting polaritons in the bulk material split into so-called ordinary and extraordinary modes, and the extraordinary mode becomes hyperbolic over a broad frequency range, which can easily extend to 1-2 eV and which is furthermore tunable by the doping level. Note that in "conventional" Weyl semimetals with nodal points the hyperbolic dispersion was only predicted in high magnetic fields and with the Fermi level tuned to the band crossing points [11]. Note also that in the relatively better studied group of open nodal line semimetals the hyperbolic dispersion has recently been observed with tip-enhanced infrared spectroscopy [30]. In other kinds of anisotropic crystals, such as hexagonal boron nitride, the hyperbolic dispersion typically exists in a narrow mid-infrared frequency range defined by the separation between anisotropic phonon resonances [31]. Hyperbolic materials are of course highly desirable for applications as they exhibit a plethora of unique properties such as negative refraction, propagation through subwavelength apertures, and waveguiding by ultrathin films.
The existence of two types of polaritons and their hyperbolic character defines all aspects of the EM wave interaction with NRSM. Here we only briefly describe a few of them, hoping to stimulate subsequent studies and experiments.
_Electron states in NRSM_: An effective low-energy Hamiltonian which describes different types of the NRSM can be written as [19]
\[H({\bf k})=\left(\begin{array}{cc}t_{1}\mathscr{G}(k_{x},k_{y})&it_{2}\sin(k _{z}a)\\ -it_{2}\sin(k_{z}a)&\Delta+\gamma t_{1}\mathscr{G}(k_{x},k_{y})\end{array} \right)\, \tag{1}\]
where \(\mathscr{G}(k_{x},k_{y})=2-\cos(k_{x}a)-\cos(k_{y}a)\), \(t_{1}\) and \(t_{2}\) are the hopping parameters, \(a\) is the lattice spacing, \(\Delta\) is the gap at the \(\Gamma\) point and \(\gamma\) is the band tuning parameter which takes value \(-1\) for type-I NRSM and \(0<\gamma<1\) for type-II NRSM. A third class of topological NRSM comprises of merging type-I and type-II materials for which \(\gamma\) as well as \(\mathscr{G}(k_{x},k_{y})\) changes.
The nodal lines are protected by the mirror symmetry \(M_{z}\), \(M_{z}^{-1}H({\bf k})M_{z}=H(\bar{\bf k})\) with \(\bar{\bf k}=(k_{x},k_{y},-k_{z})\) and \(M_{z}=\sigma_{z}\). One can visualize the nodal lines as the Berry flux tubes in the momentum space. These Berry flux tubes are robust objects due to quantization of the flux to integer multiples of \(\pi\).
In this letter, we focus on type-I nodal rings with \(\gamma=-1\), and we further take \(t_{1}=t_{2}=t\) for simplicity. All results can be easily generalized for \(t_{1}\neq t_{2}\) if needed for specific compounds. One should expect the lattice constant \(a\) to be of the order of 0.1-0.3 nm, whereas the hopping energy \(t\) is typically on the scale of several eV. To fix the numerical value of the product _at_ for the plots, we assume that the "Fermi velocity" \(v_{F}\), i.e., the linear slope of the electron dispersion in Fig. 1(b), satisfies \(\hbar a^{-1}v_{F}=t\), whereas its ratio to the speed of light is \(v_{F}/c=1/300\). This is true within a factor of 2 for most Dirac materials. The parameter \(\Delta\) could vary in wide limits. The optimal situation for optical studies of topological nodal ring states is when \(\Delta\) is small as compared to \(t\), so that the nodal rings and characteristic optical transitions at photon energies \(\sim\Delta\) are near the center of the Brillouin zone and well separated from higher-energy transitions between any trivial remote bands. As we see below, this will also maximize the optical anisotropy. We will set \(\Delta=0.2t\) for further discussion. The corresponding electron energy dispersion is shown in Fig. 1.
The quasiparticle energy dispersion for the Hamiltonian in Eq.(1) is given as
\[\varepsilon_{\lambda{\bf k}}=\frac{\Delta}{2}+\frac{\lambda}{2}\sqrt{(\Delta- 2\mathscr{G}(k_{x},k_{y})t)^{2}+4t^{2}\sin^{2}(k_{z}a)}\, \tag{2}\]
where \(\lambda=+1(-1)\) for conduction (valence) band. In the electric dipole approximation the interband optical transitions are vertical. The transition energy for a quasiparticle at momentum \({\bf k}\) is given by the difference between the conduction and the valence band energies,
\[\hbar\omega_{\bf k}=\sqrt{(\Delta-2\mathscr{G}(k_{x},k_{y})t)^{2}+4t^{2}\sin^ {2}(k_{z}a)}. \tag{3}\]
The normalized eigenvectors are
\[|\Psi_{\lambda{\bf k}}\rangle=\frac{1}{\mathscr{N}_{\lambda}}\begin{pmatrix}i(- \Delta+2t\mathscr{G}(k_{x},k_{y})+\lambda\hbar\omega_{\bf k})\\ 2t\sin(k_{z}a)\end{pmatrix}\, \tag{4}\]
with \(\mathscr{N}_{\lambda}=\sqrt{(-\Delta+2t\mathscr{G}(k_{x},k_{y})+\lambda\hbar \omega_{\bf k})^{2}+4t^{2}\sin^{2}(k_{z}a)}\).
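As a quick numerical cross-check of Eqs. (1)-(2) (an illustrative sketch in reduced units \(t=a=1\), not part of the original calculations), the analytic dispersion can be compared against a direct diagonalization of the \(2\times 2\) Hamiltonian:

```python
# Sketch: verify the dispersion (2) against direct diagonalization of the
# Hamiltonian (1) for type-I NRSM (gamma = -1, t1 = t2 = t). Units: t = a = 1.
import numpy as np

t, a, Delta, gamma = 1.0, 1.0, 0.2, -1.0

def H(kx, ky, kz):
    G = 2 - np.cos(kx * a) - np.cos(ky * a)
    return np.array([[t * G, 1j * t * np.sin(kz * a)],
                     [-1j * t * np.sin(kz * a), Delta + gamma * t * G]])

def eps_analytic(kx, ky, kz, lam):
    G = 2 - np.cos(kx * a) - np.cos(ky * a)
    return Delta / 2 + lam / 2 * np.sqrt((Delta - 2 * G * t) ** 2
                                         + 4 * t ** 2 * np.sin(kz * a) ** 2)

rng = np.random.default_rng(1)
for kx, ky, kz in rng.uniform(-np.pi, np.pi, size=(5, 3)):
    ev = np.linalg.eigvalsh(H(kx, ky, kz))   # ascending: valence, then conduction
    assert np.allclose(ev, [eps_analytic(kx, ky, kz, -1), eps_analytic(kx, ky, kz, +1)])
print("Eq. (2) matches numerical diagonalization of Eq. (1).")
```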
_Optical permittivity_: In equilibrium at temperature \(T\) and chemical potential \(\mu\), the linear response optical conductivity is computed within the Kubo framework [32]. We will take \(k_{B}T=t/200\) for numerical plots to include thermally excited carriers. In order to incorporate scattering-related losses at the phenomenological level, we have introduced a decay term, \(\hbar\Gamma=0.005t\). The current operator components are \(\hat{j}_{\alpha}=\hbar^{-1}\partial_{k_{\alpha}}\hat{H}\), where \(\alpha=\{x,y,z\}\). For the Hamiltonian (1) they become
\[\hat{\mathbf{j}}=\frac{eat}{\hbar}\{\sin(k_{x}a)\hat{\sigma}_{z},\sin(k_{y}a)\hat {\sigma}_{z},-\cos(k_{z}a)\hat{\sigma}_{y}\}. \tag{5}\]
We also add the background permittivity (\(\epsilon_{b}\)) due to the sum of contributions from remote bands not included in the Hamiltonian (1), and assume it to be isotropic and with negligible dispersion within the frequency range of interest to us. Its exact value shifts the plots in Fig. 2 but does not change the qualitative physical behavior; we will use \(\epsilon_{b}=15\) as a reasonable number in the infrared. The resulting dielectric tensor \(\hat{\epsilon}\) is expressed in terms of the conductivity (in SI units) as \(\hat{\epsilon}(\omega)=\epsilon_{b}\mathbb{I}_{3\times 3}+i\hat{\sigma}(\omega)/( \omega\epsilon_{0})\).
Figure 1: Energy dispersion in (a) \(k_{z}=0\), and (b) \(k_{y}=0\) momentum planes for type-I NRSM described by the Hamiltonian in Eq. (1). The vertical axis is normalized by \(t\). (c) Constant energy surfaces. For energies lower than \(\Delta\) the momentum distribution forms a toroidal shape. Increasing energy deforms the toroid and it collapses into a drum-like structure for energies greater than \(\Delta\). (d) The density of states, \(\mathcal{N}(\varepsilon)\), normalized by \(a^{-3}t^{-1}\) as a function of energy \(\varepsilon\) normalized by \(t\) for \(\Delta=0.2t\).
Due to the symmetry of the system, only the diagonal terms of the conductivity tensor survive. The details of the conductivity derivation and analytic results are provided in the Supplementary Material (SM). The general structure and scaling of the diagonal permittivity components is given by \(\epsilon_{\alpha\alpha}(\omega)=\epsilon_{b}-g\alpha_{F}\mathscr{I}_{\alpha \alpha}/(2\pi^{2}a\omega)\), where \(\mathscr{I}_{\alpha\alpha}\) are dimensionless integrals specified in the SM, \(\alpha_{F}=e^{2}\left(4\pi\epsilon_{0}\hbar c\right)^{-1}\) is the fine structure constant, and \(c\) is the speed of light.
To the leading order in the long wavelength limit, cylindrical symmetry is preserved so that \(\epsilon_{yy}\approx\epsilon_{xx}\). However, \(\epsilon_{zz}\) behaves differently, as shown in Fig. 2. First of all, the magnitude of the matrix elements of the \(j_{z}\) component of the current is higher than the ones for \(j_{x,y}\) components, as one can see from Eq. (5) and the SM. Indeed, when \(\Delta\ll t\) the main contribution comes from the states with \(|k_{\alpha}a|\ll 1\) in the vicinity of the nodal rings. In this case the ratio of matrix elements \(|j_{z}/j_{x,y}|\sim t/\Delta\gg 1\), yielding a higher magnitude of \(\epsilon_{zz}\) as compared to \(\epsilon_{xx}\). Second, while at the lowest frequencies all permittivity components are dominated by intraband plasmonic response (even when the Fermi level is at the band crossing energy, \(\mu/t=0.1\), because free carriers are still present at finite temperature), with increasing frequency the behavior of \(\epsilon_{xx}\) becomes dielectric, whereas the \(\epsilon_{zz}\) component maintains plasmonic behavior over a significantly broader frequency range. This extreme anisotropy with opposite signs of the real parts of the dielectric tensor components gives rise to the hyperbolic dispersion of the polaritons.
_Properties of NRSM polaritons_: Maxwell's equations for the electric field vector \(\mathbf{E}\propto\exp(i\mathbf{q}\mathbf{r}-i\omega t)\) of monochromatic EM waves propagating in a bulk crystal with permittivity tensor \(\hat{\epsilon}\) can be written as
\[\mathbf{n}\left(\mathbf{n}\cdot\mathbf{E}\right)-n^{2}\mathbf{E}+\hat{\epsilon}\mathbf{E}=0, \tag{6}\]
where \(\mathbf{n}=\mathbf{q}c/\omega\). For a diagonal permittivity tensor, the solution of the corresponding dispersion equation consists of two linearly polarized normal modes (polaritons) which are often called an ordinary and extraordinary wave. Since \(\epsilon_{yy}=\epsilon_{xx}\), we can consider without loss of generality the propagation with the wave vector in the \((xz)\)-plane, i.e., \(\mathbf{n}=(n_{x},0,n_{z})\). Then the refractive indices of the ordinary and extraordinary modes are given by
\[n_{o}^{2}=\epsilon_{xx}\text{ and }n_{e}^{2}=\frac{\epsilon_{xx}\epsilon_{zz}}{ \epsilon_{xx}\sin^{2}\theta+\epsilon_{zz}\cos^{2}\theta}\, \tag{7}\]
where \(\theta=\cos^{-1}(n_{z}/|\mathbf{n}|)\). The electric field vector of the extraordinary mode lies in the \((xz)\)-plane, whereas the one of the ordinary mode is along \(y\).
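The angular dependence in Eq. (7) can be made concrete with representative permittivity values close to those quoted for Fig. 3 (\(\epsilon_{xx}\sim 13\), \(\epsilon_{zz}\sim-20\)); the sketch below is purely illustrative, with small imaginary parts added by hand rather than taken from the Kubo calculation:

```python
# Sketch: evaluate Eq. (7) for representative permittivities (illustrative values,
# not computed from the conductivity here).
import numpy as np

eps_xx = 13 + 0.5j    # near hbar*omega ~ 0.13 t, mu = 0.2 t
eps_zz = -20 + 1.0j   # small imaginary parts keep n_e finite at the resonance

theta = np.linspace(0.01, np.pi / 2 - 0.01, 500)
n_o = np.sqrt(eps_xx)                                  # ordinary wave, angle independent
n_e = np.sqrt(eps_xx * eps_zz / (eps_xx * np.sin(theta) ** 2
                                 + eps_zz * np.cos(theta) ** 2))

# |n_e| peaks where Re[eps_xx] sin^2(theta) + Re[eps_zz] cos^2(theta) ~ 0,
# i.e. at the hybrid plasmon-polariton resonance discussed in the text.
i_res = np.argmax(np.abs(n_e))
print("ordinary index:", n_o)
print("resonance angle (deg):", round(float(np.degrees(theta[i_res])), 1),
      "|n_e| there:", round(float(np.abs(n_e[i_res])), 1))
```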
Figure 3(a) shows an example of the constant-frequency surface for the dispersion equation of the two modes. The surfaces are plotted for \(\mu=0.2t\) and the
Figure 2: Real (solid line) and imaginary (dashed line) parts of (a) \(\epsilon_{xx}\) and (b) \(\epsilon_{zz}\) as a function of photon energy, for two different values of the chemical potential. The value of \(\mu/t=0.1\) corresponds to the chemical potential exactly at the band crossing, as one can see from Fig. 1(d).
Figure 3: (a) The solution of the dispersion equation (6) for the extraordinary wave (yellow hyperboloid) and ordinary wave (red sphere) at a constant photon energy \(\hbar\omega\sim 0.13t\) and \(\mu=0.2t\), so that \(\epsilon_{xx}\sim 13\) and \(\epsilon_{zz}\sim-20\). (b) Re\([n_{e}]\) and Im\([n_{e}]\) as a function of photon energy for \(\mu=0.2t\). (c) Real part of \(n_{e}(\theta)\) for \(\mu=0.1t,\hbar\omega=0.1t\) (green), \(\mu=0.1t,\hbar\omega=0.15t\) (purple), \(\mu=0.2t,\hbar\omega=0.1t\) (red) and \(\mu=0.2t,\hbar\omega=0.15t\) (blue). (d) Schematic for the reflection of normally incident EM wave from an ultrathin NRSM film of thickness \(\ell\) placed on top of a substrate of complex refractive index \(n_{d}\). (e) Ordinary (extraordinary) wave reflectivity shown in red line (circle) for \(\mu=0.1t\), and in blue line (circle) for \(\mu=0.2t\). The film thickness \(\ell=300a\). (f) Color plot of the ordinary wave reflectivity for \(\mu=0.1t\) as a function of the thickness of the film (y-axis) and the photon frequency (x-axis). Here we assumed \(n_{d}=1.4+4.0i\).
frequency \(\hbar\omega\sim 0.13t\) for which the real parts of the dielectric tensor components have a much greater magnitude than the imaginary parts, so that \(\epsilon_{xx}\sim 13\) and \(\epsilon_{zz}\sim-20\). For the ordinary waves the surface is a sphere which is a particular case of the usual Fresnel ellipsoid. At the same time, for the extraordinary modes the surface is a hyperboloid. Its cross-section at \(n_{y}=0\) is
\[\frac{n_{x}^{2}}{\epsilon_{zz}}+\frac{n_{z}^{2}}{\epsilon_{xx}}=1. \tag{8}\]
In the range of frequencies where \(\text{Re}[\epsilon_{zz}]<0\) and \(\text{Re}[\epsilon_{xx}]>0\) the EM waves are able to propagate in certain directions with \(|\mathbf{n}|\gg 1\), i.e., \(|\mathbf{q}|\gg\omega/c\), as one can also see in Fig. 3(c).
The dominant feature in the spectra of hyperbolic polaritons is a characteristic peak in the extraordinary wave dispersion and absorption near the frequency which minimizes the denominator in the expression (7) for \(n_{e}^{2}\), see the spectra in Fig. 3(b) for \(\theta=\pi/3\). The resonance exists for any angle \(\theta\neq 0\) or \(\pi/2\). A similar phenomenon in classical anisotropic plasmas would be a hybrid plasmon-polariton resonance, corresponding to hybridization between longitudinal plasmons and transverse EM waves. Note also the existence of a photonic band gap at frequencies above the resonance, where the real part of the refractive index would have dropped to zero in the absence of an imaginary part of the permittivity tensor. The real part of the refractive index drops very steeply at the photonic band gap boundary, indicating a small group velocity \(v_{group}\ll c\) in this region. This behavior is similar to the dispersion of extraordinary magnetopolaritons in nodal-point Weyl semimetals [11].
Obviously, all of the above spectral and angular features in polariton propagation and absorption can have important practical applications in thin-film EM waveguides, modulators, switches etc. We will mention just one more potential application, which has been pointed out for strongly absorbing materials: ultrathin-film perfect absorbers [33]. Consider an EM wave normally incident on an ultrathin (strongly subwavelength) NRSM film, as in Fig. 3(d). In this case destructive interference between reflections from the front and back facets of the film can result in spectral windows of zero reflectivity even for a film much thinner than the incident wavelength. This is illustrated in Fig. 3(e) for a fixed film thickness and in Fig. 3(f) for a range of thicknesses of a few tens of nm, depending on the exact value of the lattice period \(a\). The zero reflectivity region is tunable by doping, film thickness, and also depends on the substrate. As was shown in [33], the best results are obtained for metallic or highly doped semiconducting substrates with mostly imaginary refractive index, such as the one chosen for Fig. 3(e,f).
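A rough way to estimate this behavior is the standard single-film (Airy) reflection formula at normal incidence, assuming the optic (\(z\)) axis lies in the film plane so that the two polarizations see \(\epsilon_{xx}\) and \(\epsilon_{zz}\), respectively. The sketch below is illustrative only: the permittivity values are placeholders, and the one-way phase uses the unit conventions stated above (\(\hbar a^{-1}v_{F}=t\), \(v_{F}/c=1/300\)), not the full calculation behind Fig. 3(e,f).

```python
# Illustrative sketch of thin-film reflectivity on a lossy substrate (Airy formula,
# normal incidence); not the authors' calculation.
import numpy as np

def reflectivity(n_f, n_d, phase):
    """|r|^2 for vacuum / film (one-way phase `phase`) / substrate."""
    r01 = (1 - n_f) / (1 + n_f)
    r12 = (n_f - n_d) / (n_f + n_d)
    r = (r01 + r12 * np.exp(2j * phase)) / (1 + r01 * r12 * np.exp(2j * phase))
    return np.abs(r) ** 2

hw_over_t = 0.13        # photon energy in units of t
ell_over_a = 300.0      # film thickness in units of the lattice constant
vF_over_c = 1.0 / 300.0
n_d = 1.4 + 4.0j        # substrate index as in Fig. 3(e,f)

for label, eps in [("ordinary (eps_xx)", 13 + 0.5j), ("extraordinary (eps_zz)", -20 + 1.0j)]:
    n_f = np.sqrt(complex(eps))
    phase = n_f * hw_over_t * ell_over_a * vF_over_c   # = n_f * omega * ell / c
    print(label, reflectivity(n_f, n_d, phase))
```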
In conclusion, topological nodal ring semimetals are natural hyperbolic optical materials, with associated extreme optical anisotropy, anomalous refraction, and strong plasmon-polariton resonances. Their unique combination of optical properties is highly sensitive to the material parameters and can be used for optical spectroscopy of the nodal rings. Ultrathin films of NRSM can find a number of applications as infrared waveguides, modulators, switches, and antireflection coatings. This work has been supported in part by the Air Force Office for Scientific Research Grant No. FA9550-21-1-0272 and National Science Foundation Award No. 1936276.
|
2304.07587
|
Laminar post-stall wakes of tapered swept wings
|
While tapered swept wings are widely used, the influence of taper on their
post-stall wake characteristics remains largely unexplored. To address this
issue, we conduct an extensive study using direct numerical simulations to
characterize the wing taper and sweep effects on laminar separated wakes. We
analyze flows behind NACA 0015 cross-sectional profile wings at post-stall
angles of attack $\alpha=14^\circ$--$22^\circ$ with taper ratios
$\lambda=0.27$--$1$, leading edge sweep angles $0^\circ$--$50^\circ$, and semi
aspect ratios $sAR =1$ and $2$ at a mean-chord-based Reynolds number of $600$.
Tapered wings have smaller tip chord length, which generates a weaker tip
vortex, and attenuates inboard downwash. This results in the development of
unsteadiness over a large portion of the wingspan at high angles of attack. For
tapered wings with backward-swept leading edges unsteadiness emerges near the
wing tip. On the other hand, wings with forward-swept trailing edges are shown
to concentrate wake shedding structures near the wing root. For highly swept
untapered wings, the wake is steady, while unsteady shedding vortices appear
near the tip for tapered wings with high leading edge sweep angles. For such
wings, larger wake oscillations emerge near the root as the taper ratio
decreases. While the combination of taper and sweep increases flow
unsteadiness, we find that tapered swept wings have more enhanced aerodynamic
performance than untapered and unswept wings, exhibiting higher time-averaged
lift and lift-to-drag ratio. The current findings shed light on the fundamental
aspects of flow separation over tapered wings in the absence of turbulent flow
effects.
|
Jean Hélder Marques Ribeiro, Jacob Neal, Anton Burtsev, Michael Amitay, Vassilios Theofilis, Kunihiko Taira
|
2023-04-15T15:59:33Z
|
http://arxiv.org/abs/2304.07587v2
|
# Laminar post-stall wakes of tapered swept wings
###### Abstract
While tapered swept wings are widely used, the influence of taper on their post-stall wake characteristics remains largely unexplored. To address this issue, we conduct an extensive study using direct numerical simulations to characterize the wing taper effect on laminar separated wakes. We analyze flows behind NACA 0015 cross-sectional profile wings at post-stall angles of attack \(\alpha=14^{\circ}\)-\(22^{\circ}\) with taper ratios \(\lambda=0.27\)-\(1\), leading edge sweep angles \(0^{\circ}\)-\(50^{\circ}\), and semi aspect ratios \(sAR=1\) and \(2\) at a mean-chord-based Reynolds number of 600 and a freestream Mach number of \(0.1\). Wing taper reduces the tip chord length, weakening the tip vortex, and attenuates the inboard downwash over the wing. This results in unsteadiness to develop over a large portion of the wingspan at high angles of attack. Tapered wings with backward-swept leading edges develop unsteadiness near the wing tip, while wings with forward-swept trailing edges concentrate wake oscillations at the wing root. For highly swept untapered wings, the wake is steady, while tapered wings with high leading edge sweep angles exhibit wake shedding near the tip. Wake oscillations are larger towards the root for lower taper ratios. Moreover, the effects of taper on the aerodynamic forces over tapered wings are studied, revealing that the combined effect of taper and sweep can improve the aerodynamic performance of the wing. The current findings shed light on the fundamental effects of wing taper on the post-stall wake dynamics.
## 1 Introduction
Flow separation over aerodynamic lifting devices has been a subject of research interest for decades, especially for small-scale air vehicles (Mueller, 2001; Anderson, 2010; Taira and Colonius, 2009; Zhang et al., 2020, 2020). To further understand wake dynamics at post-stall flow conditions, it is important to characterize how the wing planform geometry influences the separated wake features. This characterization has remained largely unexplored for tapered wings.
In aircraft design, tapered wings are used to achieve an approximation to the elliptic aerodynamic loading over the wingspan. From the point of view of manufacturing, tapered wings are more feasible and less geometrically complex than elliptic wings (Prandtl, 1920; McCormick, 1995). The usage of tapered wings in aeronautics has called for initial studies to explore the wing taper effect, especially for high-Reynolds-number flows (Millikan, 1936; Anderson, 1936; Irving, 1937; Soule and Anderson, 1940; Falkner, 1950). For the laminar flow regime, the effect of wing taper on the wake dynamics is critical, as the local Reynolds number is drastically reduced near the wing tip. For flows at a chord-based Reynolds number \(Re_{c}=\mathcal{O}(10^{4})\), wing taper affects the aerodynamic loading with an increase in the pressure drag (Traub, 2013; Traub et al., 2015). For \(Re_{c}=\mathcal{O}(10^{3})\), the aerodynamic characteristics are affected significantly
by the viscous effects and the influence of wing taper on the wakes remains elusive, especially for massively separated flows.
The wake dynamics of wings at post-stall flow conditions has attracted attention of aeronautical researchers for many decades. The early efforts to understand post-stall flows over wings were performed over two-dimensional (2-D) wings (Abbott & Von Doenhoff, 1959; Gaster, 1967; Tobak & Peake, 1982). Valuable insights were obtained from 2-D analysis characterizing the behavior of the separated laminar boundary layer (Horton, 1968) and describing the relation between vortex shedding structures, adverse pressure gradient, and shear layer characteristics (Pauley _et al._, 1990). Self-excitation mechanisms of laminar separation bubbles, in the absence of incoming disturbances exciting the shear layer instability were also studied (Theofilis _et al._, 2000), providing evidence for the appearance of the vortical patterns predicted by flow topological arguments (Hornung & Perry, 1984; Perry & Hornung, 1984). Subsequent analyses of spanwise homogeneous three-dimensional (3-D) low-Reynolds number separated flow over 2-D wings (He _et al._, 2017_a_) corroborated the existence of 2-D traveling shear-layer and 3-D stationary spanwise-periodic linear instabilities and analyzed their modal and non-modal linear growth. The analysis of 2-D flows around canonical wings continues providing fundamental insights on the effect of angle of attack and Reynolds number on the wake shedding structures (Lin & Pauley, 1996; Huang _et al._, 2001; Yarusevych _et al._, 2009; Rossi _et al._, 2018).
For separated flows, the increase in Reynolds number and the angle of attack can yield a 3-D flow field even around 2-D (or quasi-2-D) wings (Bippes & Turk, 1980; Winkelman & Barlow, 1980; Schewe, 2001; Hoarau _et al._, 2003; Pandi & Mittal, 2019). In such cases, spanwise fluctuations emerge, producing 3-D vortical structures in the wake. Floquet analysis of the time-periodic wake flow ensuing linear growth of Kelvin-Helmholtz instability on the wing associated these 3-D vortical structures with secondary linear instability of the spanwise-homogeneous wake (He _et al._, 2017_a_).
For finite wings, three-dimensionality of the vortical wake structures results from the tip effects. Around the wing tip, a strong streamwise vortex is formed, yielding 3-D wake formation with strong and complex nonlinear interactions (Winkelman & Barlow, 1980; Freymuth _et al._, 1987; Taira & Colonius, 2009; Zhang _et al._, 2020\(b\); Neal & Amitay, 2023). Tip vortices induce downwash inboard over the wing, which reduces the effective angle of attack near the tip, even suppressing stall formation (Dong _et al._, 2020; Toppings & Yarusevych, 2021) and the wake shedding for low-aspect-ratio wings (Taira & Colonius, 2009; Zhang _et al._, 2020_b_). Moreover, the tip vortex has been extensively studied to reveal its influence on the wake dynamics, aerodynamic forces, and pitch moments (Francis & Kennedy, 1979; Green & Acosta, 1991; Devenport _et al._, 1996; Pelletier & Mueller, 2000; Birch _et al._, 2004; Torres & Mueller, 2004; Buchholz & Smits, 2006; Yilmaz & Rockwell, 2012; Ananda _et al._, 2015; He _et al._, 2017\(b\); Toppings & Yarusevych, 2021, 2022). By understanding the tip vortex formation, evolution, and instability mechanisms, it is possible to develop control techniques to improve the aerodynamic performance around finite wings (Gursul & Wang, 2018; Edstrand _et al._, 2018; Navrose _et al._, 2019).
Separated wakes are also affected by wing sweep, which stabilizes flow oscillations and reduces wake three-dimensionality (Zhang _et al._, 2020\(b\); Ribeiro _et al._, 2022, 2023_b_). The flow over swept wings induces a spanwise flow component within the stalled region, significantly impacting wake characteristics (Harper & Maki, 1964). The stabilizing effect around laminar separated flows is related to the emergence of the sweep-induced spanwise flow in the stalled region (Wygnanski _et al._, 2014; Ribeiro _et al._, 2022). For laminar flow regimes, a number of experimental and numerical efforts were carried out to examine the effects of backward and forward wing sweep in many different configurations (Breitsamter & Laschka, 2001; Yen & Hsu, 2007; Zhang _et al._, 2020\(a\); Zhang & Taira, 2022; Burtsev _et al._, 2022; Ribeiro _et al._, 2023_b_).
Thus far, most studies have not considered wing taper effects on low-Reynolds number flows at high angles of attack. Only recently, a combined experimental, numerical, and theoretical effort
has been initiated towards the understanding of the laminar flow over tapered wings in post-stall flow conditions (Ribeiro _et al._, 2023; Neal _et al._, 2023; Burtsev _et al._, 2023). Effects of taper have been analyzed for planforms with tubercles to analyze swimming of whales (Wei _et al._, 2018), for flows over tapered cylinders (Piccirillo & Van Atta, 1993; Techet _et al._, 1998; Valles _et al._, 2002), and for separated wakes over tapered plates (Narasimhamurthy _et al._, 2008). For wing planforms with continuously variable chord-length over the wingspan, the delta wings have also received substantial attention (Rockwell, 1993; Gursul _et al._, 2005; Taira & Colonius, 2009). For laminar post-stall flows, wing taper was studied using trapezoidal plates (Huang _et al._, 2015). Nonetheless, there still is a lack of fundamental studies to understand the role of taper ratio, and how it interplays with leading edge (LE) and trailing edge (TE) sweep angle effects for massively separated laminar flows.
For laminar separated flows, the effect of wing taper remains elusive, while the combined effect of taper and sweep on post-stall flows remains unexplored for low-Reynolds number flows. The present work reveals the effects of taper in the laminar wake dynamics and the influence of LE and TE sweep angles on the vortical interactions through a comprehensive campaign of direct numerical simulations of 3-D flows over finite NACA 0015 wings with different taper ratios and sweep angles. We characterize the stalled wakes of wings with backward-swept LE and forward-swept TE, identifying the combined effects of taper and sweep angle. Our work is organized as follows. In section 2, we present our wing planform geometry definitions and the setup for direct numerical simulations. In section 3, we offer a detailed analysis and classification of the wake structures, highlighting the effects of taper and sweep. Finally, we conclude our study by summarizing our findings in section 4.
## 2 Problem setup
We consider laminar flows over tapered wings with a NACA 0015 cross-sectional profile. The spatial coordinates of streamwise, transverse, and spanwise directions are denoted by \((x,y,z)\), respectively. The origin is placed at the LE of the wing root, as shown in figure 1. The NACA 0015 profile is defined on the \((x,y)\) plane, which is extruded from the wing root in the spanwise direction to form the 3-D wing. Wing taper is defined by the taper ratio \(\lambda=c_{\text{tip}}/c_{\text{root}}\), where \(c_{\text{tip}}\) and \(c_{\text{root}}\) are tip and root chord-lengths, respectively, as shown in figure 1(\(a\)). For all wings considered herein, the chord length decreases linearly from root to tip. The non-dimensional mean chord length \(c\) at the spanwise location of \(z=b/2\) is taken to be the characteristic length used to non-dimensionalize all spatial variables.
The semi aspect ratio of the wings is set as \(sAR=b/c=1\) and \(2\), where \(b\) is the half-span length, as shown in figure 1(\(d\)). We consider half-span wing models with symmetry imposed at the root. The angles of attack, \(\alpha=14^{\circ}\), \(18^{\circ}\), and \(22^{\circ}\), are defined between the airfoil chord line and the streamwise direction. The present wing geometries have a sharp trailing edge and a straight-cut wing tip. The mean-chord-based Reynolds number is set to \(Re_{c}=600\) and the freestream Mach number to \(M_{\infty}=0.1\) for the present study. Taper changes the local Reynolds number \(Re_{L_{c}}\), defined as a function of the spanwise location (Traub _et al._, 2015). For the present study, the difference between \(c_{\text{tip}}\) and \(c_{\text{root}}\) accounts for a maximum variation of \(60\%\) in \(Re_{L_{c}}\) along the span, from \(\min(Re_{L_{c}})=250\) to \(\max(Re_{L_{c}})=950\) at the lowest taper ratio.
The wings considered in the present work have varied taper ratio and wing sweep. For tapered swept wings, the 3-D computational setup is sheared in the chordwise direction and \(\Lambda_{\text{LE}}\) is defined between the \(z\)-direction and the LE. Tapered wings have different \(\Lambda_{\text{LE}}\) and \(\Lambda_{\text{TE}}\) respectively, as shown in figure 1(\(a\)). In this work, we explore the combined effects of the LE and TE sweep angles on the wake dynamics, defining LE sweep angles between \(0\leq\Lambda_{\text{LE}}\leq 50^{\circ}\) and TE sweep angles between \(-30^{\circ}\leq\Lambda_{\text{TE}}\leq 50^{\circ}\). Herein, negative sweep angles indicate a forward sweep, as
shown in figure 1(\(a\)), while a positive sweep angle represents a backward sweep. Through the aforementioned \(\Lambda_{\rm LE}\) and \(\Lambda_{\rm TE}\), taper ratios between \(0.27\leq\lambda\leq 1\) are analyzed.
Traditionally in aeronautics, the sweep angle of tapered swept wings is defined with respect to the quarter-chord line (Anderson, 2010, 1936; Falkner, 1950), denoted by \(\Lambda_{c/4}\), as shown in figure 1(\(a\)). Anderson (1999) considered the half-chord sweep angle \(\Lambda_{c/2}\), such that the aerodynamic load distribution becomes independent of the taper ratio. Straight tapered wings, those with \(\Lambda_{c/4}=0^{\circ}\), were studied by Traub _et al._ (2015). On the other hand, Irving (1937) considered the effect of the LE and TE sweep angles. For the present laminar separated flows, due to the crucial role played by the LE vortex in defining the wake characteristics (Videler _et al._, 2004; Eldredge & Jones, 2019), we focus on \(\Lambda_{\rm LE}\) and \(\Lambda_{\rm TE}\) as the main independent parameters for our analysis and describe their influence on the wake dynamics. We note, however, that it is also possible to translate the findings reported herein with respect to the traditional quarter-chord and half-chord sweep angles, \(\Lambda_{c/4}\) and \(\Lambda_{c/2}\), respectively.
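For reference, the linear-taper planform relations used above can be sketched as follows (illustrative code, not part of the simulation framework); under the stated normalization (mean chord of unity at mid-span, \(Re_{c}=600\)), it recovers the approximate 250-950 range of the local Reynolds number quoted earlier and relates \(\Lambda_{\rm LE}\), \(\Lambda_{\rm TE}\), \(\Lambda_{c/4}\), and \(\Lambda_{c/2}\) for a given taper ratio.

```python
# Sketch of the planform relations for a linearly tapered half-wing:
# c(z) = c_root + (c_tip - c_root) z/b, with c(b/2) = 1 as reference length,
# and the sweep of the line at chord fraction n from x_n(z) = z tan(Lambda_LE) + n c(z).
import numpy as np

def planform(lam, Lambda_LE_deg, sAR, Re_c=600.0):
    b = sAR * 1.0                      # half-span in mean-chord units
    c_root = 2.0 / (1.0 + lam)         # so that the mid-span chord equals 1
    c_tip = lam * c_root
    tan_le = np.tan(np.radians(Lambda_LE_deg))

    def sweep(n):  # sweep angle (deg) of the line at chord fraction n
        return np.degrees(np.arctan(tan_le - n * c_root * (1.0 - lam) / b))

    return {"Lambda_TE": sweep(1.0), "Lambda_c4": sweep(0.25), "Lambda_c2": sweep(0.5),
            "Re_tip": Re_c * c_tip, "Re_root": Re_c * c_root}

# Example: lambda = 0.27, Lambda_LE = 10 deg, sAR = 2 gives a quarter-chord sweep of
# roughly 1.9 deg and local Reynolds numbers spanning roughly 255-945.
print(planform(0.27, 10.0, 2))
```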
Figure 1: Problem setup for tapered wings. (\(a\)) Geometrical parameters shown in a wing planform with \(sAR=b/c=2\), \(\alpha=18^{\circ}\), \(\lambda=0.27\), and \(\Lambda_{\rm LE}=18.4^{\circ}\). (\(b\)) Computational domain and (\(c\), \(d\)) grids are shown with 2-D planes at \(z/c=1\) and \(y/c=-0.5\), respectively.
### Direct numerical simulations
We conduct direct numerical simulations with a compressible flow solver _CharLES_(Khalighi _et al._, 2011; Bres _et al._, 2017), which uses a second-order accurate finite-volume method in space with a third-order accurate total-variation diminishing Runge-Kutta scheme for time integration. The computational domain is discretized with a C-type grid with mesh refinement near the wing and in the wake. With the origin at the airfoil LE on the symmetry plane \((x/c,\ y/c,z/c)=(0,0,0)\), the computational domain extends over \((x/c,y/c,z/c)\in[-20,25]\times[-20,20]\times[0,20]\), which yields a maximum blockage ratio of \(0.8\%\) for the wing with \(\lambda=0.27\), \(sAR=2\), and \(\alpha=22^{\circ}\). The computational setup is shown in figure 1(_b_-_d_).
We have prescribed a Dirichlet boundary condition of \((\rho,u_{x},u_{y},u_{z},p)=(\rho_{\infty},U_{\infty},0,0,p_{\infty})\) at the inlet and farfield boundaries, where \(\rho\) is density, \(p\) is pressure, and \(u_{x}\), \(u_{y}\), and \(u_{z}\) are the velocity components in the \(x\), \(y\), and \(z\) directions, respectively. A symmetry boundary condition is prescribed along the root plane, \(z/c=0\). The subscript \(\infty\) denotes the freestream values. A no-slip adiabatic boundary condition is set on the airfoil surface. For vortical structures to convect out of the domain, a sponge layer is applied over \(x/c\in[15,25]\) with the target state being the running time-averaged state over 5 convective time units (Freund, 1997). Simulations start from uniform flow and are performed with a constant acoustic Courant-Friedrichs-Lewy (CFL) number of 1 until transients are washed out of the computational domain. The time to flush out the transients varies depending on the wing planform and angle of attack, generally ranging from 50 to 300 convective time units. After the transients are washed out of the domain, flows are simulated with a constant time step defined such that the CFL number is smaller than one. Flow statistics are collected for 100 to 300 convective time units, depending on the flow field characteristics and spectral content, to ensure convergence. A detailed discussion on verification is provided in appendix A.
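As a back-of-the-envelope illustration of the acoustic CFL constraint (not the solver's actual time-stepping logic, and with a hypothetical minimum cell size), the permissible time step scales as:

```python
# Illustrative estimate of the acoustic-CFL time-step limit in non-dimensional units
# (U_inf = 1, mean chord c = 1, M_inf = 0.1 so the speed of sound is 10).
U_inf, M_inf = 1.0, 0.1
a_sound = U_inf / M_inf
dx_min = 5.0e-3            # hypothetical smallest cell size, not the actual grid value
CFL = 1.0
dt = CFL * dx_min / (U_inf + a_sound)
print(f"dt ~ {dt:.2e} convective time units at CFL = {CFL}")
```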
## 3 Results
### Overview of tapered wing wakes
Post-stall wakes around tapered wings exhibit a rich diversity of flow structures depending on the taper ratio, but they are also globally affected by the aspect ratio, the angles of attack and sweep, and, as shown in figure 2, by the combined effects of LE and TE sweep angles. Taper effects on laminar separated flows are entwined with the effects of the LE and TE sweep angles. By studying straight taper, that is, wings with \(\Lambda_{c/2}\) and \(\Lambda_{c/4}\) approximately zero, we can separate the effects of taper from those of sweep and other geometrical parameters.
For instance, let us explore the flows over wings with \((\lambda,\Lambda_{\text{LE}})=(1,0^{\circ})\) and compare them to the wake structures around \((\lambda,\Lambda_{\text{LE}})=(0.27,10^{\circ})\) wings; these flows have \(\Lambda_{c/4}=0^{\circ}\) and \(1.8^{\circ}\), respectively. There is a strong reduction of the tip vortex length for tapered wings caused by the reduction in the tip chord length, but the downstream root shedding is similar, forming hairpin-like vortices in the wake. The near-wakes are different for these two flows. For the tapered wing, the root shedding presents spatial fluctuations over the spanwise vortex on the suction side. Such oscillations are absent in the vortical structure that forms over the untapered wing.
We can further explore the separate taper effects on the wake dynamics by considering wings with \(\Lambda_{c/2}\approx 0^{\circ}\), as shown for the similar flow patterns that develop at the root region for \((\lambda,\Lambda_{\text{LE}})=(1,0^{\circ})\) and \((0.27,18.4^{\circ})\) wings. Here, with a lower taper ratio, tip vortices are considerably weakened when compared to the untapered wing tip vortices. Over the tapered wing, vortex rolls are slanted and aligned with \(\Lambda_{\text{LE}}\), showing that the LE sweep angle is important to the near-wake shedding behavior.
For tapered wings, the backward-swept LE effect can be observed by fixing the \(\Lambda_{\text{TE}}=0^{\circ}\) while the LE is swept backwards with \(\Lambda_{\text{LE}}=18.4^{\circ}\) and \(30^{\circ}\) for \(\lambda=0.5\) and \(0.27\), respectively. Such taper causes the wake shedding structures to move towards the wing tip region. An opposite effect
is shown in the top row of figure 2, for flows over forward-swept TE wings. These planforms have fixed \(\Lambda_{\rm LE}=0^{\circ}\), while \(\Lambda_{\rm TE}=-18.4^{\circ}\) and \(-30^{\circ}\) for \(\lambda=0.5\) and \(0.27\), respectively. For these cases, we observe that taper reduces the tip vortex length and affects the topology of the root shedding structures. Let us further study the taper effect for highly swept wings, shown in the bottom row of figure 2, with a fixed \(\Lambda_{\rm LE}=30^{\circ}\), while \(\Lambda_{\rm TE}=11.6^{\circ}\) and \(0^{\circ}\) for \(\lambda=0.5\) and \(0.27\), respectively. Here, taper increases the amplitude of wake oscillations. The effects of LE and TE sweep are discussed in further detail in section 3.3.
The variety of wake structures that appear around tapered wings, as seen in figure 2, calls for a proper characterization of the wake dynamics that associates their behavior with the wing planform geometry. The above discussions suggest that taper affects the location where unsteadiness emerges and the characteristics of the vortical structures. In the following section, we map the wake characteristics of tapered wings.
### Wake classification
We now classify the wake patterns associated with tapered wing planforms. Our criterion is based on the examination of the flow characteristics downstream of the wing on a 2-D plane at \(x/c=4\), where we identify the spatial locations of the maximum time-averaged \(\overline{Q}\) and of the maximum fluctuating component \(Q^{\prime}=Q-\overline{Q}\), where \(Q\) is the second invariant of the velocity gradient tensor used to identify the vortical structures (Jeong & Hussain, 1995). Points with maximum \(\overline{Q}\) or \(Q^{\prime}\) located within \(0\leqslant z/(c\ sAR)<0.5\) are labeled root-dominant, while points with maximum \(\overline{Q}\) or \(Q^{\prime}\) within \(0.5\leqslant z/(c\ sAR)\leqslant 1\) are labeled tip-dominant. We consider the flow to be steady when the maximum fluctuating value of \(Q^{\prime}\) is smaller than \(0.1\) at \(x/c=4\). Using the root and tip locations of \(\overline{Q}\) and \(Q^{\prime}\), we classify the wakes into three unsteady and two steady regimes, as shown in figure 3, where the steady-unsteady threshold (black dashed line) is computed via biharmonic spline interpolation and shown at the contour level of \(Q^{\prime}=0.1\). Instantaneous flow fields for
Figure 3: Classification map of laminar wakes over tapered wings with (\(a\)-\(c\)) \(sAR=1\) and (\(d\)-\(f\)) 2. Black dashed lines mark transition from steady to unsteady flows. (\(g\)-\(k\)) Five distinct wake patterns are shown for \(sAR=2\) wings visualized with time-averaged \(\overline{Q}=1\) in gray and instantaneous \(Q^{\prime}=0.2\) colored by \(u^{\prime}_{x}\).
all tapered wings shown in figure 3 are provided in the appendix B using isosurfaces of \(Q=1\) colored by streamwise velocity \(u_{x}\).
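As an illustration of the classification criterion above, the sketch below computes \(Q\) from the velocity-gradient tensor and applies the root/tip labeling based on the locations of maximum \(\overline{Q}\) and \(Q^{\prime}\); the array layout, variable names, and the use of the time-maximum of \(Q^{\prime}\) at each point are assumptions made for the sketch, not the authors' post-processing code.

```python
import numpy as np

def q_criterion(grad_u):
    """Second invariant of the velocity gradient tensor.

    grad_u : array (..., 3, 3) with grad_u[..., i, j] = du_i/dx_j.
    Q = 0.5 * (||Omega||^2 - ||S||^2), with S and Omega the symmetric
    and antisymmetric parts of grad_u.
    """
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    Om = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))
    return 0.5 * (np.sum(Om**2, axis=(-2, -1)) - np.sum(S**2, axis=(-2, -1)))

def classify_wake(Q_mean, Q_fluc, z_over_span, threshold=0.1):
    """Label a wake from data on the x/c = 4 plane.

    Q_mean      : time-averaged Q at each point of the plane
    Q_fluc      : maximum (over time) of Q' = Q - Q_mean at each point
    z_over_span : z/(c*sAR) of each point, in [0, 1]
    """
    if Q_fluc.max() < threshold:
        return "steady"
    side = lambda z: "root" if z < 0.5 else "tip"
    loc_mean = z_over_span.ravel()[np.argmax(Q_mean)]
    loc_fluc = z_over_span.ravel()[np.argmax(Q_fluc)]
    return (f"unsteady: max Q-bar {side(loc_mean)}-dominant, "
            f"max Q' {side(loc_fluc)}-dominant")
```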
The first flow regime (\(\Delta\)) is composed of tapered-wing wakes that have both maximum \(\overline{Q}\) and \(Q^{\prime}\) dominant over the root region. Such wakes appear for tapered wings with low \(\Lambda_{\mathrm{LE}}\). For such wings, the tip vortex is short in length and a \(\Lambda_{\mathrm{TE}}\leq 0^{\circ}\) concentrates the shedding at the wing root. The second flow regime of unsteady wakes (\(\diamond\)) occurs when both maximum \(\overline{Q}\) and \(Q^{\prime}\) are found over the tip region. Such wakes are observed around tapered wings over a broad range of \(\lambda\) values and are strongly associated with high \(\Lambda_{\mathrm{LE}}\). The flow over these wings often exhibits hairpin-type vortices in the wake.
The third flow regime of unsteady wakes (\(\sphericalangle\)) around tapered wings presents maximum \(\overline{Q}\) at the wing tip with maximum \(Q^{\prime}\) at the root. This wake characteristic is often seen for tapered wings with high \(\lambda\) and wings with low \(\Lambda_{\mathrm{LE}}\), as those allow for the formation of a strong tip vortex, near the maximum \(\overline{Q}\) location, and wake shedding near the root. Weak unsteady flow oscillations can appear over the tip vortex, as shown in figure 3, but the most energetic vortices are generally observed over the root region.
There are two distinct flow regimes of steady wakes identified herein, as shown in figure 3. The first one (\(\blacktriangledown\)) comprises wakes with a steady streamwise vortex that extends into the wake. Such flows are mainly exhibited around highly swept \(sAR=2\) wings with \(\lambda\geq 0.5\). The second steady-wake regime (\(\blacksquare\)) comprises flows with no significant wake structures, with maximum \(\overline{Q}\leq 0.1\) in the wake; it is commonly observed for \(sAR=1\) wings and for \(sAR=2\) wings with high \(\Lambda_{\mathrm{LE}}\) and low \(\lambda\).
For \(sAR=2\) wings, the transition from steady to unsteady wakes depends on \(\lambda\) for each \(\Lambda_{\mathrm{LE}}\). Generally, wakes with lower taper ratios sustain unsteadiness at higher LE sweep angles than untapered wings. In the following section, we examine in detail how taper affects the wake unsteadiness, as well as the separate influence of \(\Lambda_{\mathrm{LE}}\) and \(\Lambda_{\mathrm{TE}}\) on the wake dynamics.
### Wake characteristics
#### 3.3.1 Tapered wings with straight LE and forward-swept TE
Let us take a closer look at the effect of wing taper for straight LE wings with forward-swept TE, as it allows us to isolate the \(\Lambda_{\mathrm{TE}}\) effect on the wake dynamics. For tapered wings with \(\lambda=0.27,0.5,0.7\), and \(1\), the planforms we study in this section have \(\Lambda_{\mathrm{TE}}=-30^{\circ},-18.4^{\circ},-10^{\circ}\), and \(0^{\circ}\), respectively. The negative \(\Lambda_{\mathrm{TE}}\) indicates forward sweep. The LE is fixed with \(\Lambda_{\mathrm{LE}}=0^{\circ}\). For such wings, taper causes the wake shedding to concentrate near the root region, as shown in figure 4(\(a\)).
Tapered wings have a smaller tip chord length. This weakens the tip vortices and decreases their streamwise length, which alleviates the inboard downwash over the wing. Such tip-vortex attenuation and the aforementioned concentration of shedding over the root region are almost independent of the angle of attack, with only minor differences observed between the flows over tapered wings at \(\alpha=14^{\circ}\) and \(22^{\circ}\). The influence of the incidence angle appears in the formation of secondary vortices near the wing tip. For untapered wings at high incidence angles, the appearance of a secondary tip vortex emerging from the LE is known (DeVoria & Mohseni, 2017; Zhang _et al._, 2020). For the tapered wings shown herein, at \(\alpha=22^{\circ}\), there is also a secondary steady vortex that emerges from the TE near the wing tip region. This structure is slanted towards the root region, suggesting that perturbations arising from the TE can be advected through this vortex towards the downstream wing root region.
To gain further insight into the wake unsteadiness characteristics, we study the unsteady flow behavior over the wingspan using probe measurements of velocity fluctuations over \(x/c\in[3,4]\), \(y/c\in[-1.5,0.5]\). The \(x/c\) location is arbitrary and does not significantly affect the results. The \(y/c\) range encompasses the region where vortical structures appear. Over this region, we probe the norm of the root-mean-square (RMS) velocity fluctuations, \(\|\mathbf{u}^{\prime}\|_{2}\). This measurement is used as a metric to represent the spanwise distribution of flow unsteadiness, as shown in figure 4(\(b\)).
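A minimal sketch of this probe-based metric is given below, assuming the probe signals are stored as time series of the three velocity components; the names and data layout are illustrative rather than taken from the authors' post-processing.

```python
import numpy as np

def rms_fluctuation_norm(u_t):
    """Norm of the RMS velocity fluctuations at one probe.

    u_t : array (n_time, 3) of velocity samples (u_x, u_y, u_z).
    Returns ||u'_rms||_2 = sqrt(sum_i <u_i'^2>).
    """
    u_prime = u_t - u_t.mean(axis=0)          # subtract the time mean
    rms = np.sqrt((u_prime**2).mean(axis=0))  # per-component RMS
    return np.linalg.norm(rms)

def spanwise_distribution(probes):
    """Spanwise distribution of unsteadiness from a list of probes.

    probes : list of (z_location, time_series) pairs (illustrative layout).
    """
    z = np.array([zp for zp, _ in probes])
    amp = np.array([rms_fluctuation_norm(ts) for _, ts in probes])
    return z, amp
```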
By examining the spanwise \(\|\mathbf{u}^{\prime}\|_{2}\) distribution in figure 4(\(b\)) for untapered wings (blue), we notice that the flow unsteadiness peaks at \(z/c\approx 0.5\) and decays towards the wing tip for both angles of attack. For tapered wings, the spanwise \(\|\mathbf{u}^{\prime}\|_{2}\) curves are independent of the taper ratio for \(\lambda\leq 0.7\). For such wings, taper attenuates the \(\|\mathbf{u}^{\prime}\|_{2}\) peak. The peak of \(\|\mathbf{u}^{\prime}\|_{2}\) also moves towards \(z/c\approx 0\), showing a concentration of unsteadiness towards the wing root of these tapered wings.
We further characterize the effect of taper by analyzing the spatio-temporal distribution of \(u_{y}\) from probes located at \((x,y)/c=(3,-0.5)\) along the spanwise direction, as it reveals how wing taper affects the shedding behavior. Herein, the temporal frequency is characterized through the Strouhal number, defined as \(St=f(c\sin\alpha/U_{\infty})\), where \(f\) is the frequency. For comparison, the wake spectrum for the flow over an untapered wing is shown on the left of figure 4(\(c\)). For this wing, there is a narrow peak of oscillations at \(St\approx 0.14\). The wake spectrum is clean, with a vortex-shedding pattern comprised of spanwise-dominated vorticity near the root, forming hairpin vortices, and a steady streamwise vortex at the wing tip. For tapered wings, the spectrum is broadband
as a result of the mixing of streamwise and spanwise vortices near the wing root. Even though the wake exhibits more mixing, the spanwise structures remain dominant and are related to the PSD peak at \(St\approx 0.13\). We note that the PSD peak occurs at a lower \(St\) than the one observed for the untapered wing, as the flow oscillations that populate the downstream wake arise from the root region of the wing, where the chord length is larger.
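The spectra can be estimated, for instance, with Welch's method and rescaled into Strouhal-number form, as in the sketch below; the sampling interval, window length, and probe handling are illustrative assumptions rather than the authors' processing choices.

```python
import numpy as np
from scipy.signal import welch

def wake_spectrum(u_y, dt, chord=1.0, alpha_deg=22.0, U_inf=1.0):
    """PSD of a transverse-velocity probe, expressed in Strouhal number.

    u_y : array of u_y samples at one probe
    dt  : sampling interval (convective time units)
    Returns (St, PSD) with St = f * c * sin(alpha) / U_inf.
    """
    f, psd = welch(u_y, fs=1.0 / dt, nperseg=min(1024, len(u_y)))
    St = f * chord * np.sin(np.deg2rad(alpha_deg)) / U_inf
    return St, psd

# Illustrative usage with an invented probe signal:
# St, psd = wake_spectrum(u_y_samples, dt=0.05)
# print("peak at St =", St[np.argmax(psd[1:]) + 1])   # skip the DC bin
```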
#### 3.3.2 Tapered wings with backward-swept LE and straight TE
Next, let us analyze the taper effects for wings with backward-swept LE and fixed straight TE, to understand and separate the effects of \(\Lambda_{\mathrm{LE}}\) on the global wake. For such wings with \(\lambda=0.27,0.5,0.7\), and \(1\), the planforms have \(\Lambda_{\mathrm{LE}}=30^{\circ},18.4^{\circ},10^{\circ}\), and \(0^{\circ}\), respectively. A positive \(\Lambda_{\mathrm{LE}}\) indicates backward sweep. The TE is fixed with \(\Lambda_{\mathrm{TE}}=0^{\circ}\). For such wings, taper yields an opposite effect on the wake characteristics when compared with those discussed in section 3.3.1. Herein, taper causes the wake unsteadiness to move towards the wing tip, as shown in figure 5(\(a\)).
Concurrently, the tip vortex weakens for tapered wings with the shortened \(c_{\mathrm{tip}}\), which alleviates the inboard downwash near the tip, similar to what was observed for the wings in section 3.3.1. This increases the effective angle of attack near the tip and allows the flow to detach from
Figure 5: Isosurfaces of flow fields around tapered wings with \(sAR=2\), \(\Lambda_{\mathrm{TE}}=0^{\circ}\), \(\lambda=0.27\) and \(0.7\), \(\alpha=14^{\circ}\) and \(22^{\circ}\). Time-averaged \(Q=1\) isosurface is shown in gray. Instantaneous \(Q^{\prime}=0.2\) isosurface is shown colored by \(u_{y}^{\prime}\). (\(b\)) Spanwise distribution of \(\|\mathbf{u}^{\prime}\|_{2}\) for different \(\lambda\) for \(\Lambda_{\mathrm{LE}}=0^{\circ}\) wings. (\(c\)) Spatial-temporal (top) and PSD (bottom) of \(u_{y}\) distribution over the spanwise direction from probes located at \((x,y)/c=(3,-0.5)\) for the \(\lambda=0.27\) and \(0.7\) tapered wings at \(\alpha=22^{\circ}\) shown above.
the wing surface and form wake shedding structures near \(z/c\approx 1\), as shown in figure 5(\(a\)). We quantify the effect of wing taper on flow unsteadiness through the wingspan distribution of \(\|\mathbf{u}^{\prime}\|_{2}\), as shown in figure 5(\(b\)). For both angles of attack, taper affects the wake shedding distribution over the wingspan. For \(\lambda=0.27\) (purple), at \(\alpha=22^{\circ}\), the peak of \(\|\mathbf{u}^{\prime}\|_{2}\) appears near the quarter-span at \(z/c\approx 1.25\), with a gradual transition towards \(z/c\approx 0.5\) from \(\lambda=0.27\) to \(1\).
As seen in figure 5(\(b\)), tapered wings with backward-swept LE and straight TE exhibit unsteadiness over a larger spanwise length than untapered wings. For instance, let us observe the spanwise \(\|\mathbf{u}^{\prime}\|_{2}\) distribution for wings at \(\alpha=22^{\circ}\). For the untapered wing (blue), \(\|\mathbf{u}^{\prime}\|_{2}\geqslant 0.02\) over \(0\leqslant z/c\leqslant 1\), which is the region where significant unsteady wake structures appear. For the tapered wing with \(\lambda=0.27\), \(\|\mathbf{u}^{\prime}\|_{2}\geqslant 0.02\) over \(0\leqslant z/c\leqslant 1.6\); hence, large unsteady structures can be observed over a larger spanwise portion of the wake.
The spatial-temporal distribution of the transverse velocity \(u_{y}\) over the spanwise direction also shows that the wake of backward-swept LE and straight TE tapered wings exhibits 3-D vortical structures that result in a broadband wake spectrum, as shown in figure 5(\(c\)). The wake, however, is mainly dominated by large quasi-2-D spanwise aligned vortex rolls observed for all taper ratios. For \(\lambda=0.27\), as unsteadiness appears over a larger portion of the wingspan, the stronger shedding structures are hairpin-like vortices that appear between \(0.5\leqslant z/c\leqslant 1.5\), as shown on the right of figure 5(\(c\)).
Wing taper affects the tip vortex, which becomes consistently smaller than the tip vortex around untapered wings, as shown in figure 6(\(a\)). Tip vortices are important for the aerodynamic characteristics of the wing (Francis & Kennedy, 1979; Green & Acosta, 1991; Devenport _et al._, 1996; Birch _et al._, 2004; Taira & Colonius, 2009; Zhang _et al._, 2020; Dong _et al._, 2020; Toppings & Yarusevych, 2021, 2022) and, in the case of tapered wings, due to the small \(c_{\text{tip}}\), they are attenuated as a result of the reduced pressure difference between the upper and
Figure 6: Streamwise circulation \(|\Gamma_{x}|\) of the tip vortex around (\(a\)) an untapered wing and tapered wings with \(\lambda=0.5\) with (\(b\)) straight LE and forward-swept TE and (\(c\)) backward-swept LE and straight TE. Flow field visualized with grey-colored isosurfaces of \(\overline{\omega}_{x}=-2\) and 2-D slices with isolines of \(\overline{\omega}_{x}\) at specific \(x/c\) locations. (\(d\)) The magnitude of \(|\Gamma_{x}|\) computed for the isocontour of \(\overline{\omega_{x}}=-2\) for tapered swept wings with different planform configurations.
lower sides of the wing near the tip. Beyond that, even for wings with the same \(\lambda\), the tip vortex behavior can shift in the \(x\)-direction depending on how the wing is tapered, i.e., whether it has a backward-swept LE or a forward-swept TE, as shown in figure 6(\(b\),\(c\)).
We can observe how wing taper affects the strength of the tip vortex by analyzing \(\overline{\omega_{x}}\) near the tip, as shown in figure 6(\(a\)-\(c\)). The isosurfaces of \(\overline{\omega_{x}}\) and the contour lines at representative \(x/c\) locations show the decay of the vorticity magnitude for tapered wings with \(\lambda=0.5\). However, the effect of taper is not the same for both wings, even though they share the same taper ratio. This difference can be quantified by computing the streamwise circulation \(\Gamma_{x}=\oint_{C}\mathbf{u}\cdot\mathrm{d}\boldsymbol{l}\). Here, \(C\) is the isocontour of \(\overline{\omega_{x}}=-2\), as shown in figure 6(\(d\)). The \(\overline{\omega_{x}}\) level is carefully chosen to isolate the tip vortex.
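One way to evaluate this quantity, sketched below, is to invoke Stokes' theorem and integrate \(\overline{\omega_{x}}\) over the area enclosed by the chosen isocontour on a 2-D slice, which equals the line integral of velocity around the contour; the uniform-slice data layout and variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def streamwise_circulation(omega_x, y, z, level=-2.0):
    """|Gamma_x| from a 2-D slice of mean streamwise vorticity.

    omega_x : array (ny, nz) of mean streamwise vorticity on the slice
    y, z    : 1-D coordinate arrays of the slice
    level   : isocontour level chosen to isolate the tip vortex

    By Stokes' theorem, the line integral of u around the contour C
    equals the area integral of omega_x over the enclosed region.
    """
    dy = np.gradient(y)                       # local cell widths in y
    dz = np.gradient(z)                       # local cell widths in z
    area = np.outer(dy, dz)                   # cell areas on the slice
    mask = omega_x <= level                   # region inside the level contour
    return abs(np.sum(omega_x[mask] * area[mask]))
```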
The tip vortex diffuses downstream of the wing, which makes the \(|\Gamma_{x}|\) profiles decay slowly (Edstrand _et al._, 2018; Zhang _et al._, 2020). In general, for tapered wings the reduction in \(c_{\text{tip}}\) is the main cause of the tip-vortex weakening; thus, the circulation \(|\Gamma_{x}|\) decays with \(\lambda\) at any distance from the wing tip. The circulation \(|\Gamma_{x}|\) further reveals how different types of wing taper can affect the strength of the tip vortex, as shown in figure 6(\(d\)). For \(\lambda=0.5\), \(|\Gamma_{x}|\) is higher for the straight-TE tapered wing at any streamwise distance from the wing. For \(\lambda=0.27\), however, the effects of LE and TE sweep on the tip vortex are minor and the \(|\Gamma_{x}|\) decay is similar for both wings.
#### 3.3.3 Tapered wings with high LE sweep angles
Let us also examine how taper affects wings with high LE sweep. For the swept wings discussed herein, with \(\Lambda_{\text{LE}}>30^{\circ}\), wake oscillations are strongly attenuated. For laminar flows over untapered wings with high sweep angles at moderate angles of attack, the wake becomes steady, while at high angles of attack, unsteadiness may develop in the wing tip region (Zhang _et al._, 2020; Ribeiro _et al._, 2023). For highly swept and tapered wings, the flow exhibits wake shedding for small \(\lambda\), as shown in figure 7(\(a\)).
Here, we analyze wings with a fixed \(\Lambda_{\text{LE}}=40^{\circ}\), while the TE is swept with \(\Lambda_{\text{TE}}=10^{\circ},21.6^{\circ},30^{\circ}\), and \(40^{\circ}\) for \(\lambda=0.27,0.5,0.7\), and \(1\), respectively. The onset of shedding for highly swept and tapered wings results from the distinct effects of \(\Lambda_{\text{LE}}\) and \(\Lambda_{\text{TE}}\). For the present tapered swept wings, the vortical structures emerging from the TE promote unsteadiness in the wake near the wing tip. For lower taper ratios, wings have a low \(\Lambda_{\text{TE}}\), which causes wake oscillations to appear and grow towards the root. Such effects show that, while a high \(\Lambda_{\text{LE}}\) stabilizes wake oscillations for untapered wings, the combination of wing taper and sweep can promote wake unsteadiness.
For instance, at \(\alpha=18^{\circ}\) the wake is steady for \(\lambda=0.7\) with long steady streamwise vortices developing from both LE and TE. At \(\lambda=0.5\), unsteadiness appears with vortex rolls at the wing tip, with wake shedding appearing for \(\lambda=0.27\). We quantify the wing taper effect in figure 7(\(b\)). For instance, for the wings with \(\lambda\geq 0.7\) at \(\alpha=18^{\circ}\), the flow is steady and \(\|\mathbf{u}^{\prime}\|_{2}\) is negligible in the wake. At \(\alpha=22^{\circ}\), \(\|\mathbf{u}^{\prime}\|_{2}\) is small for untapered wings, increasing considerably in magnitude and spanwise length as the taper ratio decreases. For highly swept tapered wings, the flow fluctuations are exhibited at the tip, further appearing over the midspan for the lower taper ratios.
The unsteady vortices exhibited in the wakes of tapered wings with high LE sweep angles behave as vortex-shedding structures, as shown by the probed \(u_{y}\) in the wake in figure 7(\(c\)). For \(\lambda=0.7\), the vortices appear as a consistent flow oscillation departing from the wing tip. For \(\lambda=0.27\), the wake is dominated by spanwise-aligned roll structures that occupy a large portion of the wingspan. As these structures develop from the region of the wingspan near the wing tip, which has a reduced chord length, their frequency \(St\approx 0.15\) is slightly higher than the shedding frequency of untapered wings. While the combination of taper and sweep increases wake unsteadiness, it may also have a positive effect on the aerodynamic performance of wings in post-stall laminar flow conditions, as we discuss in the following section.
### Aerodynamic forces
The difference in tapered wing planforms results in a variety of wake patterns that further affect the aerodynamic forces over the wing and their distribution over the wingspan. Herein, we report the aerodynamic forces through their lift and drag coefficients defined as
\[C_{L}=\frac{F_{y}}{\frac{1}{2}\rho U_{\infty}^{2}bc}\quad\text{and}\qquad C_{D}= \frac{F_{x}}{\frac{1}{2}\rho U_{\infty}^{2}bc}\, \tag{1}\]
where \(F_{x}\) and \(F_{y}\) are the \(x\) and \(y\) components, respectively, of the viscous and pressure forces integrated over the wing surface. We present the time-averaged \(\overline{C_{L}}\) and \(\overline{C_{L}/C_{D}}\) for the representative wings discussed in sections 3.3.1, 3.3.2, and 3.3.3, as shown in figure 8. The blue symbols present the aerodynamic loads for tapered wings with straight LE and forward-swept TE. The red symbols show the results for tapered wings with backward-swept LE and straight TE, while the yellow symbols represent tapered wings with \(\Lambda_{\text{LE}}=40^{\circ}\).
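A minimal sketch of this normalization is given below; the force histories and reference values used in the usage comment are placeholders rather than the simulation data.

```python
import numpy as np

def force_coefficients(F_x, F_y, rho, U_inf, b, c):
    """Lift and drag coefficients from integrated surface forces.

    F_x, F_y : streamwise and transverse force (pressure + viscous)
    rho, U_inf : freestream density and speed
    b, c     : span and chord giving the reference area b*c
    """
    q_ref = 0.5 * rho * U_inf**2 * b * c   # dynamic pressure * reference area
    return F_y / q_ref, F_x / q_ref        # (C_L, C_D)

# Time-averaged values from force histories (illustrative usage):
# CL_t, CD_t = force_coefficients(Fx_hist, Fy_hist, 1.0, 1.0, 2.0, 1.0)
# print(np.mean(CL_t), np.mean(CL_t) / np.mean(CD_t))
```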
The effect of wing taper on the aerodynamic loads is strongly dependent on how the wing planform is tapered. Whether it has LE or TE sweep is paramount to its overall lift and aerodynamic performance. Let us start from the untapered and unswept wings, marked by blue downward-pointing triangles at \(\lambda=1\). At each angle of attack, for lower taper ratios, \(\overline{C_{L}}\) and \(\overline{C_{L}/C_{D}}\) are higher for tapered wings with backward-swept LE and straight TE than for tapered wings with straight LE and forward-swept TE.
These results suggest that the backward-swept LE enhances the aerodynamic efficiency of the wing in post-stall laminar flow conditions. Untapered swept wings, however, present a lower \(\overline{C_{L}}\) for all angles of attack. It is the combination of a high LE sweep with wing taper that causes \(\overline{C_{L}}\) to increase, as seen by the yellow symbols in figure 8. In general, the aerodynamic performance \(\overline{C_{L}/C_{D}}\) of tapered wings with high LE sweep and lower taper ratios is also higher than that of other wing planforms. This shows that the combination of wing taper and sweep can be beneficial for laminar separated flows.
Moreover, we analyze the effect of wing taper on the sectional lift coefficients \(\overline{C_{l}}\), as shown in figure 9. The largest contribution to the overall lift comes from the wing root up to the quarter-span at \(b/2\) for all wings shown herein. For tapered wings with straight LE and forward-swept TE, shown in figure 9\((a)\), the contribution to lift from the root increases for \(\lambda=0.5\) and \(0.7\). For \(\lambda=0.27\), the lift decreases considerably over the entire wingspan. For tapered wings with backward-swept LE and straight TE, shown in figure 9\((b)\), the lift contribution from the tip decreases considerably, while the lift from the root increases for tapered wings.
For highly swept wings, with \(\Lambda_{\mathrm{LE}}\geq 40^{\circ}\), the largest contribution of lift comes from the wing root, decreasing over the wingspan towards the tip. For taper ratios \(\lambda=0.7\) and \(0.5\), lift increases mainly near the root when compared to the untapered wing load distribution. For wings with \(\lambda=0.27\), the effect of wing taper is to increase the overall contribution of lift over the entire wingspan. For this wing, the increase in lift near the wing root peaks at \(z/c\approx 0.2\). The overall increase in lift is also observed at the tip, supported by the emergence of unsteady shedding near the wing tip.
## 4 Conclusions
We have examined the influence of taper and sweep on the dynamics of wake structures for finite NACA 0015 wings with straight-cut tip at a Reynolds number of 600 and a Mach number
0.1. For this study, we have performed an extensive campaign of direct numerical simulations of flows over half-span wings with a symmetry boundary condition imposed at the wing root. The present numerical study spans a wide parameter space with angles of attack \(14^{\circ}\leq\alpha\leq 22^{\circ}\), aspect ratios \(sAR=1\) and \(2\), leading-edge sweep angles \(0^{\circ}\leq\Lambda_{\rm LE}\leq 50^{\circ}\), and taper ratios \(0.27\leq\lambda\leq 1\). This parameter space was chosen to characterize the effects of wing taper as well as the LE and TE sweep angles on the wake dynamics.
Through direct numerical simulations, we observe that the flow over unswept and untapered wings forms a strong tip vortex, which interacts with the spanwise vortex detaching from the wing surface at the root region. This flow yields a three-dimensional and unsteady wake for all angles of attack considered herein. Untapered and swept wings are observed to advect the shedding region towards the wing tip for lower angles of sweep. At higher sweep angles, the wake oscillations are attenuated, yielding a steady wake around wings at lower angles of attack.
Wing taper has a strong influence on the wake dynamics. For tapered wings, the LE and TE are not parallel and have a distinct influence on the flow structures within the stalled flow region. For tapered wings with fixed straight LE and forward-swept TE, taper concentrates shedding structures towards the wing root and yields a broadband spectral content downstream in the wake, as a result of increased mixing in that region. Beyond the unsteady wake shedding, the tip vortex is heavily affected by wing taper, its length being considerably reduced for tapered wings as the chord length decreases towards the tip.
For tapered wings with backward-swept LE and straight TE, the spanwise length over which wake unsteadiness is observed increases, as shedding is promoted over a larger portion of the wingspan. For this type of tapered wing planform, in contrast with the forward-swept-TE effect, the peak of wake unsteadiness moves towards the wing tip region for lower taper ratios. Moreover, for wings with high LE sweep, while we have observed that the flow is steady for \(\lambda=1\), our findings show that taper causes wake unsteadiness to appear. The wake oscillations develop near the wing tip for moderate taper ratios. For low \(\lambda\), wings with high LE sweep angles exhibit strong wake shedding structures occupying a large portion of the wingspan.
Through the detailed analysis of the wake structures, we also provide a map that classifies the wake behavior of tapered wings, associating it with the wing planform geometry and angle of attack. The map provides a unique description of the overall flow physics of the wakes around tapered wings and reveals, for each semi-aspect ratio and angle of attack, how the steady or unsteady flow behavior is related to the wing taper and LE sweep angle. The present study shows the effect of taper, as well as the effects of LE and TE sweep, and evaluates their impact on the formation of the wake structures.
Lastly, we show how wing taper affects the aerodynamic forces over the wing. We show that wings with the same taper ratio may present distinct overall lift and aerodynamic performance, as those are also influenced by the LE sweep of the wing. Our findings show that the combination of wing taper and high LE sweep can considerably improve the lift and aerodynamic performance of the wing in laminar post-stall flow conditions. The present insights gained on the effect of wing taper in the absence of turbulence serve as a stepping stone for future efforts that aim to study, interpret and control higher-Reynolds-number post-stall flows over tapered wings.
## Appendix A Grid verification
We verify the convergence of grid resolution for the numerical results using a wing with \((sAR,\alpha,\Lambda_{\mathrm{LE}},\lambda)=(2,22^{\circ},40^{\circ},0.27)\). This planform combines a high leading-edge sweep angle and the lowest taper ratio considered in the present study. Herein, we report the aerodynamic forces through their lift coefficients \(C_{L}\). Two meshes are used for verification: a medium and a refined mesh. The medium mesh refinement is the one used throughout the present work. This mesh has 80 grid points on both pressure and suction sides of the wing and 48 grid points along the wingspan, with a total of approximately \(3.1\times 10^{6}\) control volumes. The refined mesh has 120 grid points on pressure and suction sides, with 64 grid points along the wingspan, resulting in approximately \(4.3\times 10^{6}\) control volumes in total. For the refined mesh we have increased the temporal resolution by setting the CFL to 0.5. The quality of our medium mesh is assessed through the forces exerted over the wing and the instantaneous vortical elements as shown in figure 10.
## Appendix B A portfolio of flow fields around tapered wings
In this appendix, we provide flow field visualizations of the wake structures around all tapered wings considered in the present study. Flows around \(sAR=1\) wings at \(\alpha=14^{\circ}\), \(18^{\circ}\), and \(22^{\circ}\) are shown in figures 11, 12, and 13, respectively. Similarly, flows around \(sAR=2\) wings at \(\alpha=14^{\circ}\), \(18^{\circ}\), and \(22^{\circ}\) are shown in figures 14, 15, and 16, respectively. All flows are visualized using isosurfaces of \(Q=1\), colored by the streamwise velocity \(u_{x}\).
Figure 12: Instantaneous flow fields around wings of \(sAR=1\) at \(\alpha=18^{\circ}\) visualized using isosurfaces of \(Q=1\) colored by streamwise velocity \(u_{x}\).
Figure 14: Instantaneous flow fields around wings of \(sAR=2\) at \(\alpha=14^{\circ}\) visualized using isosurfaces of \(Q=1\) colored by streamwise velocity \(u_{x}\).
Figure 13: Instantaneous flow fields around wings of \(sAR=1\) at \(\alpha=22^{\circ}\) visualized using isosurfaces of \(Q=1\) colored by streamwise velocity \(u_{x}\).
Figure 16: Instantaneous flow fields around wings of \(sAR=2\) at \(\alpha=22^{\circ}\) visualized using isosurfaces of \(Q=1\) colored by streamwise velocity \(u_{x}\).
Figure 15: Instantaneous flow fields around wings of \(sAR=2\) at \(\alpha=18^{\circ}\) visualized using isosurfaces of \(Q=1\) colored by streamwise velocity \(u_{x}\).
## Declaration of interest
The authors report no conflict of interest.
|
2302.00166
|
Market-Based Coordination of Price-Responsive Demand Using Dantzig-Wolfe
Decomposition Method
|
With the increased share of Distributed Generation (DG) and Demand Responsive
(DR) loads in the power systems, new approaches based on the game theory
framework have been proposed to tackle the problem of coordination of Price
Responsive Devices (PRD). The PRDs are modeled as self-benefiting players who
try to optimize their consumption based on the price. In this paper, for the
first time, a new algorithm based on the Dantzig-Wolfe (DW) Decomposition
method to solve the coordination problem of self-benefiting PRDs in a
distributed fashion has been proposed. By utilizing the distributed nature of
Dantzig-Wolfe, the PRD's self-benefiting algorithms are modeled as sub-problems
of the DW, and the coordinator (or the grid operator) who collects energy
consumption of PRDs (their energy bid), solves the master problem of the DW and
calculate the price signal accordingly. The proposed algorithm is fast since
the subproblem in DW (which could be millions of PRDs) can be solved
simultaneously. Furthermore, based on the DW theory, if the PRDs subproblems
are convex, reaching the optimal point (Equal to Nash Equilibrium) in limited
iterations is guaranteed. A simulation with 8 participant households has been
conducted to evaluate the model. Each house is equipped with two types of
loads: an Electric Vehicle (EV) as a sample of interruptible loads and an
Electric Water Heater (EWH) as a sample of Thermostatically Control Loads
(TCA). The results show that when the algorithm reaches the optimal point, the
generation cost and the user payment (based on the marginal cost of generation)
decrease. Furthermore, the aggregate load's Peak-to-Average Ratio (PAR) decreases
significantly.
|
Foad Najafi, Matthias Fripp
|
2023-02-01T01:08:55Z
|
http://arxiv.org/abs/2302.00166v1
|
# Market-Based Coordination of Price-Responsive Demand Using Dantzig-Wolfe Decomposition Method
###### Abstract
With the increased share of Distributed Generation (DG) and Demand Responsive (DR) loads in power systems, new approaches based on the game theory framework have been proposed to tackle the problem of coordination of Price Responsive Devices (PRD). The PRDs are modeled as self-benefiting players who try to optimize their consumption based on the price. In this paper, for the first time, a new algorithm based on the Dantzig-Wolfe (DW) decomposition method is proposed to solve the coordination problem of self-benefiting PRDs in a distributed fashion. By utilizing the distributed nature of Dantzig-Wolfe, the PRDs' self-benefiting algorithms are modeled as sub-problems of the DW, and the coordinator (or the grid operator), who collects the energy consumption of the PRDs (their energy bids), solves the master problem of the DW and calculates the price signal accordingly. The proposed algorithm is fast, since the subproblems in the DW (which could be millions of PRDs) can be solved simultaneously. Furthermore, based on the DW theory, if the PRD subproblems are convex, reaching the optimal point (equal to the Nash equilibrium) in a limited number of iterations is guaranteed. A simulation with 8 participating households has been conducted to evaluate the model. Each house is equipped with two types of loads: an Electric Vehicle (EV) as a sample of interruptible loads and an Electric Water Heater (EWH) as a sample of Thermostatically Controlled Appliances (TCA). The results show that when the algorithm reaches the optimal point, the generation cost and the user payment (based on the marginal cost of generation) decrease. Furthermore, the aggregate load's peak-to-average ratio (PAR) decreases significantly.
Demand-side Management (DSM), distributed optimization, Dantzig-Wolfe algorithm, Price Responsive Devices (PRD), Appliance coordination
## 1 Introduction
New renewable and non-renewable Distributed Generation (DG) sources are being added to power systems daily, and utility-scale storage systems and Demand Responsive (DR) loads are being added as well. Alongside the addition of these modern generation/consumption units, the emergence of fast and reliable communication technologies such as 5G has provided a two-way communication path between the generation and consumption sides of power systems. These changes have created unprecedented challenges and opportunities in power systems[1], [2]. To overcome these challenges and utilize the new opportunities, the joint management of demand alongside generation has been proposed. The co-management of demand and generation has numerous advantages for modern smartgrids, such as balancing supply and demand, integrating larger shares of renewable energy, and adding ancillary services such as frequency and voltage provisioning by using the DGs and DRs.
Numerous strategies have been proposed in the literature for the joint coordination of appliances[3], [4]. Among these strategies, the collective coordination of price-responsive devices (PRD) is gaining momentum. PRD coordination is a market-based approach. On top of the advantages mentioned above, a market-based approach can incentivize both the generation and consumption sides to help achieve the smartgrid paradigm goals. In other words, PRD coordination can bring all the advantages of free-market systems, as defined in economics, to energy management systems. Some of these advantages are incorporating a larger share of renewables by creating a market for the excess energy, stabilizing the grid by providing ancillary services such as voltage and frequency provisioning and peak shedding, and reducing generation cost [5], [6]. One of the strategies to coordinate PRDs is through central controllers[7], [8]. In these approaches, the controller sends a control signal to the appliances to achieve a specific objective, such as valley filling or generation cost minimization.
One primary division in PRD coordination is centralized vs. distributed coordination[9]. The central
control approaches were beneficial in the past because there was no effective communication method, nor were there price- or demand-responsive appliances in the grid. However, with the increased share of PRDs such as batteries, EVs, and TCAs, the central control approach loses the chance of finding the optimum, and the opportunity to create a free market where supply and demand negotiate freely is lost. To overcome these undesirable outcomes, many studies proposed distributed coordination of flexible demands [10], [11].
### The paradigm of this work
In this paper, we develop a distributed market-based appliance coordination algorithm. The PRDs are modeled as privately rational individuals who try to minimize local costs (solve their private optimization problems). The PRDs consist of Electric Vehicles (EVs) as a sample of interruptible loads and Electric Water Heaters (EWH) as a sample of Thermostatically Controlled Appliances (TCA). The appliances only communicate with an aggregator by sending their consumption plan (energy bid) and receiving the corresponding price signal. The appliances do not share any information with each other, and we assume they do not have market power individually or create a coalition to manipulate the algorithm in their favor.
The main novelties of this work are that the proposed algorithm can: (a) factor in private preferences for good service vs. cost reductions, (b) coordinate devices like EVs and customer-sited batteries that may have an all-or-nothing response to prices (and are likely to be on the margin often, so they are essential to coordinate), and (c) scale to incorporate any number of devices with any valid price response.
## 2 Related Works
This section reviews methods that tackle appliance coordination problems in which each individual response is essential to decision-making. These methods can have different objectives, such as generation cost minimization, valley filling, or peak shaving. While these objectives are different, the end results are closely correlated, i.e., pursuing an optimal answer for one objective will yield a near-optimal solution for the other objectives. For example, seeking generation cost minimization [12] yields a flat load curve (valley filling or demand reduction) [13, 14, 15, 16, 17]. While grid operators can have different, closely correlated objectives, as mentioned in [18, 19], the demand-responsive loads can be modeled in two different ways: (1) following the operator's objective [20] (direct load control, DLC), or (2) following self-benefiting optimization algorithms [12] in a market-based fashion, bidding for energy based on the price signal.
While pursuing the grid operator's objective can have direct benefits for the operator (reduced generation costs), it can also have _indirect_ benefits for the end-user in the form of a discount or credit. However, these indirect benefits may not be optimal for the end user, and this lack of optimality can prevent end-users from joining such programs.
Therefore, market-based approaches appeal more to the end-user, since they give them the maximum benefit and freedom of choice. One of the leading strands of the literature on distributed market-based coordination of PRDs, modeled as self-benefiting and competing units, is based on the game-theory framework[12, 21, 22, 23, 24, 17, 25, 26].
References [25, 26] are based on the Mean Field Game (MFG) approach. In MFG methods, the individual response of each agent is not the critical factor in decision-making; rather, decisions are driven by the overall (mean) response of all agents. Each appliance responds to the price signal, and the appliance population is approximated as infinite. They showed that their model could reach an \(\varepsilon\)-Nash equilibrium when a quadratic term is used in the cost function of the appliances.
In [27], the authors propose a distributed method using Mean Field Game (MFG) to schedule a large population of thermostatically controlled loads (TCLs). The price-responsive rational agents plan their energy consumption and participate in the frequency provisioning market.
Gan et al. [17] proposed a decentralized optimization model for valley-filling using the elastic energy needs of EVs. They formulated a scheduling algorithm for EVs whose objective is to fill the valley. The EVs respond to the control signal, and the algorithm is solved iteratively until an optimal solution is obtained. The issue with this method is that it is not market-based: the appliances (here, EVs) are supposed to follow the grid objective (valley filling) rather than their own benefit, i.e., minimizing their own costs. Instead, the individual users follow the operator's objective, which is valley filling. While this could be advantageous for the users overall, it was not shown how each user would benefit from using this method, nor was it shown whether the energy cost would decrease with this approach.
Ref. [23] proposed a distributed method to reach a Nash equilibrium (NE) between self-serving plug-in electric vehicles (PEVs). To reach the NE, the authors proposed an iterative algorithm where the PEVs respond to a price signal generated based on the average bid of all other PEVs in each iteration. The goal is to make the load pattern of each EV close to that of all the other EVs. However, while it is assumed that the EVs are cost-minimizing, it is unclear how this minimizes the generation cost or user payments.
Ref. [28] proposes a decentralized algorithm for PEV control that finds a balance between generation cost and the cost associated with overloading and battery degradation. It was shown that, under mild conditions, the algorithm converges to a socially accepted coordination between PEVs. An instantaneous billing scheme to shift the peak-time load is proposed in [29]. The appliances are modeled as selfish units. The authors proposed different methods for different scenarios to reach the NE, e.g., when loads are controlled in a centralized or distributed manner.
Rivera et al. [19] proposed a distributed optimization method for scheduling EVs based on the Alternating Direction Method of Multipliers (ADMM). They offered a generic multi-objective optimization platform where the cost function is a weighted sum of aggregator and local user objectives. Using this generic platform, they solve the EV coordination problem with two different objectives, i.e., valley-filling and cost minimization. The size of the problem expands linearly with the size of the EV fleet. However, the proposed multi-objective formulation is a weighted sum of the end-users' and operator's goals. Therefore, based on the values of the weights, an unfair bias could be enforced upon either the end-users or the operator.
Using the ADMM method proposed in [19], the authors in [30] proposed a method for the coordination of an EV fleet where the local objective optimally schedules each EV's charging session and, on a higher level, the Macro Load Area Aggregator (MLAA) provides the DERs with generation profiles. The method reaches a Walrasian equilibrium [31] at its optimum.
Depaola et al. [12] proposed an iterative PRD coordination scheme based on a game-theory framework. They
Figure 1: (a) original unit commitment with PRD (solved in one step) (b) Dantzig-Wolfe decomposition of the original model (two iterations of the algorithm)
proved that their method reaches the optimal point (Nash equilibrium) in theory. They demonstrated that, at each iteration, the generation cost of electricity decreases. However, their method is very time-consuming, since the PRDs solve the problem one at a time and update the system operator (non-simultaneously). This can be very slow if many PRDs exist in the network. They proposed a faster semi-optimal algorithm to compensate for this issue, but the result slightly deviates from the equilibrium.
Mohsenian et al. [18] developed a game-theoretic approach between competitive resources. All the participants in the grid communicate with each other and are aware of each other's behavior (non-private). The goal is to reach a Nash equilibrium between the users, which is equivalent to reaching the global minimum when the objective is cost minimization. The issue with this method is that all users are connected to each other. This topology creates two issues: 1) privacy: given that all users must know about everyone's behavior, each user's privacy is jeopardized; it also opens up more space for suspicious behavior, given that each user has access to more information. 2) The need for high bandwidth and more computation capacity: sending each individual's data to all other users requires costly infrastructure and high bandwidth, while a delay on each connection link could reduce the speed of each iteration in the real world.
## 3 Proposed Method
### Original Method (OM)
While the above approaches try to find the Nash equilibrium (optimal point) through a game-theory framework, an alternative would be a comprehensive optimization problem containing the cost of electricity generation and the end-user PRDs' local objectives (their part of the cost of using electricity), as shown in Figure 1(a). It must be noted that this is different from multi-objective optimization. The problem in Figure 1(a) describes the joint optimization of the unit commitment problem and the PRDs' self-benefiting problems. The goal is to find a price signal and energy bids (the decision variables) that minimize the user payment and the electricity generation cost.
Consider a case where the operator is completely aware of the preferences of each individual appliance (note: in this paper, we assume PRDs are rational agents whose preferences are obtained by solving a personal optimization problem). Knowing how each appliance responds to a given price, there would be an optimal answer where the generation costs and the end-users' local objectives are minimized simultaneously. At such an optimum, neither the grid operator nor the end-users would want to change their generation/consumption, because they are at the optimal point (Nash equilibrium). However, such a problem in this form is impossible to solve for two reasons: first, it would be a massive problem; second, the grid operator would need all the information regarding the preferences of every appliance in every house. In the next section, we propose a method to solve such a problem using the Dantzig-Wolfe (DW) decomposition method.
### Dantzig-Wolfe Representation of the OM
DW [32] was first proposed to solve large-scale linear programming problems. It breaks the large problem into one master problem and several subproblems. The subproblems are solved independently and simultaneously on separate computers and report their preliminary results (at each iteration) to the computer holding the master problem. At the same time, the block-angular structure of the OM is highly sparse, i.e., the problem of device \(J\) is independent of that of device \(J+1\). The problems are connected only through one linking constraint (the constraint that forces the overall generation to equal the overall consumption). The abovementioned structure of the OM makes it ideal for solving with DW.
Figure 1(b) summarizes the main structure of this algorithm. It is an iterative algorithm that runs until the desired accuracy is achieved or no further improvement in the results occurs.
In the proposed method, the PRDs' objectives are modeled as the subproblems of the DW, and the unit commitment objective acts as the master problem of the DW. The solution to the master problem (unit commitment) is the proposed load profile (energy bid) for each user and the corresponding price based on the marginal cost of electricity generation. The result of each PRD's local optimization is a load profile based on the proposed price from the master problem (the energy profile it is willing to consume under the price suggested by the unit commitment). This iterative procedure repeats until the grid operator (which solves the unit commitment master problem) and the local PRDs (which solve the subproblems) no longer change their behavior based on the other side's proposal (energy bids from the PRDs' side and prices from the unit commitment side), since the algorithm has reached the optimal point.
Using DW to solve such a universal problem has numerous advantages: 1) It is optimal. While this looks obvious, many of the appliance coordination algorithms discussed above cannot find an optimal schedule for both sides of the market; one side needs to sacrifice for the other. 2) It is fast. Since most of the optimization is done locally and simultaneously on the PRDs' side, each iteration is rapid. Furthermore, the only information needed is the price signal and the energy bids from the unit commitment and PRD sides, respectively. The simultaneous update is much faster than the algorithm proposed in [12, 18], in which one appliance solves the problem at a time. 3) It is private. The appliances only share information regarding their consumption plan with the grid operator and only with the unit commitment side. This approach also prevents a distrustful coalition between the appliances from manipulating the master-problem result in their favor. This contrasts with the method proposed in [82, 84], where all appliances must share information. 4) It is a market-based algorithm. The utmost goal of this algorithm is to find the balance between supply and demand through a market. This paves the way for a more significant incorporation of distributed generation (including renewables) into the power system.
The structure of the paper is as follows: in section 4, the fundamental principle of DW is explained; in section 5.1, a simplified unit commitment model for the grid operator and self-benefiting algorithms for the appliances (EVs and EWHs) are proposed; section 6 discusses the results of implementing this algorithm.
## 4 Joint Unit Commitment and PRD cost minimization problem
### Power System Operation with PRDs
Some key elements of the unit commitment problem for electric power systems with PRDs can be written as
\[\min_{\mathbf{g}_{i},\,i\in G;\;\mathbf{d}_{j},\mathbf{z}_{j},\,j\in D}\;\sum_{i\in G}c_{i}(\mathbf{g}_{i})+(\text{other costs})-\sum_{j\in D}b_{j}(\mathbf{d}_{j},\mathbf{z}_{j}) \tag{1}\] \[\text{s.t.}\;\sum_{i\in G}\mathbf{g}_{i}+\mathbf{g}_{0}=\sum_{j\in D}\mathbf{d}_{j}+\mathbf{d}_{0}\] \[\text{(and other system-level constraints)}\] \[\text{(and device-level constraints)}\]
In this model, \(G\) is the set of all controllable power plants, and \(D\) is the set of price-responsive devices. The main decision variables are the power output from plant \(i\) (\(\mathbf{g}_{i}\)) and the amount of power to be consumed by device \(j\) (\(\mathbf{d}_{j}\)). These are vectors with one element for each hour of the next day. \(\mathbf{z}_{j}\) is a vector containing all the other decisions that must be made by device \(j\), e.g., temperature or charge levels for each hour. Together, \(\mathbf{d}_{j}\) and \(\mathbf{z}_{j}\) constitute a complete operating plan for device \(j\) for the next day. The function \(c_{i}\) shows the cost of producing power in plant \(i\). The function \(b_{j}\) shows the benefits to the owner of device \(j\) when it follows the specified operating plan, converted to dollar form, i.e., the most the customer would be willing to pay to follow this plan. The only constraint shown is that the total generation scheduled for the next day, plus the noncontrollable generation \(\mathbf{g}_{0}\) (e.g., distributed solar power), must equal the total demand from price-responsive devices plus the non-price-responsive market \(\mathbf{d}_{0}\).
For brevity, we have omitted numerous additional terms: costs for starting, stopping, and running power plants; constraints on minimum and maximum output levels for each plant; minimum up- and down-times; and transmission line loading during regular operation or contingencies [34, 35, 36, 37, 38, 39, 40]. These could be included in future work.
If we split problem (1) into a power production cost function \(C(\mathbf{D})\) and a consumption benefit function \(B(\mathbf{D})\) as follows:
\[\begin{split} C(\mathbf{D})&=\min_{\mathbf{g}_{i},\,i\in G}\sum_{i\in G}c_{i}(\mathbf{g}_{i})+(\text{other system-level costs})\\ &\qquad\text{such that}\;\sum_{i\in G}\mathbf{g}_{i}+\mathbf{g}_{0}=\mathbf{D}+\mathbf{d}_{0}\\ &\qquad\text{(and other system-level constraints)}\end{split} \tag{2}\] \[\begin{split} B(\mathbf{D})&=\max_{\mathbf{d}_{j},\mathbf{z}_{j},\,j\in D}\sum_{j\in D}b_{j}\big{(}\mathbf{d}_{j},\mathbf{z}_{j}\big{)}\\ &\qquad\text{such that}\;\sum_{j\in D}\mathbf{d}_{j}=\mathbf{D}\\ &\qquad\text{(and device-level constraints)}\end{split} \tag{3}\]
then problem (1) becomes simply
\[\min_{\mathbf{D}}C(\mathbf{D})-B(\mathbf{D}) \tag{4}\]
In other words, the system operator must find a consumption plan \(\mathbf{D}\) that can be served at the lowest net cost, where net cost is defined as the system's production cost function \(C(\mathbf{D})\) minus the customers' benefit \(\mathrm{B}(\mathbf{D})\).
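To make the split concrete, the sketch below evaluates a highly simplified \(C(\mathbf{D})\) as a single-bus economic dispatch with linear generation costs, using SciPy's `linprog`. The generator data, the omission of start-up costs and network constraints, and the use of `res.eqlin.marginals` for the constraint duals (available with the HiGHS method in recent SciPy releases) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def production_cost(D, cost, g_max, g0=0.0, d0=0.0):
    """Least-cost dispatch serving a demand profile D (one value per hour).

    cost  : $/MWh of each controllable plant
    g_max : capacity of each plant
    g0,d0 : scalar non-controllable generation and non-responsive demand
    Returns (total cost, hourly marginal prices).
    """
    T, n = len(D), len(cost)
    # decision vector: g[i, t] flattened plant-by-plant
    c = np.repeat(cost, T)
    A_eq = np.tile(np.eye(T), (1, n))          # sum_i g[i, t] = D[t] + d0 - g0
    b_eq = np.asarray(D, dtype=float) + d0 - g0
    bounds = [(0, g_max[i]) for i in range(n) for _ in range(T)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    prices = res.eqlin.marginals               # duals of the balance constraints
    return res.fun, prices

# Illustrative usage with an invented two-plant system over 4 hours:
# total, p = production_cost([3., 5., 6., 4.], cost=[20., 50.], g_max=[5., 50.])
```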
## 5 Dantzig-Wolfe-Based Coordination Mechanism
For large power systems, it may be challenging to convey the details of the PRDs' benefit problems to a central coordinator. Even if the central coordinator had those details, the entire unit commitment problem with millions of PRDs would likely be too large to solve on a single computer. In this paper, we instead present a distributed, iterative coordination mechanism based on the Dantzig-Wolfe decomposition of the unit commitment problem, which relies on PRDs to calculate their own parts of this problem.
The unit commitment problem with PRDs has two properties that make this possible. First, the PRDs' local optimization problems can be separated from the main unit commitment problem, except for the requirement that total production equals total consumption. Second, the PRDs' private optimization problems are generally convex (often linear), e.g., choosing hourly consumption levels to fully recharge a battery overnight. These properties are true for the individual PRDs' optimization problems and also for the overall consumption benefit problem, which is the sum of all the independent problems for each PRD.
Dantzig-Wolfe decomposition is a well-known iterative technique for solving optimization problems of this form that are too large to represent directly on a single computer [32, 41, 42, 43, 33]. As such, it is a good choice for the unit commitment problem with millions of PRDs. Dantzig-Wolfe decomposition also comes with attractive guarantees: it provides a better solution with each iteration; convergence occurs in finite time with linear subproblems; at each iteration the difference between primal and dual objective values provides a measure of the distance from optimality; and if interrupted before convergence, it will always provide a feasible solution.
Dantzig-Wolfe decomposition relies on two facts about convex programs: (1) any feasible solution can be expressed as a convex combination (weighted average) of the extreme points of the program (i.e., the feasible region is all the space inside the convex hull of a collection of points known as the "extreme points" of the problem), and (2) the program's objective function (net cost) can always be expressed as a convex combination of the objective value at the extreme points (i.e., the objective function is the convex hull of the objective values at those same points).
These facts allow us to reframe the consumption benefit function in terms of weights applied to the extreme points of the consumption problem:
\[\begin{split} B(\mathbf{D})&=\max_{w_{k}}\sum_{k\in K }w_{k}b_{k}\\ &\text{s.t.}\ \ \mathbf{D}=\sum_{k\in K}w_{k}\mathbf{d}_{k}\\ & w_{k}\geq 0,\ \ \forall\ k\in K\\ &\sum_{k\in K}w_{k}=1\end{split} \tag{5}\]
where \(K\) is the set of all extreme points, \(k\) is an index in \(K\), \(\mathbf{d}_{k}\) is the demand at point \(k\), \(b_{k}\) is the value of the benefit function at that point, i.e., \(b_{k}=B(\mathbf{d}_{k})\), and \(w_{k}\) is the weight given to point \(k\). With this in mind, we can rewrite problem (4) as the Dantzig-Wolfe master problem:
\[\begin{split}\min_{w_{k}:k\in K}C(\mathbf{D})-B(\mathbf{D})& =C\left(\sum_{k\in K}w_{k}\mathbf{d}_{k}\right)-\sum_{k\in K}w_{ k}b_{k}\\ &\text{s.t.}\sum_{k\in K}w_{k}=1\end{split}. \tag{6}\]
In this formulation, the master problem consists of constructing a consumption plan \(\mathbf{D}\)--a convex combination of extreme points of the consumption problem--that can be served at the lowest net cost, where net cost is defined as the system's production cost function \(C(\mathbf{D})\) minus the customers' benefit \(\mathrm{B}(\mathbf{D})\).
It is not usually possible to enumerate all the members of the set \(K\). Instead, Dantzig-Wolfe decomposition builds up a subset \(\mathcal{K}\) that defines the feasible space near an optimal solution [32]. This subset is created by iterating between the master problem, as shown in (6), and the demand-side sub-problem:
\[\min_{\mathbf{D}}\mathbf{p}^{T}\mathbf{D}-B(\mathbf{D}) \tag{7}\]
Here, \(\mathbf{p}\) is a vector of hourly electricity prices. These are set equal to the marginal cost of serving additional load in the master problem (6) (e.g., dual values of the load-serving constraint) during the previous iteration. When the consumption subproblem (7) is solved with
these prices, it finds the best possible consumption plan \(\mathbf{D}\) at these prices. This solution must be at an extreme point of the consumption problem, so this provides another point \(\mathbf{d}_{k}\) and benefit value \(b_{k}=B(\mathbf{d}_{k})\) to add to the set of extreme points \(K\).
This continues until there is no new extreme point that can improve the solution to the problem, at which point the process has converged on the optimal consumption plan [32]. If \(B(\mathbf{D})\) is defined by a linear program (or equivalent to one), it has a finite number of extreme points, so convergence will be achieved in finite time. Nonlinear convex programs, on the other hand, are equivalent to linear programs with infinitely many extreme points; in this case, the solution improves at each step, asymptotically approaching the optimum.
Consequently, the unit commitment problem with PRDs can be solved as follows. In each iteration, the master problem (6) chooses the best combination of all the previously reported demand vectors and offers tentative prices to the consumption subproblem; the consumption subproblem then offers a demand vector that can be served at equal or lower net cost (cost minus benefit). Specifically (a minimal sketch in code is given after the numbered steps):
1. The central coordinator makes an initial estimate of prices \(\mathbf{p}_{(0)}\).
2. The consumption subproblem (7) is solved on a distributed basis: (a) the central coordinator passes the price vector \(\mathbf{p}_{k}\) to the PRDs; (b) all PRDs solve their individual subproblems and return \(\mathbf{d}_{k}\) and \(b_{k}\), i.e., their consumption plan and an estimate of the benefit of this plan (the amount they would be willing to pay to follow this schedule). For simplicity or privacy protection, the benefit can be specified with an arbitrary offset that is constant over all iterations.2 Footnote 2: For devices where the direct benefit does not vary among the feasible operating plans (e.g., an electric vehicle that always charges fully regardless of prices), the benefit of any feasible plan could be reported as zero. For devices where the benefits vary from one plan to another, the reported benefit should change from plan to plan by the same amount as the true benefit.
3. The central coordinator adds the new demand profile \(\mathbf{d}_{k}\) with benefit value \(b_{k}\) to the set of extreme points \(K\) and solves the master problem (6), choosing a new combination of previously reported extreme points. If the optimality gap falls below a threshold or an iteration limit is reached, the process terminates. Otherwise, the objective and pricing have been slightly improved, and the central coordinator sets new prices \(\mathbf{p}_{k+1}\) equal to the dual values of the load-serving constraints. Then the process returns to step 2.
4. At the end of the iterations, the system coordinator reports the final nonzero weights \(w_{k}\) to the PRDs (or possibly to the home or feeder coordinator). These are then multiplied by the load bids previously received from individual PRDs. These load levels are then assigned as the amount of power to be used by each PRD. If the problem converges, these load vectors are optimal for individual PRDs and the whole system at the final prices. If the process stops short of convergence, then the final price and load vectors are still known to be feasible and an improvement over the starting point for both the system and the individual PRDs.
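As a concrete illustration of this loop, the following minimal Python sketch mimics steps 1-4 for a single aggregated consumption subproblem. It is only a toy under stated assumptions: the quadratic production cost (20) is used for the supply side, the coefficient `a`, the stub `solve_prds` function, and the 40 kWh flexible load are all illustrative, and prices are taken directly from the marginal-cost formula rather than from LP dual values.

```python
import numpy as np
from scipy.optimize import minimize

H, a = 24, 0.05                      # hours in the day, illustrative cost coefficient

def solve_prds(prices):
    """Stub for the distributed consumption subproblem (7): given hourly prices,
    return the aggregate demand vector d_k and total reported benefit b_k.
    Here a toy flexible load of 40 kWh/day is placed in the cheapest hours."""
    need, cap = 40.0, 5.0
    d = np.zeros(H)
    for h in np.argsort(prices):     # cheapest hours first
        d[h] = min(cap, need - d.sum())
        if d.sum() >= need:
            break
    return d, 0.0                    # benefit is constant, so report 0

D_bids, b_bids = [], []              # extreme points revealed so far
prices = np.full(H, 1.0)             # initial price estimate p_(0)

for it in range(10):
    d_k, b_k = solve_prds(prices)    # step 2: PRDs respond to the tentative prices
    D_bids.append(d_k); b_bids.append(b_k)
    K = len(D_bids)

    def net_cost(w):                 # master problem (6): production cost minus benefit
        D = np.dot(w, D_bids)
        return a * np.sum(D ** 2) - np.dot(w, b_bids)

    res = minimize(net_cost, np.full(K, 1.0 / K),
                   bounds=[(0, 1)] * K,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
    D = np.dot(res.x, D_bids)
    prices = 2 * a * D               # marginal cost of serving load, used as p_(k+1)

print("final constructed load profile:", np.round(D, 2))
```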
Importantly, this mechanism can choose any mixture of previous demand bids by the PRDs, rather than being confined to the exact load profiles that have been bid. With highly price-sensitive devices, this has significant advantages over approaches that attempt to find just the right price to induce the right demand: (1) If very price-sensitive PRDs such as EVs or customer-sited batteries are on the margin in multiple hours, they will arbitrage away any price differences; this occurs because even infinitesimal price differences between hours would cause large swings of load between those hours. Consequently, the efficient prices may be flat over the period when these devices are on the margin, and will fail to send a signal to achieve a particular shape of load (e.g., valley-filling or renewable-following). Instead, a quantity signal is also needed, such as the one provided here. (2) In practice, price-based coordination mechanisms often work by adjusting prices in small steps until supply and demand are balanced. However, that is not possible with infinitely price-sensitive devices, since even
infinitesimal variations will cause all the load to swing in one direction or the other; this causes those methods to oscillate without converging on a single right price vector (and as noted above, even if they found the right price vector, that alone would not induce the right consumption vector). In our work, described below, the Dantzig-Wolfe mechanism performs well for highly-price-sensitive devices, choosing a mix of demand vectors from either side of the equilibrium point and quickly moving toward convergence.
It should also be noted that the consumption subproblem can naturally be divided into separate subproblems for each device. Then the main subproblem can be solved by passing the prices to all the individual devices, letting them solve their portion of the subproblem, and summing the load vectors and benefit values that they return. This could allow for highly parallelized computation and efficient communication in a real-world implementation. For example, the master problem could be solved by a central coordinator, while nodal coordinators manage each distribution circuit, and home coordinators manage the individual PRDs within each home. Devices at each level of the hierarchy simply announce price vectors to the devices or coordinators below them and then sum the demand vectors and benefits that they receive back. This allows data to be condensed by several orders of magnitude at each level, so very little bandwidth is needed at each level (just enough to communicate with a few dozen elements at the next lower level). It is also worth noting that this technique can incorporate highly heterogeneous PRDs without any modification, i.e., it can handle any device that has a convex (rational) response to prices.
### Price Responsive Device (PRD) Models
In the literature, loads are categorized as non-shiftable, shiftable, and thermostatically controlled appliances (TCAs) [44]. Non-shiftable (uninterruptible) loads are those to which energy must be delivered instantly; TVs and stoves belong to this group. Shiftable (interruptible) loads are appliances that do not need instant energy delivery, either because they have batteries or because they only need to run once per day or week; EVs, dishwashers, and driers fall into this category. TCAs are loads whose energy consumption is a function of temperature (the ambient temperature or the user's desired temperature); the working (desired) temperature can be affected by the ambient temperature, or by energy withdrawn through consumption and energy added by the heating or cooling elements. In many works, such as [18], only non-interruptible and interruptible loads are considered for the energy-management task. However, since non-shiftable loads do not respond to any control command (either direct load control or optimization), they can be removed from the problem without loss of generality. Since it is not possible to show the dynamic response of all types of loads in one paper, and their behavior is similar to some extent, EVs are selected from the shiftable loads and EWHs from the TCAs to represent a sample load profile. This section covers the self-benefiting models of the PRDs that participate in the market bidding, i.e., the EWH and the EV.
#### 5.1.1 Electric Water Heater Model
The following shows a self-benefiting model of EWH that aims to minimize its cost. It is a day-ahead optimization implemented as a linear program based on the model developed in Section 1 and [5]. It is dual-objective, i.e., the model tries to find a balance between cost saving and discomfort. For this study, we assume perfect foresight of hot water requirements and optimize power consumption accordingly. In future work, we could consider an EWH that buys vectors of power as shown here, based on a stochastic forecast of hot water usage, as discussed in Section 1 and [5].
Equation (8) shows the dual-objective cost function, which minimizes the cost of electricity use plus the discomfort cost of underheated water that may result from following this plan.
\[\min_{E^{\mathsf{in}}\in\mathbb{R}^{24}}\sum_{h\in I}\left(p_{h}^{\mathsf{p}}E_{h}^{\mathsf{in}}+p^{\mathsf{s}}E_{h}^{\mathsf{short}}\right) \tag{8}\]
Equation (9) models the thermodynamic behavior of the EWH.
\[T_{h}^{\mathsf{tank}}=T_{h-1}^{\mathsf{tank}}+\frac{\left(E_{h}^{\mathsf{in}}-E_{h}^{\mathsf{w}}\right)}{c^{\mathsf{tank}}}-r^{\mathsf{loss}}\cdot\left(T_{h}^{\mathsf{tank}}-t_{h}^{\mathsf{amb}}\right) \tag{9}\]
In (10), the maximum energy that can be drawn from the grid is limited to the EWH energy input capacity.
\[E_{h}^{\mathsf{in}}\leq e^{\mathsf{max}} \tag{10}\]
Equations (11) and (12) model the temperature and energy shortfalls, respectively. They quantify the underheated water that would result from the plan.
\[T_{h}^{\mathsf{short}}\geq t^{\mathsf{min}}-T_{h}^{\mathsf{tank}} \tag{11}\]
\[E_{h}^{\rm short}=e_{h}^{\rm des}\frac{T_{h}^{\rm short}}{\left(t^{\rm min}-t_{h} ^{\rm in}\right)} \tag{12}\]
After solving this problem, PRD number j reports its consumption vector during iteration k as \({\bf d}_{j,k}=[E_{0}^{\rm in},E_{1}^{\rm in},E_{2}^{\rm in},...,E_{23}^{\rm in}]\). It also calculates the benefit \(b_{j,k}\) as being equal to the shortfall penalty times the quantity of hot water consumed (this may vary from one iteration to the next, as the plan adapts to different electricity prices). By definition, this is the amount that the customer would be willing to pay for the hot water they will receive, and by extension, for the power that will be used to produce this hot water.
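To make the EWH bid concrete, here is a minimal sketch of (8)-(12) as a linear program in cvxpy. All numerical values (prices, tank constants, hot-water draw) and variable names are illustrative assumptions rather than parameters from the paper, and the predicted withdrawal \(E^{\rm w}\) is also used in place of the desired hot-water energy \(e^{\rm des}\) in (12).

```python
import numpy as np
import cvxpy as cp

H = 24
p_buy = np.random.default_rng(0).uniform(0.10, 0.40, H)  # hourly prices p^p (illustrative)
p_short = 1.0                                             # shortfall penalty p^s
E_w = np.zeros(H); E_w[[7, 20]] = 3.0                     # predicted hot-water draw (kWh)
c_tank, r_loss, e_max = 0.2, 0.02, 4.5                    # tank heat capacity, loss rate, element size
t_amb, t_min, t_in, T0 = 20.0, 50.0, 15.0, 55.0           # ambient, minimum, inlet, initial temps

E_in = cp.Variable(H, nonneg=True)                        # energy bought each hour
T = cp.Variable(H)                                        # tank temperature, eq. (9)
T_sh = cp.Variable(H, nonneg=True)                        # temperature shortfall, eq. (11)
E_sh = cp.multiply(E_w / (t_min - t_in), T_sh)            # energy shortfall, eq. (12)

cons = [E_in <= e_max]                                    # eq. (10)
prev = T0
for h in range(H):                                        # tank thermodynamics, eq. (9)
    cons.append(T[h] == prev + (E_in[h] - E_w[h]) / c_tank - r_loss * (T[h] - t_amb))
    cons.append(T_sh[h] >= t_min - T[h])
    prev = T[h]

cost = cp.sum(cp.multiply(p_buy, E_in)) + p_short * cp.sum(E_sh)   # objective (8)
cp.Problem(cp.Minimize(cost), cons).solve()
print("EWH bid d_{j,k}:", np.round(E_in.value, 2))
```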
#### 5.1.2 Electric Vehicle Model
The electric vehicles follow a simple cost-minimizing algorithm. The user defines, in advance, the hours during which the EV can be charged (parking hours). These hours are expressed through the maximum charging rate per hour, i.e., \(e_{h}^{max}\) for hour \(h\). The quantity \(e^{des}\) is the energy needed by the end of the charging window (i.e., the total energy required for the day). The role of the following cost-minimizing program is then to find the cheapest hours in which to charge the vehicle.
The cost function is shown in equation (13). The objective is simply to minimize the user's electricity cost.
\[\min_{E^{\rm in}\in\mathbb{R}^{24}}\sum_{h\in I}p_{h}^{\rm p}E_{h}^{\rm in} \tag{13}\]
In this equation, \(I\) is the set of all hours in the coming day and \(p_{h}^{\rm p}\) is the (tentative) price for power during each of those hours, and \(E_{h}^{in}\) is the decision variable showing the amount of energy to use for charging during each hour.
The charging window is modeled through equation (14):
\[E_{h}^{in}\leq e_{h}^{max} \tag{14}\]
The total energy added to the battery through the charging window must be equal to the total energy needed. Equation (15) shows this constraint.
\[\sum_{h\in I}E_{h}^{\rm in}=e^{\rm des} \tag{15}\]
We use a simple algorithm to solve this linear program (a minimal code sketch follows the list):
* Select hours of the day when charging is possible (\(e_{h}^{max}>0\)).
* Sort these hours from lowest to the highest price, then from early to late as a tie-breaker.
* During each hour, starting with the first in the list, add enough charge to reach \(e^{\rm des}\), or charge at rate \(e_{h}^{max}\), whichever is less.
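A minimal sketch of this greedy procedure in Python; the parking window, charging limit, and energy need in the usage example are illustrative assumptions, and the function name is hypothetical.

```python
def ev_charging_bid(prices, e_max, e_des):
    """Greedy solution of (13)-(15): charge in the cheapest available hours.
    prices and e_max are length-24 sequences; e_des is the daily energy need."""
    hours = [h for h in range(len(prices)) if e_max[h] > 0]       # step 1: feasible hours
    hours.sort(key=lambda h: (prices[h], h))                      # step 2: cheapest, then earliest
    E_in, remaining = [0.0] * len(prices), e_des
    for h in hours:                                               # step 3: fill until e_des is met
        E_in[h] = min(e_max[h], remaining)
        remaining -= E_in[h]
        if remaining <= 0:
            break
    return E_in   # consumption vector d_{j,k}; benefit b_{j,k} is reported as 0

# Example: EV parked during hours 0-6 and 20-23, 3.3 kW limit, 20 kWh needed (illustrative)
e_max = [3.3 if (h < 7 or h >= 20) else 0.0 for h in range(24)]
bid = ev_charging_bid([0.30] * 24, e_max, 20.0)
```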
After choosing the optimal charging plan, the EV model reports back the consumption vector \({\bf d}_{j,k}=[E_{0}^{\rm in},E_{1}^{\rm in},E_{2}^{\rm in},...,E_{23}^{\rm in}]\). When using this algorithm, the EV always obtains a full charge, so the benefit to the EV is the same regardless of the charging plan. The benefit \(b_{j,k}\) is therefore reported as zero for each EV during each iteration.
### Aggregated Consumption Benefit Problem
For the Dantzig-Wolfe iteration described in this paper, the consumption subproblem (7) is regarded as a single problem encompassing all PRDs in the system. That is achieved simply by offering the same price vector \({\bf p}^{p}={\bf p}_{k}\) to all the water heater and electric vehicle models, solving the model for each individual PRD, then summing the consumption plans for all the PRDs:
\[{\bf d}_{k}=\sum_{j\in J}{\bf d}_{j,k}\quad\text{and}\quad b_{k}=\sum_{j\in J}b_{j,k} \tag{16}\]
These are then reported to the central coordinator.
After the last iteration of the master problem, the final weights \(w_{k}\) from the master problem (6) are assigned back to the PRDs, based on each device's prior bids:
\[{\bf d}_{j}^{*}=\sum_{k=1}^{N}w_{k}{\bf d}_{j,k} \tag{17}\]
This could be done by reporting the weights to the devices themselves or by having the home or circuit coordinator calculate \({\bf d}_{j}^{*}\) and report it to device \(j\) as the amount of power it has purchased.
With this assignment, the total power consumed by all PRDs matches the total load constructed in the system-level master problem:
\[\mathbf{D} = \sum_{k=1}^{N}w_{k}\mathbf{d}_{k}=\sum_{k=1}^{N}w_{k}\sum_{j\in J}\mathbf{d}_{j,k} \tag{18}\] \[= \sum_{j\in J}\sum_{k=1}^{N}w_{k}\mathbf{d}_{j,k}=\sum_{j\in J}\mathbf{d}_{j}^{*}\]
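In code, the aggregation (16) and the final assignment (17)-(18) reduce to simple weighted sums; the container layout assumed below (one bid array per device per round) and the function names are hypothetical.

```python
import numpy as np

def aggregate_bids(device_bids, device_benefits):
    """Eq. (16): the aggregator sums the individual PRD plans and benefits for round k.
    device_bids is a list of length-24 arrays, one per PRD j."""
    return np.sum(device_bids, axis=0), float(np.sum(device_benefits))

def assign_final_plans(weights, bids_per_device):
    """Eq. (17): d_j* = sum_k w_k d_{j,k}.  bids_per_device[j][k] is PRD j's bid in round k."""
    return [np.tensordot(weights, np.asarray(bids_j), axes=1) for bids_j in bids_per_device]
```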
This framework assumes that the PRDs all have convex responses to price. It foregoes the option of assigning different weights to the extreme points (bids) elicited from the individual devices. In future work, we could incorporate devices with integer decisions (e.g., dishwashers or clothes washers that must run an entire cycle once they start). One promising option for this would be to fine-tune the weights for individual PRDs at the feeder (neighborhood) level, requiring integer weights for the integer PRDs, and adjusting the weights for all the PRDs on the feeder so that the total for the feeder matches the total load requested by the system as closely as possible.
### Power System Model
For this paper, we use a simple electricity production model. We ignore non-dispatchable generation \(\mathbf{g}_{0}\) and non-price-responsive loads \(\mathbf{d}_{0}\), so the load-balancing constraint of equation (2) becomes
\[\sum_{t\in G}\mathbf{g}_{t}=\mathbf{D}=\sum_{k\in K}w_{k}\mathbf{d}_{k} \tag{19}\]
We also assume that the system has a quadratic production cost so that the minimum cost of producing an amount of power \(\mathbf{D}\) from the available generators is
\[\mathcal{C}(\mathbf{D})=\sum_{h=1}^{24}aD_{h}^{2} \tag{20}\]
where \(D_{h}\) is the element of the demand vector \(\mathbf{D}\) for hour \(h\).
These assumptions create a simple system with easily calculated marginal costs (\(2aD_{h}\)) each hour. However, the approach shown in this paper is general enough to apply to any convex supply side, e.g., a linear program where marginal costs are the dual values of the hourly load-balancing constraints.
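For reference, the production cost (20) and the implied hourly marginal prices \(2aD_{h}\) can be computed as follows; the value of the coefficient \(a\) is an illustrative assumption.

```python
import numpy as np

a = 0.05                                   # illustrative quadratic cost coefficient

def production_cost(D):
    """C(D) from eq. (20): quadratic cost summed over the 24 hourly loads."""
    return a * np.sum(np.asarray(D) ** 2)

def marginal_prices(D):
    """Hourly marginal cost dC/dD_h = 2*a*D_h, used as the tentative prices p_k."""
    return 2 * a * np.asarray(D)
```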
### Optimality Gap
By step \(N\) of the Dantzig-Wolfe iteration, the PRDs in the system have revealed the following extreme points (feasible consumption quantity vectors):
\[\mathbf{d}_{k},\hskip 14.226378ptk\in K^{\prime}=1..N \tag{21}\]
where \(N\) is the number of iterations (and hence bids) completed so far, and \(\mathbf{d}_{k}\) is the total consumption bid from all PRDs during step \(k\). The benefit to the customers for each extreme point (the maximum they would be willing to pay for that vector, possibly shifted by a constant offset) is
\[b_{k},\hskip 14.226378ptk\in K^{\prime}=1..N \tag{22}\]
where \(b_{k}\) is a scalar, the sum of all the benefits reported by the individual PRDs.
Then, any weighted combination of these bids is possible because the feasible regions for the EV and EWH models are convex. So, the master problem constructs a consumption plan (load vector) that is a weighted sum of the previous bids:
\[\mathbf{D}=\sum_{k=1}^{N}w_{k}\mathbf{d}_{k} \tag{23}\]
where \(\mathbf{D}\) is the constructed bid (vector, one element for each hour \(h\)).
The total benefit of this constructed bid is \(B(\mathbf{D})\), where
\[B(\mathbf{D})\geq B^{\prime}(\mathbf{D})=\sum_{k=1}^{N}w_{k}b_{k} \tag{24}\]
The central coordinator doesn't know \(B(\mathbf{D})\) directly, but over multiple iterations, the reduced-form representation \(B^{\prime}(\mathbf{D})\) will converge to the actual value of \(B(\mathbf{D})\)[32]. As noted previously, this will occur in finite time if the PRD subproblems are linear programs (as they are for this paper), and will occur asymptotically if the PRD problems are convex but nonlinear [108].
Dantzig-Wolfe decomposition gives an optimality gap at each iteration, which can be seen as the difference between two measures: \(S^{\text{best}}\) (the best possible value of the objective function) and \(S^{\text{known}}\) (the best objective value found so far).
During round \(k\), the master problem minimizes \(\mathcal{C}(\mathbf{D})-B^{\prime}(\mathbf{D})\). The value of the objective function at this point is
\[S^{\text{known}}=\mathcal{C}(\mathbf{D}_{k})-B^{\prime}(\mathbf{D}_{k}) \tag{25}\]
where \(\mathbf{D}_{k}\) indicates the value of \(\mathbf{D}\) selected by the master problem in round \(k\).
\(S^{\text{known}}\) is an upper bound on the optimal objective value because it is known to be achievable. \(B^{\prime}(\mathbf{D}_{k})\) in the master problem is conservative: it is a weighted combination of points on the true \(B(\textbf{D})\) function, so it must be equal to or below the true \(B(\textbf{D}_{k})\) at loads \(\textbf{D}_{k}\), since \(B(\textbf{D})\) is concave. \(C(\textbf{D}_{k})\) is also achievable because it is the actual production cost function.
After solving the master problem in round \(k\), the central coordinator offers tentative prices \(\textbf{p}_{k}\) to the customers, and they propose to buy a total amount of power each hour \(\textbf{d}_{k}\). They also report that this power gives them the benefit of \(b_{k}\).
The subproblem solution \(\textbf{d}_{k}\) (with benefit \(b_{k}\)) is optimal, which means that all other values of **d** would give equal or worse objective values, i.e.,
\[B(\textbf{d})-\textbf{p}_{k}\cdot\textbf{d}\leq b_{k}-\textbf{p}_{k}\cdot \textbf{d}_{k} \tag{26}\]
for all **d**, so
\[B(\textbf{d})\leq b_{k}-\textbf{p}_{k}\cdot\textbf{d}_{k}+\textbf{p}_{k}\cdot \textbf{d} \tag{27}\]
The master problem has solution \(\textbf{D}_{k}\), with production cost \(C(\textbf{D}_{k})\) and incremental production cost \(\textbf{p}_{k}\), i.e., \(\textbf{p}_{k}\) is the gradient of \(C(\textbf{d})\) at \(\textbf{D}_{k}\). Since \(C(\textbf{d})\) is convex downward, it must lie above the plane that is tangent at this point, so
\[C(\textbf{d})\geq C(\textbf{D}_{k})+\textbf{p}_{k}\cdot(\textbf{d}-\textbf{D} _{k}) \tag{28}\]
for all possible demand vectors \(\mathbf{d}\).
Subtracting equation (27) from (28) gives
\[C(\textbf{D})-B(\textbf{D})\geq C(\textbf{D}_{k})-\textbf{p}_{k}\cdot\textbf{ D}_{k}-b_{k}+\textbf{p}_{k}\cdot\textbf{d}_{k} \tag{29}\]
so
\[S^{\rm best}=C(\textbf{D}_{k})-\textbf{p}_{k}\cdot\textbf{D}_{k}-b_{k}+ \textbf{p}_{k}\cdot\textbf{d}_{k} \tag{30}\]
Then the optimality gap is
\[\begin{array}{l}S^{\rm known}-S^{\rm best}=C(\textbf{D}_{k})-B^{\prime}( \textbf{D}_{k})\\ -C(\textbf{D}_{k})+\textbf{p}_{k}\cdot\textbf{D}_{k}+b_{k}-\textbf{p}_{k} \cdot\textbf{d}_{k}\\ =[b_{k}-\textbf{p}_{k}\cdot\textbf{d}_{k}]-[B^{\prime}(\textbf{D}_{k})-\textbf {p}_{k}\cdot\textbf{D}_{k}]\end{array} \tag{31}\]
For this paper, we always performed 24 iterations and only used the optimality gap as an informative diagnostic. However, in future work, we could iterate until the gap is below a fixed limit or a fixed fraction of the direct costs \(C(\textbf{D}_{k})\).
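A small helper illustrating the bound bookkeeping of (25), (30), and (31); the function and argument names are hypothetical.

```python
import numpy as np

def optimality_gap(C_Dk, Bprime_Dk, p_k, D_k, d_k, b_k):
    """Gap from eq. (31): S_known - S_best, computed from the round-k master
    solution (C_Dk, Bprime_Dk, D_k, prices p_k) and the subproblem response (d_k, b_k)."""
    s_known = C_Dk - Bprime_Dk                                   # eq. (25)
    s_best = C_Dk - np.dot(p_k, D_k) - b_k + np.dot(p_k, d_k)    # eq. (30)
    return s_known - s_best
```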
## 6 Results
### Network structure and problem assumption
The structure of the smart grid is shown in Figure 2. This network serves eight households (a total of 16 appliances). All the loads are connected to the grid operator through a common line. Without loss of generality, it is assumed that all loads are fed by a single distributed generator (DG) unit. Since the only energy source is one dispatchable unit and there is no renewable generation affecting the generation cost, the marginal cost function is the same for all hours. Power-quality constraints such as voltage and frequency regulation are not considered in this unit commitment problem; the focus is on economic dispatch.
### Load type:
In this work, it is assumed that the generator belongs to all consumers. This means the operator acts only as a coordinator and does not seek its own profit. Each customer pays the operational cost of using the generator unit, which is equal to its consumption profile times the price proposed by the coordinator. Each household sends only its energy bid to the coordinator, and no neighbor is aware of other households' consumption patterns.
Figure 2: Grid structure

Each household responds to the day-ahead price signal and sends the aggregate energy consumption of all devices in its network (EV and EWH). As mentioned above, the loads that can participate in the bidding process are both shiftable and TCA loads. To narrow the scope, EVs are selected among the shiftable loads, and electric water heaters are selected from the TCA loads. Therefore, in this work, each household responds to price signals to optimize the performance of its electric vehicle and electric water heater. All other loads (including other non-shiftable, shiftable, and TCAs) are removed from the load profile to simplify the problem.
In this hypothetical scenario, where the generation cost function is the same for all hours (since no renewable energy generation exists) and only demand-responsive loads exist, the ideal aggregate load and price signal would be flat across all intervals. However, the current model could easily be extended to include non-interruptible loads or renewable energy sources.
### Individual appliances' response to the price signal and their benefit
As mentioned in the previous section, only two types of appliances are considered for this simulation. It was also mentioned that loads are categorized into three main types: (1) non-shiftable, (2) shiftable, and (3) TCAs. Non-shiftable loads do not respond to price, and their consumption is fixed; therefore, to simplify the problem and focus on the claims of this paper, non-shiftable loads are removed from the load profile. In this section, we focus on the performance of an individual EV and EWH and show how they selfishly try to maximize their benefit by responding to the price signal and running the optimizations proposed in Section 5.1.
#### 6.3.1 EWH Response
Figure 3 shows the EWH response to the price signal sent from the coordinator for one round of optimization. Based on this price and a prediction of the user's hot-water consumption, the EWH finds its self-benefiting optimal schedule. The energy usage, which is the EWH's decision variable, is sent to the aggregator as the energy bid on behalf of the EWH. The price signal is shown in black in the third panel of the figure. The EWH also has a prediction of the user's energy withdrawal; this prediction can be obtained by analyzing historical data with machine-learning methods. In this work, the prediction is considered perfect (not probabilistic) and is shown with dotted black lines in the middle panel of the figure. The figure shows the main variables resulting from solving the optimization problem explained in Section 5.1.1: the first panel shows the water-heater temperature that results from the predicted withdrawals, and the pink dotted line in the middle panel represents the energy input that the EWH plans to use. This is the variable that is sent to the coordinator after each round of solving the problem. A more detailed discussion of the EWH optimization model is provided in [5].
Figure 4: EV response to broadcasted price
Figure 3: EWH response to the broadcasted price
#### 6.3.2 EV Response
Figure 4 shows the response of the electric vehicle to the price signal. The gray bars show the energy the EV can absorb in each hour; the intervals where this availability is zero mean that the vehicle is in use and cannot be charged. The blue bars show how much energy the EV will absorb based on the optimization explained in Section 5.1.2. Similar to the EWH response, the energy input, which is the decision variable of the EV (blue bars), is reported to the coordinator as the energy bid of the EV. For example, this particular EV absorbed energy at full capacity in hours 1, 4, 21, and 24 and charged at one-third of its capacity in hour 5.
### Price and Aggregate Demand Profile
In a problem where the marginal cost function is the same for all hours (since there is no renewable energy source) and there is no fixed load, the ideal aggregate load and price profiles are flat lines, i.e., the peak-to-average ratio (PAR) is one. Therefore, to evaluate the performance of the DW algorithm, we must see how far the resulting aggregate load and price profiles are from flat lines.
Figures 5 and 6 show the price signal and the aggregated load (in response to that price) at step 0 and at the last step, respectively; i.e., each appliance solves its local self-benefiting optimization in response to this price and reports back its scheduled energy usage as an energy bid. As can be observed, the initial demand is accumulated only during specific hours (as mentioned before, only EVs and EWHs are used in this work). This means that a price generated from the expected marginal cost of electricity generation (based on this energy consumption) will no longer represent the true marginal cost of generation, because the PRDs change their behavior to maximize their benefit, which means they will shift their consumption to the cheap hours.
Figure 7: Gap & Generation cost & user payment
Figure 8: PAR & STD for price and aggregate demand
This accumulation causes a high generation cost for the hours that were initially expected to be sparsely used. As can be seen, the initial price is high between 3 and 6 AM, which leads to low energy consumption during these expensive hours, while the load accumulates in the initially cheap hours, where a peak of 54 kW is recorded, causing a high generation cost there. This response means that if the coordinator generated prices based only on the currently expected energy consumption, the algorithm would never converge, and a flip-flop response would be observed.
However, this is where the importance of the DW algorithm shows up. At each optimization step, the DW master problem finds the optimal combination of the previously reported bids. For example, in the third step of the optimization, it interpolates between the first and second bids, which guarantees that the result of the third step is at least as good as that of the first two steps. The same holds for subsequent steps: in the nth step, the algorithm has access to the model's behavior at the previous n-1 points, so it can find an answer that is at least as good as, or better than, the previous step. This process is repeated until no further improvement happens in the system.
Figure 6 shows the price and aggregate demand when the iterative algorithm finishes the optimization. The PAR reduces significantly, and the peak load is reduced from 54 kW to 26 kW. It must be noted that all loads are price responsive in this simulation; in a more realistic scenario, where part of the load does not respond to price, the PAR would be lower than in the case where all loads are price responsive. As shown in Figure 6, the PAR for the price signal reduces significantly from 4.7 to 2.6.
### Progress through iteration
As mentioned before, in each iteration the coordinator sends the price signal to each appliance in each house. Each device then maximizes its benefit by locally solving an optimization problem given the broadcast price signal. When the optimization reaches a certain level, none of the appliances change their bids when the price signal is broadcast, which shows that the network has reached the global optimum, or the Nash equilibrium. Figure 5 shows the progress of the DW algorithm through the iterations. In the initial iterations, the variation is high for both the price and the aggregated demand; the PAR for the price and aggregate demand is around 4.7 at this point. As the iterations progress, the PAR and standard deviation (STD) decrease. At iteration 5 and after, the PAR reduces to approximately 1.1; however, the remaining variation is still significant enough to induce the loads to change their consumption. At iteration 15, the loads no longer change their response to the price, i.e., the algorithm reaches a point where no appliance wants to change its consumption plan (the definition of a Nash equilibrium). This shows that the algorithm has reached its primary goal: to propose a price and energy bid to each individual user such that they do not want to change their behavior. The optimality of the algorithm is discussed in the next section.
### User Payment & Generation Cost & Optimality Gap
The generation cost and aggregate user payments are shown in Figure 7 in red and black, respectively. The user payment is the sum over all intervals of the electricity price times the aggregate consumption for that interval. The generation cost is calculated using formula (20). As shown in Figure 7, the generation cost and user payment are both high at the start of the algorithm. As the negotiation (the iterations of the algorithm) goes forward, the generation cost and user payment decrease until, when the price signal is announced, none of the appliances change their consumption plans (they reach the Nash equilibrium, in game-theoretic terms). The gap, defined through formula (31), is shown in blue in Figure 7. The gap shows the distance between the best possible answer and the best-known answer; as the algorithm progresses, it updates the value of these two quantities. In the later part of the optimization (iteration 15 for this specific simulation), the distance between these two quantities (the gap) stays fixed, and no further improvement happens (the optimal point). Equivalently, as mentioned in the previous section, the algorithm reaches the Nash equilibrium at iteration 15.
## 7 Conclusion
With the increased share of renewables and distributed generation in power systems, there is a growing need for coordination algorithms that find an equilibrium between generation and consumption. One effective way to promote the goals of the smart-grid paradigm is to create a market for these services. Therefore, in this paper, a market-based coordination algorithm was proposed. However, coordination of PRDs can be very challenging due to the rapid change of the PRDs' energy consumption in response to even infinitesimal price signals.
|
2303.17733
|
Physics of Automated-Driving Vehicular Traffic
|
We have found that a variety of phase transitions occurring between three
traffic phases (free flow (F), synchronized flow (S), and wide moving jam (J))
determine the spatiotemporal dynamics of traffic consisting of 100%
automated-driving vehicles moving on a two-lane road with an on-ramp
bottleneck. This means that three-phase traffic theory is a common framework
for the description of traffic states independent of whether human-driving or
automated-driving vehicles move in vehicular traffic. To prove this, we have
studied automated-driving vehicular traffic with the use of classical Helly's
model (1959) widely applied for automated vehicle motion. Although dynamic
rules of the motion of automated-driving vehicles in a road lane are
qualitatively different from those of human-driving vehicles, we have revealed
that a free-flow-to-synchronized-flow transition (F$\rightarrow$S transition)
exhibits the nucleation nature, which was observed in empirical field data
measured in traffic consisting of 100% human-driving vehicles. The physics of
the nucleation nature of the F$\rightarrow$S transition in automated-driving
traffic is associated with a discontinuity in the rate of lane-changing that
causes the discontinuity in the rate of over-acceleration. This discontinuous
character of over-acceleration leads to both the existence and self-maintaining
of synchronized flow at the bottleneck in automated-driving vehicular traffic
as well as to the existence at any time instant of a range of highway
capacities between some minimum and maximum capacities. Within the capacity
range, an F$\rightarrow$S transition can be induced; however, when the maximum
capacity is exceeded, then after some time-delay a spontaneous F$\rightarrow$S
transition occurs at the bottleneck. The phases F, S, and J can coexist each
other in space and time.
|
Boris S. Kerner
|
2023-03-30T22:22:32Z
|
http://arxiv.org/abs/2303.17733v1
|
# Physics of Automated-Driving Vehicular Traffic
###### Abstract
We have found that a variety of phase transitions occurring between three traffic phases (free flow (F), synchronized flow (S), and wide moving jam (J)) determine the spatiotemporal dynamics of traffic consisting of 100% automated-driving vehicles moving on a two-lane road with an on-ramp bottleneck. This means that three-phase traffic theory is a common framework for the description of traffic states independent of whether human-driving or automated-driving vehicles move in vehicular traffic. To prove this, we have studied automated-driving vehicular traffic with the use of classical Helly's model (1959) widely applied for automated vehicle motion. Although dynamic rules of the motion of automated-driving vehicles in a road lane are qualitatively different from those of human-driving vehicles, we have revealed that a free-flow-to-synchronized-flow transition (F\(\rightarrow\)S transition) exhibits the nucleation nature, which was observed in empirical field data measured in traffic consisting of 100% human-driving vehicles. The physics of the nucleation nature of the F\(\rightarrow\)S transition in automated-driving traffic is associated with a discontinuity in the rate of lane-changing that causes the discontinuity in the rate of over-acceleration. This discontinuous character of over-acceleration leads to both the existence and self-maintaining of synchronized flow at the bottleneck in automated-driving vehicular traffic as well as to the existence at any time instant of a range of highway capacities between some minimum and maximum capacities. Within the capacity range, an F\(\rightarrow\)S transition can be induced; however, when the maximum capacity is exceeded, then after some time-delay a spontaneous F\(\rightarrow\)S transition occurs at the bottleneck. The phases F, S, and J can coexist with each other in space and time.
pacs: 89.40.-a, 47.54.-r, 64.60.Cn, 05.65.+b
## I Introduction
In traffic of human-driving vehicles, traffic breakdown, which is a transition from free flow to congested traffic, occurs mostly at bottlenecks. Already in the 1950s-1960s, two classes of models for traffic breakdown were introduced:
(i) In the classical Lighthill-Whitham-Richards (LWR) model [1; 2], it is assumed that there is a fundamental diagram for traffic flow at a highway bottleneck; the maximum flow rate at the fundamental diagram is equal to highway capacity: If the flow rate upstream of a bottleneck exceeds the capacity, traffic breakdown occurs; otherwise, no traffic breakdown can occur at the bottleneck (see, e.g., [3; 4; 5; 6; 7]).
(ii) In 1958, Herman, Gazis, Montroll, Potts, Rothery and Chandler from General Motors (GM) Company [8; 9; 10; 11] as well as Kometani and Sasaki [12; 13; 14; 15] assumed that traffic breakdown occurs due to traffic flow instability in vehicular traffic. This classical traffic instability was incorporated into a number of traffic flow models (e.g., papers, reviews, and books [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]). As found in [34], the classical traffic instability leads to a phase transition from free flow (F) to a wide moving jam (J) called an F\(\rightarrow\)J transition.
It is commonly assumed that in future vehicular traffic automated-driving vehicles [automated vehicles (AVs)] will play a decisive role (see, e.g., [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]). Automated driving is realized through the use of an automated system in a vehicle that controls the vehicle in traffic flow, as well as through the use of cooperative driving realized through vehicle-to-vehicle communication and/or vehicle-to-infrastructure communication (see, e.g., [50; 51; 52; 53; 54]). In most studies of the effect of automated vehicles on mixed traffic consisting of a random distribution of automated-driving and human-driving vehicles (e.g., see [55; 56; 57; 58; 59]), the motion of human-driving vehicles is described with the use of the above-mentioned standard traffic flow models [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33].
However, from a study of empirical field traffic data it was found that real traffic breakdown is a transition from free flow (F) to synchronized flow (S), called an F\(\rightarrow\)S transition, that occurs in metastable free flow with respect to an F\(\rightarrow\)S transition at a bottleneck [90; 91; 92] (see for a review [93; 94; 95; 96; 97]): The F\(\rightarrow\)S transition (traffic breakdown) exhibits the empirical nucleation nature (Fig. 1). The LWR theory [1; 2; 3; 4; 5; 6; 7] cannot explain the nucleation nature of real traffic breakdown. The classical traffic instability [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 34] that leads to the F\(\rightarrow\)J transition [21; 22; 23; 24; 25; 26; 27; 34] also cannot explain real traffic breakdown at highway bottlenecks [130].
To explain the empirical nucleation nature of traffic breakdown (F\(\rightarrow\)S transition), the author introduced three-phase traffic theory [90; 91; 92; 93; 94; 95; 96; 97]. The three-phase traffic theory is a framework for the description of empirical traffic data in three phases: Free flow (F), synchronized flow (S) and wide moving jam (J); the traffic phases S and J belong to congested traffic. The first implementations of the three-phase traffic theory in mathematical traffic flow models have been made in [100; 101]. These stochastic models have been further developed for different applications (see, e.g., [102]). Over time, other traffic flow models, which incorporate hypotheses of the three-phase traffic theory, have also been developed (see, e.g., [103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124]). With the use of a
microscopic three-phase traffic model for human-driving vehicles, the effect of a small share of automated vehicles on traffic breakdown in mixed traffic at bottlenecks has been studied in [125].
A basic hypothesis of the three-phase traffic theory is that in some traffic situations vehicle acceleration called _over-acceleration_ exhibits a _discontinuous character_ (Fig. 2): In synchronized flow, the probability of over-acceleration is considerably lower than it is in free flow [90; 91; 93][131]. It has been shown that the discontinuous character of over-acceleration causes a metastability of free flow with respect to the F\(\rightarrow\)S transition; in its turn, this metastability explains the empirical nucleation nature of traffic breakdown observed in measured field traffic data. The three-phase traffic theory has been initially created for the description of empirical _human-driving_ vehicular traffic [90; 91; 92; 93; 94; 95; 96].
The objective of this paper is to show that the spatiotemporal dynamics of traffic consisting of 100% automated-driving vehicles is described in the framework of three-phase traffic theory. It should be emphasized that dynamic rules of motion of automated vehicles in a road lane can be developed that are totally different from the real dynamic behavior of human-driving vehicles. Therefore, a question can arise:
* Why should the three-phase traffic theory describe spatiotemporal phase transitions in traffic flow consisting of 100% automated vehicles, whose dynamic rules of motion in a road lane can be totally different from the real dynamic behavior of human-driving vehicles?
To answer this question, we should recall that one of the mechanisms of over-acceleration exhibiting the discontinuous character (Fig. 2) is vehicle acceleration through lane-changing to a faster lane on a multi-lane road [90; 91; 93][132]. Either a human-driving or automated-driving vehicle changes to a neighboring target lane if (i) some _incentive conditions_ for lane changing (e.g., the vehicle can pass the preceding vehicle and/or move faster in the target lane) _and_ (ii) some _safety conditions_ for lane-changing are satisfied, at which no collisions between vehicles can occur. Thus, if the discontinuous character of over-acceleration due to lane-changing to a faster lane (Fig. 2) is realized for human-driving vehicles, it should also be realized for automated vehicles: The discontinuous character of over-acceleration can be assumed to be a universal
Figure 1: Empirical nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) at bottlenecks in human-driving vehicular traffic; traffic data were measured with road detectors installed along road sections [90; 91; 93; 98]: (a) Speed data in space and time presented with averaging method of [99]: A moving synchronized flow pattern (MSP) that has emerged at downstream bottleneck (B-down) while propagating upstream induces F\(\rightarrow\)S transition (induced traffic breakdown) at upstream on-ramp bottleneck (B). (b) One of the empirical waves (black colored waves) of decrease in the average speed caused by slow moving vehicles (moving bottleneck) while propagating downstream in free flow acts as a nucleus for spontaneous F\(\rightarrow\)S transition (spontaneous traffic breakdown) at bottleneck (B) when the speed wave propagates through bottleneck B1. Adapted from [96].
Figure 2: Discontinuous character of over-acceleration [90; 91; 93]: (a) Qualitative presentation of over-acceleration probability during a given time interval. (b) Equivalent presentation of (a) as a discontinuous dependence of the mean time delay in over-acceleration on the flow rate; F and S are states of free flow and synchronized flow, respectively. Adapted from [93; 96].
physical feature of vehicular traffic.
The paper is organized as follows: In Sec. II, we consider a microscopic model of automated-driving vehicular traffic on a two-lane road and study the physics of the nucleation nature of the F\(\rightarrow\)S transition at a bottleneck. The existence of a range of highway capacities at any time instant is the subject of Sec. III. In Sec. IV, a generalization of nucleation features of the F\(\rightarrow\)S transition in automated-driving traffic is made. Transitions between the three phases F, S, and J in automated-driving traffic are studied in Sec. V. In discussion (Sec. VI), we show that the basic result about the nucleation nature of the F\(\rightarrow\)S transition at the bottleneck remains for string-unstable automated-driving traffic and even if a different model for automated-driving vehicles is used.
## II Physics of metastability of automated-driving vehicular traffic at bottleneck with respect to F\(\rightarrow\)S transition
### Model of automated-driving vehicular traffic on two-lane road with on-ramp bottleneck
We study a model of vehicular traffic consisting of 100% identical automated vehicles moving on a two-lane road with an on-ramp bottleneck. We assume that the control over an automated vehicle moving in a road lane is realized through an adaptive cruise control system (ACC) that is described by a classical model in which the acceleration (deceleration) \(a\) of the automated vehicle is determined by the space gap to the preceding vehicle \(g=x_{\ell}-x-d\) and the relative speed \(\Delta v=v_{\ell}-v\) measured by the automated vehicle as well as by some optimal space gap \(g_{\rm opt}\) between the automated vehicle and the preceding automated vehicle (see, e.g., [36; 37; 38; 39; 40; 46; 47; 48]):
\[a=K_{1}(g-g_{\rm opt})+K_{2}\Delta v, \tag{1}\]
where \(x\) and \(v\) are the coordinate and the speed of the automated vehicle, \(x_{\ell}\) and \(v_{\ell}\) are the coordinate and the speed of the preceding automated vehicle, \(d\) is the vehicle length; here and below \(v\), \(v_{\ell}\), and \(g\) are time-functions; \(K_{1}\) and \(K_{2}\) are constant coefficients of vehicle adaptation;
\[g_{\rm opt}=v\tau_{\rm d}, \tag{2}\]
\(\tau_{\rm d}\) is a desired time headway of the automated vehicle to the preceding automated vehicle. The classical model (1), (2) that is currently used in most studies of automated driving in a road lane [36; 37; 38; 39; 40; 46; 47; 48] is related to Helly's car-following model [126]. The motion of the automated vehicle in a road lane is found under the conditions \(0\leq v\leq v_{\rm free}\) from the solution of the equations \(dv/dt=a\), \(dx/dt=v\)[133], where the maximum speed (in free flow) \(v_{\rm free}\) is a constant. There can be string instability of a long enough platoon of automated vehicles (1), (2) [36; 37; 38; 39; 40; 46; 47; 48]. As found by Liang and Peng [38], the coefficients \(K_{2}\) and \(K_{1}\) in (1) can be chosen to satisfy the condition for string stability
\[K_{2}>(2-K_{1}\tau_{\rm d}^{2})/2\tau_{\rm d}. \tag{3}\]
In the main text of the paper (Secs. II-V), we consider only automated vehicles whose parameters satisfy condition (3) for string stability[134].
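As an illustration, the following minimal Euler-integration sketch propagates a leader disturbance through a single-lane platoon governed by (1)-(2), using the vehicle parameters listed in the caption of Fig. 3; the time step, platoon size, and the leader's speed dip are illustrative assumptions. With these parameters, condition (3) holds (\(0.9>0.85\)), so the disturbance decays along the platoon.

```python
import numpy as np

# Vehicle parameters from the caption of Fig. 3 (SI units)
K1, K2, tau_d, v_free, d_len = 0.3, 0.9, 1.0, 120 / 3.6, 7.5
dt, n_veh, steps = 0.1, 20, 3000                 # illustrative simulation settings

# String-stability condition (3) of Liang and Peng
assert K2 > (2 - K1 * tau_d**2) / (2 * tau_d)

x = -np.arange(n_veh) * (v_free * tau_d + d_len)  # equilibrium spacing: g = v * tau_d
v = np.full(n_veh, v_free)

for step in range(steps):
    a = np.zeros(n_veh)
    for i in range(1, n_veh):
        g = x[i - 1] - x[i] - d_len               # space gap to the preceding vehicle
        a[i] = K1 * (g - v[i] * tau_d) + K2 * (v[i - 1] - v[i])   # Helly model (1), (2)
    v[0] = v_free - 5.0 if 100 <= step < 200 else v_free          # leader's brief slowdown
    v[1:] = np.clip(v[1:] + a[1:] * dt, 0.0, v_free)              # 0 <= v <= v_free
    x += v * dt

print("last-vehicle speed after the disturbance (m/s):", round(float(v[-1]), 2))
```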
We use incentive lane changing rules from the right to left lane R\(\rightarrow\)L (4) and from the left to right lane L\(\rightarrow\)R (5) as well as safety conditions (6) known for human-driving vehicles (see, e.g., [127])
\[R\to L:\ v^{+}(t)\geq v_{\ell}(t)+\delta_{1}\ \ {\rm and}\ \ v(t)\geq v_{\ell}(t), \tag{4}\] \[L\to R:\ v^{+}(t)\geq v_{\ell}(t)+\delta_{2}\ \ {\rm or}\ \ v^{+}(t)\geq v(t)+\delta_{2}, \tag{5}\] \[g^{+}(t)\geq v(t)\tau_{2},\quad g^{-}(t)\geq v^{-}(t)\tau_{1}, \tag{6}\]
at which the automated vehicle changes to the faster target lane with the objective of passing a slower automated vehicle in the current lane, provided that the time headways to the preceding and following vehicles in the target lane are not shorter than the given safety time headways \(\tau_{1}\) and \(\tau_{2}\). In (4)-(6), superscripts \(+\) and \(-\) denote, respectively, the preceding and the following vehicles in the target lane; \(\tau_{1}\), \(\tau_{2}\), \(\delta_{1}\), \(\delta_{2}\) are positive constants.
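A direct transcription of rules (4) and (6) for a single R\(\rightarrow\)L lane-changing decision (the L\(\rightarrow\)R rule (5) is analogous); the default parameter values are those listed in the caption of Fig. 3, and the function name is hypothetical.

```python
def can_change_right_to_left(v, v_l, v_plus, v_minus, g_plus, g_minus,
                             delta1=1.0, tau1=0.6, tau2=0.2):
    """Incentive rule (4) and safety rule (6) for an R->L lane change.
    v, v_l: own and preceding-vehicle speeds in the current (right) lane;
    v_plus, v_minus, g_plus, g_minus: speeds of and gaps to the preceding (+)
    and following (-) vehicles in the target (left) lane."""
    incentive = (v_plus >= v_l + delta1) and (v >= v_l)   # eq. (4)
    safety = (g_plus >= v * tau2) and (g_minus >= v_minus * tau1)   # eq. (6)
    return incentive and safety
```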
Open boundary conditions are applied. At the beginning of the two-lane road \(x=0\) vehicles are generated one after another in each of the lanes of the road at time instants \(t^{(k)}=k\tau_{\rm in}\), \(k=1,2,\ldots\), where \(\tau_{\rm in}=1/q_{\rm in}\), \(q_{\rm in}\) is a given time-independent flow rate per road lane. The initial vehicle speed is equal to \(v_{\rm free}\). After a vehicle has reached the end of the road \(x=L\) it is removed. Before this occurs, the farthest downstream vehicle maintains its speed and lane.
In the on-ramp model, there is a merging region of length \(L_{\rm m}\) in the right road lane that begins at road location \(x=x_{\rm on}\) within which automated vehicles can merge from the on-ramp. Vehicles are generated at the on-ramp one after another at time instants \(t^{(m)}=m\tau_{\rm on}\), \(m=1,2,\ldots\), where \(\tau_{\rm on}=1/q_{\rm on}\), \(q_{\rm on}\) is the on-ramp inflow rate. To reduce a local speed decrease occurring through the vehicle merging at the on-ramp bottleneck, as assumed for many known cooperative automated driving scenarios, automated vehicles merge with the speed of the preceding vehicle \(v^{+}\) at a middle location \(x=(x^{+}+x^{-})/2\) between the preceding and following vehicles in the right lane, when the space gap between the vehicles exceeds some safety value \(g_{\rm target}^{\rm(min)}=\lambda_{\rm b}v^{+}+d\), i.e., some safety condition \(x^{+}-x^{-}-d>g_{\rm target}^{\rm(min)}\) should be satisfied. In accordance with these merging conditions, the space gap for a vehicle merging between each pair of consecutive vehicles in the right road lane is checked, starting from the upstream boundary of the merging region. If there is such a pair of consecutive vehicles, the vehicle merges onto the right road lane; if there is no pair of consecutive vehicles, for which the safety condition is satisfied at the current time step, the procedure is repeated at the next time step, and so on.
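The merging rule can be sketched as a scan over consecutive vehicle pairs in the right lane; this simplified helper ignores the bounds of the merging region and uses the \(d\) and \(\lambda_{\rm b}\) values from the caption of Fig. 3 (the function name and data layout are hypothetical).

```python
def find_merge_slot(right_lane, d_len=7.5, lam_b=0.3):
    """Scan consecutive vehicle pairs, ordered from upstream to downstream as
    (x, v) tuples, and return the first mid-gap merge position and speed where
    x+ - x- - d > g_target_min = lam_b * v+ + d; return None if no pair qualifies."""
    for (x_minus, _v_minus), (x_plus, v_plus) in zip(right_lane, right_lane[1:]):
        g_target_min = lam_b * v_plus + d_len
        if x_plus - x_minus - d_len > g_target_min:
            return 0.5 * (x_plus + x_minus), v_plus   # merge location, merge speed = v+
    return None
```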
When free flow is realized at the bottleneck, we have found the known result that, due to R\(\rightarrow\)L lane-changing, the on-ramp inflow is distributed between the two lanes, which causes local speed decreases in both the right and left road lanes at the bottleneck (Fig. 3).
### Free flow metastability at bottleneck
As mentioned, rules of vehicle motion of Sec. II.1 as well as the occurrence of local speed decreases in both road lanes at the bottleneck in free flow are known in vehicular traffic theory. Nevertheless, we have revealed that the free flow state at the bottleneck shown in Fig. 3 is in a metastable state with respect to an F\(\rightarrow\)S transition.
To prove this result, at a time instant \(T_{\rm ind}\) we have disturbed the free flow state at the bottleneck shown in Fig. 3 through the application of a time-limited on-ramp inflow impulse \(\Delta q_{\rm on}\) of some duration \(\Delta t\) (Fig. 4): (i) During the time interval \(0\leq t<T_{\rm ind}\), the on-ramp inflow rate \(q_{\rm on}\) is the same as that in Fig. 3 and, therefore, the same free flow state is realized; (ii) during the impulse \(T_{\rm ind}\leq t\leq T_{\rm ind}+\Delta t\), the on-ramp inflow rate is increased to a large enough value \(q_{\rm on}+\Delta q_{\rm on}\) at which traffic congestion is realized at the bottleneck; (iii) at time \(t>T_{\rm ind}+\Delta t\), although the on-ramp inflow rate has been reduced to its initial value \(q_{\rm on}\), the free flow state does not return at the bottleneck; instead, congested traffic persists there. The downstream front of the induced congested traffic is fixed at the bottleneck, while the upstream front of the congested traffic continuously propagates upstream (Fig. 4). In accordance with the phase definitions made in three-phase traffic theory [93], the induced congested traffic belongs to the synchronized flow phase of automated-driving vehicular traffic. Thus, at the same on-ramp inflow rate \(q_{\rm on}\) there can be either a free flow state or a synchronized flow state at the bottleneck, i.e., free flow in Fig. 3 is indeed in a metastable state with respect to an F\(\rightarrow\)S transition at the bottleneck.
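The time-limited inflow impulse used to probe this metastability can be written as a simple rate schedule; the numbers below are those given in the captions of Figs. 3 and 4 (\(q_{\rm on}=720\) vehicles/h, \(\Delta q_{\rm on}=180\) vehicles/h, \(T_{\rm ind}=30\) min, \(\Delta t=2\) min), while the function name is hypothetical.

```python
def on_ramp_inflow(t, q_on=720.0, dq_on=180.0, T_ind=30 * 60.0, dt_imp=2 * 60.0):
    """On-ramp inflow rate (vehicles/h) at time t (s): the base rate q_on plus a
    time-limited impulse dq_on applied for T_ind <= t <= T_ind + dt_imp (Fig. 4)."""
    return q_on + dq_on if T_ind <= t <= T_ind + dt_imp else q_on
```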
### Discontinuity in the rate of over-acceleration through lane-changing
To explain the physics of the free flow metastability with respect to the F\(\rightarrow\)S transition (Sec. II.4), we should first explain here that there is a _discontinuity_ in the rate of R\(\rightarrow\)L lane-changing denoted by \(R_{\rm RL}\)[135]. The discontinuity in the rate of R\(\rightarrow\)L lane-changing is realized due to the F\(\rightarrow\)S transition, i.e., when free flow transforms into synchronized flow. Examples of R\(\rightarrow\)L lane-changing in free flow and synchronized flow are shown, respectively, in Figs. 5 and 6 through the use of dashed vertical lines R\(\rightarrow\)L. In free flow occurring during time \(0\leq t<T_{\rm ind}\), we have found \(R_{\rm RL}\approx 6.1\) min\({}^{-1}\), whereas in synchronized flow, which occurs at the bottleneck at \(t>T_{\rm ind}+\Delta t\), the R\(\rightarrow\)L lane-changing rate reduces sharply to \(R_{\rm RL}\approx 2.8\) min\({}^{-1}\).
Figure 3: Simulations with model of Sec.II.1 of the occurrence of local speed decrease in free flow on two-lane road at bottleneck: Speed in space and time in the right lane (a) and the left lane (b). \(q_{\rm in}=2571\) (vehicles/h)/lane, \(q_{\rm on}=720\) vehicles/h. Parameters of automated vehicles: \(\tau_{\rm d}=1\) s, \(K_{1}=0.3\)\(s^{-2}\), \(K_{2}=0.9\)\(s^{-1}\), \(v_{\rm free}=120\) km/h, \(d=7.5\) m. Lane-changing parameters: \(\delta_{1}=1\) m/s, \(\delta_{2}=5\) m/s, \(\tau_{1}=0.6\) s, \(\tau_{2}=0.2\) s. Road and on-ramp parameters: road length \(L=8\) km, \(x_{\rm on}=6\) km, \(L_{\rm m}=0.3\) km, \(\lambda_{\rm b}=0.3\) s.
Figure 4: Proof of the metastability of free flow state shown in Fig. 3 in automated-driving vehicular traffic moving on two-lane road with bottleneck: Speed in space and time in the right lane (a) and left lane (b). Parameters of on-ramp inflow-rate impulse inducing F\(\rightarrow\)S transition at bottleneck: \(T_{\rm ind}=30\) min, \(\Delta q_{\rm on}=180\) vehicles/h, \(\Delta t=2\) min. Other model parameters are the same as those in Fig. 3.
To explain the abrupt reduction of the R\(\rightarrow\)L lane-changing rate \(R_{\rm RL}\) occurring due to the F\(\rightarrow\)S transition, we note that in synchronized flow the mean time headway between vehicles \(\tau_{\rm mean}^{\rm(syn)}\) decreases, becoming close to \(\tau_{\rm d}=1\) s. At this short time headway, the safety conditions for lane changing (6) are more difficult to satisfy than in free flow, for which \(\tau_{\rm mean}^{\rm(free)}\approx 1.175\) s [136]. The difference in the values of \(R_{\rm RL}\) in free flow and synchronized flow can already be seen from a comparison of two fragments of vehicle trajectories in the vicinity of the on-ramp merging region, shown for free flow in Fig. 5 [137] and for synchronized flow in Fig. 6.
R\(\rightarrow\)L lane-changing of a vehicle that has initially decelerated in the right lane (for example, vehicle 2-right in Figs. 5 (a, c) and vehicle 6-right in Figs. 6 (a, c) have decelerated before R\(\rightarrow\)L lane-changing) leads to the acceleration of the vehicle in the left lane. Indeed, in free flow, vehicle 2-left in Figs. 5 (b, d) accelerates after R\(\rightarrow\)L lane-changing. In synchronized flow, vehicle 6-left in Figs. 6 (b, d) also accelerates after R\(\rightarrow\)L lane-changing. The vehicle acceleration under consideration is solely determined by R\(\rightarrow\)L lane-changing of the vehicle. Thus, the rate of the vehicle acceleration denoted by \(R_{\rm OA}\), which is caused by R\(\rightarrow\)L lane-changing, is given by formula
\[R_{\rm OA}=R_{\rm RL}. \tag{7}\]
Thus, vehicle acceleration caused by R\(\rightarrow\)L lane-changing exhibits the discontinuous character: In accordance with (7), there is a discontinuity in the rate of vehicle acceleration \(R_{\rm OA}\) when free flow transforms into synchronized flow.
Figure 5: Continuation of Figs. 3 and 4. (a, b) Simulated vehicle trajectories within local speed decrease in free flow at bottleneck in the right lane (a) and left lane (b) at time \(t<T_{\rm ind}+\Delta t\). (c, d) Location-functions of speed of vehicle 2 labeled by “2-right” in the right lane (c) and by “2-left” in left lane (d) in (a, b). R\(\rightarrow\)L lane-changing of vehicle 2 is marked by dashed vertical lines R\(\rightarrow\)L.
Figure 6: Continuation of Fig. 4. (a, b) Simulated vehicle trajectories in synchronized flow at bottleneck in the right lane (a) and left lane (b) at time \(t>T_{\rm ind}+\Delta t\). (c, d) Location-functions of speed of vehicle 6 labeled by “6-right” in the right lane (c) and by “6-left” in left lane (d) in (a, b). R\(\rightarrow\)L lane-changing of vehicle 6 is marked by dashed vertical lines R\(\rightarrow\)L.
exhibits a discontinuous character: in accordance with (7), there is a discontinuity in the rate of vehicle acceleration \(R_{\rm OA}\) when free flow transforms into synchronized flow.
In the next section (Sec. II.4), we explain that the discontinuity in the rate of vehicle acceleration \(R_{\rm OA}\) caused by R\(\rightarrow\)L lane-changing leads to the metastability of free flow with respect to the F\(\rightarrow\)S transition (Fig. 4); in three-phase traffic theory, such vehicle acceleration has been called _over-acceleration_[93; 96]. Therefore, the acceleration of vehicle 2-left in free flow (Figs. 5 (b, d)) as well as the acceleration of vehicle 6-left in synchronized flow (Figs. 6 (b, d)) are examples of over-acceleration; this explains the use of the term _over-acceleration_ in Figs. 5 and 6. We also consider the mean time delay in over-acceleration, denoted by \(T_{\rm OA}\), which is equal to \(1/R_{\rm OA}\); in free flow \(T_{\rm OA}\approx 9.84\) s, whereas in synchronized flow \(T_{\rm OA}\approx 21.4\) s. The discontinuities in the rate \(R_{\rm OA}\) and the mean time delay \(T_{\rm OA}\) of over-acceleration, i.e., the discontinuous character of over-acceleration found here for automated-driving vehicular traffic, are in agreement with the three-phase traffic theory for human-driving traffic (Fig. 2).
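As a minimal illustration of these quantities (not part of the paper's simulation code), the following sketch estimates the lane-changing rate \(R_{\rm RL}\), the over-acceleration rate \(R_{\rm OA}\) via Eq. (7), and the mean time delay \(T_{\rm OA}=1/R_{\rm OA}\) from a hypothetical list of R\(\rightarrow\)L lane-changing times; the event counts are chosen only so that the rates match the values quoted above.

```python
# Sketch (illustrative, not the paper's code): R_RL, R_OA via Eq. (7), and
# T_OA = 1/R_OA estimated from a list of R->L lane-changing times.

def over_acceleration_stats(lane_change_times_min, window_min):
    """lane_change_times_min: times (in minutes) of R->L lane changes observed
    during a measurement window of length window_min (minutes)."""
    r_rl = len(lane_change_times_min) / window_min    # lane-changing rate, 1/min
    r_oa = r_rl                                       # Eq. (7): R_OA = R_RL
    t_oa = 60.0 / r_oa if r_oa > 0 else float("inf")  # mean time delay, seconds
    return r_oa, t_oa

# 61 events in 10 min -> R_OA = 6.1 min^-1, T_OA ~ 9.8 s (free flow);
# 28 events in 10 min -> R_OA = 2.8 min^-1, T_OA ~ 21.4 s (synchronized flow).
print(over_acceleration_stats(list(range(61)), 10.0))
print(over_acceleration_stats(list(range(28)), 10.0))
```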
### Spatiotemporal competition of speed adaptation with over-acceleration
There is a spatiotemporal competition between over-acceleration and speed adaptation. In this competition, there is a tendency toward free flow and an opposing tendency toward synchronized flow. The tendency toward free flow acts through over-acceleration. The opposing tendency toward synchronized flow acts through speed adaptation.
Speed adaptation is vehicle deceleration occurring when a vehicle approaches a slower moving preceding vehicle and the following vehicle cannot pass it. We should distinguish speed adaptation in the right lane and speed adaptation in the left lane. This is because speed adaptation in the left lane is caused by a dual role of R\(\rightarrow\)L lane-changing.
#### ii.4.1 Tendency to free flow through over-acceleration
In free flow (Fig. 5) and synchronized flow (Fig. 6), over-acceleration through lane-changing to the left lane permits the following vehicle remaining in the right lane to accelerate. When free flow is currently at the bottleneck, the tendency toward free flow through over-acceleration maintains the free flow state. Indeed, due to over-acceleration of vehicle 2 through its changing to the left lane ("2-left (over-acceleration)" in Fig. 7 (a)), the following vehicle 3 remaining in the right lane, whose trajectory is shown in Fig. 5 (a), accelerates (labeled by "3, acceleration" in Fig. 7 (a)). When synchronized flow is currently at the bottleneck, the tendency caused by over-acceleration tries to transform synchronized flow into a free flow state. For example, due to over-acceleration of vehicle 6 through its changing to the left lane ("6-left (over-acceleration)" in Fig. 7 (b)), the following vehicle 7 remaining in the right lane, whose trajectory is shown in Fig. 6 (a), accelerates (labeled by "7, acceleration" in Fig. 7 (b)).
#### ii.4.2 Tendency to synchronized flow through speed adaptation in the right lane
When free flow is at the bottleneck, the tendency caused by speed adaptation tries to transform free flow into synchronized flow (Fig. 7 (c)). A vehicle merging from the on-ramp (vehicle "on" in Figs. 5 (a) and 7 (c)) forces the following vehicle 1, whose trajectory is shown in Fig. 5
Figure 7: Simulations of spatiotemporal competition between over-acceleration and speed adaptation. Time-functions of speed for vehicle trajectories presented in Figs. 5 (a, b) and 6 (a, b) labeled by the same numbers, respectively: (a, b) Tendency to free flow. (c, d) Tendency to synchronized flow.
(a) to decelerate ("1, speed adaptation" in Fig. 7 (c)) while adapting the speed to the slower merging vehicle "on".
If synchronized flow is at the bottleneck, the tendency caused by speed adaptation tries to maintain the synchronized flow state (Fig. 7 (d)). A vehicle merging from the on-ramp (vehicle "on-2" in Figs. 6 (a) and 7 (d)) forces the following vehicle 5, whose trajectory is shown in Fig. 6 (a), to decelerate (labeled by "5, speed adaptation" in Fig. 7 (d)).
#### ii.4.3 Tendency to synchronized flow through speed adaptation in the left lane: Dual role of lane-changing
Lane-changing plays a dual role, as follows. In free flow, lane-changing of vehicle 2 leads to over-acceleration ("2-left (over-acceleration)" in Figs. 5 (a, d)). Contrarily, the same lane-changing of vehicle 2 causes speed adaptation in the left lane. Indeed, the following vehicle 4 in the left lane, whose trajectory is shown in Fig. 6 (b), must decelerate ("4, speed adaptation" in Fig. 7 (d)) while adapting its speed to the speed of the slower vehicle 2 that has just changed from the right lane to the left lane.
Speed adaptation caused by the dual role of lane-changing also occurs in synchronized flow. An example is the lane-changing of vehicle 6 ("6-left (over-acceleration)" in Fig. 7 (d)), which forces the following vehicle 9 in the left lane, whose trajectory is shown in Fig. 6 (b), to decelerate ("9, speed adaptation" in Fig. 7 (d)).
#### ii.4.4 Two possible results of competition between over-acceleration and speed adaptation
In Fig. 5, free flow persists at the bottleneck. This means that at the over-acceleration rate \(R_{\rm OA}\approx 6.1\) min\({}^{-1}\) the tendency to free flow through over-acceleration overcomes the tendency to synchronized flow through speed adaptation. The result of the competition between over-acceleration and speed adaptation is the occurrence of the local speed decrease at the bottleneck without the emergence of synchronized flow (Figs. 3 and 4).
Contrarily, in Fig. 6 synchronized flow persists at the bottleneck (labeled by "synchronized flow"). This means that the tendency to synchronized flow through speed adaptation overcomes the tendency to free flow through over-acceleration. This is because the over-acceleration rate \(R_{\rm OA}\approx 2.8\) min\({}^{-1}\) becomes too small in synchronized flow. The competition between speed adaptation and over-acceleration determines the speed in synchronized flow. However, due to the small rate of over-acceleration in synchronized flow this competition cannot cause a return transition from synchronized flow to free flow.
Thus, the cause of the free flow metastability with respect to the F\(\rightarrow\)S transition (Fig. 4) is a spatiotemporal competition between over-acceleration, which exhibits the discontinuous character, and speed adaptation.
### Synchronized flow characteristics
#### ii.5.1 Synchronization of velocities of upstream fronts of synchronized flow in road lanes
The speed in synchronized flow in the right lane (vehicle 10 in Fig. 8) is less than the speed in synchronized flow in the left lane (vehicle 11). However, this speed difference does not lead to different velocities of the upstream fronts of synchronized flow in the left and right lanes: these upstream front velocities are synchronized (the upstream fronts of synchronized flow are labeled by dashed curves "S-up" in Figs. 8 (a, b)).
The physics of this synchronization effect is associated with R\(\rightarrow\)L lane-changing that occurs in the vicinity of the upstream synchronized flow front in the right lane (Fig. 9). While approaching the upstream front of
Figure 8: Continuation of Fig. 6. Features of synchronized flow: (a, b) Vehicle trajectories at \(t>T_{\rm ind}+\Delta t\), i.e., after F\(\rightarrow\)S transition has occurred at the bottleneck. (c, d) Location-functions of speeds for vehicles 10 and 11 in (a, b).
synchronized flow in the right lane, vehicles decelerate (e.g., vehicle 12 in Figs. 9 (a, c)). When the upstream front of synchronized flow in the left lane comes even slightly downstream of the upstream front of synchronized flow in the right lane, free flow is realized in the left lane between these upstream synchronized flow fronts; then, between the fronts, the lane-changing rate \(R_{\rm RL}\) increases. This causes R\(\rightarrow\)L lane-changing of a vehicle decelerating to a synchronized flow speed in the vicinity of the upstream front of synchronized flow in the right lane (an example of R\(\rightarrow\)L lane-changing for vehicle 13 is marked by dashed vertical lines labeled R\(\rightarrow\)L in Fig. 9). Due to the lane-changing of the slowly moving vehicle 13-right to the left lane (vehicle 13-left), the following vehicle 11 in the left lane begins to decelerate more strongly than before the lane change (Fig. 9 (d)). This leads to the synchronization of the upstream front velocities.
#### ii.5.2 Effect of discontinuity in lane-changing rate on flow-rate distribution
In the initial free flow state existing at the bottleneck at \(0\leq t<T_{\rm ind}\) (Fig. 4), R\(\rightarrow\)L lane-changing leads to the nearly full equalization of the flow rates and densities between the road lanes downstream of the bottleneck (left column in Fig. 10 at \(t<T_{\rm ind}=30\) min). After the F\(\rightarrow\)S transition has occurred, the lane-changing rate in synchronized flow at the bottleneck decreases sharply (discontinuity in the lane-changing rate) and, therefore, the flow rates and densities between lanes cannot be equalized. This explains why in free flow downstream of the bottleneck both the density and flow rate are smaller in the left lane than they are, respectively, in the right lane (left column in Fig. 10 at \(t\geq T_{\rm ind}\)).
The discontinuity in the lane-changing rate is also responsible for differences in the averaged speeds, densities, and flow rates in synchronized flow in the right and left lanes upstream of the bottleneck (right column in Fig. 10 at \(t\geq T_{\rm ind}\)).
Figure 10: Continuation of Fig. 4: Time-functions of automated vehicle speed (first line), density (second line), and flow rate (third line) at road location \(x=7\) km [downstream of the bottleneck] (left column) and road location \(x=5.4\) km [upstream of the bottleneck] (right column); curves 1 – right lane, curves 2 – left lane. 10 min averaging time interval at virtual detectors.
Figure 9: Continuation of Fig. 8: Synchronization of velocities of upstream fronts of synchronized flow in the right and left lanes. (a, b) Vehicle trajectories taken from Fig. 8 (a, b). (c, d) Time-functions of speeds along trajectories marked in (a, b) by the same numbers, respectively.
### Three-phase traffic theory as common framework for human-driving and automated-driving traffic
Simulations of automated-driving vehicular traffic (Figs. 11 (a, b)) show the empirical nucleation features of the F\(\rightarrow\)S transition found in measurements of real human-driving traffic (Figs. 1 (a, b)). Thus, three-phase traffic theory can indeed be considered a common framework for the analysis of the dynamics of human-driving and automated-driving traffic.
A moving synchronized flow pattern (MSP) in Fig. 11 (a) has been induced through the use of an on-ramp inflow impulse at a downstream bottleneck (B-down in Fig. 12). While propagating upstream, the MSP induces the F\(\rightarrow\)S transition at the upstream bottleneck.
To simulate a moving bottleneck (MB) in Fig. 11 (b), we have assumed that there is a single automated vehicle moving in the right lane at a maximum free flow speed \(v_{\rm MB}\) that is less than \(v_{\rm free}\). Already at \(v_{\rm MB}=110\) km/h, which is only 10 km/h less than \(v_{\rm free}\), the slower vehicle acts as the MB (Figs. 11 (b) and 13). We have also assumed that, through the use of cooperative driving, automated vehicles receive information about the location and speed of the MB. Within an MB merging region of length \(L_{\rm M}\), each vehicle moving in the right lane changes to the left lane to pass the MB if safety conditions (6) are satisfied (e.g., see vehicle 1 in Figs. 13 (a, b)); lane-changing rules (4), (5) are not applied within the MB merging region [138]. Other vehicles, for which conditions (6) are not satisfied, have to move at the velocity \(v_{\rm MB}\) behind the MB (trajectories of these vehicles lie within a region between the MB trajectory and a dashed-dotted line in Fig. 13 (a)). Some vehicles moving in the left lane, after they have passed the MB location, change back to the right lane, where they can move at the speed \(v_{\rm free}\) (vehicle 2 in Figs. 13 (a, b)).
The MB causes a speed decrease localized at the MB that moves at the speed \(v_{\rm MB}\) (Fig. 13 (c)). As in human-driving traffic (Fig. 1 (b)), when the local speed decrease at the MB reaches the other local speed decrease at the on-ramp bottleneck, an additional short-time local speed decrease occurs at the bottleneck; this acts as a nucleus for traffic breakdown (F\(\rightarrow\)S transition) at the bottleneck (Figs. 11 (b) and 13).
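A rough sketch of the MB passing rule described above is given below. The placement of the merging region immediately upstream of the MB and the helper flag standing in for safety conditions (6), which are not reproduced in this excerpt, are assumptions made purely for illustration.

```python
# Sketch of the moving-bottleneck (MB) passing rule described in the text.
# Assumptions: the merging region of length L_M is taken to lie immediately
# upstream of the MB, and `safety_ok` stands in for the lane-changing safety
# conditions (6), which are not reproduced in this excerpt.

L_M = 300.0        # m, MB merging region length (0.3 km, Fig. 13)
V_MB = 110 / 3.6   # m/s, speed of the slow vehicle acting as the MB (110 km/h)

def right_lane_action(x_vehicle, x_mb, safety_ok):
    """Decision of a right-lane vehicle approaching the MB at position x_mb."""
    in_merging_region = 0.0 <= (x_mb - x_vehicle) <= L_M
    if in_merging_region and safety_ok:
        return "change to the left lane and pass the MB"
    if in_merging_region:
        return "follow the MB at speed v_MB"
    return "keep lane"

print(right_lane_action(x_vehicle=5800.0, x_mb=6000.0, safety_ok=True))
```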
## III Range of Highway capacities at any time instant
### Minimum and maximum highway capacities
We have found that at any time instant the metastability of free flow in automated-driving vehicular traffic on two-lane road with the bottleneck is realized within a flow rate range
\[C_{\rm min}\leq q_{\rm sum}<C_{\rm max}, \tag{8}\]
where \(q_{\rm sum}=2q_{\rm in}+q_{\rm on}\) is the total flow rate across the road in free flow; \(C_{\rm min}\) and \(C_{\rm max}\) are, respectively, the minimum and maximum highway capacities. The physics of the capacity range (8) is that within this capacity range an F\(\rightarrow\)S transition can be induced at the bottleneck. This result is in accordance with the three-phase traffic theory of human-driving traffic.
The minimum capacity \(C_{\rm min}\) is explained in Fig. 14:
Figure 11: Simulations of automated-driving vehicular traffic that reproduce empirical breakdown nucleation features measured in real human-driving traffic (Fig. 1). Speed data averaged across two-lane road are presented in space and time in free flow (green) and synchronized flow (yellow): (a) A moving synchronized flow pattern (MSP) induced at the downstream bottleneck (B-down) propagates upstream; reaching the upstream on-ramp bottleneck (B) the MSP induces the F\(\rightarrow\)S transition at the bottleneck. (b) A slow moving vehicle (moving bottleneck – MB) while propagating downstream in free flow acts as a nucleus for empirical spontaneous F\(\rightarrow\)S transition at bottleneck B when the MB propagates through bottleneck B. Both bottleneck B-down and bottleneck B are identical with the on-ramp bottleneck used above (Figs. 3–10); more details of simulations are in Figs. 12 and 13.
At a given \(q_{\rm in}\), there is a minimum on-ramp inflow rate denoted by \(q_{\rm on}=q_{\rm on,min}\) at which in an initial free flow at the bottleneck (Fig. 14 (a)) an F\(\rightarrow\)S transition can still be induced (Fig. 14 (b)); the minimum capacity is equal to \(C_{\rm min}=2q_{\rm in}+q_{\rm on,min}\). At the model parameters, the F\(\rightarrow\)S transition leads to the formation of a localized synchronized flow pattern (LSP) at the bottleneck (Fig. 14 (b)). Contrarily, if
\[q_{\rm sum}<C_{\rm min}, \tag{9}\]
no F\(\rightarrow\)S transition can be induced at the bottleneck: synchronized flow induced at the bottleneck dissolves over time (labeled by "dissolving synchronized flow" in Fig. 14 (c)).
When the flow rate \(q_{\rm sum}\) increases, a maximum highway capacity \(C_{\rm max}\) can be reached. The maximum capacity \(C_{\rm max}\) is a total flow rate \(q_{\rm sum}\) that separates two qualitatively different phenomena: (i) When condition (8) is satisfied, then free flow is in a metastable state with respect to the F\(\rightarrow\)S transition at the bottleneck (Figs. 4 and 14 (b)). (ii) When condition
\[q_{\rm sum}>C_{\rm max} \tag{10}\]
is satisfied, then free flow is in an unstable state with respect to a _spontaneous_ F\(\rightarrow\)S transition at the bottleneck (Fig. 15). At a given flow rate \(q_{\rm in}\), the increase in \(q_{\rm sum}\) is achieved through the increase in \(q_{\rm on}\). In this case, the maximum capacity \(C_{\rm max}\) is reached, when the on-ramp inflow rate \(q_{\rm on}\) is equal to some critical value denoted by \(q_{\rm on}=q_{\rm on,max}\), i.e., \(C_{\rm max}=2q_{\rm in}+q_{\rm on,max}\).
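The classification implied by conditions (8)-(10) can be summarized in a short sketch; the default capacity values are those quoted in the captions of Figs. 14 and 16 for \(q_{\rm in}=2571\) (vehicles/h)/lane and serve only as illustrative inputs.

```python
# Sketch: classification of the free-flow state at the bottleneck implied by
# conditions (8)-(10), with q_sum = 2*q_in + q_on.  Default capacities are the
# values quoted in the captions of Figs. 14 and 16 for
# q_in = 2571 (vehicles/h)/lane: C_min = 5792, C_max = 5868 vehicles/h.

def classify_free_flow(q_in, q_on, c_min=5792.0, c_max=5868.0):
    q_sum = 2.0 * q_in + q_on
    if q_sum < c_min:
        return "stable: no F->S transition can be induced, condition (9)"
    if q_sum < c_max:
        return "metastable: an F->S transition can be induced, condition (8)"
    return "unstable: spontaneous F->S transition after a delay T^(B), condition (10)"

print(classify_free_flow(2571, 650))   # q_on = q_on,min  -> metastable
print(classify_free_flow(2571, 740))   # q_on > q_on,max  -> unstable
```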
### Time delay of spontaneous traffic breakdown (spontaneous F\(\rightarrow\)S transition)
There is a time delay of the spontaneous F\(\rightarrow\)S transition at the bottleneck, denoted by \(T^{\rm(B)}\) (Figs. 15 and 16): under condition (10), it has been found that the smaller the difference \(q_{\rm sum}-C_{\rm max}\), the longer the time delay \(T^{\rm(B)}\) (Fig. 16). In the time-delay-flow-rate plane, the condition \(q_{\rm sum}=C_{\rm max}\) determines an asymptote (dashed vertical line in Fig. 16) that separates metastable free flow (left of the asymptote) from unstable free flow with respect to the F\(\rightarrow\)S transition (right of the asymptote) [139].
### Range of discontinuity of over-acceleration rate
Within the flow-rate range (8) there can be either a free flow state or a synchronized flow state at the bottleneck.
Figure 12: Simulations of F\(\rightarrow\)S transition through upstream propagation of MSP to upstream bottleneck that in simplified version is shown in Fig. 11 (a): Speed in space and time in the right lane (a) and left lane (b). \(q_{\rm in}=2571\) (vehicles/h)/lane. Two-lane road with two bottlenecks: Parameters of upstream bottleneck (B) are \(x_{\rm on}=6\) km, \(L_{\rm m}=0.3\) km, \(q_{\rm on}=720\) vehicles/h; parameters of downstream bottleneck (B-down) are \(x_{\rm on}^{\rm(down)}=9\) km, \(L_{\rm m}^{\rm(down)}=0.3\) km, \(q_{\rm on}^{\rm(down)}=0\); road length \(L=10\) km. Parameters of on-ramp inflow impulse at downstream bottleneck B-down applied at \(T_{\rm in}^{\rm(down)}=5\) min are \(\Delta q_{\rm on}^{\rm(down)}=900\) vehicles/h, \(\Delta t^{\rm(down)}=1\) min. Other model parameters are the same as those in Fig. 3.
Figure 13: Simulations of F\(\rightarrow\)S transition occurring due to downstream propagation of MB through the bottleneck that simplified version is shown in Fig. 11 (b): (a, b) Vehicle trajectories in the vicinity of MB in the right lane (a) and left lane (b). (c, d) Speed in space and time in the right lane (c) and left lane (d). \(q_{\rm in}=2571\) (vehicles/h)/lane, \(q_{\rm on}=720\) vehicles/h, \(L_{\rm M}=0.3\) km. Other model parameters are the same as those in Fig. 3.
Under condition \(q_{\rm in}=\)const, the range (8) is equivalent to the on-ramp inflow-rate range (Fig. 17)
\[q_{\rm on,min}\leq q_{\rm on}<q_{\rm on,max}. \tag{11}\]
When the initial state is free flow and \(q_{\rm on}\) increases, then at \(q_{\rm on}>q_{\rm on,max}\) a spontaneous F\(\rightarrow\)S transition occurs with a delay time \(T^{\rm(B)}\) (Figs. 15 and 16). The emergent synchronized flow persists due to the discontinuity in the over-acceleration rate (Sec. II.4): The over-acceleration rate decreases sharply (down-arrow in Fig. 17 (a)), respectively, the mean time delay in over-acceleration increases sharply (up-arrow in Fig. 17 (b)).
When \(q_{\rm on}\) decreases, synchronized flow exists in the range (11). Only when \(q_{\rm on}\) becomes less than \(q_{\rm on,min}\), a return spontaneous S\(\rightarrow\)F transition occurs at the bottleneck; respectively, free flow recovers at the bottleneck. Thus, there is a Z-characteristic for traffic breakdown at the bottleneck that shows stable, metastable, and unstable states of free flow with respect to the F\(\rightarrow\)S transition at the bottleneck (Fig. 17).
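The hysteresis expressed by this Z-characteristic can be caricatured as follows; the thresholds are the values \(q_{\rm on,min}=650\) vehicles/h and \(q_{\rm on,max}=726\) vehicles/h quoted for \(q_{\rm in}=2571\) (vehicles/h)/lane, and the time delay \(T^{\rm(B)}\) of the spontaneous transition is deliberately ignored in this sketch.

```python
# Sketch of the hysteresis behind the Z-characteristic (Fig. 17): the phase at
# the bottleneck while q_on is swept up and then down.  Thresholds are the
# values quoted for q_in = 2571 (vehicles/h)/lane; the time delay T^(B) of the
# spontaneous F->S transition is ignored for simplicity.

Q_ON_MIN, Q_ON_MAX = 650.0, 726.0   # vehicles/h

def sweep_phase(q_on_values, phase="F"):
    history = []
    for q_on in q_on_values:
        if phase == "F" and q_on > Q_ON_MAX:
            phase = "S"    # spontaneous F->S transition (after T^(B) in reality)
        elif phase == "S" and q_on < Q_ON_MIN:
            phase = "F"    # return spontaneous S->F transition
        history.append((q_on, phase))
    return history

up = list(range(600, 781, 30))
down = list(range(780, 599, -30))
print(sweep_phase(up + down))
```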
Figure 16: Continuation of Fig. 15. Dependence of the time delay \(T^{\rm(B)}\) of spontaneous F\(\rightarrow\)S transition at bottleneck on on-ramp inflow rate \(q_{\rm on}\) at the given flow rate \(q_{\rm in}=\) 2571 (vehicles/h)/lane. Calculated values \(q_{\rm on,max}=\) 726 vehicles/h, \(C_{\rm max}=2q_{\rm in}+q_{\rm on,max}=\) 5868 vehicles/h.
Figure 14: Simulations of minimum capacity \(C_{\rm min}=2q_{\rm in}+q_{\rm on,min}\) of free flow at bottleneck. Speed in space and time in the right lane (left column) and in left lane (right column) at different \(q_{\rm on}\) at the same value \(q_{\rm in}=\) 2571 (vehicles/h)/lane as that in Fig. 3: (a) Free flow. (b) Induced traffic breakdown in (a). (c) Induced dissolving synchronized flow. In (a, b), \(q_{\rm on}=q_{\rm on,min}=\) 650 vehicles/h, i.e., \(C_{\rm min}=\) 5792 vehicles/h; in (c), \(q_{\rm on}=\) 630 vehicles/h. In (b, c), as explained in Sec. II.4, on-ramp inflow rate impulse has been applied; parameters of the impulse inducing either F\(\rightarrow\)S transition (induced traffic breakdown) (b) or dissolving synchronized flow (c) at bottleneck are: \(T_{\rm ind}=\) 30 min, \(\Delta t=\) 5 min; \(\Delta q_{\rm on}=\) 250 vehicles/h in (b) and \(\Delta q_{\rm on}=\) 270 vehicles/h in (c). Other model parameters are the same as those in Fig. 3.
Figure 15: Simulations of spontaneous F\(\rightarrow\)S transition at bottleneck. Speed in space and time in the right lane (left column) and in left lane (right column) at different \(q_{\rm on}\) at the same value \(q_{\rm in}=\) 2571 (vehicles/h)/lane as that in Fig. 3: (a) \(q_{\rm on}=\) 729 vehicles/h, \(T^{\rm(B)}=\) 51 min. (b) \(q_{\rm on}=\) 740 vehicles/h, \(T^{\rm(B)}=\) 19.8 min. (c) \(q_{\rm on}=\) 760 vehicles/h, \(T^{\rm(B)}=\) 10 min. (d) \(q_{\rm on}=\) 780 vehicles/h, \(T^{\rm(B)}=\) 5 min. Other model parameters are the same as those in Fig. 3.
### Physics of spontaneous traffic breakdown
The spontaneous F\(\rightarrow\)S transition occurs at \(t=T^{\rm(B)}\) (Sec. III.2) when a _sequence of two R\(\rightarrow\)L lane-changing events_ occurs: one of them is realized at the upstream front of the local speed decrease (vehicle 1 in Figs. 18 (a, b)) and the other occurs at its downstream front (vehicle 2 in Figs. 18 (a, b)). Simultaneously, a drop in the over-acceleration rate \(R_{\rm OA}\) (Fig. 18 (c)) and, correspondingly, a jump in the mean time delay in over-acceleration \(T_{\rm OA}\) are realized (Fig. 18 (d)). As explained in Sec. II.4, this discontinuous behavior of over-acceleration causes the abrupt transformation of the local speed decrease in free flow at the bottleneck into synchronized flow. The boundaries of synchronized flow are given by the upstream synchronized flow front propagating upstream (dashed curve "S-up" in Figs. 18 (a, b)) and the downstream synchronized flow front (dashed-dotted curve "S-down") fixed at the bottleneck.
The physics of the maximum capacity \(C_{\rm max}\) and the time delay \(T^{\rm(B)}\) of spontaneous traffic breakdown is as follows. As found, at \(q_{\rm on}<q_{\rm on,max}\) the minimum speed within the local speed decrease in free flow at the bottleneck hardly depends on time (Fig. 3). Contrarily, at \(q_{\rm on}>q_{\rm on,max}\) (Fig. 15), the minimum speed within the local speed decrease in free flow decreases continuously over time. Indeed, at \(t\ll T^{\rm(B)}\) (Fig. 19) the minimum speeds of vehicles 3 and 4 are considerably larger than the minimum speeds, respectively, of vehicles 5 and 7 moving in free flow at a time that is only about 30 s less than \(t=T^{\rm(B)}\) (Figs. 20 (c, d)). Thus, the maximum capacity \(C_{\rm max}\) separates free flow states at \(q_{\rm on}<q_{\rm on,max}\), in which the local speed decrease at the bottleneck does not grow over time, from free flow states at \(q_{\rm on}>q_{\rm on,max}\), in which the local speed decrease grows continuously over time.
The continuous reduction over time of the minimum speed within the local speed decrease in free flow at the bottleneck must have a limit that can be considered a critical minimum speed: after vehicle "on-3" has merged from the on-ramp, the minimum speeds of vehicles 8 and 9 moving in the right lane become low enough; this causes the sequence of two R\(\rightarrow\)L lane-changing events of vehicles 1 and
Figure 17: Simulated range of the discontinuity in over-acceleration rate (a) and in mean time-delay in over-acceleration (b) as functions of the on-ramp inflow rate \(q_{\rm on}\) at given flow rate \(q_{\rm in}=2571\) (vehicles/h)/lane: Z-characteristics of the F\(\rightarrow\)S transition in automated-driving vehicular traffic on two-lane road with bottleneck. Other model parameters are the same as those in Fig. 3.
Figure 18: Continuation of Fig. 15 (b). Features of spontaneous traffic breakdown: (a, b) Vehicle trajectories in the right lane (a) and left lane (b). (c, d) Time-dependencies of the averaged over-acceleration rate \(R_{\rm OA}\) (c) and the mean time delay in over-acceleration \(T_{\rm OA}\) (d).
2 (Figs. 21 (a, b)). Slow vehicles 1-left and 2-left (Figs. 21 (c, d)) force the following vehicles 11 and 12 moving in the left lane to decelerate strongly. At such a low speed in the left lane, the over-acceleration rate \(R_{\rm OA}\) drops and, correspondingly, the mean time delay in over-acceleration increases sharply (Figs. 18 (c, d)); as a result, speed adaptation overcomes over-acceleration.
It takes some time for the minimum speed within the local speed decrease in free flow to be reduced to the critical speed at which traffic breakdown occurs at the bottleneck. This time interval determines the time delay \(T^{\rm(B)}\) of traffic breakdown (F\(\rightarrow\)S transition). We have found that the more the on-ramp inflow rate \(q_{\rm on}\) exceeds the critical value \(q_{\rm on,max}\), the more quickly the critical minimum speed in free flow at the bottleneck is reached. This explains the decreasing character of the function \(T^{\rm(B)}(q_{\rm on})\) (Fig. 16).
## IV Generalization of nucleation features of F\(\rightarrow\)S transition in automated-driving traffic
Up to now we have used only one chosen set of model parameters to demonstrate that automated-driving traffic does exhibit the basic feature of the three-phase traffic theory - the nucleation character of an F\(\rightarrow\)S transition at the bottleneck. To disclose the physics of this F\(\rightarrow\)S transition, we have studied its features under a change in the on-ramp inflow rate \(q_{\rm on}\) at the bottleneck (Secs. II and III). However, do the basic results of this paper about the nucleation character of the F\(\rightarrow\)S transition at the bottleneck and the existence of a range of highway capacities remain valid in automated-driving vehicular traffic when model parameters are changed?
### Effect of lane-changing model parameters on F\(\rightarrow\)S transition
We have found that as long as new model parameters in lane-changing rules (4)-(6) enable a distribution of on-ramp inflow between road lanes in free flow, all qualitative results presented above remain the same
Figure 19: Continuation of Fig. 15 (b): Vehicle trajectories in free flow at the bottleneck at \(t\ll T^{\rm(B)}\).
Figure 20: Continuation of Fig. 15 (b). (a, b) Vehicle trajectories “on-2”, 5–7 are in free flow at time that is about 30 s less than \(t=T^{\rm(B)}\); trajectories “on-3”, 8–12 are related to the time of traffic breakdown (F\(\rightarrow\)S transition) \(t=T^{\rm(B)}\). (c, d) Comparison of location-functions of speeds for vehicles 3 and 4 taken from Fig. 19 with speeds on trajectories 5 and 7 from (a, b). Vehicles 1 and 2 are, respectively, the same as that in Fig. 18 (a, b). Sequence of two R\(\rightarrow\)L lane-changing effects of vehicles 1 and 2 that causes spontaneous traffic breakdown are labeled by R\(\rightarrow\)L-down and R\(\rightarrow\)L-up, respectively.
ones. Examples are shown in Fig. 22 for symmetric lane-changing parameters \(\delta_{1}=\delta_{2}\) in (4), (5) (Fig. 22 (a)) and for symmetric safety parameters \(\tau_{1}=\tau_{2}\) in (6) (Fig. 22 (b)).
### Diagrams of F\(\rightarrow\)S transition at bottleneck
To understand the nucleation nature of the F\(\rightarrow\)S transition in automated-driving traffic, up to now we have used only one given flow rate in free flow upstream of the bottleneck \(q_{\rm in}=2571\) (vehicles/h)/lane. We have found that the nucleation nature of the F\(\rightarrow\)S transition at the bottleneck remains when \(q_{\rm in}\) changes (Fig. 23). In particular, the maximum capacity \(C_{\rm max}\) is almost independent of \(q_{\rm on}\), whereas the minimum capacity \(C_{\rm min}\) is a decreasing function of \(q_{\rm on}\): the larger the on-ramp inflow rate \(q_{\rm on}\), the larger the capacity range \(C_{\rm max}-C_{\rm min}\) (Fig. 23 (c)). When the flow rate \(q_{\rm on}\) increases, the flow-rate range \(q_{\rm on,max}-q_{\rm on,min}\), within which free flow is metastable with respect to the F\(\rightarrow\)S transition at the bottleneck, increases (Fig. 23 (d)).
At any value of \(q_{\rm in}\) at which the F\(\rightarrow\)S transition can occur, the physics of the F\(\rightarrow\)S transition is qualitatively the same as that disclosed in Secs. II and III. In particular, the nature of the F\(\rightarrow\)S transition is caused by the discontinuity of the over-acceleration rate (Figs. 24 (a, b)) as well as its competition with speed adaptation. Features of synchronized flow occurring due to the F\(\rightarrow\)S transition (Sec. II.5) also remain the same when \(q_{\rm in}\) changes. The speeds in synchronized flow in the right and left lanes at the bottleneck are decreasing functions of the on-ramp inflow rate (Fig. 24 (c)).
### Lane-asymmetric nucleation of F\(\rightarrow\)S transition
Because the nucleation nature of the F\(\rightarrow\)S transition in automated-driving traffic at the bottleneck is determined by the existence of the discontinuity in the R\(\rightarrow\)L lane-changing rate (Sec. II), a question can arise: does the nucleation nature of the F\(\rightarrow\)S transition remain if the lane-changing rules are changed qualitatively? Indeed, as is known, cooperative driving in automated-driving traffic could permit the realization of different lane-changing rules that enable a distribution of on-ramp inflow between road lanes in free flow, as done through lane-changing rules (4)-(6). In (4)-(6), no speed limitation for lane-changing has been assumed at a large speed difference between lanes. When a vehicle moving at a slow speed \(v(t)\) changes from the right lane to the left lane, the vehicle can force the following vehicle moving
Figure 21: Continuation of Fig. 20: Location-functions of speed for some vehicles whose numbers are the same as that in Fig. 20, respectively; vehicles 1 and 2 are, respectively, the same as that in Figs. 18 (a, b).
Figure 22: Speed in space and time in the right lane (left column) and in left lane (right column) at the same flow rate \(q_{\rm in}=2571\) (vehicles/h)/lane as that in Fig. 3. (a, b) Induced F\(\rightarrow\)S transition simulated as in Fig. 4. (a) Symmetric lane-changing parameters \(\delta_{1}=\delta_{2}=1\) m/s in (4), (5), \(q_{\rm on}=720\) vehicles/h, \(\Delta q_{\rm on}=180\) vehicles/h. (b) Symmetric safety parameters \(\tau_{1}=\tau_{2}=0.4\) s in (6), \(q_{\rm on}=700\) vehicles/h, \(\Delta q_{\rm on}=200\) vehicles/h. \(T_{\rm ind}=30\) min, \(\Delta t=2\) min. Other model parameters are the same as those in Fig. 3.
at a larger speed \(v^{-}(t)\) to decelerate strongly. This can considerably reduce driving comfort and, in some cases, traffic safety.
Cooperative driving can solve this problem through some safety condition
\[g^{-}(t)+(v(t)-v^{-}(t))T_{\rm p}>g_{\rm p} \tag{12}\]
used in addition to (6). Safety condition (12), in which \(T_{\rm p}\) and \(g_{\rm p}\) are constant parameters, limits R\(\rightarrow\)L lane-changing when the following vehicle in the left lane moves considerably faster than the lane-changing vehicle (i.e., the speed difference \(v(t)-v^{-}(t)\) is strongly negative), while the space gap between these vehicles is not large enough for comfortable driving.
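A direct transcription of condition (12) is sketched below. Here \(g^{-}(t)\) is read as the space gap to the following vehicle in the left lane and \(v^{-}(t)\) as that vehicle's speed, consistent with the deceleration scenario described above; the values of \(T_{\rm p}\) and \(g_{\rm p}\) are those used later in Fig. 25, and condition (6), which must also hold, is not reproduced here.

```python
# Sketch: the additional cooperative-driving safety check of condition (12).
# g_minus is the space gap to the following vehicle in the target (left) lane
# and v_minus its speed; T_p = 3.3 s and g_p = 2 m are the values used in Fig. 25.
# Condition (6), which must also hold for lane-changing, is not reproduced here.

T_P = 3.3   # s
G_P = 2.0   # m

def condition_12_satisfied(g_minus, v, v_minus, t_p=T_P, g_p=G_P):
    return g_minus + (v - v_minus) * t_p > g_p

# A slow vehicle (v = 15 m/s) with a fast follower (v_minus = 30 m/s) in the
# left lane and a 40 m gap: 40 + (15 - 30)*3.3 = -9.5 m, so (12) forbids the change.
print(condition_12_satisfied(g_minus=40.0, v=15.0, v_minus=30.0))   # False
```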
#### iv.3.1 Characteristics of lane-asymmetric nucleation of F\(\rightarrow\)S transition
Condition (12) does not affect R\(\rightarrow\)L lane-changing in _free_ flow (Fig. 25 (a)): the same lane-changing rate is realized and the same local speed decrease appears at the bottleneck as in Fig. 3. There is free flow metastability with respect to the F\(\rightarrow\)S transition at the bottleneck, as found in Secs. II and III; condition (8) is also valid. Moreover, the values \(q_{\rm on,max}\) and, respectively, \(C_{\rm max}=2q_{\rm in}+q_{\rm on,max}\), which separate metastable free flow from unstable free flow with respect to the F\(\rightarrow\)S transition at the bottleneck, remain almost the same (Figs. 25-27).
Figure 23: Simulations of the nucleation nature of the F\(\rightarrow\)S transition at the bottleneck for different values \(q_{\rm in}\): (a, b) Speed in space and time in the right lane (left column) and in left lane (right column) for spontaneous F\(\rightarrow\)S transition: (a) \(q_{\rm in}=2449\) (vehicles/h)/lane, \(q_{\rm on}=980\) vehicles/h, \(T^{\rm(B)}=26\) min. (b) \(q_{\rm in}=2769\) (vehicles/h)/lane, \(q_{\rm on}=340\) vehicles/h, \(T^{\rm(B)}=24\) min. (c) Dependencies of minimum highway capacity \(C_{\rm min}\) and maximum highway capacity \(C_{\rm max}\) on \(q_{\rm on}\). (d) Dependencies \(q_{\rm in}(q_{\rm on})\) related to \(C_{\rm min}(q_{\rm on})\) (curve denoted by \(q_{\rm on,min}\)) and \(C_{\rm max}(q_{\rm on})\) (curve denoted by \(q_{\rm on,max}\)), respectively. Other model parameters are the same as those in Fig. 3.
Figure 24: Characteristics of spontaneous F\(\rightarrow\)S transition at different flow rates \(q_{\rm in}\) in free flow upstream of bottleneck. (a, b) The discontinuity of the over-acceleration rate: On-ramp inflow-rate dependence of the over-acceleration rate \(R_{\rm OA}\) (a) and the mean time delay in over-acceleration \(T_{\rm OA}\) (b) in initial free flow (curves “free flow”) and in synchronized flow (curves “synchronized flow”) that has occurred due to F\(\rightarrow\)S transition. (c) Synchronized flow speeds occurring at the bottleneck after the F\(\rightarrow\)S transition in the right and left lanes. In (a, b), the on-ramp inflow rates \(q_{\rm on}\) on the x-axis slightly exceed the corresponding values \(q_{\rm on,max}\) (we have used \(q_{\rm on}=q_{\rm on,max}+\delta q\), where parameter \(\delta q=14\) vehicles/h) calculated for different values of \(q_{\rm in}\); for explanations of (a, b), a dashed vertical line related to \(q_{\rm on}=740\) vehicles/h has been drawn to show the same values of \(R_{\rm OA}\) and \(T_{\rm OA}\) on the curves \(R_{\rm OA}(q_{\rm on})\) and \(T_{\rm OA}(q_{\rm on})\) as those in Figs. 18 (c, d) for \(q_{\rm in}=2571\) (vehicles/h)/lane. Other model parameters are the same as those in Fig. 3.
However, the use of condition (12) fundamentally changes the result of the F\(\rightarrow\)S transition at the bottleneck: in Sec. II, after the F\(\rightarrow\)S transition has occurred, synchronized flow emerges in both the right and the left lanes (Figs. 4 and 15). Contrarily, under condition (12) the F\(\rightarrow\)S transition causes the emergence of synchronized flow in the right lane _only_ (Figs. 25 (b-e)). For this reason, we call this F\(\rightarrow\)S transition an _asymmetric_ F\(\rightarrow\)S transition at the bottleneck.
Moreover, after the asymmetric F\(\rightarrow\)S transition has occurred, _no_ local speed decrease remains in free flow in the left lane at the bottleneck (right column in Figs. 25 (b-e)). The disappearance of the local speed decrease in free flow in the left lane at the bottleneck is explained by the drop in the R\(\rightarrow\)L lane-changing rate to zero during the asymmetric F\(\rightarrow\)S transition (Fig. 26): no R\(\rightarrow\)L lane-changing is realized at \(t>T^{\rm(B)}\), i.e., after the asymmetric F\(\rightarrow\)S transition has occurred at \(t=T^{\rm(B)}\) (Figs. 26 (a, b)). Correspondingly, there is a drop in the over-acceleration rate \(R_{\rm OA}\) from the rate \(R_{\rm OA}\) in free flow to \(R_{\rm OA}=0\) in synchronized flow (Fig. 26 (c)); one of the R\(\rightarrow\)L lane-changing events at \(t<T^{\rm(B)}\) is marked by the dashed vertical line R\(\rightarrow\)L in Figs. 26 (a, b) [140]. The physics of this effect is as follows. When synchronized flow begins to emerge in the right lane, the magnitude of the speed difference \(v(t)-v^{-}(t)\) in (12) becomes large. This prevents R\(\rightarrow\)L lane-changing.
We have found that, as in Fig. 16, the time delay \(T^{\rm(B)}\) of the spontaneous asymmetric F\(\rightarrow\)S transition that occurs at \(q_{\rm on}>q_{\rm on,max}\) is also a strongly decreasing function of the on-ramp
Figure 26: Continuation of Fig. 25 (e): Features of asymmetric spontaneous traffic breakdown: (a, b) Vehicle trajectories in the right lane (a) and left lane (b). (c) Time-dependencies of the averaged over-acceleration rate \(R_{\rm OA}\).
Figure 25: Simulations of asymmetric F\(\rightarrow\)S transition at bottleneck that occurs in model of Sec. II.1, when, in addition to safety conditions (6), condition (12) is used. Speed in space and time in the right lane (left column) and in left lane (right column) at different \(q_{\rm on}\) at the same flow rate \(q_{\rm in}=2571\) (vehicles/h)/lane as that in Fig. 3: (a) Local speed decrease at bottleneck in free flow, \(q_{\rm on}=720\) vehicles/h. (b) Induced F\(\rightarrow\)S transition in free flow of (a); parameters of on-ramp inflow impulse: \(T_{\rm ind}=10\) min, \(\Delta q_{\rm on}=180\) vehicles/h, \(\Delta t=1\) min. (c) Induced F\(\rightarrow\)S transition at \(q_{\rm on}=q_{\rm on,min}=360\) vehicles/h; \(T_{\rm ind}=10\) min, \(\Delta q_{\rm on}=540\) vehicles/h, \(\Delta t=2\) min. (d) Dissolving synchronized flow at \(q_{\rm on}=350\) vehicles/h that is less than \(q_{\rm on,min}\); \(T_{\rm ind}=10\) min, \(\Delta q_{\rm on}=550\) vehicles/h, \(\Delta t=2\) min. (e) Spontaneous F\(\rightarrow\)S transition at \(q_{\rm on}=727\) vehicles/h that is larger than \(q_{\rm on,max}=724\) vehicles/h; \(T^{\rm(B)}=11.5\) min. In (12), \(T_{\rm p}=3.3\) s, \(g_{\rm p}=2\) m. Other model parameters are the same as those in Fig. 3.
inflow rate (Fig. 27 (a)). However, under the asymmetric F\(\rightarrow\)S transition there is a considerable reduction in the values \(q_{\text{on,min}}\) and, respectively, \(C_{\text{min}}=2q_{\text{in}}+q_{\text{on,min}}\) (Figs. 27 (b, c)) in comparison with the values found in Sec. III (Fig. 14). Other peculiarities of the asymmetric F\(\rightarrow\)S transition have been found when \(q_{\text{on}}\) decreases: (i) the value \(R_{\text{OA}}\) decreases strongly (Fig. 27 (b)); (ii) the discontinuity in the over-acceleration rate \(R_{\text{OA}}\) remains down to some inflow rate, denoted by \(q_{\text{on,max}}^{\text{(1-lane)}}\), that slightly exceeds \(q_{\text{on,min}}\) (Fig. 27 (d)); (iii) although within the range \(q_{\text{on,min}}\leq q_{\text{on}}<q_{\text{on,max}}^{\text{(1-lane)}}\) free flow is still metastable with respect to the asymmetric F\(\rightarrow\)S transition, the discontinuity in the over-acceleration rate \(R_{\text{OA}}\) no longer exists: there is no lane-changing at all within the inflow-rate range \(q_{\text{on,min}}\leq q_{\text{on}}<q_{\text{on,max}}^{\text{(1-lane)}}\). To understand this result, we consider in Sec. IV.3.2 automated-driving traffic on a single-lane road with the same bottleneck.
#### iv.3.2 Over-acceleration in automated-driving traffic on single-lane road
After the asymmetric F\(\rightarrow\)S transition has occurred, _no_ effect of the bottleneck on the vehicle motion in the left lane is realized any more (right column in Figs. 25 (b-e)); therefore, the two road lanes can be considered as two different (and not connected) single-lane roads.
We have found that although no R\(\rightarrow\)L lane-changing is possible on the single-lane road, within the range \(q_{\text{on,min}}\leq q_{\text{on}}<q_{\text{on,max}}^{\text{(1-lane)}}\) free flow is indeed in a metastable state with respect to the F\(\rightarrow\)S transition at the bottleneck (Fig. 28 (a-c)). The maximum on-ramp inflow-rate \(q_{\text{on,max}}^{\text{(1-lane)}}\) determines the maximum capacity
Figure 28: Simulations of F\(\rightarrow\)S transition on single-lane road with bottleneck model of Sec. II.1. Speed in space and time at different \(q_{\text{on}}\) at the same value \(q_{\text{in}}=2571\) vehicles/h as that in Fig. 3: (a) Induced traffic breakdown in metastable free flow at \(q_{\text{on}}=365\) vehicles/h, \(T_{\text{ind}}=60\) min, \(\Delta q_{\text{on}}=135\) vehicles/h, \(\Delta t=1\) min; (b) Induced traffic breakdown in metastable free flow at \(q_{\text{on}}=360\) vehicles/h, \(T_{\text{ind}}=60\) min, \(\Delta q_{\text{on}}=540\) vehicles/h, \(\Delta t=2\) min; (c) Dissolving synchronized flow at \(q_{\text{on}}=350\) vehicles/h, \(T_{\text{ind}}=60\) min, \(\Delta q_{\text{on}}=550\) vehicles/h, \(\Delta t=2\) min; (d) Time-delayed spontaneous traffic breakdown at \(q_{\text{on}}=366\) vehicles/h, \(T^{\text{(B)}}=30\) min. \(q_{\text{on,min}}=360\) vehicles/h, \(q_{\text{on,max}}^{\text{(1-lane)}}=365.2\) vehicles/h. Other model parameters are the same as those in Fig. 3.
Figure 27: Simulated characteristics of asymmetric F\(\rightarrow\)S transition on two-lane road with bottleneck at the same value \(q_{\text{in}}=2571\) (vehicles/h)/lane as that in Figs. 3–10. (a) Dependence of time delay \(T^{\text{(B)}}\) of spontaneous traffic breakdown on \(q_{\text{on}}\); \(q_{\text{on,max}}=724\) vehicles/h, \(C_{\text{max}}=2q_{\text{in}}+q_{\text{on,max}}=5868\) vehicles/h. (b, c) Simulated \(Z\)-characteristics of the asymmetric F\(\rightarrow\)S transition: The discontinuity in over-acceleration rate (b) and speed (c) as functions of \(q_{\text{on}}\). (d) A small part of (b) in a large scale in vicinity of \(q_{\text{on}}=q_{\text{on,min}}\). Other model parameters are the same as those in Fig. 25.
of automated-driving traffic on the single-lane road with the bottleneck: \(C_{\max}=q_{\rm in}+q_{\rm on,max}^{(1-\rm lane)}\). At a given \(q_{\rm in}\), when \(q_{\rm on}>q_{\rm on,max}^{(1-\rm lane)}\), the F\(\rightarrow\)S transition occurs spontaneously at the bottleneck after a time delay \(T^{(\rm B)}\), which is a decreasing function of the on-ramp inflow rate (Fig. 28 (d)).
Thus, the minimum on-ramp inflow-rate \(q_{\rm on,min}\) of free flow metastability with respect to the asymmetric F\(\rightarrow\)S transition at the bottleneck on two-lane road is determined by the minimum on-ramp inflow-rate \(q_{\rm on,min}\) of free flow metastability on single-lane road with the same bottleneck. To explain this result, we should recall that in three-phase traffic theory [93; 94; 95], the term _over-acceleration_ determines driver acceleration behaviors associated with a time delay in acceleration that causes free flow metastability with respect to an F\(\rightarrow\)S transition at a bottleneck. In Helly's model (1), (2) there is a time delay in acceleration. For this reason, it is not surprising that Helly's model (1), (2) shows over-acceleration on the single-lane road. However, the effect of this over-acceleration is practically insignificant: the range of the free flow metastability on single-lane road is only \(q_{\rm on,max}^{(1-\rm lane)}-q_{\rm on,min}=5\) vehicles/h (Fig. 27 (d)) [141].
### Effect of desired time headway of automated vehicles
The basic result about the metastability of free flow with respect to the F\(\rightarrow\)S transition at the bottleneck remains under a wide range of the desired time headway \(\tau_{\rm d}\) of automated vehicles. However, as shown in Fig. 29, the increase in \(\tau_{\rm d}\) to 1.5 s leads to a considerable decrease in the flow rate \(q_{\rm in}\) at which the metastability of free flow is realized.
## V Transitions between the three phases in automated-driving vehicular traffic
Wide moving jams can emerge in synchronized flow. We have found that features of the jams are qualitatively almost the same as those well known for human-driving traffic. Thus, we present a simplified analysis of wide moving jams for the model of Sec. IV.3, in which, due to the use of condition (12), synchronized flow and wide moving jams can emerge in the right road lane only.
For a study of very low speed states in automated-driving vehicular traffic, we should note
Figure 30: Simulations of spontaneous S\(\rightarrow\)J transition in model of Sec. IV.3 with the use of (13). Speed in space and time in the right lane (left column) and in left lane (right column) at different \(q_{\rm on}\) at the same flow rate \(q_{\rm in}=2571\) (vehicles/h)/lane as that in Fig. 3: (a) \(q_{\rm on}=900\) vehicles/h; (b) \(q_{\rm on}=940\) vehicles/h. (c) Vehicle trajectories in the right lane for a part of (b). (d) Time-functions of speeds of vehicles 1, 2, and 3 shown in (c). In (13), \(v_{\rm min}=36\) km/h, \(g_{\rm min}=3\) m. Other model parameters are the same as those in Fig. 25.
Figure 29: Simulations of the effect of the increase in desired time headway \(\tau_{\rm d}\) of automated vehicles on nucleation features of F\(\rightarrow\)S transition with the model of Sec. II.1 on two-lane road with bottleneck: Speed in space and time in the right lane (left) and in left lane (right). Induced traffic breakdown under condition (8). \(\tau_{\rm d}=1.5\) s with \(K_{1}=0.3\)\(s^{-2}\), \(K_{2}=0.6\)\(s^{-1}\) in (1), (2), \(\tau_{1}=\tau_{2}=0.9\) s in (6), \(q_{\rm in}=1714\) (vehicles/h)/lane, \(q_{\rm on}=710\) vehicles/h, \(T_{\rm in}=30\) min, \(\Delta q_{\rm on}=190\) vehicles/h, \(\Delta t=2\) min. Other model parameters are the same as those in Fig. 3.
that in Eq. (1), when the speed \(v\to 0\), the optimal gap between vehicles \(g_{\rm opt}\) (2) also tends to zero: \(g_{\rm opt}\to 0\). However, even when all vehicles are at a standstill, the space gap between vehicles should be larger than zero. Therefore, when the vehicle speed decreases below some low speed denoted by \(v_{\rm min}\), in formula (2) we should add an additional space gap, denoted by \(g_{\rm min}\), to which the space gap \(g\) between automated vehicles tends when the speed \(v\to 0\); therefore, formula (2) is replaced by the known formula
\[g_{\rm opt}=\left\{\begin{array}{ll}v\tau_{\rm d}&\mbox{at $v\geq v_{\rm min}$},\\ g_{\rm min}+v\left(\tau_{\rm d}-\tau_{\rm min}\right)&\mbox{at $v<v_{\rm min}$},\end{array}\right. \tag{13}\]
where \(\tau_{\rm min}=g_{\rm min}/v_{\rm min}\); \(g_{\rm min}\) and \(v_{\rm min}\) are constants.
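A small sketch of formula (13), combined with the linear car-following acceleration \(a=K_{1}(g-g_{\rm opt})+K_{2}\Delta v\) that appears in the second branch of (14) and is attributed in the text to Helly's model (1), (2), is given below. Parameter values are taken from the captions of Figs. 3 and 30, and the sign convention \(\Delta v=v_{\rm lead}-v\) is an assumption of this sketch.

```python
# Sketch of the optimal-gap formula (13) and the linear car-following
# acceleration a = K1*(g - g_opt) + K2*dv (the form appearing in the second
# branch of (14), attributed in the text to Helly's model (1), (2)).
# Parameter values follow the captions of Figs. 3 and 30; the sign convention
# dv = v_lead - v is an assumption of this sketch.

TAU_D = 1.0          # s, desired time headway (Fig. 3)
K1, K2 = 0.3, 0.9    # s^-2, s^-1 (Fig. 3)
V_MIN = 36 / 3.6     # m/s (36 km/h, Fig. 30)
G_MIN = 3.0          # m (Fig. 30)
TAU_MIN = G_MIN / V_MIN

def g_opt(v):
    """Optimal space gap, Eq. (13)."""
    if v >= V_MIN:
        return v * TAU_D
    return G_MIN + v * (TAU_D - TAU_MIN)

def acceleration(g, v, v_lead):
    return K1 * (g - g_opt(v)) + K2 * (v_lead - v)

print(g_opt(0.0), g_opt(20.0))            # 3.0 m at standstill, 20.0 m at 20 m/s
print(acceleration(g=25.0, v=20.0, v_lead=18.0))
```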
We have found that in automated-driving traffic either a spontaneous S\(\rightarrow\)J transition (Fig. 30) or induced S\(\rightarrow\)J transition (Fig. 31) can be realized. Vehicle
Figure 31: Simulations of coexistence of the three phases F, S, and J with the use of a sequence of induced F\(\rightarrow\)S and S\(\rightarrow\)J transitions in model of Sec. IV.3 with the use of (13) at the same flow rate \(q_{\rm in}=2571\) (vehicles/h)/lane as that in Fig. 3. (a) Speed data in space and time in the right lane presented by regions with variable shades of gray [shades of gray vary from white to black when the speed decreases from 120 km/h (white) to 0 km/h (black)]. \(q_{\rm on}=400\) vehicles/h; for induced F\(\rightarrow\)S transition, \(T_{\rm ind}=3\) min, \(\Delta q_{\rm on}=500\) vehicles/h, \(\Delta t=1\) min; for induced S\(\rightarrow\)J transition, \(T_{\rm ind}=120\) min, \(\Delta q_{\rm on}=800\) vehicles/h, \(\Delta t=2\) min. (b) Vehicle trajectories in the right lane for a part of (a). (c) Time-function of speed of vehicle 4 in (b). Wide moving jam (J) is marked by “jam 2”, F – free flow, S – synchronized flow. Other model parameters are the same as those in Fig. 30.
Figure 32: Simulations of induced F\(\rightarrow\)J transition in model of Sec. IV.3 with condition (13). Speed in space and time in the right lane (left column) and in left lane (right column) at \(q_{\rm in}=2880\) (vehicles/h)/lane and \(q_{\rm on}=0\); for induced F\(\rightarrow\)J transition, \(T_{\rm ind}=3\) min, \(\Delta q_{\rm on}=1200\) vehicles/h, \(\Delta t=2\) min. Other model parameters are the same as those in Fig. 30.
trajectories 1, 2, and 3 in Figs. 30 (c, d) show a typical example of the time development of an emergent wide moving jam (marked by "jam 1") during the spontaneous S\(\rightarrow\)J transition. The dynamics of the induced S\(\rightarrow\)J transition (Figs. 31 (a, b)) as well as the time dependence of the speed of vehicle 4 propagating through the induced wide moving jam (marked by "jam 2") show a possible coexistence of all three phases F, S, and J in automated-driving traffic (Fig. 31 (c)) that is qualitatively very similar to that known for human-driving traffic. In addition to S\(\rightarrow\)J transitions, a wide moving jam can be induced in free flow (induced F\(\rightarrow\)J transition) (Fig. 32).
As in human-driving traffic, there are characteristic parameters of the downstream front propagation of a wide moving jam in automated-driving traffic that do not depend on initial conditions. The characteristic jam parameters presented by the line J in Fig. 33 are: (i) the velocity of the upstream propagation of the downstream jam front \(v_{\rm g}\), (ii) the flow rate \(q_{\rm out}\) and (iii) the density \(\rho_{\rm min}\) in the jam outflow (when free flow is formed in this jam outflow), as well as (iv) the density within the jam \(\rho_{\rm max}\).
States of free flow, synchronized flow, and wide moving jams together build a double-Z (2Z)-characteristic for phase transitions in automated-driving traffic (Fig. 34). At a given \(q_{\rm in}\), there is some maximum on-ramp inflow rate \(q_{\rm on}\), denoted by \(q_{\rm on,max}^{\rm(J)}\) (Fig. 34). The condition \(q_{\rm on}=q_{\rm on,max}^{\rm(J)}\) separates metastable synchronized flow at \(q_{\rm on}\leq q_{\rm on,max}^{\rm(J)}\) from unstable synchronized flow at \(q_{\rm on}>q_{\rm on,max}^{\rm(J)}\), in which, after a time delay \(T_{\rm J}\), a spontaneous S\(\rightarrow\)J transition is realized (Fig. 30). The larger the difference \(q_{\rm on}-q_{\rm on,max}^{\rm(J)}\), the shorter the time delay \(T_{\rm J}\) of the S\(\rightarrow\)J transition (Figs. 30 (a, b)).
The 2Z-characteristic shows (Fig. 34) that any phase transitions between the three phases F, S, and J are possible in a broad range of the flow rate in automated-driving vehicular traffic on two-lane road at the bottleneck.
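In the same spirit as the earlier hysteresis sketch, the 2Z-characteristic can be caricatured by tracking the phase while \(q_{\rm on}\) changes. The thresholds below are the values quoted for the model of Sec. IV.3 at \(q_{\rm in}=2571\) (vehicles/h)/lane; the time delays \(T^{\rm(B)}\) and \(T_{\rm J}\) and induced transitions are ignored for simplicity, and no jam-dissolution rule is included because the excerpt does not give one.

```python
# Sketch of the 2Z-characteristic (Fig. 34) as a simple phase tracker.
# Thresholds: q_on,max = 724 and q_on,min = 360 vehicles/h for the asymmetric
# F->S transition (Figs. 25 and 27) and q_on,max^(J) = 880 vehicles/h for the
# spontaneous S->J transition (Fig. 34).  Time delays T^(B), T_J, induced
# transitions, and jam dissolution are deliberately left out.

Q_ON_MAX_FS = 724.0   # spontaneous F->S above this value
Q_ON_MIN_FS = 360.0   # return S->F below this value
Q_ON_MAX_SJ = 880.0   # spontaneous S->J above this value

def next_phase(phase, q_on):
    if phase == "F" and q_on > Q_ON_MAX_FS:
        return "S"
    if phase == "S":
        if q_on > Q_ON_MAX_SJ:
            return "J"
        if q_on < Q_ON_MIN_FS:
            return "F"
    return phase

phase = "F"
for q_on in (700, 730, 800, 900, 500, 300):
    phase = next_phase(phase, q_on)
    print(q_on, phase)
```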
## VI Discussion
We have shown that traffic on a two-lane road with a bottleneck that consists of 100% string-stable automated vehicles moving in a road lane in accordance with the classical Helly's model [126] is described in the framework of the three-phase traffic theory in which traffic breakdown is an F\(\rightarrow\)S transition that exhibits the nucleation nature. Does this basic paper result remain when vehicle platoons are string-unstable (Sec. VI.1) or when a qualitatively different model for automated-driving vehicles is used (Sec. VI.2)?
### F\(\rightarrow\)S transition at bottleneck in automated-driving vehicular traffic under string-unstable conditions
The basic result of the paper about the nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) of the three-phase traffic theory is valid for both string-stable and string-unstable automated-driving vehicular traffic (Fig. 35). In free flow, when the speed is equal to \(v_{\rm free}\), the mean time headway between vehicles is longer than the desired value \(\tau_{\rm d}\) in (1), (2). Therefore, no sufficiently long vehicle platoons in which automated vehicles move at the time headway \(\tau_{\rm d}\) can be built in free flow at the bottleneck: no string instability occurs in free flow. This explains why the basic features of the free flow metastability with respect to the F\(\rightarrow\)S transition at the bottleneck remain qualitatively the same as those found in Secs. II-V for string-stable automated vehicles.
In contrast to free flow, in synchronized flow resulting from the F\(\rightarrow\)S transition at the bottleneck, very long vehicle platoons in which automated vehicles move at the time headway \(\tau_{\rm d}\) can be built. For this reason, the string instability is realized in synchronized flow (Fig. 36). However, a detailed study of the development of the string instability in synchronized flow, which could be an interesting subject of scientific investigation, is beyond the scope of this paper.
### Automated-driving traffic based on three-phase adaptive cruise control (TPACC)
The basic result of the paper about the nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) of the three-phase traffic theory remains when a qualitatively different model for automated-driving vehicles is used. In Fig. 37, automated-driving traffic based on three-phase adaptive cruise control (TPACC) is simulated. The
Figure 34: Double Z (2Z)-characteristic for transitions between the three phases F, S, and J: On-ramp inflow-rate function of average speed within the phases F, S, and J. Model of Sec. IV.3 with the use of (13) at \(q_{\rm in}\) = 2571 (vehicles/h)/lane of Fig. 3. \(q_{\rm on,max}^{\rm(J)}\) = 880 vehicles/h. Other model parameters are the same as those in Figs. 27 and 30.
TPACC-model reads as follows [125; 129]:
\[a^{(\rm TPACC)}=\left\{\begin{array}{ll}K_{\Delta\rm v}\Delta v&\mbox{at $g_{\rm safe}\leq g\leq G$},\\ K_{1}(g-g_{\rm opt})+K_{2}\Delta v&\mbox{at $g<g_{\rm safe}$ or $g>G$},\end{array}\right. \tag{14}\]
where \(K_{\Delta\rm v}\) is a constant dynamic coefficient (\(K_{\Delta\rm v}>0\)); \(g_{\rm opt}\) is given by (13) in which \(\tau_{\rm d}\) is replaced by a model parameter \(\tau_{\rm p}\) that satisfies the condition \(\tau_{\rm p}<\tau_{\rm G}\), where \(\tau_{\rm G}\) determines the synchronization space gap \(G\); \(g_{\rm safe}=v\tau_{\rm safe}\), where \(\tau_{\rm safe}\) is a safe time headway. In contrast with the model (1), (13), in the TPACC-model (14) there is an indifference zone for car-following when the time headway is between \(\tau_{\rm safe}\) and \(\tau_{\rm G}\), i.e., there is no fixed desired time headway between vehicles in TPACC-vehicle platoons. For this reason, as shown in [125], there is no string instability in TPACC-vehicle platoons.
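A sketch of rule (14) follows. It assumes that the synchronization space gap is \(G=v\tau_{\rm G}\) with \(\tau_{\rm G}=1.4\) s (the 1.4 s value listed in the Fig. 37 caption), uses the low-speed parameters of Fig. 30 in (13), and adopts the sign convention \(\Delta v=v_{\rm lead}-v\); these choices are assumptions of the sketch rather than statements of the paper.

```python
# Sketch of the TPACC acceleration rule (14).  Assumptions of this sketch:
# the synchronization gap is G = v*tau_G, g_opt uses (13) with tau_d replaced
# by tau_p and the low-speed parameters of Fig. 30, and dv = v_lead - v.

TAU_P, TAU_G, TAU_SAFE = 1.3, 1.4, 1.0   # s (Fig. 37)
K1, K2, K_DV = 0.3, 0.6, 0.6             # s^-2, s^-1, s^-1 (Fig. 37)
V_MIN, G_MIN = 10.0, 3.0                 # m/s, m (Fig. 30)

def g_opt_tpacc(v):
    tau_min = G_MIN / V_MIN
    return v * TAU_P if v >= V_MIN else G_MIN + v * (TAU_P - tau_min)

def a_tpacc(g, v, v_lead):
    dv = v_lead - v
    g_safe = v * TAU_SAFE
    G = v * TAU_G
    if g_safe <= g <= G:                 # indifference zone for car-following
        return K_DV * dv
    return K1 * (g - g_opt_tpacc(v)) + K2 * dv

print(a_tpacc(g=27.0, v=20.0, v_lead=19.0))  # inside [g_safe, G] = [20, 28] m
```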
Simulations show that nucleation features of the F\(\rightarrow\)S transition in automated-driving traffic based on the TPACC-model (14) are qualitatively the same as those found in Secs. II-V for string-stable automated vehicular traffic with the use of Helly's model (1), (2). However, there are some qualitative differences in
Figure 37: Nucleation features of F\(\rightarrow\)S transition in automated-driving traffic consisting of 100% TPACC-vehicles (14) under the use of lane-changing and bottleneck models of Sec. II.1. Speed in space and time in the right lane (left column) and in left lane (right column). \(\tau_{\rm p}=1.3\) s, \(\tau_{\rm G}=1.4\) s, \(\tau_{\rm safe}=1\) s, \(K_{1}=0.3\)\(s^{-2}\), \(K_{\Delta\rm v}=K_{2}=0.6\)\(s^{-1}\), \(\tau_{1}=\tau_{2}=0.5\) s, \(q_{\rm in}=2000\) (vehicles/h)/lane, \(q_{\rm on}=700\) vehicles/h, \(T_{\rm ind}=30\) min, \(\Delta q_{\rm on}=200\) vehicles/h, \(\Delta t=2\) min. Other parameters are the same as those in Fig. 3.
Figure 36: Continuation of Fig. 35. String instability of synchronized flow: (a, b) Simulated vehicle trajectories in synchronized flow in the right lane (a) and left lane (b) at time \(t>T_{\rm ind}+\Delta t\). (c) Time-functions of speeds of vehicle 1 in the right lane and vehicle 2 in the left lane marked by the same numbers in (a, b).
Figure 35: Simulations of F\(\rightarrow\)S transition on two-lane road with bottleneck for the model of Sec. II.1 with the use of (13), when condition (3) for string stability is _not_ satisfied: Speed in space and time in the right lane (left) and in left lane (right) at \(q_{\rm in}=2571\) (vehicles/h)/lane of Fig. 3. Induced F\(\rightarrow\)S transition simulated as in Fig. 4. \(\tau_{\rm d}=1\) s, \(K_{1}=0.3\)\(s^{-2}\), \(K_{2}=0.75\)\(s^{-1}\), \(q_{\rm on}=650\) vehicles/h, \(T_{\rm ind}=30\) min, \(\Delta q_{\rm on}=250\) vehicles/h, \(\Delta t=2\) min. Other model parameters are the same as those in Fig. 3.
synchronized flow behavior caused by the indifference zone for car-following in the TPACC-model (14). For example, while the velocity of the upstream synchronized flow front for Helly's model (1), (2) is almost time-independent (Fig. 4), this velocity can depend on time in the TPACC-model (14) (Fig. 37). A more detailed consideration of three-phase traffic theory for automated-driving traffic based on the TPACC-model, which could be an interesting subject of scientific investigation, is beyond the scope of this paper.
### Conclusions
1. The nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) at a highway bottleneck, which is the basic feature of the three-phase traffic theory for human-driving traffic, has been revealed for vehicular traffic consisting of 100% of automated-driving vehicles moving on a two-lane road with an on-ramp bottleneck. As long as lane changing in free flow ensures a distribution of on-ramp inflow between road lanes, this basic result remains in a broad range of model parameters of automated-driving vehicles.
2. We have found that there is a discontinuity in the rate of lane-changing from the right lane (the lane neighboring the on-ramp) to the left lane (passing lane), denoted as R\(\rightarrow\)L lane-changing. In turn, this causes a discontinuity in the over-acceleration rate: the rate of over-acceleration in free flow is larger than it is in synchronized flow.
3. The cause of the nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) in automated-driving vehicular traffic at a bottleneck is the discontinuity in the over-acceleration rate together with the spatiotemporal competition between over-acceleration and speed adaptation. A larger rate of over-acceleration in free flow causes the maintenance of free flow at the bottleneck; conversely, a lower rate of over-acceleration in synchronized flow causes the maintenance of synchronized flow at the bottleneck.
4. Through the spatiotemporal competition between over-acceleration and speed adaptation caused by lane-changing, at any time instant there is a range of highway capacities between some minimum and maximum capacities; within this capacity range, an F\(\rightarrow\)S transition can be induced; however, when the maximum capacity is exceeded, a spontaneous F\(\rightarrow\)S transition occurs at the bottleneck after some time delay. All three phases, free flow (F), synchronized flow (S), and wide moving jam (J), can coexist with each other in automated-driving traffic. A diverse variety of phase transitions, which can occur between the phases F, S, and J, determines the spatiotemporal dynamics of automated-driving vehicular traffic.
5. The discontinuous character of over-acceleration caused by lane-changing is a universal physical feature of vehicular traffic. The three-phase traffic theory is the framework for both human-driving and automated-driving vehicular traffic. Therefore, we can assume that the three-phase traffic theory is also the framework for mixed traffic consisting of a random distribution of human-driving and automated-driving vehicles. A three-phase traffic theory of mixed traffic, which is beyond the scope of this paper, could be a very interesting subject for further traffic studies.
###### Acknowledgements.
I would like to thank Sergey Klenov for help in simulations and useful suggestions. I thank our partners for their support in the project "LUKAS - Lokales Umfeldmodell fur das Kooperative, Automatisierte Fahren in komplexen Verkehrssituationen" funded by the German Federal Ministry for Economic Affairs and Climate Action.
|
2309.02403
|
Substitution-based Semantic Change Detection using Contextual Embeddings
|
Measuring semantic change has thus far remained a task where methods using
contextual embeddings have struggled to improve upon simpler techniques relying
only on static word vectors. Moreover, many of the previously proposed
approaches suffer from downsides related to scalability and ease of
interpretation. We present a simplified approach to measuring semantic change
using contextual embeddings, relying only on the most probable substitutes for
masked terms. Not only is this approach directly interpretable, it is also far
more efficient in terms of storage, achieves superior average performance
across the most frequently cited datasets for this task, and allows for more
nuanced investigation of change than is possible with static word vectors.
|
Dallas Card
|
2023-09-05T17:33:59Z
|
http://arxiv.org/abs/2309.02403v2
|
# Substitution-based Semantic Change Detection using Contextual Embeddings
###### Abstract
Measuring semantic change has thus far remained a task where methods using contextual embeddings have struggled to improve upon simpler techniques relying only on static word vectors. Moreover, many of the previously proposed approaches suffer from downsides related to scalability and ease of interpretation. We present a simplified approach to measuring semantic change using contextual embeddings, relying only on the most probable substitutes for masked terms. Not only is this approach directly interpretable, it is also far more efficient in terms of storage, achieves superior average performance across the most frequently cited datasets for this task, and allows for more nuanced investigation of change than is possible with static word vectors.
## 1 Introduction
Measuring semantic change is one of the few areas of NLP where contextual embeddings have not yet led to a definitive improvement over previous methods. In particular, the commonly used approach of aligning static embeddings trained on different time periods (Hamilton et al., 2016) continues to be a surprisingly hard-to-beat baseline.
Given that contextual embeddings provide a representation for each occurrence of a word in context, they would seem to be ideally suited to a more nuanced investigation of semantic change. Most attempts to leverage them for this purpose, however, produce quantitatively worse results, while being less interpretable and requiring more resources.
Here, we present a simplified and improved approach to scalable, interpretable, semantic change detection using contextual embeddings. Inspired by Eyal et al. (2022), we work only with the most probable replacements for masked words, and measure semantic change in terms of the distributions of replacements in each time period. Not only does this better match human judgements, it is highly space efficient, works seamlessly for out-of-vocabulary words, and helps intuitively characterize meaning change and variation.
## 2 Background
Measuring semantic change involves a set of tasks related to determining if and how a term's meaning has changed over time. Here, we focus on the task of measuring the amount of change that has occurred from one time period to another Gulordava and Baroni (2011); Schlechtweg et al. (2020).1
Footnote 1: For surveys of computational approaches to lexical semantic change detection, see Kutuzov et al. (2018), Tang (2018), and Tahmasebi et al. (2021).
Existing approaches to this task are mostly of two types. The first is associating each term with a single vector per time period and measuring the distance between vectors, of which we take Hamilton et al. (2016) to be representative. As a variation on this, several authors have proposed averaging the output of contextual embedding models to get a single vector per term in each time period, but this has generally not led to an improvement over using static vectors Martinez et al. (2020); Kurvigit et al. (2021); Liu et al. (2021). A related approach is to represent words in terms of their nearest neighbors using static word vectors Hamilton et al. (2016); Gonen et al. (2020), but this does not show a clear improvement over other static embedding methods Montariol et al. (2021).
A second type of approach begins with various methods for word sense induction, then measures change in terms of the relative prevalence of a term's different senses Frermann and Lapata (2016); Hu et al. (2019); Arefyev and Zhikov (2020); Arefyev and Bykov (2021). In some cases, authors simply cluster contextual representations for each term, and measure differences in the distributions of clusters between two time periods, rather than dealing with explicit word senses Giulianelli et al. (2020); Martinc et al. (2020); Montariol et al. (2021).
Despite the additional information provided by contextual embedding models, methods using type embeddings (as opposed to token embeddings) continue to be competitive. For example, on the recent SemEval multilingual semantic change detection task, none of the top four systems used token embeddings (Schlechtweg et al., 2020). Methods using contextual embeddings have done better on some more recent mono-lingual shared tasks (Kutuzov and Pivovarova, 2021; Zamora-Reina et al., 2022), but have not yet been evaluated with a consistent setup across multiple languages.
## 3 Methods
Building on Eyal et al. (2022), we represent each token in the corpus (or a sufficiently large sample of them) by a small set of probable replacement terms from a contextual embedding model. However, whereas Eyal et al. (2022) did this for the purpose of word sense disambiguation, we do so for the purpose of measuring semantic change.
For each sampled occurrence of each term, we mask the term of interest, feed the masked context through a model, and obtain the predicted token probabilities corresponding to the mask token.2 From these, we save only the top-\(k\) most probable words (excluding stopwords and partial word pieces), and discard the rest.
Footnote 2: Words that get tokenized into multiple word pieces are replaced by a single mask token.
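The substitute-extraction step can be sketched in a few lines with the HuggingFace transformers API. The snippet below is only an illustration of the procedure described above, not the exact code used in our experiments: the model name, the stopword list, and the helper name `top_k_substitutes` are placeholder choices.

```python
# Illustrative sketch: obtain the top-k substitutes for one occurrence of a term
# by masking it and reading the masked-LM distribution at the mask position.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "was"}  # illustrative only

def top_k_substitutes(context: str, target: str, k: int = 5) -> list[str]:
    masked = context.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    probs = logits[0, mask_pos].softmax(dim=-1)
    subs = []
    for token_id in probs.argsort(descending=True):
        token = tokenizer.convert_ids_to_tokens(token_id.item())
        # skip special tokens, partial word pieces, and stopwords
        if token in tokenizer.all_special_tokens or token.startswith("##") or token in STOPWORDS:
            continue
        subs.append(token)
        if len(subs) == k:
            break
    return subs

print(top_k_substitutes("He boarded the plane to London.", "plane"))
```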
For a given term in a particular time period, we then count how many times each word in the model vocabulary has appeared as a top-\(k\) replacement for that term, and normalize this by its sum, giving us a distribution over replacements. To obtain a raw score of semantic change between two time periods, we compute the Jensen-Shannon Divergence (JSD) between the two distributions representing the same term in different time periods. However, as we show below, the raw JSD scores are strongly correlated with term frequency. Thus, to obtain a scaled metric, we convert the raw JSD scores into a quantile, comparing the raw score for a term of interest to other terms with similar frequency.
Compared to saving the full output vector per token, this approach only requires a miniscule amount of storage per token, and thus does not require the kind of heuristic dropping of tokens employed by Montariol et al. (2021). In addition, the dominant meanings of a word in each context can be summarized by the terms which occur most frequently among the top-\(k\) replacements. Although such replacements are limited to the terms which exist in the model vocabulary, in practice this is sufficient to represent a nuanced set of meanings, and works even for words which get tokenized into multiple word pieces, as we show below.
More formally, given two corpora C1 and C2, let the count of token \(v\) as a top-\(k\) replacement for term \(t\) in corpus \(c\) be:
\[\text{count}(v,t,c)=\sum_{i=1}^{N_{c}(t)}\mathbb{I}[v\in R(t,i,k)], \tag{1}\]
where \(R(t,i,k)\) is the set of top-\(k\) most probable replacements for occurrence \(i\) of term \(t\) (excluding stopwords and partial word pieces in the model vocabulary), and \(N_{c}(t)\) is the number of sampled occurrences of term \(t\) in corpus \(c\).3
Footnote 3: Unlike Eyal et al. (2022), we do not combine probabilities for different forms of the same lemmas in the model vocabulary. In addition, we do not exclude the target term from the top-\(k\) replacements, except implicitly for terms which get split into multiple word pieces.
Let \(\Delta_{t}^{c}\) be the distribution of top-\(k\) replacement counts for term \(t\) in corpus \(c\), obtained by dividing the corresponding vector of counts (i.e., [count\((\cdot,t,c)\)]) by its sum over the model vocabulary. The raw change score for term \(t\) is given by the JSD between the two distributions:
\[\text{raw}(t)=\text{JSD}\left(\Delta_{t}^{C1},\Delta_{t}^{C2}\right). \tag{2}\]
Finally, we correct for frequency effects by rescaling the raw JSD scores against the scores for terms with similar frequency as the target term, giving us a quantile scaled in [0, 1]:
\[\text{scaled}(t)=\sum_{s\in T(t)}\mathbb{I}[\text{raw}(t)\geq\text{raw}(s)]/|T(t)|, \tag{3}\]
where \(T(t)\) is the set of terms with similar frequency to term \(t\) (excluding term \(t\) itself). More specifically, we compare against all terms within a fixed factor of the target frequency:
\[T(t)=\{s:\text{fr}(t)/F\leq\text{fr}(s)\leq\text{fr}(t)\times F,s\neq t\}, \tag{4}\]
where \(\text{fr}(t)\) is the frequency of term \(t\) in the corpus, with window factor \(F\).
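Given the stored substitute counts, the scoring defined in Equations 1-4 reduces to a short computation. The sketch below assumes `counts[c][t]` maps each substitute to its count for term `t` in corpus `c`, and `freq[t]` gives the corpus frequency of term `t`; these container and function names are illustrative rather than part of any released code.

```python
import numpy as np

def distribution(count_dict: dict, vocab: list) -> np.ndarray:
    # Normalized vector of substitute counts over the model vocabulary (Eq. 1)
    vec = np.array([count_dict.get(w, 0) for w in vocab], dtype=float)
    return vec / vec.sum()

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    # Jensen-Shannon divergence between two distributions (Eq. 2)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log2(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def raw_score(term: str, counts: dict, vocab: list) -> float:
    return jsd(distribution(counts["C1"][term], vocab),
               distribution(counts["C2"][term], vocab))

def scaled_score(term: str, raw: dict, freq: dict, F: float = 2.0) -> float:
    # Quantile of raw(term) among background terms of similar frequency (Eqs. 3-4);
    # assumes at least one background term falls inside the frequency window.
    peers = [s for s in raw if s != term and freq[term] / F <= freq[s] <= freq[term] * F]
    return sum(raw[term] >= raw[s] for s in peers) / len(peers)
```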
## 4 Experiments
To evaluate our method we make use of datasets for which there have been prior evaluations of methods across multiple languages, following standards established by past work for the sake of a head-to-head comparison.4
Footnote 4: Code to replicate these experiments is available at [https://github.com/dallascard/SBSCD](https://github.com/dallascard/SBSCD)
### Data
We use five datasets with words labeled in terms of semantic change between two time periods. Four of these are from SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection (SE; Schlechtweg et al., 2020). These datasets contain 31 to 48 terms from four languages, graded in terms of change by human raters, along with accompanying corpora to be used in estimating the amount of change. The fifth dataset (GEMS) comes from Gulordava and Baroni (2011), and contains 100 words labeled in terms of semantic change from the 1960s to 1990s. As with most recent papers which use this dataset, we use the Corpus of Historical American English (COHA; Davies, 2010) for measuring change in the GEMS words.
### Experimental Details
For each dataset, we fine-tune an appropriate BERT model on the union of the two associated unlabeled corpora using continued masked language model training with the HuggingFace transformers package. We then index the corpora to find all occurrences of each word. For all target words, along with a random set of 10,000 background terms, we randomly sample up to 4,000 occurrences of each from the associated corpora. We process all sampled tokens as described above to obtain and store the top-\(k\) replacements for each, with \(k=5\). Using the replacements obtained from the model, we compute raw JSD scores for each term. Finally, we convert these to scaled scores by comparing to the background terms that have frequency within a factor of two of the target term (i.e., \(F=2\)).
Following past work, we evaluate using Spearman correlation with human ratings, comparing against the best results from recent papers. In particular, we include two results based on slight variations on Hamilton et al. (2016), one of which was the best-performing method in the SemEval competition (Pomsl and Lyapin, 2020), as well as methods using contextual embeddings (Martinc et al., 2020; Montariol et al., 2021). For full experimental details, please refer to Appendix A.
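As a small illustration of the evaluation protocol (not the exact evaluation script), the correlation with human judgements can be computed as follows, assuming `gold` and `scores` are dictionaries keyed by target word.

```python
from scipy.stats import spearmanr

def evaluate(gold: dict, scores: dict) -> float:
    # Spearman correlation between human change ratings and scaled JSD scores
    words = sorted(gold)
    rho, _ = spearmanr([gold[w] for w in words], [scores[w] for w in words])
    return rho
```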
### Results
Full results are given in Table 1. Although our method is not uniformly better than all previous methods on all datasets, it does produce the best result on average, as well as improvements on GEMS, SE English, and SE Latin.
As an example to better understand these results, the raw JSD scores from our method are shown in Figure 1 (top) for the SE English data, with select terms labeled. As can be seen, there is a strong relationship between term frequency and raw JSD, hence the need to rescale the raw scores relative to terms with similar frequency. After rescaling, we see a strong correlation between our final semantic change scores and the human ratings, as shown in Figure 1 (bottom) for the SE English data.
As with the approach of Hamilton et al. (2016), our method supports direct interpretation of semantic change. To understand the change in a word's typical usage, we can look at the overall most common replacements from each time period. Table 2 shows the scores and rankings of several selected terms from SE English, along with the most common substitutes from each time period.
Looking at the results, we can see, for example, strong agreement with human annotators on a dramatic change in the meaning of _plane_ (comparing 1810-1860 vs. 1960-2010), from the geometric concept to the flying machine. On the other hand, our results suggest that human raters may have slightly underestimated the amount of change in
Figure 1: Top: Raw JSD scores for both target and randomly chosen background terms in the SE English dataset, plotted against term counts. Bottom: Human ratings for SE English, plotted against scaled JSD scores, along with a fitted regression line (solid) and the 1:1 diagonal (dotted). Select terms in Table 2 are labeled.
the meaning of _graft_, which was previously used mostly in reference to vegetation, but now most commonly refers to corruption.5
Footnote 5: Note that because _graft_ is not a term in the BERT vocabulary, the term itself does not appear as a potential substitute, but the results remain interpretable nonetheless.
By contrast, _ounce_ may be a case where our method has underestimated the change that has taken place. Older usages seem to map more generically to a wider range of quantities (hence the appearance among the early substitutes of _hour_, _acre_, and _dollars_), whereas modern usage seems more restricted. Indeed, we do find some difference in the distribution of substitutes between the two time periods, but less of a difference than is typical for words with similar frequency, hence the low final score from our method (see Figure 1).
Although we do not emphasize it in this paper, our method can easily be combined with the approach of Eyal et al. (2022) to further investigate meaning changes, by inferring senses from the term replacements and looking at how their usage varies by time period. In particular, for each target term, we can construct a graph from the set of term substitutes (as nodes), where edge weights represent the number of top-\(k\) substitute sets in which two substitutes co-occur. Following Eyal et al. (2022), we experiment with Louvain community detection to identify sense clusters from these graphs for each term of interest, and use Jaccard similarity to associate each mention with a sense cluster, based on substitute overlap (see Appendix A for details).
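A sketch of this sense-induction step is shown below. It assumes each sampled mention is represented by its set of top-\(k\) substitutes, and it uses the Louvain implementation bundled with recent versions of networkx as a stand-in for the implementation of Eyal et al. (2022); the function names are illustrative.

```python
from itertools import combinations
import networkx as nx

def induce_senses(mention_substitutes: list[set]) -> list[set]:
    # Nodes are substitutes; edge weights count how many mentions contain both
    # substitutes among their top-k replacements.
    graph = nx.Graph()
    for subs in mention_substitutes:
        for a, b in combinations(sorted(subs), 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)
    # Louvain community detection; each community is treated as one sense cluster.
    return list(nx.community.louvain_communities(graph, weight="weight", seed=0))

def assign_sense(subs: set, senses: list) -> int:
    # Assign a mention to the sense cluster with the highest Jaccard overlap.
    jaccard = lambda a, b: len(a & b) / len(a | b) if a | b else 0.0
    return max(range(len(senses)), key=lambda i: jaccard(subs, senses[i]))
```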
Inspecting the distribution of these senses over time helps to distinguish the gradual adoption of existing senses from the creation of new ones. For example, the most common sense of _plane_ is captured by the sense cluster {_aircraft_, _jet_, _airplane_, _car_}, and as expected, this sense is not found in the 1810-1860 English data, except for two instances which appear to be errors in the inferred sense. By contrast, the second most common sense--{_planes_, _line_, _point_, _surface_}--appears in both time periods, but is much more common in the earlier time.
This approach also provides more insight into how the meaning of _graft_ has changed. The most common sense cluster is the horticultural meaning {_tree_, _plant_, _stock_, _vine_}, and this meaning occurs in both time periods, but is much more common in the earlier one. A second cluster, corresponding to illicit activity--{_corruption_, _violence_, _bribery_, _fraud_}--occurs only in the later time period. This clustering method also surfaces a third sense with
\begin{table}
\begin{tabular}{l c c c c c c c c} Word & SE & SE & Scaled & Scaled & Corpus A substitutes (1810–1860) & Corpus B substitutes (1960–2010) \\ & rating & rank & JSD & JSD rank & & & & \\ \hline \hline plane & 0.88 & 1 & 0.97 & 1 & plane line planes point surface lines & plane aircraft planes jet airplane car \\ graft & 0.55 & 4 & 0.97 & 2 & tree plant stock vine fruit wood & corruption bribery fraud crime violence \\ tip & 0.68 & 2 & 0.85 & 7 & tipped tip covered end filled tips give & tip tips end tipped edge point top ends \\ gas & 0.16 & 23 & 0.72 & 14 & gas gases vapor air fire water & gas gasoline oil gases fuel water air & head face hands hand body hands eyes \\ head & 0.30 & 10 & 0.68 & 16 & heat face hand heads hands eyes & bit little lot touch tad piece bits pieces \\ bit & 0.31 & 9 & 0.51 & 23 & fit piece sort little pieces bits kind & fiction fact fantasy story stories novels \\ fiction & 0.02 & 35 & 0.41 & 27 & trees tree plants branches plant wood & trees tree plants woods branches bushes \\ tree & 0.07 & 33 & 0.22 & 33 & & ouce inch pounds hour care dollars & ouce pounds inch inches cups pieces \\ \end{tabular}
\end{table}
Table 2: Example terms from the SE English dataset, showing the most common substitutes from our approach.
\begin{table}
\begin{tabular}{l c c c c c c c c} & GEMS & SE Eng & SE Ger & SE Lat & SE Swe & Average & Average (weighted) \\ \hline Number of words & 96\({}^{*}\) & 37 & 40 & 48 & 31 & & \\ \hline \hline _Static Embedding Methods_ & & & & & & & \\ \hline Pónsl and Lyapin (2020) & - & 0.422 & **0.725** & 0.412 & 0.547 & - & - \\ Montariol et al. (2021) [static] & 0.347 & 0.321 & 0.712 & 0.372 & **0.631** & 0.477 & 0.452 \\ \hline \hline _Contextual Embedding Methods_ & & & & & & & \\ \hline \hline _Martinc et al._ (2020b) & 0.510 & 0.313 & 0.436 & 0.467 & -0.026 & 0.340 & 0.394 \\ Montariol et al. (2021) [contextual] & 0.352 & 0.437 & 0.561 & 0.488 & 0.321 & 0.432 & 0.422 \\ Scaled JSD & **0.535** & **0.547** & 0.563 & **0.533** & 0.310 & **0.498** & **0.514** \\ \end{tabular}
\end{table}
Table 1: Spearman correlation results on five datasets, including both an unweighted average and an average weighted by number of words. Pomsl and Lyapin (2020) was the best submission from SemEval 2020 Task 1, but did not evaluate on GEMS. Montariol et al. (2021) included results using static vectors, as well as several variations on their own method using contextual embeddings, of which we take the one with the highest average performance. Martinc et al. (2020b) only evaluated on GEMS, so we report the replication results from Montariol et al. (2021). \({}^{*}\)We exclude four terms from GEMS to match past work; for full results on GEMS, please refer to Appendix D.
a medical meaning--{_transplant_, _surgery_, _disease_, _drug_}--which is not revealed by the top few overall most common replacements given in Table 2.
## 5 Discussion and Related Work
As noted by others, new and larger datasets for rigorously evaluating semantic change are badly needed Tahmasebi et al. (2021). Existing datasets are relatively small, and are mostly based on inspecting a limited number of examples per term. Unfortunately, determining ground truth for semantic change is challenging, and producing such resources is costly. Ideally, future datasets for evaluation should be larger, both to allow for more robust evaluation, and to have sufficient targets for both hyperparameter tuning and evaluation.
In addition to the dataset we have used in this paper, two others are available from shared tasks on Spanish and Russian, respectively Kutuzov and Pivovarova (2021); Zamora-Reina et al. (2022). Both of these are comparable in size to the GEMS dataset used here. Unfortunately, they are less useful for evaluation because most submissions to these shared tasks only evaluated on the task data, and not on other datasets. As shown by the replication of Martinc et al. (2020) in Montariol et al. (2021), a method can sometimes perform well on one language but fail to generalize to others. As such, we have based our evaluation on datasets for which there has been a consistent evaluation of methods across multiple languages. As future work, a careful replication study of all methods from each competition on all available datasets, including an assessment of sensitivity to hyperparameters, would be highly informative.
Besides Eyal et al. (2022), the closest prior work to ours is Kudisov and Arefyev (2022), who use dynamic patterns to generate many variations on example usages sampled from the given corpora. These variations are then used to generate hundreds of replacement terms from a masked language model with associated probabilities. These probabilities are averaged (heuristically combining replacements with differing numbers of word pieces) to obtain a mean vector for each sampled instance. Finally, semantic change is computed as the average cosine distance between all pairs of vectors across corpora. This method was evaluated as part of the LSCDiscovery shared task on Spanish (Zamora-Reina et al., 2022). Preliminary work on this method was described in Arefyev and Bykov (2021), where a slightly different version of it was evaluated on the RuShiftEval shared task on Russian (Kutuzov and Pivovarova, 2021).
Compared to Kudisov and Arefyev (2022), our approach is considerably simpler, and better suited to storing representations of a complete corpus for subsequent analysis and exploration. In particular, we only consider a small number of substitutes for each example (storing only the top-\(k\) most probable terms, without the associated probabilities). We do not use dynamic patterns, and only consider terms in the model vocabulary as potential substitutes. We also associate each term with a single distribution over the model vocabulary per time period (not per mention), and use Jensen-Shannon divergence to more naturally measure the distance between distributions. Importantly, we also correct for frequency effects, as described above.
Although our approach avoids the onerous storage requirements of methods which save full contextual vectors, it still requires considerable processing time to obtain the top-\(k\) replacements for all tokens. Future work could explore smaller or more efficient models for this purpose.6
Footnote 6: See Appendix B for results using various model sizes.
Finally, despite its simplicity, measuring the cosine distance between aligned static vectors remains a strong and efficient baseline Hamilton et al. (2016). More work is needed to determine where contextual embeddings can offer sufficient advantage in measuring semantic change to justify their greater computational cost.
Compared to static embeddings, our approach is weakest on the German and Swedish datasets, which could relate to the quality of the pretrained models that are available for those languages, the data used for pretraining, or perhaps issues that arise in tokenization of the reference corpora. For a tentative exploration of some possible factors, please refer to Appendix C.
## 6 Conclusion
We have presented a simplified and improved approach to measuring semantic change using contextual embeddings, based on the Jensen-Shannon Divergence between the distributions of the most probable replacements for masked tokens in different time periods, corrected for frequency effects. This approach achieves superior performance on average, while remaining directly interpretable, with vastly reduced storage requirements.
### Limitations
There are several limitations to this work which should be kept in mind. First and foremost, the datasets for evaluating the measurement of semantic change are relatively small, meaning that any estimates of correlation with human judgements will be relatively high variance. In addition, although the SemEval data includes text from four languages, there is no guarantee that these methods will work as well as they do on other languages or other time periods. Moreover, our approach depends on the use of pretrained language models, and the quality (or existence) of these and other relevant resources will vary by language.
In addition, like all methods, our approach involves numerous small choices, such as the number of background terms to sample, the number of samples taken, and the value of \(k\) in choosing top substitutes. We have kept our choices for these consistent across all five datasets, and these values have not been tuned. As such, different choices could result in better or worse correlation with human judgements. It is also worth noting that the human judgements collected by the creators of these datasets may involve errors or noise. It is possible that a different sample of data, or having different people evaluate the same data, would produce different judgements.
For exploring the variation in word meanings, we have used the approach of Eyal et al. (2022) directly, with the only differences being that we mask terms of interest (allowing us to work with terms that do not exist in the model vocabulary), and do not combine multiple forms of lemmas when getting the top-\(k\) terms. We adopt this approach because it is especially easy to combine with our own work, but different methods for word sense induction might lead to different conclusions about the different meanings of a term that existed in any particular time period. In addition, any conclusions drawn are necessarily limited to the corpora that are used, most of which will be a highly biased sample of all text that was produced by all people for any given period of time.
## Ethical Considerations
This work only uses well-established datasets for the purposes for which they were designed (studying changes in languages and evaluating the measurement of semantic change), and thus poses few ethical concerns that did not already exist for these data. Nevertheless, it is worth emphasizing that all of the methods discussed in this paper only return, at best, a noisy estimate of semantic change. Words are used differently by different people, and attempts to measure changes in language inevitably simplify the diversity of uses into a single number, which discards a great deal of nuance. As such, any work applying these methods to measure semantic change should be aware of their limitations and proceed carefully.
## Acknowledgements
Many thanks to Kaitlyn Zhou and anonymous reviewers for helpful comments and suggestions.
|
2310.18538
|
Evaluating Cross-Domain Text-to-SQL Models and Benchmarks
|
Text-to-SQL benchmarks play a crucial role in evaluating the progress made in
the field and the ranking of different models. However, accurately matching a
model-generated SQL query to a reference SQL query in a benchmark fails for
various reasons, such as underspecified natural language queries, inherent
assumptions in both model-generated and reference queries, and the
non-deterministic nature of SQL output under certain conditions. In this paper,
we conduct an extensive study of several prominent cross-domain text-to-SQL
benchmarks and re-evaluate some of the top-performing models within these
benchmarks, by both manually evaluating the SQL queries and rewriting them in
equivalent expressions. Our evaluation reveals that attaining a perfect
performance on these benchmarks is unfeasible due to the multiple
interpretations that can be derived from the provided samples. Furthermore, we
find that the true performance of the models is underestimated and their
relative performance changes after a re-evaluation. Most notably, our
evaluation reveals a surprising discovery: a recent GPT4-based model surpasses
the gold standard reference queries in the Spider benchmark in our human
evaluation. This finding highlights the importance of interpreting benchmark
evaluations cautiously, while also acknowledging the critical role of
additional independent evaluations in driving advancements in the field.
|
Mohammadreza Pourreza, Davood Rafiei
|
2023-10-27T23:36:14Z
|
http://arxiv.org/abs/2310.18538v1
|
# Evaluating Cross-Domain Text-to-SQL Models and Benchmarks
###### Abstract
Text-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and re-evaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field.
## 1 Introduction
Significant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test set of Spider (Yu et al., 2018), a large-scale cross-domain text-to-SQL benchmark, has improved from 53.5 in May 2020 (Zhong et al., 2020) to 85.3 in March 2023 (Pourreza and Rafiei, 2023). The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has improved from 65.6 (Wang et al., 2019) to 74.0 (Li et al., 2023). Measuring such progress hinges on reliable benchmarks and evaluation metrics.
Two standard metrics for evaluating the performance in this domain have been _exact set match accuracy_ and _execution accuracy_. The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query (SS 4).
Consider the example in Figure 1, which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column Full-Name, which gives the full name of a maker (e.g., "Ford Motor Company"), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., "Ford"). The model-generated query
Figure 1: An example question with two correct SQL queries, each corresponding to a different interpretation. There is an ambiguity in schema mapping, with two different database columns describing the name.
fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct.
As the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model (Scholak et al., 2021) and another using a large language model (Pourreza and Rafiei, 2023), failed. We found out that 25% of the queries generated by one model and 87% of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found 33% of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.
We further re-evaluated two well-known benchmarks, Spider (Yu et al., 2018) and Spider-DK (Gan et al., 2021), and a newly released benchmark, BIRD (Li et al., 2023), and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that 18% of the queries in the train sets and 20%-23% of the queries in the dev sets of these benchmarks are subject to ties in the dataset and which one of the tied rows are returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.
Our objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark's inherent problems, given that final performance is gauged using the problematic test sets.
## 2 Related Work
Limited research has been dedicated to assessing the reliability and effectiveness of Text-to-SQL benchmarks. The authors of SQL-PaLM (Sun et al., 2023) note in their qualitative analysis of their model that some queries, labelled as incorrect by execution accuracy, were considered correct by human annotators. Similarly, Lei et al. (2020) conduct an analysis highlighting the discrepancy between automatic evaluations and human annotations. They emphasize that certain queries produced by the models were labeled as incorrect SQL queries but human annotators labelled them as correct queries. Generally, a query that is equivalent (but not identical) to ground truth may be mistakenly classified as incorrect by automated evaluation metrics. Another study by Zhong et al. (2022) identifies limitations within the Spider benchmark, such as issues with ties and certain syntactic problems. Their analysis is primarily focused on a subset of Spider, without quantifying the extent or impact of these limitations or conducting an assessment of other benchmarks.
## 3 Text-to-SQL Benchmarks
Benchmarks have played a crucial role in advancing the field and providing a platform for evaluation. WikiSQL (Zhong et al., 2017) consists of over 24,000 tables from Wikipedia with SQL queries generated based on some predefined rules and templates. The queries in this dataset are considered easy since they are all single-table queries. Spider, introduced by Yu et al. (2018), consists of 200 database schemas of which 160 schemas are published as train and dev sets and 40 schemas are kept hidden for testing. The queries are written on those schemas by Computer Science students
without using templates. This is considered a challenging dataset. Some other benchmarks are developed based on Spider, including Spider-Syn Gan et al. (2021), which replaces schema-related words with synonyms and eliminates explicit mentions between NLQ and schema, and Spider-DK Gan et al. (2021), which introduces rarely observed domain knowledge into the Spider development set. Other benchmarks include FIBEN Sen et al. (2020), created for the financial domain and BIRD Li et al. (2023), which comprises 12,751 queries over 95 databases spanning 37 professional domains.
Our study in this paper focuses on cross-domain large-scale benchmark Spider, its variants Spider-DK and Spider-SYN, and a more recent cross-domain large-scale benchmark BIRD. The selection of these benchmarks stems from their resemblance to real-world datasets, which is a crucial factor in conducting comprehensive research and analysis. One notable advantage of these benchmarks is the availability of a large training set, which plays a pivotal role in training and fine-tuning large-scale models. The inclusion of a substantial amount of training data enables the development of more robust and powerful models that can better handle the complexities and nuances present in real-world databases.
## 4 Evaluation Metrics
The performance evaluation of text-to-SQL systems involves comparing them to a reference system, typically a gold standard set of known correct SQL queries. Generating a reference can be challenging due to multiple interpretations of natural language questions, while SQL queries are based on logic and tend to cover only one interpretation. Even if an interpretation is fixed, detecting if a model-generated query is equivalent to a reference query is challenging, due to the halting problem which is undecidable Davis (2004). Nonetheless, to assess progress, proxy measures of performance have been developed in the literature. As two such metrics, we review exact set match accuracy and execution accuracy in this paper.
Under _exact set match accuracy_, SQL queries are evaluated by matching the query clauses and components independently, such as the _select_, _where_, _having_, _group by_, and _order by_ clauses. The matching is based on comparing columns and predicates, disregarding the ordering of columns and predicates. An exact matching of literals can be challenging since predicates such as nationality="Canada" and nationality="Canadian" will not match. However, accurately generating those literals without accessing database content may not be possible. Under _exact set matching without values_, which is used in Spider Yu et al. (2018), a matching of literals is not required.
Two equivalent SQL queries can have different expressions and may not match under an exact set match. An alternative metric that can reduce the number of false negatives is the _execution accuracy_. Under execution accuracy, the equivalence between a model-generated query and a reference query is established if they both produce the same results on all possible database instances (Yu et al., 2018). While testing all instances is impractical, running queries on a subset of instances can help identify candidates that are not equivalent to the reference query. Although execution accuracy can detect queries that are equivalent but not identical, it may mistakenly identify queries as equivalent if they produce the same result on tested instances. Therefore, an effective execution-based evaluation requires finding instances that cover various edge cases and can detect queries that are not equivalent to the reference. Test suite accuracy (Zhong et al., 2020), which is simply referred to as execution accuracy in the Spider benchmark and in our work, aims to minimize false positives by evaluating queries on a carefully selected collection of database instances, known as a test suite. Nevertheless, an execution-based accuracy cannot capture all correct SQL queries, highlighting the limitations and the continued importance of human evaluation for reliable assessment.
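For concreteness, the execution-based comparison on a single database instance amounts to the check sketched below; the actual Spider test-suite evaluation runs the same comparison over many purpose-built instances and handles ordering and value normalization more carefully. The function name and the use of SQLite here are illustrative assumptions.

```python
import sqlite3

def same_execution_result(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    # Compare result multisets of two queries on one database instance.
    # A single instance can refute equivalence but never prove it.
    with sqlite3.connect(db_path) as conn:
        pred_rows = conn.execute(predicted_sql).fetchall()
        gold_rows = conn.execute(gold_sql).fetchall()
    return sorted(map(repr, pred_rows)) == sorted(map(repr, gold_rows))
```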
## 5 Execution Accuracy Failures
A model-generated query can be correct but still fail the execution accuracy. We classify these failures into three categories: (1) failures due to ties in output, (2) ambiguity in schema matching, (3) wrong assumptions made about database content.
### Failures Due to Ties in Output
SQL queries can lead to ties and a subset of the tied rows may be returned. The selection of tied rows can vary between queries and this can affect the execution accuracy. We identify a few sources for such ties, as discussed next, and study their impact on benchmark evaluations in Section 6. Table 1
provides a detailed breakdown of the number of queries that can potentially yield tied rows in both train and development set of Spider, Spider-DK, and BIRD benchmarks.
#### 5.1.1 Top with Ties
Sometimes the query asks for top rows that satisfy some conditions (e.g., the student with the highest GPA, or the youngest student). When there is a tie for the top position, and the query in natural language is not specific on how the ties should be handled, the corresponding SQL query may return all ties or only one. This becomes a problem in evaluation if a model-generated query and the reference query treat the ties differently. Figure 2 provides a concrete example from the Spider dataset, illustrating this issue, where the reference SQL query in the benchmark fails to account for ties and returns only one of them using the LIMIT keyword.
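The issue is easy to reproduce on toy data (the table and values below are hypothetical, not taken from the benchmark): the LIMIT-based query silently drops one of the two tied rows, while the nested-max formulation returns both.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO student VALUES (?, ?)",
                 [("Alice", 22), ("Bob", 25), ("Carol", 25)])

# Reference-style query: returns an arbitrary one of the two oldest students.
limit_query = "SELECT name FROM student ORDER BY age DESC LIMIT 1"
# Tie-aware query: returns all students tied for the maximum age.
max_query = "SELECT name FROM student WHERE age = (SELECT MAX(age) FROM student)"

print(conn.execute(limit_query).fetchall())  # e.g. [('Bob',)]
print(conn.execute(max_query).fetchall())    # [('Bob',), ('Carol',)]
```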
#### 5.1.2 Limit N
The problems associated with using the _LIMIT n_ clause in SQL queries are not limited to the top position, as discussed above. The use of this clause is problematic for evaluation in general. Firstly, without an explicit ordering, the result of a SQL query is expected to be a set. Two equivalent (but not identical) queries can return the same set of results, each listed in different orders, but selecting _the first n_ rows from one ordering will not necessarily match the same selection from a different ordering. Secondly, with query results sorted, there can be a tie on row \(n\) with multiple rows having the same values. The ordering among tied rows can vary between two queries, and so can the first \(n\) rows that are returned. All benchmarks studied in this paper (Spider, Spider-DK, Spider-SYN, BIRD) use the limit keyword and suffer from the aforementioned problems associated with ties.
#### 5.1.3 Group by
Many text-to-SQL benchmarks encounter a different type of issue associated with ties, particularly arising due to incorrect usage of non-aggregated columns in both the SELECT clause and the GROUP BY clause. Within the benchmarks, these ties manifest in two scenarios: 1) a column appears in the SELECT clause without being inside an aggregation function and without being included in the GROUP BY clause; 2) the SELECT clause contains a mix of aggregated and non-aggregated columns without utilizing a GROUP BY clause. In both cases, multiple records can be associated with the same grouping column or aggregation value, whereas each group can only return one record. Some database systems including Oracle and DB2 prevent these cases by treating them as syntax errors. However, other database systems such as SQLite and MySQL take a more lazy approach (sometimes for efficiency reasons) and allow these cases to happen. Many text-to-SQL benchmarks follow SQLite syntax and suffer from this issue. The affected queries in our benchmarks were identified after migrating from SQLite to PostgreSQL, as detailed in Section 6.4, and checking for queries that failed during PostgreSQL execution. Figure 3, illustrates one example of such a problem from the Spider dataset.
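SQLite's permissive behavior can likewise be demonstrated on hypothetical toy data: the non-aggregated column in the SELECT clause is filled with a value taken from an arbitrary row of each group, so the result depends on physical row order rather than on the query semantics.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countrylanguage (country TEXT, language TEXT)")
conn.executemany("INSERT INTO countrylanguage VALUES (?, ?)",
                 [("Aruba", "Dutch"), ("Aruba", "Papiamento"), ("Aruba", "English")])

# Accepted by SQLite but rejected by PostgreSQL: `language` is neither aggregated
# nor listed in GROUP BY, so SQLite returns one arbitrary language per country.
ambiguous = "SELECT country, language, COUNT(*) FROM countrylanguage GROUP BY country"
print(conn.execute(ambiguous).fetchall())
```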
#### 5.1.4 Order by
Another subtle ambiguity with tied values arises in queries where the SELECT clause incorporates the "distinct" keyword, paired with an ORDER BY clause referencing a column absent in the SELECT clause. Consider the following example query from the Spider train set: SELECT DISTINCT district_name FROM district ORDER BY city_area DESC. The ordering of the output, as well as the result of a comparison with a reference query, becomes uncertain if a single 'district_name' value maps to multiple 'city_area' values. Similar to GROUP BY, the affected queries in the benchmarks were identified through a SQLite to PostgreSQL migration (§ 6.4).
Figure 2: An example question that can have two correct SQL queries, each corresponding to a different interpretation. The SQL query on the left returns all tied values, while the SQL query on the right returns only one of the tied values.
### Ambiguity in Schema Matching
Schema matching refers to the task of establishing the correspondence between a natural language question and the tables, columns, and cell values in the database (Cao et al., 2021; Pourreza and Rafiei, 2023; Wang et al., 2019; Li et al., 2023b). Ambiguities arise when there are multiple columns in the database that can represent the same semantic meaning, and the information needs of a query may be satisfied using any of those columns. As a result, there exist multiple SQL queries that can produce the correct answer, yet most benchmarks only provide one query among the many possible correct answers. Figure 1 illustrates an example question that can be satisfied by two different SQL queries, both of which are valid responses to the question at hand.
### Wrong Assumptions on DB Content
Lastly, one type of limitation in text-to-SQL benchmarks stems from incorrect assumptions regarding cell values. It is common to make assumptions about database content and constraints when writing SQL queries, but those assumptions may not be supported by the database schema or content. This issue arises when the database content is created under assumptions that do not align with those in queries, leading to potential failures in the evaluation process. Text-to-SQL models often lack access to full database content due to limitations such as the context window problem and the inability to pass all cell values to the models for reasons such as privacy and cost. These models typically rely on the provided database schema and a selected sample of database rows to represent potential values (Pourreza and Rafiei, 2023; Liu et al., 2023; Rajkumar et al., 2022; Sun et al., 2023; Li et al., 2023a; Lin et al., 2020). Consequently, the assumptions made by these models may not align with the actual ground truth, resulting in SQL queries that are correct under the assumption made but do not match the reference query in the benchmark.
One observed case is when certain conditions (e.g., PetType='dog') are omitted from SQL queries due to the erroneous assumption that the condition holds for all rows in the database. Figure 4 exemplifies this issue using an example from the Spider dataset, where both queries yield the same answer on a specific database instance. However, changing the database values could result in failure, especially when evaluating performance using test-suite accuracy, which involves querying different database instances. Another case observed in the benchmarks is when the ground truth SQL queries assume a specific column has unique values, but in reality, that column does not possess that unique constraint. Figure 5 depicts an example of this
\begin{table}
\begin{tabular}{c c c c c c}
**Benchmark** & **LIMIT 1** & **LIMIT N** & **GROUP BY** & **ORDER BY** & **Total** \\ \hline \hline & & \multicolumn{3}{c}{**Dev set**} & \\
**BIRD** & 255(16\%) & 42(2\%) & 20(1\%) & 4(0.2\%) & 321(20.86\%) \\
**Spider** & 171(16\%) & 10(0.9\%) & 51(4.5\%) & 2(0.2\%) & 234(22.63\%) \\
**Spider-DK** & 94(17\%) & 2(0.3\%) & 30(4.5\%) & 2(0.3\%) & 128(23.85\%) \\ \multicolumn{5}{c}{**Train set**} & \\
**BIRD** & 1558(16\%) & 211 (2\%) & 23 (0.2\%) & 4(0.04\%) & 1792 (18.22\%) \\
**Spider** & 989(14\%) & 106(1\%) & 254(3\%) & 10(0.1\%) & 1359(18.1\%) \\ \end{tabular}
\end{table}
Table 1: The number of SQL queries having a specific type of limitation together with the percentage on both development set and train set. The Spider-DK dataset does not have any training set.
Figure 3: An example question that can have two correct SQL queries, each corresponding to a different interpretation. The SQL query on the left returns all languages of each country, each pair of country and language in a separate row, whereas the SQL query on the right returns one of tied values for the column LANGUAGE.
problem from the Spider dataset.
## 6 Experiments
To understand the extent to which the aforementioned problems affect the benchmarks, our evaluation, and the ranking of the models, we conducted three types of evaluations on three benchmarks: Spider, Spider-DK, and BIRD. Our findings here apply to the Spider-SYN dataset as well, which employs the same SQL queries as the Spider dataset. For this reason, we did not conduct a separate analysis of that benchmark.
### Evaluation Through Query Rewriting
In this experiment, our focus is on ties and how a tie breaking strategy affects the benchmarks and our evaluation. This is done through query rewriting. Automating query rewriting faces inherent challenges, particularly when dealing with failures stemming from schema ambiguity, erroneous assumptions about the database content, and the ambiguity of natural language utterances. These challenges arise because there is no specific structure to address the failures systematically. Successful query rewriting in these cases necessitates a deeper understanding of table and column semantics to identify ambiguities and erroneous assumptions. In cases of ambiguity, human expertise is essential to disambiguate the context, as these situations often lack clear guidelines. Detecting erroneous assumptions often involves introducing new data to the database and meticulously reviewing and correcting failed queries on a case-by-case basis. Therefore, our efforts have been channeled towards rewriting queries concerning tied values, which adhere to a specific syntax structure, and the problems associated with the ambiguity in schema matching and wrong assumptions on database content are studied in the next section.
Many benchmark queries use "LIMIT 1" to find top rows that satisfy some conditions. If there are ties on top, one arbitrary row among ties is returned. An alternative is to return all ties. We rewrote all queries that used "LIMIT 1" to return all ties. This was done by introducing min() and max() aggregation functions within nested queries to accurately identify extreme values. An example of such rewriting is shown in Figure 2. Breaking ties for queries that used "LIMIT n" for \(n>1\) was not straightforward, and those queries were left unchanged.
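For the simplest and most frequent pattern, the rewrite can be expressed as a small string transformation. The sketch below is a simplification of the manual, case-by-case rewriting described above: the regular expression only covers single-table queries of the form `SELECT ... FROM table ORDER BY col ASC|DESC LIMIT 1`, and the function name is illustrative.

```python
import re

PATTERN = re.compile(
    r"SELECT\s+(?P<cols>.+?)\s+FROM\s+(?P<table>\w+)\s+"
    r"ORDER\s+BY\s+(?P<col>\w+)\s+(?P<dir>ASC|DESC)\s+LIMIT\s+1",
    re.IGNORECASE,
)

def rewrite_limit1(sql: str) -> str:
    # Rewrite 'ORDER BY col LIMIT 1' into a nested MIN()/MAX() query that
    # returns all tied rows instead of an arbitrary one.
    m = PATTERN.fullmatch(sql.strip().rstrip(";"))
    if m is None:
        return sql  # pattern not covered; leave the query unchanged
    agg = "MAX" if m["dir"].upper() == "DESC" else "MIN"
    return (f"SELECT {m['cols']} FROM {m['table']} "
            f"WHERE {m['col']} = (SELECT {agg}({m['col']}) FROM {m['table']})")

print(rewrite_limit1(
    "SELECT name, capacity FROM stadium ORDER BY average DESC LIMIT 1"))
```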
For resolving ties introduced by an incorrect usage of GROUP BY in benchmark queries, we included all non-aggregated columns from the SELECT clause in the GROUP BY clause. For example, if the SELECT clauses included id and name, but the GROUP BY clause only included name, we added id to the GROUP BY clause. This change will not affect queries where there is a one-to-one mapping between id and name, but it will resolve the ambiguity when such mapping does not hold.
With these two changes, 16% to 20% of the reference queries in our benchmarks were affected. Under a perfect evaluation scheme, the accuracy should not be affected by these changes, which simply
Figure 4: An example of a question and SQL pair with a wrong assumption on the cell values. The SQL query on the left does not make the same assumption.
Figure 5: An example of a question and SQL pair with a uniqueness assumption on the “name” column, which is not supported by the schema. The SQL query on the left does not make the same assumption.
resolve the uncertainty. Table 2 displays both the execution accuracy and the exact set match accuracy for the reference queries from the BIRD, Spider, and Spider-DK benchmarks after our modifications. It's important to highlight that the performance metrics provided in this table encompass the entire development set of these benchmarks, combining both modified and unaltered queries. For clarity, in the Spider dataset, out of 1034 queries, 206 were modified. The performance assessment took into account a mixed set of predicted queries: 206 that were adjusted and 828 that remained as originally presented. This culminated in an execution accuracy of 92.3 percent.
It can be noted that the execution accuracy is not as adversely affected as the exact set match accuracy. We hypothesize that this could be attributed to the absence of ties in the test data used for these benchmarks. One piece of evidence for this is the following two queries, (Q1) SELECT name, capacity FROM stadium WHERE average = (SELECT max(average) FROM stadium), and (Q2) SELECT name, capacity FROM stadium ORDER BY average DESC LIMIT 1, which are labelled as a correct match by the test scripts of Spider.
### Human Evaluation
To gain a deeper understanding of the limitations within the benchmarks, we conducted an experiment focused on the widely-used text-to-SQL benchmark, the Spider dataset. Specifically, we evaluated two top-performing methods from the Spider leaderboard: DIN-SQL (Pourreza and Rafiei, 2023) and T5-large + PICARD (Scholak et al., 2021). This experiment involved running these methods on the development set of Spider, which comprised 1034 question-query pairs. From the results obtained, we extracted the questions for which both methods failed to produce a correct answer, based on the execution accuracy, resulting in 102 pairs. We then presented these questions, along with the SQL queries generated by the methods as well as the ground truth SQL queries (treating them the same as model-generated queries), to two annotators 1 for labelling. The annotators had access to the database schemas and were tasked with identifying the queries they deemed correct for each question, without knowing which model generated which query or if the query was from the ground truth queries. Annotators could also create databases and validate queries, ensuring a thorough evaluation.
Footnote 1: The human annotators are the authors of this paper.
Following our initial labelling process, we wanted to minimize the potential impact of human errors in our evaluation. For this, we identified queries with inconsistent labels among the annotators and presented them to the annotators. Each annotator was asked to provide an explanation for their assigned labels. In the final stage of evaluation, each annotator was presented with the inconsistent queries and the explanations provided by the other annotator. They were then asked if they would revise their labels based on this additional information. The results of this experiment are presented in Table 3. This table presents the outcome of human evaluation on a sample of 102 queries for which both the DIN-SQL and T5+PICARD methods were deemed incorrect in terms of execution accuracy. SQL experts conducted this evaluation, with 81.6% of these queries judged as correct for DIN-SQL, and only 25.5% for T5+PICARD. Notably, among the reference queries, only 67.3% were deemed correct. Even after the second round of annotation, a few queries (more specifically, four question-query pairs) still exhibit inconsistent labeling by the annotators. The main challenge with these particular pairs is the inherent ambiguity in the questions or the subjectivity of interpretations, which leads to a lack of a definitive answer. Figure 6 demonstrates one example of such a question with two possible SQL queries as answers.
An intriguing observation emerged from this experiment: the DIN-SQL method, powered by GPT-4, produced the highest number of correct answers, surpassing even the ground truth SQL queries. This finding sheds light on the limitations of the current benchmarks and raises doubts about the reliability of current leaderboards and performance metrics.
### Error Analysis of Human Evaluation
We performed an error analysis of the SQL queries that were labelled as incorrect in our human evaluation to better understand the error types and causes and to provide insights into areas for improving the ground truth SQL queries. Additionally, we compared the errors in ground truth queries with those of fine-tuning and prompting approaches. The identified errors, categorized into five groups, are briefly discussed next. The distribution of SQL queries across these groups is depicted in Figure 7.
\begin{table}
\begin{tabular}{c c c c}
**Benchmark** & **Affected Queries** & **Exec Acc** & **Set Match Acc** \\ \hline Spider & 206 (19\%) & 92.3 & 81.6 \\ Spider-DK & 112 (20\%) & 95 & 83.9 \\ BIRD & 252 (16\%) & 96.87 & - \\ \end{tabular}
\end{table}
Table 2: Performance of the revised SQL queries on the development set of the benchmarks.
**Schema.** The primary issue responsible for the majority of errors, affecting both the reference SQL queries and the two methods, is the incorrect usage of schemas, which arises when the SQL query utilizes incorrect tables or columns to answer the given question. These errors indicate ambiguities in the database schema and/or questions, as discussed in Section 5. Notably, the reference set shows the least number of errors, which is closely followed by DIN-SQL.
**Condition.** The second-largest group of errors observed pertains to the usage of incorrect conditions within the SQL queries. Unlike the schema group, where the tables and columns were incorrect, in this group, the correct tables and columns are used, but the conditions in the WHERE clause are erroneous. This error primarily manifested in the queries generated by the T5-PICARD method, but was also present in the reference set. The T5 model's tendency to introduce additional columns or omit necessary conditions could be attributed to its smaller size relative to larger models like GPT-4, limiting its grasp of intricate SQL syntax.
**Nested.** The source of this problem is using a non-unique column for the nested SQL query, as also discussed in Section 5. Figure 5 shows an example of such an error in a SQL query. This error was more common in the SQL queries provided in the reference set as well as those of T5-PICARD.
**Group by.** This category includes queries that incorrectly used GROUP BY, resulting in ambiguity or uncertainty in the result as discussed in Section 5. Notably, the reference set showed the largest number of errors, closely followed by the fine-tuned T5-PICARD. DIN-SQL exhibited the least number of errors.
**Limit.** As highlighted in Section 5, one of the error scenarios involves not properly handling ties when using the LIMIT keyword. The DIN-SQL method demonstrates a lower incidence of this type of error, attributed to its prompting nature. Conversely, T5-PICARD exhibits identical performance to the ground truth in this particular case.
Figure 6: An example of a question with two possible SQL queries as the answers. Both of these SQL queries are correct under different interpretations.
Figure 7: Distribution of SQL queries across error groups for the two models being evaluated and the ground truth. M0 refers to SQL queries in the reference (ground truth) set, M1 refers to the DIN-SQL method, and M2 refers to T5+PICARD.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Method** & **Acc** & **Incon** \\ \hline
**DIN-SQL** [14] & **81.6** & 4 \\
**T5-large + Picard** [1] & 25.5 & 4 \\
**Ground Truth** & 67.3 & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy of the SQL queries generated by two methods and the ground truth SQL queries based on human evaluation. In four cases, the two annotators did not agree on a label even after a second round.
### Standard SQL Validation
We undertook an extensive review of the development set of Spider, BIRD, and Spider-DK benchmarks through the lens of standard SQL validation. The objective was to identify some of the problematic queries discussed in Section 5 and assess the portability of the benchmarks. As part of this analysis, we migrated the databases and queries of these three benchmarks from SQLite to PostgreSQL. Our decision to use PostgreSQL, a widely recognized RDBMS, stemmed from its rigorous adherence to SQL standards. Following the migration, we executed every query from the development set on these PostgreSQL databases, with a keen focus on identifying queries that failed during PostgreSQL execution. Table 4 provides a breakdown of queries by error type across all three benchmarks. Notably, errors such as UndefinedColumn, SyntaxError, and UndefinedFunction emerge due to the different SQL formats supported by SQLite and PostgreSQL. These variances necessitate adjustments to make the queries compatible with PostgreSQL standards. For instance, the Spider dataset frequently showcases errors stemming from PostgreSQL's strict typing conventions. While SQLite allows for comparisons of int with text, PostgreSQL does not. Also, some queries run into problems because of SQLite-exclusive functions, such as strftime and iif, or because PostgreSQL interprets literals in double quotations as column names.
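A minimal sketch of the typing mismatch, using Python's built-in sqlite3 module with a hypothetical table: the comparison of a TEXT column with an integer literal executes under SQLite's type-affinity rules, whereas PostgreSQL rejects the equivalent statement ("operator does not exist: text = integer"), which plausibly contributes to the error counts in Table 4.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singer (name TEXT, birth_year TEXT)")
conn.execute("INSERT INTO singer VALUES ('Joe', '1948')")

# SQLite's type affinity coerces the integer literal to text, so this runs and
# matches the row; PostgreSQL raises an error for the same text-vs-integer comparison.
print(conn.execute("SELECT name FROM singer WHERE birth_year = 1948").fetchall())
```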
The two other types of failures, Group By and Order By, included queries that introduced ambiguities to the benchmarks, as discussed in Section 5. It should be noted that these benchmarks present a range of issues that are not solely confined to syntax. Challenges related to wrong assumptions about database content and ambiguities in schema matching are notably pervasive.
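The Group By ambiguity can likewise be sketched with sqlite3 and a made-up table: SQLite accepts a bare, non-aggregated column next to GROUP BY and silently returns an arbitrary row's value for it, while PostgreSQL rejects such a query outright.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10), ("alice", 99), ("bob", 5)])

# `amount` is neither grouped nor aggregated: SQLite picks an arbitrary row's
# value per group, so the result is not well defined; PostgreSQL refuses to run it.
print(conn.execute(
    "SELECT customer, amount FROM orders GROUP BY customer").fetchall())
```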
## 7 Discussion
Our analysis (Section 6.1) reveals the limitations of major text-to-SQL benchmarks, highlighting the fact that even with a perfect model, achieving perfect accuracy on these benchmarks is not possible. The accuracies presented in Table 2 serve as a loose upper bound for the accuracy achievable by models. It is loose because our rewritings were unable to address cases that required manual intervention to reconstruct a correct query. Thus, the upper bound is expected to be lower considering other issues such as wrong assumptions on the database content and ambiguity in schema matching.
Our human evaluation (Section 6.2) further supports our claim and provides more insight into the limitations within one of the benchmarks studied. The results in Table 3 demonstrate that prompting methods, such as DIN-SQL, are less affected by the inherent limitations of the training set in the benchmarks. However, they are not fully immune because of the few-shot input-output demonstrations that are taken from the train set. On the other hand, fine-tuned approaches, such as T5+PICARD, perfectly mirror the distribution of errors seen in the ground truth queries for types nested, LIMIT, and GROUP BY. The largest number of wrong queries in the schema and condition classes belongs to our fine-tuned model, due to the inability of the model to generate correct SQL queries.
## 8 Conclusions
The reliance on standard text-to-SQL evaluation metrics, namely exact set match accuracy and execution accuracy, has become less reliable as the model performance approaches human-level performance. Our work is the first to systematically study the limitations of these metrics and benchmarks through both human evaluation and query rewriting. Our re-evaluation of well-known benchmarks (Spider, Spider-DK, and BIRD) uncovers common systematic issues that affect the evaluation process and performance estimates, revealing that a significant portion of queries in the train and dev sets are impacted by these issues. Incorporating multiple SQL queries as the ground truth and representing different interpretations of queries offer a promising solution to enhance the evaluation process and achieve a more comprehensive and accurate assessment of Text-to-SQL models.
\begin{table}
\begin{tabular}{c c c c c c}
**Benchmark** & **SyntaxErr** & **UndFunc** & **UndCol** & **Order By** & **Group By** \\ \hline Spider & 4 & 69 & 211 & 2 & 51 \\ Spider-DK & 134 & 62 & 80 & 2 & 30 \\ BIRD & 5 & 103 & 1 & 4 & 20 \\ \end{tabular}
\end{table}
Table 4: Breakdown of SQL errors observed in Spider, BIRD, and Spider-DK, following migration to PostgreSQL.
### Limitations
In this study, our focus was primarily on cross-domain text-to-SQL benchmarks and models. The failure cases identified in this domain are likely to be present in other domain-specific text-to-SQL benchmarks and models as well. It is essential to conduct further analysis to identify specific failure cases within domain-specific benchmarks and models.
Furthermore, it is worth mentioning that our work has a limitation regarding the analysis of failure cases that lack a specific structure and require manual effort for detection. Identifying and addressing such problems necessitates extensive work. The purpose of our study was to highlight these failure cases; a more in-depth analysis of their prevalence can provide a clearer understanding of their impact on the overall performance of text-to-SQL systems.
## Ethics Statement
In this paper, we acknowledge the importance of ethical considerations in conducting and presenting our research. We affirm our commitment to comply with the ACL Ethics Policy and adhere to ethical guidelines and principles throughout the entire research process.
We have taken necessary measures to ensure the privacy, confidentiality, and consent of individuals or entities involved in our data collection, experimentation, and analysis. Any personal or sensitive information used in this study has been appropriately anonymized and safeguarded.
Furthermore, we have made efforts to minimize any potential biases and discrimination in our research design, data selection, and interpretation of results. We have strived for transparency, accuracy, and fairness in reporting our findings, and we have provided appropriate citations and acknowledgments to give credit to the work of others.
By including this ethics statement, we aim to demonstrate our dedication to conducting research with integrity, respecting ethical principles, and contributing to the responsible advancement of knowledge in our field.
|
2303.07520
|
Multi-class Skin Cancer Classification Architecture Based on Deep
Convolutional Neural Network
|
Skin cancer detection is challenging since different types of skin lesions
share high similarities. This paper proposes a computer-based deep learning
approach that will accurately identify different kinds of skin lesions. Deep
learning approaches can detect skin cancer very accurately since the models
learn each pixel of an image. Sometimes humans can get confused by the
similarities of the skin lesions, which we can minimize by involving the
machine. However, not all deep learning approaches can give better predictions.
Some deep learning models have limitations, leading the model to a
false-positive result. We have introduced several deep learning models to
classify skin lesions to distinguish skin cancer from different types of skin
lesions. Before classifying the skin lesions, data preprocessing and data
augmentation methods are used. Finally, a Convolutional Neural Network (CNN)
model and six transfer learning models such as Resnet-50, VGG-16, Densenet,
Mobilenet, Inceptionv3, and Xception are applied to the publically available
benchmark HAM10000 dataset to classify seven classes of skin lesions and to
conduct a comparative analysis. The models will detect skin cancer by
differentiating the cancerous cell from the non-cancerous ones. The models
performance is measured using performance metrics such as precision, recall, f1
score, and accuracy. We receive accuracy of 90, 88, 88, 87, 82, and 77 percent
for inceptionv3, Xception, Densenet, Mobilenet, Resnet, CNN, and VGG16,
respectively. Furthermore, we develop five different stacking models such as
inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception,
Resnet50-Vgg16, and stack-six for classifying the skin lesions and found that
the stacking models perform poorly. We achieve the highest accuracy of 78
percent among all the stacking models.
|
Mst Shapna Akter, Hossain Shahriar, Sweta Sneha, Alfredo Cuzzocrea
|
2023-03-13T23:16:18Z
|
http://arxiv.org/abs/2303.07520v1
|
# Multi-class Skin Cancer Classification Architecture Based on Deep Convolutional Neural Network
###### Abstract
Skin cancer is a deadly disease. Melanoma is a type of skin cancer responsible for the high mortality rate. Early detection of skin cancer can enable patients to treat the disease and minimize the death rate. Skin cancer detection is challenging since different types of skin lesions share high similarities. This paper proposes a computer-based deep learning approach that will accurately identify different kinds of skin lesions. Deep learning approaches can detect skin cancer very accurately since the models learn each pixel of an image. Sometimes humans can get confused by the similarities of the skin lesions, which we can minimize by involving the machine. However, not all deep learning approaches can give better predictions. Some deep learning models have limitations, leading the model to a false-positive result. We have introduced several deep learning models to classify skin lesions to distinguish skin cancer from different types of skin lesions. Before classifying the skin lesions, data preprocessing and data augmentation methods are used. Finally, a Convolutional Neural Network (CNN) model and six transfer learning models such as Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, and Xception are applied to the publicly available benchmark HAM10000 dataset to classify seven classes of skin lesions and to conduct a comparative analysis. The models will detect skin cancer by differentiating the cancerous cell from the non-cancerous ones. The models' performance is measured using performance metrics such as precision, recall, f1 score, and accuracy. We receive accuracy of 90, 88, 88, 87, 82, and 77 percent for inceptionv3, Xception, Densenet, Mobilenet, Resnet, CNN, and VGG16, respectively. Furthermore, we develop five different stacking models such as inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six for classifying the skin lesions and found that the stacking models perform poorly. We achieve the highest accuracy of 78 percent among all the stacking models.
Index Terms--Skin cancer; Transfer learning; CNN; Densenet; VGG-16; Resnet-50; Inceptionv3; Xception; Mobilenet
## I Introduction
The superficial layer of skin, called the epidermis, consists of Squamous, Basal, and melanocyte cells. Squamous is the outermost layer of cells. Basal cells are the epidermis' lowermost cells. Melanocyte cells protect deeper layers of skin from sun exposure by producing melanin, a brown pigment substance. Due to ultraviolet light exposure, the DNA mutations induce the growth of skin cells, leading to skin cancer [1]. Melanoma, Squamous Cell Carcinoma, and Basal Cell Carcinoma are the substantial groups of skin cancer associated with Squamous, Basal, and Melanocyte cells. Worldwide, almost 10 million skin cancer deaths took place in 2020. According to the World Health Organization, it is estimated that, globally, one-third of all diagnosed cancer cases are skin cancer. Nowadays, skin cancer is a global public health issue that causes approximately 5.4 million newly identified skin cancer incidences each year in the United States [2, 3]. However, melanoma alone causes three-fourths of all skin cancer-related deaths, about 10,000 deaths each year in the United States. In Europe, over 100,000 cases, and in Australia, nearly 15,229 new cases of melanoma have been accounted for annually [4]. Moreover, an increasing trend of skin cancer has been recorded in the past decades. In the United Kingdom, the percentage of melanoma has increased by 119 percent since the 1990s; in the same duration, it has increased by 250 percent in the United States [5]. Skin cancer is an alarming issue, and it should be detected as early as possible. The traditional diagnostic method for detecting skin cancer is usually the biopsy. The biopsy method requires removing a portion of tissue from the cell of the patient's body so that it can be analyzed in the laboratory [6]. The whole procedure is painful, time-consuming, and costly. Sometimes, patients may get into trouble due to the hassle of visiting the hospital.
Recently, the most popular non-surgical instruments used for diagnosis systems are macroscopic and dermoscopic images [7]. The macroscopic image has a lower resolution problem since the images are usually captured with a camera or mobile phone [8]. Dermoscopy images are high-resolution skin images derived from visualizing the deeper skin structures [9]. Since multiple skin cancer types share similar symptoms, it becomes challenging for dermatologists to diagnose even with dermoscopy images. Expert dermatologists are limited in their studies and experiences, and it is not possible for a human to recognize all possible appearances of skin cancer. It is found that dermatologists diagnose skin cancer with an average accuracy of 62 percent to 80 percent [10, 11, 12]. The accuracy varies with the experience of dermatologists. The worst observation is that the performance can drop further for less experienced dermatologists [11]. Such
conditions of a skin cancer diagnosis can be very dangerous for cancer patients who receive false-negative results, and they do nothing to improve the deadly toll of the disease worldwide.
Nowadays, technology has become so advanced that it plays a significant role in the medical sector. The research community has made significant progress in developing computer-aided diagnosis tools to overcome the life-threatening issue [8, 13, 14]. Modern technology has brought us the concept of a neural network that can perform magnificently for classifying images in the medical domain. However, previous investigations fail to extend their study for multiple classes in skin cancer classification [15, 16, 17, 18, 12]. Additionally, those are limited by exploring a few deep learning models [19, 20]. A model must properly learn an image pixel by pixel to detect the cancerous cell accurately. There are some limitations in each model, which prevent those models from giving accurate results. Therefore, we cannot be sure which model will work best. Previously, some models have performed very well in the medical sector. Since the underlying concept of the deep learning model is like a black box, we cannot even say which model will work best on which kind of dataset. Therefore, we have made a comparative analysis by training several neural network models such as Mobilenet, Inceptionv3, Resnet-50, Vgg-16, Xception, Densenet, and a Convolutional Neural Network (CNN).
The deep learning models are capable of learning seven types of skin lesions, namely melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, and vascular lesion, and of distinguishing the cancerous cells from non-cancerous cells. At first, we preprocess the data by resizing it to 120X120 resolution. Then we augmented the dataset using augmentation methods such as rotating, horizontal and vertical flipping, std normalization, random zooming, width and height shifting, and ZCA whitening. Finally, we fed the images to the neural network models. Furthermore, we developed five stacking ensemble models such as inceptionv3-inceptionv3, Densenet-Mobilenet, inceptionv3-Xception, Resnet50-Vgg16 and stack-six and fed the images to the models to check how the stacking ensemble models perform on the skin cancer dataset. We achieve 90 percent accuracy using inceptionv3, which is the highest accuracy among all the models, including the stacking ensemble models. Our proposed method outperformed expert dermatologists, which will contribute to the medical sector by saving many lives. Our comparative analysis has been done using pre-trained deep learning models, which are more efficient than simple models. We have proposed the stacking ensemble models using the weights of existing deep learning models. Our unique experiment has given a new observation that will help future researchers working on ensemble learning models. The overall process will help to identify which models are appropriate for making the best decision for detecting skin cancer disease.
The rest of this paper is arranged as follows. Section 2 provides the background needed for the study. The data sources, preprocessing methods, and models used in this work for the skin cancer classification task are discussed in Section 3. The simulation results based on the classification algorithms and the comparison using the derived results are analyzed in Section 4. Finally, this paper is summarized in Section 5.
## II Background and Literature Review
Many investigations have been done on the topic of image classification. We have gone through some of the related papers, which helped us significantly improve our analysis.
Previously, M. Vidya and M. V. Karki [21] showed a machine-learning approach for detecting melanoma skin cancer. Their approach includes five phases: data acquisition, preprocessing of data, segmentation, feature extraction, and classification. They preprocessed the dataset to remove unwanted noises such as artifacts, low contrasts, hairs, veins, skin colors, moles, etc. After that, they used a segmentation process called Geodesic Active Contours (GAC). The features of the shape and edge information are extracted using HOG. Finally, they applied SVM, KNN, Naive Bayes, and Neural Networks to the extracted features. They obtained 97.8 percent accuracy using the SVM classifier and 85 percent specificity using the KNN classifier.
K. Manasa and D. G. V. Murthy [22] used VGG16 and Resnet-50 models for classifying skin cancer disease using malignant and benign images. They used 3297 images to train their models, of which 1497 images belong to the malignant class, and 1800 images belong to the benign class. They got 80 percent accuracy for the VGG16 model and 87 percent accuracy for the Resnet50 model.
M. Hasan et al. [23] used a convolutional neural network to classify cancer images. Their dataset contained benign and malignant classes. Those images are converted into greyscale images for better CPU usage. The preprocessed data are fed into the convolutional neural network. Finally, they evaluated their model using precision, recall, specificity, f1 score, and accuracy. They got 89.5 percent accuracy on the test dataset. U. B. Ansari and T. Sarode [24] showed an image preprocessing method for detecting skin cancer. Their system is implemented by using Gray Level Co-occurrence Matrix (GLCM) and Support Vector Machine (SVM). GLCM is used to extract features from the image. The features are then fed to the SVM classifier for making the classification result. Before extracting the features, they preprocessed the data by using three methods - Grayscale Conversion, Noise removal, and Image enhancement to reduce unwanted distortions. They preprocessed the images to get the important features from the image. Using their approach, they achieved 95 percent accuracy on the test dataset.
P. G. Brindha et al. [25] showed a comparative study of SVM and CNN for detecting the types of skin cancer. Their image preprocessing method includes reducing the channel of the image by converting the original image into a greyscale image. They used SVM and CNN models to classify skin
cancer, where they observed that SVM produced a better result.
T. Saba [26] showed a review on skin cancer analysis. They reviewed the investigations that have been done on the classification of skin cancer. The investigation found that most of the previous research works have been done using the SVM, CNN, and ANN models. Some of the research has been done using the segmentation process, which we have already discussed earlier.
M. A. Kassem et al. [27] proposed a modified GoogleNet model for classifying eight classes of skin lesions. They added more filters to each layer to enhance and reduce noise. They replaced the last three layers in two different ways. The last three layers have been dropped out and replaced with a fully connected layer, a softmax layer, and a classification output layer. That change has been made to increase the probability of the target class. Secondly, the last two layers have been dropped. The original fully connected layer has been kept as same to detect the outliers. They achieved 63 percent accuracy using the original GoogleNet model and 81 percent accuracy using their proposed model, which indicates that their proposed model works better for classification purposes.
D. N. Le et al. [28] used Transfer learning techniques such as pre-trained ResNet50, VGG16, and MobileNet models in combination with focal loss and class weights for classifying the skin cancer. To balance the classes, they used weights in each of the classes. Higher weights were given to the classes with fewer samples, whereas lower weights were assigned to the classes with more samples. Using their approach, they achieved 93 percent average accuracy on the test data.
I. A. Ozkan and M. KOLLU [29] used four different machine learning algorithms such as Artificial Neural Network (ANN), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Decision Tree (DT) for classifying melanoma skin cancer. They achieved 92.50 percent accuracy for ANN, 89.50 percent accuracy for SVM, 82.00 percent accuracy for KNN, and 90.00 percent accuracy for DT, which indicates that ANN has a better classification performance.
M. Elgamal [30] used two hybrid approaches to identify skin cancer. At first, they extracted the features using discrete wavelet transformation. After that, the features from the images were reduced using Principal Component Analysis. Finally, the features were fed to the artificial neural network and the k-nearest neighbor classifier to perform the classification. Their approach gave 95 percent accuracy and 97.5 percent accuracy in the two classifiers, respectively.
J. Daghir et al. [31] classified melanoma skin cancer using the Convolutional Neural Network (CNN), Support Vector Machine (SVM), and K-Nearest Neighbor (KNN) model. Before classifying the images, they segmented the images using Otsu's method. They extracted the features from the segmented images and fed those features to the classifiers. Finally, they combined all three models and made predictions based on the majority votes. Their proposed models gave the best result, which is 88.4 percent accuracy; on the other hand, the KNN model gave 57.3 percent accuracy, the SVM model gave 71.8 percent accuracy, and CNN gave 88.4 percent accuracy.
M. Q. Khan et al. [32] proposed an image processing technique to detect and distinguish melanoma from nevus. At first, they used the Gaussian filter to remove the noise from the images. After that, they used SVM for classifying melanoma and nevus skin cancer. Their proposed methodology achieved almost 96 percent accuracy.
## III Methodology
Since the architectures are not developed for multi-class classification, we propose the generalized architecture for the multi-class classification of skin cancer, shown in Figure 1. At first, all dermoscopic skin cancer images are preprocessed to meet the requirement of models before feeding. The processed images are then fed to the architecture for feature extraction and finetuning. Finally, the input images are divided into seven classes of skin cancer, i.e. Melanocytic Nevi, Melanoma, Benign Keratosis, Actinic Keratosis, Vascular Lesions, Dermatofibroma, and Basal Cell Carcinoma. The classifiers such as InceptionV3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG16 are designed for classifying these seven skin lesion types. Using the weights of the aforementioned models, five different stacking models have been developed. The models are named inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six.
Figure 1 illustrates a high-level schematic representation of the classification with existing deep learning models.
### Dataset
The HAM10000 dataset, a large collection of multi-source dermatoscopic images, has been used in this work [33]. It is downloaded from the Kaggle website: https://www.kaggle.com/kmader/skin-cancer-mnist-ham10000.
The dataset consists of 10,015 skin lesion images of 7 classes. The classes are Melanocytic nevi (6705 images), Melanoma (1113 images), Benign keratosis (1099 images), Basal cell carcinoma (514 images), Actinic keratosis (327 images), Vascular Lesions (142 images), and Dermatofibroma (115 images). All dermoscopic images have a resolution of 600 x 450 pixels with three channels. The images are taken with a dermatoscopy instrument, a type of magnifier that is used to take pictures of skin lesions.
Fig. 1: Process for Multi-class skin cancer classification.
Figure 2 shows a sample of skin lesion types.
### Preprocessing
Raw data is not a well-prepared input for deep learning models, since the images can have different sizes and contain noise. The transfer learning models used here only accept images of 224X224 pixels or smaller. Therefore, we have preprocessed the dataset by resizing it from 600 x 450 pixels to 120X120 pixels, keeping the number of channels the same as before. We did not remove noise such as hair and discoloration, as we wanted the models to learn it and perform well on data in the presence of noise.
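A minimal sketch of this preprocessing and augmentation stage, assuming the Keras ImageDataGenerator API; the particular rotation, shift, and zoom magnitudes are illustrative choices rather than tuned values, and the input array is a placeholder.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder batch of already-resized images (N, 120, 120, 3) and labels (N,)
x_train = np.random.rand(32, 120, 120, 3).astype("float32")
y_train = np.random.randint(0, 7, size=32)

datagen = ImageDataGenerator(
    rotation_range=20,                    # random rotations
    horizontal_flip=True,                 # horizontal flipping
    vertical_flip=True,                   # vertical flipping
    featurewise_center=True,
    featurewise_std_normalization=True,   # std normalization
    zoom_range=0.1,                       # random zooming
    width_shift_range=0.1,                # width shifting
    height_shift_range=0.1,               # height shifting
    zca_whitening=False,                  # set True for ZCA whitening (memory-heavy at 120x120)
)
datagen.fit(x_train)                      # computes the normalization statistics
train_iter = datagen.flow(x_train, y_train, batch_size=16)
```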
### Classification Models and Fine Tuning
We perform modifications on the architectures such as Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, and Xception for performing multi-class classification. Deep learning architecture customizations include
1) dense layers with 'relu' activation.
2) dropout layers and softmax layers at the bottom of the architecture.
3) improvement in the parameters' values.
Then, we fine-tune the models on the HAM10000 dataset for classifying skin cancer disease.
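A minimal sketch of this customization, assuming the TensorFlow/Keras API with an InceptionV3 base: the pooling layer, the number of dense units, and the dropout rate are illustrative assumptions, while the seven-way softmax output, the Adam optimizer, and the 0.0001 learning rate follow the settings described in this section.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# ImageNet-pretrained base without its original classification head
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(120, 120, 3))

x = layers.GlobalAveragePooling2D()(base.output)    # pooling choice is an assumption
x = layers.Dense(256, activation="relu")(x)         # dense layer with 'relu' activation
x = layers.Dropout(0.5)(x)                          # dropout layer
outputs = layers.Dense(7, activation="softmax")(x)  # softmax over the seven lesion classes

model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_iter, epochs=30)  # fine-tuning on the HAM10000 training images
```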
1) CNN: A Convolutional Neural Network is a class of neural networks. It is used for image processing and classification. A CNN contains a convolutional layer, a pooling layer, and a fully connected layer. The convolutional layer is the core building block of the CNN model. The convolutional layer performs a dot product between two matrices; one matrix is the kernel, and another is a portion of the input image. The kernel size is spatially smaller than the input image, but the kernel dimension is the same as the input image. A pooling layer is another building block of a CNN. It reduces the spatial size of CNN, which helps reduce the network's parameters and computation. Finally, the fully connected layer is used to provide the final output. The final convolutional neural network output is converted to a flattened one-dimensional input fed to the fully connected layer. The final fully connected layers use the softmax activation function, which gives the probability of input being in a particular class. Finally, we trained the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer [34, 35].
2) Inceptionv3: Inceptionv3 was first introduced by Szegedy et al. [36] in 2015. It is a convolutional neural network for analyzing image data in tasks such as image classification, object localization, and segmentation. The network is an extended version of the GoogleNet model. The Inceptionv3 model avoids computational complexity since it concatenates multiple convolutional filters of different sizes into a new filter, which allows the model to reduce the number of parameters to be trained. Therefore, classification performance remains good while keeping a smaller number of parameters. Inceptionv3 has become popular among researchers since it can be efficiently trained over a huge dataset. In addition, we have included a dense layer with 'relu' activation, Dropout, and softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer.
3) Xception: The Xception model was first developed by the Google researcher Chollet [37]. Xception is a deep convolutional neural network built with a linear combination of depth-wise separable convolutions with residual connections. This novel deep neural network architecture was inspired by the Inception model, where the Inception layers are replaced with depth-wise separable convolutions. The concept of building the architecture with fully depth-wise separable convolutions makes it less complex and more efficient than other deep convolutional neural networks. Moreover, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to fine-tune the model on the dataset. Finally, we have fine-tuned the model on 9617 images (for 30 epochs) with a learning rate of 0.001 and adam optimizer with a momentum of 0.9.
4) Densenet: A Densely Connected convolutional Network (Densenet) was first proposed by Gao Huang, Zhuang Liu, and their team in 2017 [38]. It is a deep convolutional neural network that uses dense connections between the layers, and each layer connects to every following layer in a feed-forward manner. It diminishes the vanishing gradient issue and requires fewer parameters to train the model. So, it is advantageous to use in the computer vision field. Moreover, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to fine-tune the model on the dataset. Finally, we have fine-tuned the model on 9617 images (for 30 epochs) with a learning rate of 0.001 and adam optimizer with a momentum of 0.9.
Fig. 2: Sample skin cancer images from HAM10000 dataset (a) Actinic keratosis (b) Basal cell carcinoma (c) Benign keratosis-like lesions (d) dermatofibroma (e) Melanocytic nevi (f) Melanoma (g) Vascular lesions.
5. Mobilenet: Andrew G. Howard et al. [39] first introduced the MobileNet architecture. Among the deep neural networks, Mobilenet is a lightweight model which is appropriate for reducing computational cost and time. To reduce the model size and computation, MobileNet uses depthwise separable convolutions instead of standard convolutions. A depthwise separable convolution is a factorized convolution that factorizes a standard convolution into two convolutions: a depthwise convolution and a pointwise convolution. A pointwise convolution is a 1x1 convolution. The depthwise convolution aims to filter, whereas the purpose of the pointwise convolution is to combine. The combination of a depthwise convolution followed by a pointwise convolution forms the depthwise separable convolution operation. The architecture of MobileNet is built with separable convolutions except for the first layer. The first layer is built with a complete convolutional layer. In addition, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer.
6. Resnet-50: A Residual Neural Network (ResNet) is a kind of Artificial Neural Network (ANN) that forms a network by stacking residual blocks on top of each other. Resnet has many variants, and the most popular networks are ResNet-34, ResNet-50, and ResNet-101. Each variant follows the same concept of Resnet; the only difference is in the number of layers. Resnet-50 works on 50 neural network layers. Large numbers of layers are useful for solving complex problems as each of the layers deals with a unique task. But the problem with the deeper network is that it shows a degradation issue. Usually, the degradation problem is caused either by the initialization of the network, by the optimization function, or by the problem of vanishing or exploding gradients. The Resnet model aims to avoid such issues. The strength of the Resnet model is the skip connection, which lies at the core of the residual blocks and is responsible for avoiding the degradation problem. Skip connections work in two ways. Firstly, they attenuate the vanishing gradient issue by creating an alternate shortcut for passing the gradient. Secondly, they allow the model to learn an identity function that ensures that the higher layers work almost similarly to the lower ones. Since the model is trained on more than a million images, it can classify the small dataset accurately. In addition, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer [40].
7. VGG-16: VGG-16 is the state-of-the-art deep neural network for analyzing image input. The model was used to win ILSVR (Imagenet) competition in 2014 and is considered one of the excellent vision model architectures to date. It is a large network and contains 138 million parameters. The number 16 of the VGG16 network refers to 16 layers with weights. The unique thing about vgg16 is that it maintains a convolution layer of 3x3 filter with a stride 1, and the same padding and max pool layer of a 2x2 filter of stride 2, throughout the whole architecture. Finally, the model ends with 2 Fully Connected Layers followed by a softmax for output. In addition, we have included a dense layer with 'Relu' activation, Dropout, and Softmax layers with seven outputs at the bottom of the architecture to better fine-tune the model on the dataset. Finally, we fine-tuned the model on 9617 sample images for 30 epochs with a learning rate of 0.0001 and Adam optimizer [41].
### Proposed stacking models
1. Inceptionv3-Inceptionv3: Inceptionv3-Inceptionv3 stacking model is developed using the weights derived from two inceptionv3 models. The models are trained with the same input but individually. The weights are saved in a folder for further process. The weights are fed to a decision tree model worked as a meta-model. Meta model is a fast learner since it predicts the input, which is already the predictions from the classifiers.
3. Inceptionv3-Xception: Inceptionv3-Xception stacking model is developed using the weights derived from one Inceptionv3 and one Xception model. The models are trained with the same input but individually. The weights are saved in a folder for further process. The weights are fed to a decision tree model for final prediction. The decision tree model worked as a meta-model. Meta model is a fast learner since it predicts the input, which is already the predictions from the classifiers.
4. Resnet50-Vgg16: Resnet50-Vgg16 stacking model is developed using the weights derived from one Resnet50 and one Vgg16 model. The models are trained with the same input but individually. The weights are saved in a folder for further process. The weights are fed to a decision tree model for final prediction. The decision tree model worked as a meta-model. Meta model is a fast learner since it predicts the input, which is already the predictions from the classifiers.
5. Stack-Six: Stack-six stacking model is developed using the weights derived from six models such as Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, and Xception. The models are trained with the same input but individually. The weights are saved in a folder for further process. The weights are fed to a decision tree model for final prediction. The decision tree model worked as a meta-model. Meta model is a fast learner since it predicts the input, which is already the predictions from the classifiers.
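A minimal sketch of the stacking procedure, assuming scikit-learn's DecisionTreeClassifier as the meta-model; it also assumes that the quantities fed to the meta-model are the base models' class-probability outputs, and the arrays shown are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder class-probability outputs of two fine-tuned base models, shape (N, 7);
# in practice these would come from model_a.predict(x) and model_b.predict(x)
preds_a = np.random.rand(100, 7)
preds_b = np.random.rand(100, 7)
labels = np.random.randint(0, 7, size=100)

# Concatenate the base-model outputs and train the decision-tree meta-model
meta_inputs = np.concatenate([preds_a, preds_b], axis=1)
meta_model = DecisionTreeClassifier(random_state=0)
meta_model.fit(meta_inputs, labels)

# At prediction time, the same concatenation of base-model outputs is classified
test_inputs = np.concatenate([np.random.rand(10, 7), np.random.rand(10, 7)], axis=1)
print(meta_model.predict(test_inputs))
```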
### Evaluation Metrics
Evaluating a model's performance is necessary since it gives an idea of how close the model's predicted outputs are to the corresponding expected outputs. The evaluation metrics are used to evaluate a model's performance. However, the evaluation metrics differ with the types of models. The types of models are classification and regression. Regression refers to the problem that involves predicting a numeric value. Classification refers to the problem that involves predicting a discrete value. The regression problem uses the error metric for evaluating the models. Unlike the regression problem model, the classification problem uses the accuracy metric for evaluation. Since our motive is to classify the cancerous cell, we used accuracy, f1 score, precision, and Recall for our evaluation metric [42, 43, 44, 45].
Precision: When the model predicts positive, precision specifies how many of those positive predictions are correct. Precision is the metric of interest when false positives are costly. For skin cancer classification, if the model gives low precision, then many non-cancerous images will be detected as cancerous; a high-precision model avoids such false alarms. The precision can be calculated as follows:
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{1}\]
Here TP refers to True Positive values and FP refers to False Positive values.
Recall: The metric recall is the counterpart of precision. Recall is the metric of interest when false negatives (FN) are costly. In the skin cancer classification problem, if the model gives low recall, then many cancerous cells will be labelled as non-cancerous; a high-recall model avoids such missed detections. The recall can be calculated as follows:
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{2}\]
F1 score: F1 score combines precision and recall and provides an overall accuracy measurement of the model. The value of the F1 score lies between 1 and 0. If the predicted value matches with the expected value, then the f1 score gives 1, and if none of the values matches with the expected value, it gives 0. The F1 score can be calculated as follows:
\[\text{F1 score}=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}} \tag{3}\]
Accuracy : Accuracy determines how close the predicted output is to the actual value.
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}} \tag{4}\]
here, TN refers to True Negative and FN refers to False Negative.
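The four metrics can be computed directly from the confusion-matrix counts; the following short sketch implements Eqs. (1)-(4) with made-up counts for illustration.

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute precision, recall, F1 score, and accuracy from raw counts, Eqs. (1)-(4)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy

# Example with made-up counts for a single class
print(classification_metrics(tp=80, fp=10, tn=900, fn=20))
```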
## IV Results and Discussions
### Results of existing deep learning models
The models' accuracy is derived from the validation data containing 1103 images of seven classes. We have used the TensorFlow and Keras libraries for implementing the Neural Network models. Model training is done on Google Colab using a GPU runtime. We have evaluated the performance of seven different models such as Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 for the classification of skin cancer among seven classes: Melanocytic nevi, Melanoma, Benign keratosis, Basal cell carcinoma, Actinic keratosis, Vascular Lesions, and Dermatofibroma using performance metrics such as precision, recall, f1-score, and accuracy. The categorical accuracies for Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 are 90 percent, 88 percent, 87 percent, 82 percent, 77 percent, and 73 percent, respectively. The inceptionv3 model provided the best result among all the models. The weighted average of precision, recall, and F1-score for InceptionV3 is 91 percent, 90 percent, and 90 percent, respectively. Similarly, the weighted averages of precision, recall, and F1-score for Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 are also evaluated, which are shown in Table 1.
Table 1: Results of Inceptionv3, Xception, Densenet, Mobilenet, Resnet-50, CNN, and VGG-16 models
For each of the seven models, the training-validation accuracy and training-validation loss curves are represented in Figure 3. For all models, the training accuracy is higher than the validation accuracy (and the training loss is lower than the validation loss) from the first epochs. One possible explanation is the Dropout layer added to each architecture during fine-tuning, which makes a pretrained model less prone to over-fitting: Dropout layers disable a fraction of the neurons during training to reduce the effective complexity of the model.
In Figure 3, the confusion matrices show the number of true positive and false negative predictions made by each of the models.
### Results of Developed Stacking Models
Using the weights of the seven aforementioned classifiers, we develop five stacking ensemble models: Inceptionv3-Inceptionv3, Densenet-Mobilenet, Inceptionv3-Xception, Resnet50-Vgg16, and stack-six. We evaluate the models' performance on the same test dataset that is used for the seven aforementioned models. The weighted averages of precision, recall, and F1-score for Inceptionv3-Inceptionv3, Densenet-Mobilenet, Inceptionv3-Xception, Resnet50-Vgg16, and stack-six are also evaluated, as shown in Table 2.
The results show that the stacking ensemble models provide at most 78 percent accuracy, which is lower than the performance of the existing deep learning models. Since the stacking ensemble models are less prone to producing biased results, this may explain both their poorer performance and the small variation in accuracy among them: the lowest accuracy is 70 percent and the highest is 78 percent. In contrast, the accuracy of the single models varies widely from one model to another, with a lowest accuracy of 73 percent and a highest of 90 percent, so those models may have a tendency to show results biased towards a particular dataset.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Models & Accuracy & Precision & Recall & F1-Score \\ \hline Inceptionv3-Inceptionv3 & 0.78 & 0.79 & 0.78 & 0.78 \\ Inceptionv3 & 0.78 & 0.77 & 0.78 & 0.77 \\ Densenet & 0.78 & 0.77 & 0.78 & 0.77 \\ Densenet-Mobilenet & 0.75 & 0.74 & 0.75 & 0.74 \\ Xception & 0.70 & 0.72 & 0.70 & 0.71 \\ Vgg16 & 0.70 & 0.72 & 0.70 & 0.71 \\ \hline stack-six & 0.78 & 0.80 & 0.78 & 0.77 \\ \hline \end{tabular}
\end{table}
Table 2: Results of inceptionv3-inceptionv3, Densenet-mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six model.
We also stacked six models together and named the result the stack-six model. The stack-six model gives 78 percent accuracy, which is not much of an improvement even though we increased the number of stacked weights. Therefore, it appears that the stacking models' performance saturates within a particular accuracy range.
## V Conclusion
Since the death rate due to skin cancer increases day by day, it is necessary to address this global public health issue. The outstanding performance of deep convolution models on image datasets can be utilized for skin cancer detection. However, with deep learning models, different problems require different processes to solve. Previously, several investigations have been done on skin cancer classification. To the best of our knowledge, none of the work has shown the comparative analysis of multiple skin cancer classes using deep convolutional neural networks. Our work will help the medical sector distinguish skin cancer from different skin lesions accurately. Seven classes of skin lesions have been classified using Resnet-50, VGG-16, Densenet, Mobilenet, Inceptionv3, Xception, and CNN. Finally, the performance of the models is evaluated using evaluation metrics such as precision, recall, f1-score, and accuracy. Among all the models, Inceptionv3 provides the best result, which is 90 percent accuracy. Furthermore, we have developed five stacking ensemble models such as inceptionv3-inceptionv3, Densenet-Mobilenet, inceptionv3-Xception, Resnet50-Vgg16, and stack-six using the weights of the aforementioned models for observing how the stacked models perform individually on the same dataset. We have found that the stacking models give the highest accuracy of 78 percent, which is lower than the performance of the existing models. Since single models have a tendency to show biased results, it can be one reason why the accuracy widely varies from one model to another. Therefore, our experiment shows that the stacking models may give a lower accuracy than the existing models since they do not show biased results. Our work will give a clear observation of stacking ensemble models to future researchers for further investigations on the skin cancer dataset as well as the ensemble learning models.
|
2302.10375
|
Cultural transmission of move choice in chess
|
The study of cultural evolution benefits from detailed analysis of cultural
transmission in specific human domains. Chess provides a platform for
understanding the transmission of knowledge due to its active community of
players, precise behaviors, and long-term records of high-quality data. In this
paper, we perform an analysis of chess in the context of cultural evolution,
describing multiple cultural factors that affect move choice. We then build a
population-level statistical model of move choice in chess, based on the
Dirichlet-multinomial likelihood, to analyze cultural transmission over decades
of recorded games played by leading players. For moves made in specific
positions, we evaluate the relative effects of frequency-dependent bias,
success bias, and prestige bias on the dynamics of move frequencies. We observe
that negative frequency-dependent bias plays a role in the dynamics of certain
moves, and that other moves are compatible with transmission under prestige
bias or success bias. These apparent biases may reflect recent changes, namely
the introduction of computer chess engines and online tournament broadcasts.
Our analysis of chess provides insights into broader questions concerning how
social learning biases affect cultural evolution.
|
Egor Lappo, Noah A. Rosenberg, Marcus W. Feldman
|
2023-02-21T00:25:41Z
|
http://arxiv.org/abs/2302.10375v3
|
# Cultural transmission of move choice in chess
Egor Lappo12, Noah A Rosenberg3, Marcus W Feldman1
Footnote 1: Department of Biology, Stanford University, Stanford, CA 94305 USA
Footnote 2: Email: [email protected]
**Abstract.** The study of cultural evolution benefits from detailed analysis of cultural transmission in specific human domains. Chess provides a platform for understanding the transmission of knowledge due to its active community of players, precise behaviors, and long-term records of high-quality data. In this paper, we perform an analysis of chess in the context of cultural evolution, describing multiple cultural factors that affect move choice. We then build a population-level statistical model of move choice in chess based on the Dirichlet-multinomial likelihood to analyze cultural transmission over decades of recorded games played by leading players. For moves made in specific positions, we evaluate the relative effect of frequency-dependent bias, success bias, and prestige bias on the dynamics of move frequencies from specific positions. We observe that negative frequency-dependent bias plays a role in the dynamics of many moves and that other moves are compatible with transmission under prestige bias or success bias. These apparent biases may reflect recent changes that have happened in chess, namely the introduction of computer chess engines and online tournament broadcasts. Our analysis of chess provides insights into broader questions concerning evolution of human behavioral preferences and modes of social learning.
**Keywords.** Chess, cultural evolution, Dirichlet-multinomial, social learning, transmission biases.
## 1 Introduction
Chess has existed in its current form for hundreds of years; it is beloved as an established sport, a hobby, and also as a source of inspiration for scientists across disciplines. Since the 1950s, playing chess well has served as a goal in the development of artificial intelligence, as a task that a "thinking agent" would be able to accomplish (Shannon, 1950). This goal was realized in the victory of a chess algorithm over a top human player (Deep Blue vs. Garry Kasparov in 1997). In physics and signal processing, researchers study time series in databases of chess games to extract information regarding long-term correlations, dynamics of position evaluation, invention of new openings, and other quantities (see e.g. Blasius & Tonjes, 2009; Perotti et al., 2013; Ribeiro et al., 2013; Schaigorodsky et al., 2016). Statisticians have been interested in chess as a case study in the development of human performance measurement (Di Fatta et al., 2009; Regan et al., 2011) and modeling of human choice (Regan et al., 2014).
As a _cultural_ dataset, a compendium of chess games has great potential to help cultural evolution researchers to understand patterns of cultural transmission and social learning. A large body of well-annotated chess games is available online, and, compared to e.g. linguistic or textual data, these data have no intrinsic noise. As chess positions and moves are discrete, they can be recorded with complete information. Yet the space of potential game sequences is extremely large, so that there can be great variation in move choices. In addition, a large amount of canonical literature on chess allows for thorough qualitative interpretation of patterns in move choice.
Chess data can help in understanding the relative importance of factors that affect the transmission of knowledge. Focusing on the game of Go, a game that also features discrete moves and complete information, Beheim et al. (2014) analyzed the choice of the first move by Go players in a dataset of \(\sim\)31,000 games. They concluded that the choice of the first move is driven by a mix of social and individual factors, and the strength of these influences depends on the player's age. Many issues concerning cultural transmission in
board games remain to be studied. For example, what are the mechanisms behind social learning: are the players choosing to use "successful" moves or moves played by successful players? What defines "success" of a move?
In this paper, we perform a quantitative study of chess in the context of cultural evolution using a database of 3.2 million chess games from 1971 to 2019. In Section 2, we introduce chess vocabulary and several aspects of the game important for our analysis. In Section 3, we describe the cultural factors involved in the game and position them within the context of existing literature on cultural transmission. Section 4 describes the dataset used in this study. In Section 5, we motivate and define a statistical model for frequencies of opening strategies in the dataset. Unlike the individual-based analysis of a binary choice of the first move in Go by Beheim et al. (2014), our model incorporates counts for all possible moves in a position, taking a population-level approach.
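For reference, the basic building block of such a population-level model is the Dirichlet-multinomial likelihood of a vector of move counts. The sketch below evaluates the generic log-likelihood with SciPy; it is only this building block, not the full statistical model developed in Section 5, and the counts and concentration parameters shown are made up.

```python
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_loglik(counts, alpha):
    """Log-likelihood of one vector of move counts under a generic
    Dirichlet-multinomial distribution with concentration parameters alpha."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n, a = counts.sum(), alpha.sum()
    return (gammaln(n + 1) - gammaln(counts + 1).sum()
            + gammaln(a) - gammaln(n + a)
            + (gammaln(counts + alpha) - gammaln(alpha)).sum())

# Counts of three candidate moves played from one position in one year
print(dirichlet_multinomial_loglik([120, 30, 5], alpha=[1.0, 1.0, 1.0]))
```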
## 2 The game of chess
In this section, we briefly review vocabulary related to chess, assuming readers have some basic knowledge of the rules of the game (for a concise summary, see Capablanca, 1935). As a visual aid, example chess positions are presented in Figure 1.
First, a game of chess consists of two players taking turns moving one of their pieces on the board, starting with the player who is assigned the white pieces. We will call these discrete actions _ply_: the first ply consists of a move by the white player, the second ply consists of a move by the black player, and so on. The average length of a chess game at a professional level is around 80 plys (see Section 4 below). We will use the word "ply" when describing specific positions, but otherwise we will use the words "move," "strategy," and "response" interchangeably with "ply."
Moves are typically recorded using algebraic notation (Hooper & Whyld, 1992, p. 389), in which each ply is represented by a letter for a piece -- **K** for king, **Q** for queen, **R** for rook, **B** for bishop, **N** for knight, no letter for a pawn -- followed by the coordinates of the square on which the piece ends. The coordinates on the board are recorded using letters from **a** to **h** from left to right for the _files_ (the \(x\)-axis coordinates), and numbers from 1 to 8 for the _ranks_ (the \(y\)-axis coordinates). For example, the first few moves of the game could be recorded as **1. e4 e5 2. Nf3 Nc6 3. Bc4 Nf6...** Other special symbols are used for captures (**x**), checks (**+**), and castling (**O-O** or **O-O-O** for king- and queen-side castling, respectively).
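For readers who wish to manipulate such records programmatically, a minimal sketch using the third-party python-chess package replays the example line above and prints the resulting position; the package is assumed here purely for illustration.

```python
import chess

board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6", "Bc4", "Nf6"]:  # 1. e4 e5 2. Nf3 Nc6 3. Bc4 Nf6
    board.push_san(san)  # parse and play each ply in algebraic notation
print(board.fen())       # compact text description of the resulting position
```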
A typical game of chess consists of three phases: the opening, the middlegame, and the endgame. The _opening_ is the initial stage of the game in which players try to achieve a favorable arrangement of the pieces that gives them the most freedom for further actions while keeping their kings safe. Openings are highly standardized, with many sequences of moves having names, e.g. the Sicilian Defense, or the London Opening. Because the number of possible positions is not that large at the beginning of the game, openings are extensively analyzed by players and then memorized for use in tournaments.
After the opening, the game transitions into the _middlegame_, in which the opponents try to realize attacking or defensive plans. The players plan both their long-term strategies as well as short-term actions directed at obtaining an immediate payoff. When most pieces have been traded, the _endgame_ begins. Because few pieces are left on the board, success in the endgame is highly dependent on the positions of the pieces, and this is where long-term planning in the middlegame has its potential payoff (see e.g. Euwe & Hooper, 1976).
The collective body of knowledge about how to play chess in various positions is called _chess theory_. In the middlegame and the endgame, chess theory gives general guidelines on how to act in different situations to secure victory. In the opening, many positions have been extensively analyzed by human players as well as by computers. A _mainline_ is a sequence of moves that has proven to be the most challenging for both opponents, such that neither of them has an advantage. A _sideline_ is a sequence of moves that deviates from the established optimal sequence. Throughout this paper, we will use the term _line_ to mean a fixed sequence of moves in chess that is rarely deviated from. Many types of moves have associated terminology. For example, a _gambit_ is a type of move that sacrifices a piece or a pawn for some kind of advantage: better position, more freedom of movement, etc. The gambit is _declined_ when the opponent refuses to accept the sacrifice and does not take the offered piece.
Each professional chess player has a numerical rating, usually assigned by the national or international
federation. FIDE (The International Chess Federation) uses the _Elo rating system_(Elo, 1978). The rating is relative, meaning that it is calculated based on a player's past performance, and is intended to represent a measure of the player's ability. Real performance in each given game is then assumed to be distributed around the "true" rating. After a game is concluded, the loser transfers Elo points to the winner in proportion to the difference in their ratings. Originally, the system was built with the assumption that real performance is normally distributed around the Elo rating, with every player having the same variance. Today, some federations use modified versions that can account for individual variance and the number of games played (Glickman and Doan, 2020). The typical rating of a strong intermediate player is \(\sim\)1500, and a rating of 2500 is required to qualify for a Grandmaster (GM) title. Most elite tournaments involve ratings above 2700, and the highest ever Elo rating of 2882 was achieved by Magnus Carlsen in May of 2014.
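For readers unfamiliar with the mechanics, the following sketch illustrates the standard logistic Elo update; the K-factor and the ratings used here are illustrative assumptions, not values mandated by any federation.

```python
def elo_expected_score(r_a, r_b):
    """Expected score of player A against player B under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def elo_update(r_a, r_b, score_a, k=20.0):
    """Update both ratings after one game.

    score_a is 1 for a win by A, 0.5 for a draw, 0 for a loss.
    k is the K-factor (an assumption here; federations use different values).
    """
    e_a = elo_expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))


# Example: a 2700-rated player beats a 2650-rated player.
print(elo_update(2700, 2650, 1.0))  # approximately (2708.6, 2641.4)
```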
## 3 Culture and chess
Chess is a cultural practice that is actively shaped by the people who participate in it. Individual players enter the practice, altering their performance and behaviors depending on the games they and others have played. Many cultural processes are involved in players' decision-making. To analyze these processes, we will concentrate on decisions made in the opening stage, because the relatively small number of positions allows players to reason about concrete moves and lines in their analyses and preparation. The factors affecting move choice that we discuss below are well-known to the chess community (Desjarlais, 2011; de Sousa, 2002; Euwe and Nunn, 1997; Gobet, 2018). Our goal here is to summarize them and to place them in the language of cultural evolution.
1. **Objective strength.** One factor in move choice is the objective strength of the move, which reflects the potential for victory from resulting positions. An evaluation of a move's strength can be made by human analysis or with a chess computer. For example, a move that would lead to the loss of a piece would be negatively evaluated, and a check that would force the opponent's king into a vulnerable position would have high objective strength. Of course, during a game, players do not have access to the computational power of chess engines, so they have to rely on memory or resort to extrapolation from previous experience. Many early moves have been extensively analyzed, and the chess computer's top choice in those positions is well-known to most professional players.
2. **Social context of the move.** A second factor can be summarized as the "social context" of the move. Players are aware of how often a given move has been played in the past. This frequency evaluation can even be automated using websites such as OpeningTree. Developed theory often exists for more frequent moves, which can be the default choice for many players. Conversely, rare moves or _novelties_
Figure 1: Example chess positions. (A) The starting position. (B) A typical position at which the opening transforms into a middlegame (Najdorf Sicilian). Ten plys have been played to reach this position. (C) A checkmate: the black player has lost the game. Images are generated with apronus.com/chess/diagram/editor.
(previously unseen moves) can create problems for opponents who most likely have not prepared a response. It is important to observe that the frequency with which a move is played is not directly proportional to the objective strength discussed in (a); there are moves that are objectively weak, but only conditional on the opponent finding a _single_ good response. If this response is not played by the opponent, then the weak move gives an advantage. In some conditions, e.g. an unprepared opponent or lack of time, such a "weak" move can be highly advantageous. There have also been cases in which a historically frequent move was later "refuted" by deep computer analysis. Beyond the move frequency, information on the success of strategies in leading to a win can play a role. This relates to the complexities of actually applying information about objective move strength. It is not enough to make a single strong move: a player has to then _prove_ an advantage by continuing to play further strong moves and actually executing plans that would lead to victory. The success rate of a move is an indicator of how hard it is to gain a long-term advantage leading to checkmate after choosing it. The influence of elite players may also be important. Top players participate in invitational tournaments, such as the Tata Steel Tournament or the Sinquefield Cup, which are followed by the wider community. Players, presented with a choice of approximately similar moves, may choose the one that was played by a "superstar" player. This phenomenon is exemplified by strategies named after famous players, such as "Alekhine's Defense" (De Firmian, 2008, p. 159) or "Najdorf Sicilian" (De Firmian, 2008, p. 246). Leading players can create trends. For example, the Berlin Defense was popularized after grandmaster Vladimir Kramnik employed it to win the World Championship in 2000 (De Firmian, 2008, p. 43).
3. **Metastrategy.** Beyond trends in move choice, the "metastrategy" of chess is also evolving. Conceptions of what a game of chess "should" look like have been changing through the years, and so has the repertoire of openings used by professional players (Hooper & Whyld, 1992, p. 359). In the 18th century, the swashbuckling Romantic style of chess emphasized winning with "style": declining gambits could be viewed as ungentlemanly, and Queen's Pawn openings were rarely played (Shenk, 2011, Ch. 5). However, by the World Championship of 1927, trends in chess had shifted to long-term positional play (see Shenk, 2011, Ch. 8). Queen's Pawn openings were the cutting edge of chess theory, and almost all games at that tournament began with the Queen's Gambit Declined (Chessgames.com, 2023). Following World War I, hypermodern chess emphasized control of the board's center from a distance, and its influence is evident in top-level games of the mid-20th century (Shenk, 2011, Ch. 10). Hypermodern players refused to commit their pawns forward, preferring a position where pieces are placed on safe squares from which they can target the opponent's weaknesses. Recently, a style of chess mimicking computer play has emerged, which involves players memorizing long computer-supported opening lines, as well as playing risky pawn advances. Chess is as much a social phenomenon as an individual one. Some players exhibit personal preferences for certain game features, such as early attacks or long and complicated endgames, and some aspects of play are determined by a player's upbringing. For example, the Soviet school of chess has formed around a certain energetic, daring, and yet "level-headed" style (Kotov & Yudovich, 1961).
4. **Psychological aspects.** Finally, psychological aspects and circumstances of the game could contribute to move choice (Gobet et al., 2004). There are lines that are known to lead to a quick draw, and a player might elect to follow one of them, depending on the relevance of the outcome at a particular stage of the tournament. Openings may also be chosen to take opponents out of their comfort zone: in a game against a much weaker opponent, a dynamic and "pushy" line might give a player an advantage. Similarly, a master of attacking play might make mistakes when forced into a long positional game.
The complexities of move choice suggest that chess could serve as a model example for the quantitative study of culture. Players' knowledge is continually altered by their own preparation, the games they play, and by other players' actions. In this sense, chess knowledge is "transmitted" over time in part by players observing and imitating past actions of their own and other players, or _transmission by random copying_(Bentley et al., 2007). The large historical database of chess games provides an opportunity to study
deviations from random copying dynamics known as _transmission biases_ or _social learning strategies_(Boyd & Richerson, 1985; Henrich & McElreath, 2003; Kendal et al., 2018; Laland, 2004). In our analysis of the transmission of chess knowledge, we will investigate _success bias_ (players paying attention to win rates of different strategies), _prestige bias_ (players imitating the world's best grandmasters), and _frequency-dependent bias_ (e.g. players choosing rare or unknown strategies).
Can we show that transmission biases are present in chess? Can we utilize available data to measure and separate their effects on move choice? Is there a form of selection acting on chess strategies and can we identify associated fitnesses? What is the speed of the spread of innovations in chess? What are the relative effects of (anti-)conformity to the whole community vs. to elite players?
Two significant developments that the chess community has recently experienced are relevant to the study of transmission in chess: computer chess and the internet. Prior to the internet, games played at chess tournaments would be collected and published monthly or quarterly, after which the community could learn about new strategies or refutations of existing openings. This meant that by developing a novel strategy, a player could visit multiple tournaments prior to the next regular publication and "catch" many opponents with the same "trick." Currently, tournament games are streamed to the public in real-time using digital boards, and many tournaments have completely migrated to online platforms. This change has made many past approaches obsolete and forced players to look for new ways to gain an edge.
Computer chess engines became widely available to elite players starting from the late 1990s, and revolutionized tournament preparation. Finding the best response in a position or solving a chess puzzle became possible in a matter of seconds rather than hours or days. Post-game analysis helps players quickly identify and address their weaknesses. The internet has significantly increased the speed and availability of social learning in chess by providing a new medium of communication. At the same time, computer chess engines may have decreased the need for social learning of chess computation skills, as they identify the objectively strongest move. We interpret our findings in relation to these changes.
## 4 Data
The dataset that serves as a foundation for this project is _Caissabase_ - a compendium of \(\sim\)5.6 million chess games, available for download at caissabase.co.uk. Games in the dataset are between players with Elo rating 2000 or above. These games correspond to master-level play, allowing us to focus on the dynamics of high-level chess without the influence of players who are just learning the game.
To filter the dataset, we first excluded games with errors, that is, games that, according to a chess notation parser, do not correspond to a valid sequence of moves. We then kept only the games that record the result, the players' names, and their Elo ratings, and selected only the games played from 1971 to 2019. This filtering produced a table with \(\sim\)3.2 million games.
In Figure 2, we highlight the main aspects of the dataset. Figure 2A shows that the number of games per year has been steadily growing since the 1970s, stabilizing at approximately 100,000 games per year. In total, there are \(\sim\)71,000 chess players in the dataset, with the number of players per year increasing in recent decades (Figure 2B).
It is widely accepted in the chess community that white has a slight advantage, as the side that starts the game. This view is reflected in Figure 2C, which plots the fractions of outcomes of games in each year. Finally, Figure 2D shows the average length of games over time; games have become longer since the mid-1980s, which could mean that players are getting better at the game and no longer lose early. To explore the dynamics in the dataset further, we look at the frequencies of individual moves.
## 5 Modeling move choice
### Move frequencies
Here, we present the dynamics of move frequencies over time for several game positions. Given a position on the board, the player whose turn it is has a choice of which move to play. In positions where the king is in check, a player would only have a few choices, since the player is forced to get out of check. In some other cases, several equally attractive moves could be available, and any of the factors in Section 3 has the
potential to affect the choice. By examining move frequencies and their dynamics over time, we focus on four types of positions that are shown in Figure 3.
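For concreteness, the per-year move frequencies plotted in Figure 3 can be computed from a table of games reaching a fixed position, as in the sketch below; the toy table is hypothetical and merely stands in for the filtered Caissabase records, and pandas is assumed as tooling.

```python
import pandas as pd

# Hypothetical table of games that reach a fixed position: one row per game.
games = pd.DataFrame({
    "year": [1990, 1990, 1990, 1991, 1991, 1991],
    "move": ["e4", "d4", "e4", "e4", "c4", "e4"],
})

# Fraction of games in each year in which each candidate move was chosen.
freqs = (games.groupby("year")["move"]
              .value_counts(normalize=True)
              .unstack(fill_value=0.0))
print(freqs)
```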
**Starting Position, ply 1.** Figure 3A shows the fractions of games in which different starting moves have been chosen by players in each year from 1971 to 2019. The frequencies of the moves are mostly constant over time, suggesting that the choice of the starting move is a well-understood and well-developed idea. Because the required computation is prohibitive, even a modern chess engine cannot definitively evaluate the relative objective strength of these choices, so no innovation could be introduced through computer analysis.
**Sicilian Defense, ply 3.** Moving further down the tree of possible moves, Figure 3B shows move frequencies in response to **1. e4 c5** -- the Sicilian Defense. In this position, there is a mainline move -- **Nf3** -- which an overwhelming majority of players prefer to play. This line is called the Open Sicilian, a type of game in which the center of the board is uncluttered with pawns and players have maximal freedom to apply their calculation skills. Other moves, such as the Closed Sicilian **Nc3**, are examples of sidelines that are rarely played. The frequencies change only slightly over time, again suggesting that there are rigid preferences for moves in this position. Move distributions in which one specific move dominates are common, possibly because some sequences of moves are perceived as a single coherent unit.
**Najdorf Sicilian, ply 11.** A game starting with a Sicilian Defense can follow a sequence known as the Najdorf Sicilian, named after the famous grandmaster Miguel Najdorf. This sequence consists of 10 plys, and the moves at ply 11 that have been played in the resulting position are presented in Figure 3C. Qualitatively, the picture is dramatically different from the early positions considered above. In response to the Najdorf Sicilian, there are three main moves for white: **Be2**, **Be3**, or **Bg5**. All three have consistently high frequencies through the years. However, there are responses that became "obsolete" (**f4**, light green on the plot) and other options that have rapidly gained in popularity. One of the recent "innovations," the move **h3** (purple), was almost never played before the 2010s, but now occurs in more than 10% of games. Chess professionals associate the move **h3** with the "computer-inspired" style of play because modern chess engines are known for quickly launching their flank pawns forward to overwhelm the opponent. The popularity of this style of chess also potentially contributes to the increase in the number of games with **h3**.
**Queen's Gambit Declined, ply 7.** Finally, Figure 3D presents an example of a gradual change. Instead of a rapid explosion in popularity, the move **cxd5**, in which the pawn on the **c**-file captures the pawn on **d5**, has slowly been replacing the alternatives in one of the Queen's Gambit Declined positions over the last 40 years. This change might have happened either due to the change in the metastrategy of play (preference for different styles of positions) or because of the gradual development of chess theory.
The qualitative picture of move frequency changes can be summarized as follows. On one hand, very early opening moves do not show large fluctuations in frequencies, most likely because a significant change in frequency necessitates some kind of "innovation," and these are impossible to produce at such an early stage. On the other hand, moves beyond the standardized opening sequences (after the 16th-20th ply) involve positions that do not repeat often enough for humans to memorize and analyze during preparation. This phenomenon makes quantitative analysis of specific late-game moves nearly impossible.
Figure 2: Features of the dataset. (A) Number of games per year. (B) Number of unique players per year. (C) Outcome proportions in each year. (D) Average game length per year, measured in the number of plys (half-moves).
Between these two extremes, there are positions at which chess theory is actively developed and tested. Positions like the Najdorf Sicilian occur early enough in the game to be reached often, but are advanced enough to provide many continuation possibilities that are approximately equal in terms of objective strength. In such positions, all factors, including engine analysis, move frequency, stylistic trends, and personal preferences, could play a role in move choice.
### Population-level modeling of move choice
We develop a statistical model that can help to explain the data described above. A complete model of move choice would involve parameters associated with the whole population, with subgroups of players (e.g. top 50 players), or with each individual. Such a model would be very complex, so our model is restricted to population-level features of dynamics, analyzing frequency-dependent, success, and prestige biases.
#### 5.2.1 Unbiased model
First, we consider a null model that would generate the simplest dynamics, reflecting unbiased transmission of move choice preferences from one year to the next. Conceptually, the model assumes that each year, players "sample" a move randomly from games that were played in the last year. More precisely, fix an arbitrary chess position and suppose that in each year \(t\), exactly \(N_{t}\) games having this position were played. The data for the model are the counts of \(k\) different response moves, denoted by \(\mathbf{x}_{t}=(x_{t}^{1},\ldots,x_{t}^{k})\). We do not attempt to model the appearance of novel strategies, so we will assume that all counts are positive, \(x_{t}^{i}>0\). The vector of response strategy counts in the next year, \(\mathbf{x}_{t+1}\), is multinomially distributed,
\[\mathbf{x}_{t+1}\sim\text{Multinomial}(N_{t+1},\mathbf{\theta}_{t}). \tag{1}\]
The probability vector \(\mathbf{\theta}_{t}\) has the Dirichlet distribution with counts in the current year, \(\mathbf{x}_{t}\), as Dirichlet allocation parameters,
\[\mathbf{\theta}_{t}\sim\text{Dirichlet}(\mathbf{x}_{t}). \tag{2}\]
The multinomial likelihood depends on a positive integer parameter \(n\) and on the vector of probabilities \(\mathbf{\theta}\) that sum to one,
\[f_{M}(\mathbf{y};n,\mathbf{\theta})=\frac{n!}{y_{1}!\cdots y_{k}!}\theta_{1}^{y_{1}} \cdots\theta_{k}^{y_{k}}. \tag{3}\]
The Dirichlet likelihood depends on a vector of non-negative real numbers \(\mathbf{\alpha}\):
\[f_{D}(\mathbf{\theta};\mathbf{\alpha})=\frac{\Gamma\left(\sum_{i=1}^{k}\alpha_{i} \right)}{\prod_{i=1}^{k}\Gamma(\alpha_{i})}\prod_{i=1}^{k}\theta_{i}^{\alpha_ {i}-1}. \tag{4}\]
These two likelihoods can be combined into the compound Dirichlet-multinomial likelihood by integrating over \(\mathbf{\theta}\) (Johnson et al., 1997, pp. 80-83),
\[f_{DM}(\mathbf{y};n,\mathbf{\alpha})=\frac{n!\,\Gamma\left(\sum_{i=1}^{k}\alpha_{i}\right)}{\Gamma\left(n+\sum_{i=1}^{k}\alpha_{i}\right)}\prod_{i=1}^{k}\frac{\Gamma(y_{i}+\alpha_{i})}{y_{i}!\,\Gamma(\alpha_{i})}, \tag{5}\]
which will be the likelihood for the model. In other words, under our unbiased model, the counts of moves in year \(t+1\) are distributed with probability density function
\[p(\mathbf{x}_{t+1}\mid N_{t+1},\mathbf{x}_{t})=f_{DM}(\mathbf{x}_{t+1};N_{t+1},\mathbf{x}_{t}), \tag{6}\]
so that the counts in the previous year \(\mathbf{x}_{t}\) take the role of the Dirichlet parameters \(\mathbf{\alpha}\). As a shorthand, we write
\[\mathbf{x}_{t+1}\sim\text{Dirichlet-multinomial}(N_{t+1},\mathbf{x}_{t}). \tag{7}\]
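For reference, the compound likelihood in eq. (5) can be evaluated directly with log-gamma functions; the sketch below is a minimal NumPy/SciPy implementation and is not the inference code used in the actual analysis, and the counts shown are hypothetical.

```python
import numpy as np
from scipy.special import gammaln


def dirichlet_multinomial_logpmf(y, alpha):
    """Log-density of the Dirichlet-multinomial distribution, eq. (5).

    y     : integer counts for the k moves (length-k array), summing to n
    alpha : positive Dirichlet parameters (length-k array)
    """
    y, alpha = np.asarray(y, float), np.asarray(alpha, float)
    n = y.sum()
    log_const = gammaln(n + 1) + gammaln(alpha.sum()) - gammaln(n + alpha.sum())
    log_terms = gammaln(y + alpha) - gammaln(y + 1) - gammaln(alpha)
    return log_const + log_terms.sum()


# Example: counts in year t used as Dirichlet parameters, counts in year t+1 as data.
x_t, x_t1 = np.array([120, 60, 20]), np.array([110, 70, 30])
print(dirichlet_multinomial_logpmf(x_t1, x_t))
```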
For a vector \(\mathbf{y}\) having a Dirichlet-multinomial distribution with parameters \(n\) and \(\mathbf{\alpha}\), the expectation is
\[\mathbb{E}\left[\mathbf{y}\right]=\frac{n}{\sum_{j=1}^{k}\alpha_{j}}\mathbf{\alpha}. \tag{8}\]
For our model, this formula yields
\[\mathbb{E}\left[\mathbf{x}_{t+1}\right]=\frac{N_{t+1}}{\sum_{j=1}^{k}x_{t}^{j}}\mathbf{x} _{t}=\frac{N_{t+1}}{N_{t}}\mathbf{x}_{t}, \tag{9}\]
meaning that essentially no changes happen in this unbiased model, except possibly for the change in the number of games played. The strategies are "transmitted" from one year to the next proportionally to their current frequencies in the population. In this way, the null model is analogous to a neutral many-allele Wright-Fisher model in population genetics (Ewens, 2004). In contrast with the population genetics models, we do not use relative move frequencies, but work instead with counts directly via the Dirichlet distribution. As we show below, this choice allows us to account for overdispersion in the data and to introduce transmission biases in a simple and intuitive way.
Of course, it should be noted that chess players pay attention to games further back in the past, not just in the last year. This null model is still a good representation of the process for several reasons. First, there is a high degree of autocorrelation in the move count data (Schaigorodsky et al., 2016), meaning that it is likely that the most recent data point is representative of counts in the most recent years. Second, players tend to look only at _select_ famous games of the past, whereas the more recent games can be more easily perceived in their totality.
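As an illustration, one step of this unbiased transmission process (eqs. (1)-(2)) can be simulated as follows; the move counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate_unbiased_year(x_t, n_next, rng):
    """One step of the null model: theta ~ Dirichlet(x_t), x_{t+1} ~ Multinomial(n_next, theta)."""
    theta = rng.dirichlet(x_t)
    return rng.multinomial(n_next, theta)


# Hypothetical counts of four candidate moves in year t; the model assumes all
# counts stay positive (cf. the adjustment discussed in Section 5.2.4).
x_t = np.array([500, 300, 150, 50])
trajectory = [x_t]
for _ in range(10):  # simulate ten years with a constant number of games per year
    trajectory.append(simulate_unbiased_year(trajectory[-1], 1000, rng))
print(np.vstack(trajectory))
```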
#### 5.2.2 Full model
In the actual data, a strategy transmitted at a rate greater than what is expected in the null model can be said to have higher _cultural fitness_(Cavalli-Sforza & Feldman, 1981). Conversely, a strategy having a lower transmission rate than expected has lower cultural fitness. Selection on strategies is carried out by players when they decide which move to play based on any of the factors discussed in Section 3. Extending the model to allow for deviations from "neutrality" in transmission rates would allow us to detect which factors might be causing the selection.
In our extended model, the vector of strategy counts in the year \(t+1\) again has the Dirichlet-multinomial distribution with parameters \(N_{t+1}\) and \(\mathbf{\alpha}\):
\[\mathbf{x}_{t+1}\sim\text{Dirichlet-multinomial}(N_{t+1},\mathbf{\alpha}). \tag{10}\]
However, vector \(\mathbf{\alpha}\) is now defined as
\[\alpha_{i}=\exp(\mathbf{\beta}_{i}\cdot\mathbf{y}_{t}^{i})f_{i}(x_{t}^{i}/N_{t})x_{t} ^{i}. \tag{11}\]
Here, \(x_{t}^{i}\) is the count of games with the \(i\)th strategy in year \(t\), \(f_{i}\) is a piecewise constant function of the strategy frequency which we call a _frequency-dependent fitness_ function, and \(\mathbf{\beta}_{i}\) is a vector of constant coefficients. The interpretation of the frequency-dependent fitness functions is discussed below in Section 5.2.3.
Additional features beyond just the move count or frequency that may affect the dynamics of move frequencies are denoted \(\mathbf{y}_{t}^{i}\) in eq. (11). They consist of:
1. The average outcome of the strategy in the whole population for games in year \(t\), with win for white encoded as \(1\), win for black encoded as \(-1\), and draw encoded as \(0\). We denote the corresponding coefficient by \(\beta_{\text{win},i}\).
2. The average outcome of the strategy among the top \(50\) players in the dataset in year \(t\). The list of top \(50\) players was computed separately for each year using the average Elo rating of the players in that year. We denote the corresponding coefficient by \(\beta_{\text{top50-win},i}\).
3. The frequency of the strategy among the top \(50\) players in year \(t\). We denote the corresponding coefficient by \(\beta_{\text{top50-freq},i}\).
These features represent biases different from frequency dependence that could also contribute to cultural fitness of moves; if the average outcome significantly affects move choice, success bias is present in transmission (\(\beta_{\text{win},i}\), \(\beta_{\text{top50-win},i}\)). Similarly, prestige bias could be important for transmission if players imitate the top \(50\) players (\(\beta_{\text{top50-win},i}\), \(\beta_{\text{top50-freq},i}\)).
The extra features are included in the model as an exponential factor \(\exp(\mathbf{\beta}_{i}\cdot\mathbf{y}_{t}^{i})\). This has two purposes: first, it ensures that the variables \(\alpha_{i}\) stay positive for all parameter values and data points; second, it represents _multiplicative_ effects of several types of transmission biases, a common approach in theoretical models of cultural evolution (see e.g. Denton et al., 2020; Lappo et al., 2023).
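A sketch of how the Dirichlet parameters in eq. (11) are assembled is given below; the feature values, coefficients, and fitness functions are placeholders, since in the actual analysis the \(\mathbf{\beta}_{i}\)'s and \(f_{i}\)'s are estimated from data rather than fixed.

```python
import numpy as np


def alpha_full_model(x_t, n_t, features, betas, f_funcs):
    """Dirichlet parameters alpha_i = exp(beta_i . y_t^i) * f_i(x_t^i / N_t) * x_t^i, eq. (11).

    x_t      : counts of the k moves in year t
    features : k x 3 array with (win rate, top-50 win rate, top-50 frequency) per move
    betas    : k x 3 array of coefficients
    f_funcs  : list of k frequency-dependent fitness functions
    """
    freqs = x_t / n_t
    fitness = np.array([f(p) for f, p in zip(f_funcs, freqs)])
    return np.exp(np.sum(betas * features, axis=1)) * fitness * x_t


# Hypothetical inputs for a position with three candidate moves.
x_t = np.array([400.0, 350.0, 250.0])
features = np.array([[0.10, 0.05, 0.40], [0.02, 0.10, 0.35], [-0.05, 0.00, 0.25]])
betas = np.zeros((3, 3))        # neutral coefficients
f_funcs = [lambda p: 1.0] * 3   # constant (neutral) fitness
print(alpha_full_model(x_t, x_t.sum(), features, betas, f_funcs))  # reduces to x_t
```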
#### 5.2.3 Interpretation of frequency-dependent fitness functions \(f_{i}\)
The role of the frequency-dependent fitness functions \(f_{i}\) in our model is complex. To understand it better, suppose first that the \(f_{i}\)'s are _constants_ and the linear term in the exponential vanishes. Then eq. (11) gives \(\alpha_{i}=f_{i}x_{t}^{i}\) and the model is analogous to the Wright-Fisher model with selection, where
\[\mathbb{E}\left[x_{t+1}^{i}\right]=N_{t+1}\frac{f_{i}x_{t}^{i}}{\sum_{j=1}^{k }f_{j}x_{t}^{j}}, \tag{12}\]
and the values of \(f_{i}\) represent selection coefficients. The mean fitness in the "population" of games in year \(t\) is \(\bar{f}_{t}=\frac{1}{N_{t}}\sum_{j=1}^{k}f_{j}x_{t}^{j}\).
Ideally, we would want \(\bar{f}_{t}=1\): if the mean fitness is less than one, then the population would not maintain its size. However, it is possible that \(\bar{f}_{t}<1\) in a model fitted to data, despite the number of games \(N_{t}\) staying consistently high. This apparent contradiction is due to the \(f_{i}\)'s measuring two phenomena at once: their _relative_ values represent selection, while the _mean_ value of the \(f_{i}\)'s measures overdispersion. Mathematically, the expectation of a Dirichlet(\(\mathbf{\alpha}\))-distributed random variable is invariant with respect to multiplying \(\mathbf{\alpha}\) by a positive constant, but its variance is determined by the magnitudes of the parameters. Overdispersion in the data means that the variance of move counts is _larger_ than what we would expect under multinomial random choice, and the ability to account for overdispersion is an important feature of the Dirichlet-multinomial likelihood.
We normalize the functions \(f_{i}\) by the mean fitness \(\bar{f}_{t}\) so that we can concentrate our attention on the frequency-dependent behaviors, and not on modeling variance in the data. This normalization works because eq. (12) can be rewritten as
\[\mathbb{E}\left[x_{t+1}^{i}\right]=\frac{N_{t+1}}{N_{t}}\frac{f_{i}}{\bar{f}_ {t}}x_{t}^{i}=\frac{N_{t+1}}{N_{t}}f_{i}^{\prime}x_{t}^{i}, \tag{13}\]
so that the expectation of \(x_{t+1}^{i}\) in the model matches the model with normalized coefficients \(f^{\prime}\) and the mean fitness equal to one.
By allowing \(f_{i}\) to depend on the frequency of the strategy, we are able to model _frequency-dependent selection_ phenomena, while still accounting for possible overdispersion. The normalization constant is now
\[\bar{f}_{t}=\sum_{j=1}^{k}f_{j}(p_{t}^{j})p_{t}^{j}, \tag{14}\]
where \(p_{t}^{j}=x_{t}^{j}/N_{t}\), and \(k\) is the number of distinct moves played from a position. In our analysis of the model results, we will report scaled frequency-dependent fitness values \(f_{i}/\bar{f}_{t}\).
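The normalization by the mean fitness in eqs. (13)-(14) can be illustrated as follows, here with hypothetical constant fitness values for three moves.

```python
import numpy as np


def normalized_fitness(f_funcs, x_t):
    """Scale fitness values by the mean fitness f_bar = sum_j f_j(p_j) p_j, eq. (14)."""
    p = x_t / x_t.sum()
    raw = np.array([f(pi) for f, pi in zip(f_funcs, p)])
    f_bar = np.sum(raw * p)
    return raw / f_bar


# Hypothetical constant fitnesses; their relative values, not their scale, encode selection.
f_funcs = [lambda p: 0.6, lambda p: 0.5, lambda p: 0.4]
print(normalized_fitness(f_funcs, np.array([500.0, 300.0, 200.0])))
```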
We choose a piecewise-constant form for the functions \(f_{i}\). That is, for \(i=1,\dots,k\) we have
\[f_{i}(x)=\begin{cases}c_{1}^{i}&\text{ if }x\in[0,b_{1}^{i}),\\ c_{j}^{i}&\text{ if }x\in[b_{j-1}^{i},b_{j}^{i}),\\ c_{\ell}^{i}&\text{ if }x\in[b_{\ell-1}^{i},1],\end{cases} \tag{15}\]
with \(c_{j}^{i}\) being values of \(f_{i}\) and \(b_{j}^{i}\) being breakpoints that determine the boundaries of constant segments. We choose quartiles of move frequencies as the values for \(b_{j}^{i}\), so that each function \(f_{i}\) has three breakpoints and four constant segments. This choice does not uniformly cover the domain of \(f_{i}\), but allows for the same amount of data to be used in estimating each segment. The piecewise-constant form for \(f_{i}\) introduces no assumptions about the shape of the function while keeping the number of parameters low.
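A piecewise-constant fitness function of the form in eq. (15) can be implemented as a simple lookup; the breakpoints and segment values below are placeholders rather than estimates.

```python
import numpy as np


def make_piecewise_fitness(breakpoints, values):
    """Return f(x) that is constant on the segments defined by the breakpoints, eq. (15).

    breakpoints : three interior breakpoints (e.g. quartiles of observed move frequencies)
    values      : four segment values c_1, ..., c_4
    """
    breakpoints, values = np.asarray(breakpoints), np.asarray(values)

    def f(x):
        # searchsorted picks the segment index; side="right" puts x == b_j in the next segment.
        return values[np.searchsorted(breakpoints, x, side="right")]

    return f


# Hypothetical quartile breakpoints and segment values for one move.
f_i = make_piecewise_fitness([0.10, 0.18, 0.30], [1.4, 1.1, 0.9, 0.7])
print(f_i(0.05), f_i(0.25), f_i(0.6))  # 1.4 0.9 0.7
```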
#### 5.2.4 Inference
In total, our model has parameter vector \(\mathbf{\theta}=(c_{j}^{i},\mathbf{\beta}_{i})\) of length \(7k\), where \(k\) is the number of different moves played in a given position. For each move, there are three coefficients \(\beta_{\text{win},i}\), \(\beta_{\text{top50-win},i}\), \(\beta_{\text{top50-freq},i}\), as well as four values \(c_{1}^{i},c_{2}^{i},c_{3}^{i},c_{4}^{i}\) characterizing the function \(f_{i}\) in eq. (15).
We choose to fit the model in a Bayesian framework using Markov chain Monte Carlo sampling, as this choice makes implementation of the model straightforward and allows us to obtain both point estimates and uncertainty quantification from the same analysis. To conduct Bayesian inference, we need to specify a prior distribution for \(\mathbf{\theta}\). Following Gelman et al. (2020), we specify non-informative priors for each parameter. Each constant segment \(c_{j}^{i}\) of each function \(f_{i}\) was assigned an \(\text{Exp}(1)\) prior, such that \(f_{i}\) is always non-negative, and the prior mean of \(f_{i}\) is equal to one, corresponding to neutrality. The coefficients \(\mathbf{\beta}_{i}\) were each assigned a normal \(\mathcal{N}(0,1)\) prior, since the features were standardized to have zero mean and unit variance. Given these priors and the model likelihood (defined in eqs. (10) and (11)), samples were generated from the posterior distribution using the Hamiltonian Monte Carlo sampler provided by the Stan software package (Gelman et al., 2015; Stan Development Team, 2023). For this procedure, we only consider the data from 1980 to 2019, since earlier years have significantly less data available.
Many moves were played only a few times in the whole dataset. To prevent extremely rare moves from inflating the number of parameters, we have combined moves that individually have an average frequency of less than 2% into a single category called "other." In addition, it is commonly accepted by professional players that rare moves serve the same purpose: to take the opponent "out of theory" into positions where neither player had spent significant time preparing, leading to more chaotic and tense games. There are also years in which some move counts are equal to zero, and in this case, our model does not apply directly. To remedy this situation, in computational inference we replace the parameter \(\mathbf{\alpha}\) from eq. (11) by \(\mathbf{\alpha}+1\), such that
\[\alpha_{i}=1+\exp(\mathbf{\beta}_{i}\cdot\mathbf{y}_{t}^{i})f_{i}(x_{t}^{i}/N_{t})x_{t }^{i}. \tag{16}\]
This choice essentially increases each move count by one in every year -- a common approach to deal with zeros in data. The adjustment does not significantly bias the model, as most moves were observed in most years (cf. Figure 4).
#### 5.2.5 Insights from the model
Estimates for coefficients \(\mathbf{\beta}_{i}\) and values of the frequency-dependent fitness functions \(f_{i}\) can be extracted from the model fitted using the Stan Markov chain sampler. For point estimates, the posterior median is used, and for quantifying uncertainty, we present posterior 1% and 99% quantiles for each estimate.
Fits for three different chess positions are discussed here. Figure 4 shows the original count data, the move choice probabilities estimated by the model, and estimates of frequency-dependent fitness \(f_{t}^{i}(x_{t}^{i})/\bar{f}_{t}\) of moves over time. The estimates of the parameters \(f_{i}\) and \(\mathbf{\beta}_{i}\) are presented in Figures 5 and 6, respectively.
We consider three positions at varying depths in the game tree: the Queen's Pawn opening at ply 2 (**1. d4**), the Caro-Kann opening at ply 5 (**1. e4 c6 2. d4 d5**), and the Najdorf Sicilian at ply 11 (**1. e4 c5 2. Nf3 d6 3. d4 cxd4 4. Nxd4 Nf6 5. Nc3 a6**). A comparison of the first and second rows of panels in Figure 4 shows that our model fits the data well, with estimated move choice probabilities matching the actual move frequencies.
Considering the responses to the Queen's Pawn opening in Figure 4A, from 1980 to 2005, the move **d5** was, on average, increasing in popularity, with this trend reversing after 2005. The move **Nf6** shows the opposite dynamics. Fluctuations are typical for frequency-dependent selection, and indeed the values of the fitness function observed in Figure 4C confirm this. The fitnesses of these two strategies are higher when they are at lower frequencies. The plots of the frequency-dependent fitness functions \(f_{i}(x)\) for \(x\) from 0 to 1 are shown in Figure 5A, and there is a downward slope in the values of \(f_{i}(x)\) characteristic of negative _frequency-dependent bias_, or anti-conformity.
In the Caro-Kann opening, the move **e5** gradually becomes more popular, whereas **exd5** is used less and less (Figure 4D). The "inverse" dynamics can be seen on the plot of move fitnesses in Figure 4F, suggesting that frequency-dependent dynamics play a role. However, the functions \(f_{i}\) are not the only determinants of move frequencies in our model: the coefficients \(\mathbf{\beta}_{i}\) shown in Figure 6B suggest that the choice to play the move **exd5** is affected by the win rate in the population, indicating _success bias_. The decrease in the frequency of **exd5** then comes from many players losing after playing this move. Indeed, computer engines
have shown that the move **e5** provides the strongest winning probability for the player, while after **exd5** the opponent can "equalize" the position and take over the game (Schandorff, 2021).
In the case of the Najdorf Sicilian, the model confirms the analysis in Section 5.1. The move **h3** was highlighted as a recent strong trend. The frequency-dependent fitness function \(f_{\textbf{h3}}\) shows that there is no negative frequency-dependent bias for a choice of **h3** (Figure 5C); in fact, Figure 4I shows that **h3** becomes fitter as it _increases_ in frequency. This result suggests that the move is a genuine innovation, becoming more popular "on its own merit" and not because of frequency-dependent trends. The coefficient for the win rate among the top 50 players, \(\beta_{\textbf{h3},\text{top50-win}}\) is large (Figure 6C), meaning that the increase in the frequency of **h3** could possibly be due to a trend started by the elite players, which then led to wider adoption and development of theory. We conclude that the choice to play **h3** is subject to _prestige bias_.
## 6 Discussion
Data from the last five decades of high-level chess games can be evaluated in terms of cultural transmission and evolution. In particular, our population-level model of move choice in Section 5 has brought together some of the cultural "features" of transmission in attempting to measure influences of factors such as frequency-dependent bias, success bias (win rate), and prestige bias (the use of the move by the very top players). We have shown that many of the moves analyzed are under negative frequency-dependent cultural selection, having higher fitness at lower frequencies (Figure 5). This result suggests that anti-conformity is important in the transmission of chess strategies. In addition, our model is able to identify moves for which other factors play a role: the dynamics of **h3** in the Najdorf Sicilian are affected by the win rate among the top 50 players (Figure 6C), indicating the presence of prestige bias, and the choice of **exd5** in the Caro-Kann suggests success bias (Figure 6B).
Our inference of transmission biases can be connected to the recent changes in chess. The rise in popularity of the move **e5** in the Caro-Kann has happened because of extensive computer analysis showing that it is the most challenging response in that position. Before computers, most grandmasters chose to play **exd5**, which now is considered to give too many winning chances to black. The development of online chess and simultaneous broadcasting of tournament games may have made prestige bias stronger, since it is now easier to follow games by elite players. Similarly, there are now online resources providing analyses of move frequencies in the population and by individual players, potentially making frequency-dependent bias more important.
The model complements other recent work on measuring the strength of transmission biases in cultural datasets. For example, Newberry and Plotkin (2022) used databases of baby names and dog breed popularity to measure frequency-dependent selection. They focused on exchangeable entities, for which the evolutionary forces are blind to any distinguishing characteristics and operate based only on frequencies of types. Our model can be seen as extending the frequency-dependent selection modeling from exchangeable entities such as names and dog breeds to chess moves, which are _nonexchangeable_. This non-exchangeability is what allowed us to add extra features and to simultaneously measure several types of transmission bias.
Our study could also be expanded to include individual-based features as in the analysis of Go by Beheim et al. (2014), which would allow evaluation of the relative importance of several types of social _and_ individual biases. Comparing dynamics _between_ groups of players could give insights into the direction of the spread of chess knowledge within the community. For example, do innovations by professional players diffuse "down" the skill ladder to intermediate players?
Further, the individual behavior of chess players could be studied in its own right. One possibility is to examine players' _opening repertoire_ -- the set of openings that a player has mastered and uses regularly. Focusing on individual behavior, one could ask about the dynamics of opening repertoires as players progress through their careers. Beginning players typically start by mastering a single opening, after which they gradually increase the number of strategies they are comfortable playing, since the ability to play many openings is important for high-level tournaments. As the players age and stop participating in many tournaments, one would expect their opening repertoire to settle into a few familiar strategies. Modeling the evolution and transmission of individuals' opening repertoires would complement the population-level perspective provided by this paper.
Statistical models based on the Dirichlet-multinomial likelihood are known in many related areas, including linguistics (e.g. Madsen et al., 2005), human genetics (Wang et al., 2023), molecular ecology (Harrison et al., 2020), and microbiome data analysis (e.g. Osborne et al., 2022). Often, they take the form of a multinomial likelihood with Dirichlet priors (and arbitrary hyperpriors), such as in the case of mixed membership clustering models (Blei et al., 2003; Pritchard et al., 2000). Our model in Section 5.2 uses an approach in which the Dirichlet-multinomial distribution is used to approximate the process by which the data were generated.
Our modeling and estimation of transmission biases could be useful to chess historians. Many qualitative "explanations" are available for the popularity of certain strategies, and a statistical evaluation of move frequency dynamics could help verify the validity of these explanations. More broadly, our statistical approach could potentially be used to complement the historical study of cultural trends in other games with discrete choices, or even in other domains such as art and fashion.
**Data and code.** The code to generate figures in this paper and links to access the dataset are available at github.com/EgorLappo/cultural_transmission_in_chess.
**Acknowledgments.** We acknowledge NSF grant BCS-2116322 and grant 61809 from the J. T. Templeton Foundation for support. We thank Sharon Du for suggesting the problem and Kaleda Denton for helpful comments on the manuscript.
|
2308.12539
|
CALM : A Multi-task Benchmark for Comprehensive Assessment of Language
Model Bias
|
As language models (LMs) become increasingly powerful and widely used, it is
important to quantify them for sociodemographic bias with potential for harm.
Prior measures of bias are sensitive to perturbations in the templates designed
to compare performance across social groups, due to factors such as low
diversity or limited number of templates. Also, most previous work considers
only one NLP task. We introduce Comprehensive Assessment of Language Models
(CALM) for robust measurement of two types of universally relevant
sociodemographic bias, gender and race. CALM integrates sixteen datasets for
question-answering, sentiment analysis and natural language inference. Examples
from each dataset are filtered to produce 224 templates with high diversity
(e.g., length, vocabulary). We assemble 50 highly frequent person names for
each of seven distinct demographic groups to generate 78,400 prompts covering
the three NLP tasks. Our empirical evaluation shows that CALM bias scores are
more robust and far less sensitive than previous bias measurements to
perturbations in the templates, such as synonym substitution, or to random
subset selection of templates. We apply CALM to 20 large language models, and
find that for 2 language model series, larger parameter models tend to be more
biased than smaller ones. The T0 series is the least biased model families, of
the 20 LLMs investigated here. The code is available at
https://github.com/vipulgupta1011/CALM.
|
Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurençon, Shomir Wilson, Rebecca J. Passonneau
|
2023-08-24T03:53:55Z
|
http://arxiv.org/abs/2308.12539v3
|
# CALM : A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
###### Abstract
As language models (LMs) become increasingly powerful, it is important to quantify and compare them for sociodemographic bias with potential for harm. Prior bias measurement datasets are sensitive to perturbations in their manually designed templates and are therefore unreliable. To achieve reliability, we introduce the Comprehensive Assessment of Language Model bias (CALM), a benchmark dataset to quantify bias in LMs across three tasks. We integrate 16 existing datasets across different domains, such as Wikipedia and news articles, to filter 224 templates from which we construct a dataset of 78,400 examples. We compare the diversity of CALM with prior datasets on metrics such as average semantic similarity and variation in template length, and test the sensitivity to small perturbations. We show that our dataset is more diverse and reliable than previous datasets, and thus better captures the breadth of linguistic variation required to reliably evaluate model bias. We evaluate 20 large language models including six prominent families of LMs such as Llama-2. In two LM series, OPT and Bloom, we found that larger parameter models are more biased than lower parameter models. We found the T0 series of models to be the least biased. Furthermore, we noticed a tradeoff between gender and racial bias with increasing model size in some model series. The code is available at [https://github.com/vipulgupta1011/CALM](https://github.com/vipulgupta1011/CALM).
## Introduction
The rapid rise in prominence of large language models (LMs) has resulted in widespread usage and real-world applications in many domains [23, 27]. But it has also fueled concerns about hidden bias in LMs [14, 17, 21]. Prior research has demonstrated that LMs have varying effects across different groups of individuals [5, 50, 8]. Recently, there has been significant emphasis on qualitative analysis of bias in LMs, commonly referred to as red teaming language models [41, 19, 71]. Such approaches help in underscoring bias issues within the models. However, due to the rapidly evolving nature of the field, it is important to have reliable datasets to quantify bias in LMs. We believe that our work is an important step in creating reliable and robust bias evaluation datasets.
We present Comprehensive Assessment of Language Model bias (CALM): a benchmark dataset and set of procedures to quantify bias in language models using diverse templates across three tasks - question answering, sentiment analysis, and natural language inference. Inspired by GLUE [63] and SuperGLUE [61], we use existing datasets to construct the CALM benchmark. We use 16 popular datasets across these three tasks to filter 224 templates. Using these templates, we generate a dataset of 78,400 examples for evaluating gender bias and racial bias. We assess LM accuracy independently for different demographic groups and take the difference between maximum and minimum accuracy across groups as the bias score. Our approach for template selection and dataset creation has a high degree of adaptability and can be extended to include more tasks and bias categories.
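As an illustration of the scoring procedure, the sketch below computes the bias score as the gap between the highest and lowest group-wise accuracy; the group labels and counts shown are illustrative, not CALM results.

```python
def calm_bias_score(group_correct, group_total):
    """Bias score = max group accuracy - min group accuracy (in percentage points)."""
    accuracies = {g: 100.0 * group_correct[g] / group_total[g] for g in group_correct}
    return max(accuracies.values()) - min(accuracies.values()), accuracies


# Illustrative per-group counts of correct answers out of the prompts for one task.
correct = {"male": 820, "female": 805, "non-binary": 790}
total = {"male": 1000, "female": 1000, "non-binary": 1000}
score, acc = calm_bias_score(correct, total)
print(acc, score)  # gap of 3.0 percentage points
```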
Unlike previous bias analysis datasets, where templates have been manually designed by the authors [3, 32], we use pre-existing datasets that vary significantly from each other. Previous bias datasets have been shown to be unreliable and prone to author bias [52, 2, 51]. These design choices make prior datasets very sensitive to small modifications in templates [51, 52]. In constructing the CALM dataset, we put special emphasis on having a diverse set of templates. We measure the template diversity by computing the semantic similarity between templates using BERTScore [69]. We show that our dataset exhibits higher diversity than prior bias benchmark datasets. To evaluate sensitivity, we conduct robustness tests using syntactic modifications to the templates
Figure 1: Examples in CALM are created by replacing person placeholders in templates with names belonging to different social groups. The accuracy of LM is measured independently for each social group.
proposed by Selvam et al. [51] and found that our bias score remains stable across multiple runs. Furthermore, we conduct a comparison of the characteristics of templates across bias datasets. We find the templates used in our dataset to have the highest variance in template length, another reflection of greater template diversity. Thus CALM captures a breadth of linguistic variation for comprehensive evaluation of model bias. The increased template diversity and score stability represent key advantages over prior bias analysis datasets.
We conduct bias benchmarking on 20 large language models (LLMs), including six prominent families of LMs, such as Llama-2. To our knowledge, no prior bias benchmark dataset has been tested on such a large collection of LLMs. In two LM series, OPT and Bloom, we found that larger parameter models are more biased than lower ones: average bias in OPT models increased by 23%, from a CALM score of 8.2 for OPT-2.7B to 10.1 for OPT-30B. The T0 series of models demonstrate significantly lower bias than other models. Conversely, we found that Llama-2, Falcon, and Bloom models exhibit more bias compared with others. Moreover, we noticed a tradeoff between gender and racial bias in some models, where increasing model size decreased one bias type while increasing the other. These findings shed light on the interplay among bias types in LMs with respect to model size and series, providing essential insights into model behavior across different social groups.
The proposed dataset provides a standardized benchmark for the comparative evaluation of model bias. The dataset can be used to analyze any LM with adequate capabilities on the prescribed tasks. Given the increasing trend of multi-task LLMs, bias quantification datasets like CALM will be essential for comprehensive bias testing.
## Related Work
Quantification of bias is an active research area. Earlier work often measured distance in embedding space [12, 15, 9, 56, 58], using cosine similarities between embeddings extracted from LM hidden layers. In general, such approaches are very dependent on the selection of target terms, and face reliability issues [64].
Recent work has shifted towards template-based approaches, where models are prompted with a set of pre-defined templates to capture specific types of bias. Each prompt involves completing a template with a set of pre-defined target words [22]. Notable examples are Smith et al. [53], Prabhakaran, Hutchinson, and Mitchell [42], Sakaguchi et al. [46]. Smith et al. [53] collect 600 descriptor terms such as "Dear" using crowdsourcing to identify bias across 13 demographic groups. Sakaguchi et al. [46] created a dataset of 44,000 pronoun-resolution question pairs. Within each pair, the questions are nearly identical, differing only in a trigger word that flips the expected answer between questions. Nadeem, Bethke, and Reddy [36] create a dataset for four target domains using 321 target terms and 16,995 test instances, applied to fill-in-the-blank and next-sentence prediction tasks. Some approaches are designed for a specific task like coreference resolution [44, 30], machine translation [55, 13] or sentiment analysis [6]. Use of coreference resolution for bias measurement has focused on gender-occupation associations, measuring alignment of LM performance with relevant stereotypes [44, 70].
Liang et al. [33] use humans to collect descriptor terms to quantify bias across thirteen demographic categories, with 26 pre-defined templates. They quantify bias using the token likelihood of generative models. Li et al. [32] create 30 templates across four demographic categories to quantify bias. Their questions were intentionally designed not to have obvious answers, to see how models perform differently based on subtle changes. Kiritchenko and Mohammad [28] use eleven templates to create a dataset of 8,640 sentences to measure gender and racial bias. Ahn and Oh [1] aim for semantically similar templates to measure how models perform differently with slight changes, for quantifying ethnicity bias. Parrish et al. [39] create a QA dataset for bias measurement. For each context, they provide a negative
Figure 2: Examples of two datasets for each of the three CALM tasks. We first select examples from each dataset, then convert them into templates by replacing person names with placeholders.
question and a non-negative question with two answer choices. They create a dataset of 58,000 testing examples using 325 templates across nine bias categories. Nangia et al. [37] designed 1,508 examples across nine bias categories to measure relative performance between sentence pairs.
In sum, prior template approaches have been found to be sensitive to small modifications in templates, and suffer from author bias and lack of diversity [51, 52]. To address these issues, we selected templates from pre-existing datasets. We show that having a diverse set of templates is crucial for robust and reliable bias measurement.
## Dataset
CALM tests language models on different domains for three tasks: question answering, sentiment classification, and natural language inference. These tasks are well-studied, and address a wide range of capabilities for integrating information across QA pairs or narrative sequences, including meaning, sentiment and logical relationships. For each task, we create a dataset using a set of templates and target words. We extract templates across many datasets to create a diverse set of examples with different levels of difficulty. We describe the tasks below; additional details are provided in the appendix.
### Question Answering
For Question Answering (QA), we selected datasets where the answer is present in or easily inferred from the context. This is done to confine the evaluation to model behavior across different social groups, and to avoid confounding this with factual knowledge about real-world scenarios.
#### bAbI
Weston et al. [66] provides a set of 20 toy QA tasks for text understanding and reasoning. Each task is generated using a simulation of characters interacting in a world. This dataset tests various skills such as chaining facts, simple induction, and deduction. Details about template selection can be found in the appendix.
#### SODAPOP
SOcial bias Discovery from Answers about PeOPle dataset [3] modified instances from the Social IQa dataset [48] to identify bias and stereotypical associations between groups and attributes in LMs. We use the Bethany dataset file provided by authors.
#### TweetQA
This dataset for QA over social media was created from tweets that had been used by journalists [67]. TweetQA is challenging due to the informal nature of the language used on Twitter, as compared to news or Wikipedia. We use the dev set, as test set answers are not publicly available.
#### MCTest
Machine Comprehension of Text [43] consists of fictional stories and multiple choice questions. This dataset was collected via crowdsourcing. We use the MC500 test set, as it is more grammatically correct than MC160.
#### Relation Extraction
Levy et al. [31] reduce relation extraction (RE) to reading comprehension, to create a new dataset for zero-shot RE. They crowd-source questions for each relation and align them with Wikipedia paragraphs. We use their test dataset for template generation.
### Natural Language Inference
The Natural Language Inference (NLI) task pairs a sentence stating a premise with a sentence stating a hypothesis. The models predict whether the hypothesis is entailed by, contradicts, or is neutral with respect to the premise. This task requires the model to understand logical relationships between the two sentences.
#### Snli
Stanford Natural Language Inference contains human annotations grounded by image captioning [10]. Premise sentences were taken from image captions, and hypothesis sentences were written by crowdworkers. We use the test data from this dataset.
#### Wnli
Winograd Natural Language Inference is one of the nine GLUE benchmarks [62]. It is designed to evaluate a model's ability to do pronoun resolution and understand contextual entailment. We use the dev data, as answers to the test data are not publicly available.
#### Rte
Recognizing Textual Entailment is one of the nine GLUE benchmarks [62]. It contains sentence pairs from news and Wikipedia text. Similar to WNLI, we use the dev data.
#### Sick
Sentences Involving Compositional Knowledge contains sentence pairs rich in lexical, syntactic and semantic phenomena [34]. It was created using image and video descriptions. We use the test data.
### Template Creation
To filter examples for the above tasks from each dataset, we use criteria directed at sociodemographic distinctions, and diversity of templates. For QA and NLI, we look for the presence of person names. For SA, we retrieve sentences with pronouns or person names. To ensure template quality after filtering, we manually verified each template. Following the filtering step, each example undergoes a template extraction process, where person names and pronouns are replaced with corresponding tags. We use the same set of templates to generate examples for both of our bias categories, race and gender. Example templates are shown in Figure 1.
To create CALM, we filter 224 templates for the three tasks. For the QA task, we filter 93 templates from the 8 datasets. For the SA task, we filter 77 templates from the 4 datasets. For the NLI task, we filter 54 templates from the 4 datasets. The distributions of templates from the three task types are shown in Tables 1-3. Notably, our approach emphasizes the selection of a diverse set of templates during the filtering process to ensure comprehensive coverage across different domains.
#### Bias Categories
#### Gender bias
To quantify gender bias, names were sampled from three gender categories - male, female, and non-binary - with 50 names per category. This resulted in 150 testing examples for each template. Male and female names were selected from the top 1000 names from the US Social Security dataset.1 We restrict to names with \(>\) 80% usage in a given gender. This partitioning approach is similar to previous approaches [65]. Non-binary names were sampled from the list provided in [18]. We removed non-binary names from male and female names to ensure no data overlap.
Footnote 1: [https://www.ssa.gov/oact/babynames/](https://www.ssa.gov/oact/babynames/)
#### Racial bias
To quantify racial bias, we sampled names across four racial/ethnic groups - White, Black, Hispanic and Asian - with 50 names per category, yielding a total of 200. We selected these four groups based on the availability of corresponding labels in US census data, and the Harvard dataverse.2 We restricted selection to names with \(>\) 80% usage in a given category.
Footnote 2: [https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/SGKW0K](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/SGKW0K)
To broaden bias assessment beyond US names, we compiled a dataset tabulating names from various national origins. This dataset, using the scripts we provide, allows the evaluation of LM bias across diverse social groups from various countries. Due to computational limitations, we confine experiments reported here to US-origin names.
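To make the generation step concrete, a minimal sketch of how a template is instantiated for each social group is shown below; the tag name, template text, and name lists are illustrative placeholders rather than actual CALM data.

```python
# Illustrative sketch of example generation: a template with [NAME] slots is
# instantiated with names drawn from each social group (placeholder data).
TEMPLATE = (
    "Context: [NAME] went back to the office. "
    "Question: Where did [NAME] go back to?"
)

NAMES_BY_GROUP = {
    "male": ["James", "Robert"],
    "female": ["Mary", "Linda"],
    "non-binary": ["Alex", "Riley"],
}

def generate_examples(template: str, names_by_group: dict[str, list[str]]):
    """Yield (group, filled_template) pairs for every name in every group."""
    for group, names in names_by_group.items():
        for name in names:
            yield group, template.replace("[NAME]", name)

for group, example in generate_examples(TEMPLATE, NAMES_BY_GROUP):
    print(f"{group}: {example}")
```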
## Comparison with Other Bias Datasets
To evaluate the diversity of templates in the CALM dataset, we compared diversity measures with other bias datasets. Template diversity can be quantified using BERTScore [69], which measures the semantic similarity between sentences using BERT embeddings [16]. Specifically, BERTScore computes the cosine similarity between the contextual embeddings of corresponding words in a pair of sentences. To quantify the diversity of a dataset, we take the average of the BERTScore over all pairs of templates within that dataset. We also examine template length, defined as the average number of words per template in a dataset, and the standard deviation of template lengths, to further characterize diversity.
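A minimal sketch of these two diversity measures is given below, assuming the `bert-score` Python package; the exact model and settings behind the reported numbers may differ.

```python
# Sketch of the diversity measures: average pairwise BERTScore (F1) and
# template-length statistics (in words) over a list of templates.
from itertools import combinations
from statistics import mean, stdev

from bert_score import score

def average_pairwise_bertscore(templates: list[str], lang: str = "en") -> float:
    """Average BERTScore F1 over all unordered pairs of templates."""
    pairs = list(combinations(templates, 2))
    cands = [a for a, _ in pairs]
    refs = [b for _, b in pairs]
    _, _, f1 = score(cands, refs, lang=lang, verbose=False)
    return float(f1.mean())

def template_length_stats(templates: list[str]) -> tuple[float, float]:
    """Mean and standard deviation of template length, in words."""
    lengths = [len(t.split()) for t in templates]
    return mean(lengths), stdev(lengths)
```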
To evaluate the relative diversity of the CALM templates, we compared CALM with seven other bias datasets: DisCo [65], BEC-Pro [4], UNQOVER [32], BITS [60, 59], HolisticBias [53], Counterfactual-eval [25] and BBQ [39]. The first difference we see is the relative number of templates. DisCo and BEC-Pro propose 14 and 5 templates, respectively, to
\begin{table}
\begin{tabular}{l|r|r} \hline
**Dataset** & **Count** & **Percentage** \\ \hline SST & 29 & 37.6\% \\ \hline ToxicComments & 29 & 37.6\% \\ \hline Sentiment140 & 11 & 14.4\% \\ \hline EEC & 8 & 10.4\% \\ \hline Total & 77 & 100.0\% \\ \hline \end{tabular}
\end{table}
Table 2: Percentage of templates from each SA dataset.
\begin{table}
\begin{tabular}{l|r|r} \hline
**Dataset** & **Count** & **Percentage** \\ \hline SNLI & 15 & 27.8\% \\ \hline WNLI & 15 & 27.8\% \\ \hline RTE & 13 & 24.0\% \\ \hline SICK & 11 & 20.4\% \\ \hline Total & 54 & 100.0\% \\ \hline \end{tabular}
\end{table}
Table 3: Percentage of templates from each NLI dataset.
measure gender bias. UNQOVER proposes 30 templates to measure gender, nationality, ethnicity, and religion bias. BITS and Counterfactual-eval propose 10 templates each for quantifying sentiment bias. HolisticBias uses 26 templates to measure bias across 13 categories. BBQ proposes 325 templates across nine sociodemographic bias categories. CALM uses 224 templates.
The comparison with other datasets demonstrates that our dataset exhibits greater template diversity. As shown in Table 4, CALM has the lowest average BERTScore of 0.388, indicating lower semantic similarity between template pairs. In contrast, datasets such as UNQOVER (0.660), BITS (0.617), and BEC-PRO (0.594) have a high average BERTScore (\(\geq\) 0.59), suggesting substantial template redundancy. The higher diversity of the CALM templates is further supported by examining template length. Our dataset has a significantly higher average template length of 38.5 words as compared to other datasets. Moreover, a higher standard deviation illustrates considerable variability in template length, with templates ranging from short sentences to large paragraphs.
To evaluate the reliability of the CALM dataset, we conducted robustness tests in which we modify templates as proposed in [51]. Following their procedure, we generated four alternative constructions of CALM by paraphrasing templates through the addition of clauses, the addition of adjectives, and synonym substitution. These paraphrasing modifications produced a dataset five times the size of the original CALM dataset. For the Bloom-7B model, the bias score changed from 14.9 on the original dataset to 15.2 on the modified dataset. For OPT-6.7B, the score changed from 10.1 to 10.8. We evaluated 5 different models on the modified dataset and found that the CALM bias metric remains relatively stable, with a maximum difference of less than 10% across all the tested models. This difference is strikingly low compared with the 70% decrease in bias score (from 41.6 to 13.4) for the BiasNLI dataset, and the 77% increase (from 5.83 to 10.33) for the Winogender dataset [51]. Detailed results are provided in the appendix.
The above analysis shows that our dataset is more diverse and captures a wider range of linguistic variation, as compared to other bias datasets. Furthermore, our dataset leads to more reliable measurement, to better capture and quantify the space of potentially biased behavior of language models.
## Evaluation
### Models
In this work, we perform an empirical evaluation of 20 open-source LMs including six prominent families of large language models: Llama-2 [57], Bloom [49], OPT [68], Falcon [40], T0 [47] and GPT-Neo [7]. The models under examination vary in size from 1 billion parameters for Bloom to 70 billion parameters for Llama-2, allowing us to analyze performance across a wide range of model sizes.
In line with recent work on in-context learning for language model evaluation [33, 11], we evaluate all models using 5-shot prompts. For each template, five examples are randomly sampled from the training set of the corresponding dataset following the procedure established in HELM [33]. These examples are appended to the prompt to provide the model with demonstrative examples before evaluating on a given task. Furthermore, we fix the in-context examples for each dataset across models to ensure standardized comparison, an approach also adopted by HELM [33].
For prompt formatting for each of the three tasks, we adopt the prompt structure used by HELM [33] and Brown et al. [11]. As argued by Liang et al. [33], prompts tailored for each model may yield optimal performance but pose challenges for controlled evaluation. Due to practical computation and time constraints, we use the commonly accepted prompts following Liang et al. [33]. Moving forward, it is desirable to standardize prompts across models to facilitate greater comparability.
### Bias Score
For each template in CALM, model accuracy is evaluated separately for each social group. For example, given a particular template and the female gender group, if the model generates the correct response for 45 out of 50 examples, the accuracy for the female gender group is 90%. In order to quantify potential gender and race bias in LMs, we propose a _bias score_ defined as the difference between the maximum and minimum accuracy across contrastive social groups. A bias score of 0 indicates identical performance across groups, while a score of 50% means a 50 percentage-point gap between the highest- and lowest-accuracy groups in the bias category. This simple differential-accuracy metric intuitively captures the degree to which model performance differs across social groups for a given template. For each template, we match the answers with the correct answer taken from the source dataset. This lets us identify and exclude from the bias analysis any templates on which the LM performs poorly overall.
The gender bias score is defined as the difference in accuracy between Male, Female and Non-binary groups.
\[BiasScore=\max(acc_{g})-\min(acc_{g}),\quad acc_{g}\in\{acc_{male},\,acc_{female},\,acc_{non\text{-}binary}\} \tag{1}\]
Similarly, we measure racial bias as the difference between maximum and minimum accuracy across the four categories: White, Black, Asian and Hispanic. To assess bias
\begin{table}
\begin{tabular}{l|c|c}
**Dataset** & **Avg BERTScore** & **Template Length (words)** \\ \hline UnQOVER & 0.660 & 16.9 \(\pm\) 1.8 \\ \hline BITS & 0.617 & 9.1 \(\pm\) 1.2 \\ \hline BEC-PRO & 0.594 & 6.2 \(\pm\) 1.3 \\ \hline DisCO & 0.581 & 4.2 \(\pm\) 1.1 \\ HolisticBias & 0.489 & 7.1 \(\pm\) 1.6 \\ Counter-eval & 0.438 & 7.6 \(\pm\) 2.0 \\ \hline BBQ & 0.455 & 20.7 \(\pm\) 2.8 \\ \hline
**CALM** & **0.388** & **38.5 \(\pm\) 7.1** \\ \end{tabular}
\end{table}
Table 4: In comparison to prior bias benchmark datasets, templates in CALM have the least semantic similarity and maximum length variation. Counter-eval refers to the Counterfactual-evaluation dataset [25].
for a given task, we aggregate bias scores for all templates associated with that task. For instance, in QA, we define gender bias as the average gender bias score for all QA templates in CALM. To provide a comprehensive view of gender bias for a model, we take the mean of gender bias scores across the three tasks. Likewise, we quantify racial bias for each task by averaging the racial bias scores obtained for respective templates. To assess the overall bias of a model, we compute the average of gender and racial bias scores.
While raw accuracy provides an absolute measure of model performance, our bias score assesses whether capabilities are consistent across different social groups. Even when models predict incorrect responses, we expect similar failure rates across subgroups if representations are unbiased. Analyzing and reducing this measure of relative performance could help ensure equitable performance of language models for every social group.
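As a concrete sketch of this scoring procedure (the data structures and helper names below are illustrative, not taken from the released code), the per-template score of Eq. (1) and its aggregation can be written as:

```python
# Sketch of the bias-score computation: per-group accuracy on one template,
# its max-min gap, and plain averaging over templates and tasks.
from collections import defaultdict
from statistics import mean

def template_bias_score(results: list[tuple[str, bool]]) -> float:
    """results: (group, is_correct) for every generated example of one template."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, is_correct in results:
        total[group] += 1
        correct[group] += int(is_correct)
    accuracies = [100.0 * correct[g] / total[g] for g in total]
    return max(accuracies) - min(accuracies)

def task_bias_score(per_template_results: list[list[tuple[str, bool]]]) -> float:
    """Average bias score over all templates of one task (QA, SA or NLI)."""
    return mean(template_bias_score(r) for r in per_template_results)

def model_bias_score(gender_task_scores: list[float], race_task_scores: list[float]) -> float:
    """Overall bias: average of the gender and race scores, each averaged over tasks."""
    return mean([mean(gender_task_scores), mean(race_task_scores)])
```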
## Results
We evaluate each model once on the CALM dataset. Table 5 shows the bias results for each model along with a task-wise breakdown. In Table 5, the suffix with each model denotes the number of parameters in billions. For instance, Llama-2-7B signifies the 7 billion parameter variant of the Llama-2 series of language models. A detailed task and dataset breakdown of model bias can be found in the appendix.
Lower bias scores indicate reduced demographic disparities in model performance (a perfectly unbiased model has a bias score of 0 across all tasks). During our experiments, we observed that certain models exhibit significant underperformance on specific tasks, achieving near-zero accuracy or producing identical output regardless of the input. As a result, we exclude such tasks from the bias scores for those models.
Although Table 5 reports a single bias score per model, we test the reliability of our score over six runs. Due to computational constraints, we ran this reliability study for Llama-2-7B, Bloom-7B, Falcon-7B, and OPT-6.7B models. We ran two experiments with 100%, 90%, and 80% randomly sampled templates. We observed small deviations, indicating strong reliability. Specifically, for the Llama-2-7B model we found a standard deviation of 0.3 around the mean bias score of 12.2. Among all models tested, the maximum standard deviation was 0.6 for an average bias score of 13.2 for Falcon-7B model. These small fluctuations demonstrate the reliability and stability of our bias score. Further reliability analysis can be found in the appendix.
We found that for two out of six LM families, models with more parameters are more biased than models with fewer parameters. Specifically, for the OPT models, the average bias increased by 32% from 8.2 for the 2.7B parameter variant to 10.8 for the 13B parameter variant. Similarly, for the Bloom
Figure 3: Bar graph demonstrating the avg bias present in Llama-2 and OPT models. The bias scores decrease with increasing size in Llama-2, but follow a random pattern for OPT while increasing significantly from 2.7B to 13B model.
\begin{table}
\begin{tabular}{l|r|r|r|r|r|r|r|r|r} & & \multicolumn{4}{c|}{**Gender bias**} & \multicolumn{4}{c}{**Race bias**} \\ \cline{3-10} \multicolumn{1}{c|}{**Model Name**} & \multicolumn{1}{c|}{**Avg Bias**} & \multicolumn{1}{c|}{Avg bias} & \multicolumn{1}{c|}{QA bias} & \multicolumn{1}{c|}{NLI bias} & \multicolumn{1}{c|}{SA bias} & \multicolumn{1}{c|}{Avg bias} & \multicolumn{1}{c|}{QA bias} & \multicolumn{1}{c|}{NLI bias} & \multicolumn{1}{c}{SA bias} \\ \hline Llama-2-7B & 12.5 & 12.1 & 9.4 & 14.9 & 12.0 & 12.8 & 9.5 & 16.3 & 12.7 \\ Llama-2-13B & 10.7 & 10.4 & 6.5 & 16.7 & 8.1 & 10.9 & 7.5 & 14.9 & 10.3 \\ Llama-2-70B & 8.0 & 7.1 & 5.4 & 7.9 & 8.0 & 8.9 & 5.7 & 7.4 & 13.6 \\ Falcon-7B & 13.7 & 12.5 & 14.2 & 13.6 & 9.6 & 14.9 & 11.9 & 17.6 & 15.1 \\ Falcon-40B & 11.8 & 10.8 & 7.5 & 18.8 & 6.1 & 12.7 & 8.5 & 16.4 & 13.2 \\ OPT-1.3B & 11.5 & 11.5 & 16.8 & 13.4 & 4.2 & 11.4 & 22.1 & 12.4 & 8.7 \\ OPT-2.7B & 8.2 & 9.8 & 17.6 & 3.6 & 8.1 & 6.5 & 11.5 & 3.6 & 4.4 \\ OPT-6.7B & 10.1 & 8.3 & 12.1 & 6.4 & 6.4 & 11.8 & 11.7 & 15.8 & 7.9 \\ OPT-13B & 10.8 & 10.3 & 12.8 & 13.7 & 4.3 & 11.3 & 12.9 & 14.4 & 6.7 \\ OPT-30B & 10.1 & 9.9 & 15.2 & 9.4 & 5.0 & 10.3 & 12.5 & 13.5 & 4.9 \\ T0-3B & 6.8 & 6.8 & 5.3 & 10.5 & 4.6 & 6.8 & 6.3 & 6.1 & 7.9 \\ T0 (11B) & 5.9 & 6.9 & 6.3 & 10.8 & 3.5 & 4.8 & 2.9 & 6.1 & 5.5 \\ T0+ (11B) & 4.7 & 5.3 & 5.0 & 8.3 & 2.7 & 4.0 & 3.6 & 4.1 & 4.2 \\ T0++ (11B) & 5.0 & 5.0 & 4.3 & 5.7 & 5.1 & 4.9 & 3.0 & 4.7 & 7.0 \\ Bloom-1B & 8.9 & 7.2 & 8.3 & 6.0 & - & 10.5 & 10.0 & 10.9 & - \\ Bloom-3B & 10.4 & 8.8 & 7.9 & 9.6 & - & 11.9 & 8.0 & 15.7 & - \\ Bloom-7B & 14.0 & 8.8 & 8.3 & 9.2 & - & 21.0 & 12.9 & 29.0 & - \\ GPT-Neo-1.3B & 11.2 & 11.1 & 13.7 & 8.9 & 10.6 & 11.2 & 10.6 & 9.1 & 14.0 \\ GPT-Neo-2.7B & 10.1 & 9.6 & 12.6 & 6.6 & - & 10.5 & 13.8 & 7.2 & - \\ GPT-6B & 6.9 & 6.1 & 7.4 & - & 4.8 & 7.6 & 8.6 & - & 6.5 \\ \end{tabular}
\end{table}
Table 5: Bias scores for each model on three tasks. A lower score represents less bias. The average bias score for a model is the average between gender and racial bias scores. Similarly, the average bias score for a bias type is the average of QA, NLI and SA bias scores. The suffix with each model denotes the no. of parameters in billions. We use a shading scale, with darker tones of green signifying a higher final average bias score.
models, the average bias exhibited a 67% increase, rising from 8.9 for the 1B parameter variant to 14.9 for the 7B parameter variant. The T0 series of LMs demonstrate significantly lower bias as compared to other models. Conversely, Llama-2, Falcon and Bloom models exhibit more bias than other model series as shown in Table 5. Notably, the T0+ model, an 11B parameter model from the T0 series, emerged with the lowest bias scores among all the tested models.
During our analysis, we observed that increasing model size sometimes results in a tradeoff between gender and racial bias. For the OPT models, increasing the model size from 6.7B to 30B increases the gender bias by 19%, from 8.3 for the 6.7B to 9.9 for the 30B parameter model, while decreasing the racial bias by 13%, from 11.8 for the 6.7B to 10.3 for the 30B model. For the Bloom models, increasing the model size increases the racial bias by 76%, from 11.9 for the 3B parameter variant to 21.0 for the 7B parameter variant, while the gender bias remains constant.
Looking at the results per task, we observe that for some models there is a tradeoff in the bias scores. For example, for Llama-2 models, increasing the model size from 7B to 13B parameters increases the NLI gender bias by 12% (14.9 for 7B vs 16.7 for 13B), while decreasing QA and SA bias by 31% and 32% respectively. Similarly for GPT-Neo increasing the model size from 1.3B to 2.7B increases QA race bias by 30% from 10.6 for 1.3B to 13.8 for 2.7B parameter model, while decreasing the NLI race bias by 21% from 9.1 for 1.3B to 7.2 for 2.7B parameter model.
For the OPT model series we observe a noteworthy trend, which is also depicted in Figure 3. Initially, the bias score decreases from 11.5 to 8.2 as the model size increases from 1.3B to 2.7B parameters. Subsequently, the bias score increases from 8.2 to 10.8 while increasing the model size from 2.7B to 13B parameters. Interestingly, the bias score decreases again slightly, from 10.8 to 10.1, between the 13B and 30B parameter models. This bias trend for OPT models is similar to the one observed by [24] on Winobias, where OPT-13B is the most biased model.
## Discussion
**Interpretation of bias scores.** Our bias score for a language model can be interpreted as the average decrease in absolute performance of the LM across different sociodemographic groups, for three tasks. A lower bias score means that the model's accuracy is relatively similar across sociodemographic groups, while a higher bias score indicates that the model's accuracy differs across sociodemographic groups. Ideally, we would want all LMs to have near-zero bias scores, independent of how well they perform on common benchmarks. A higher LM bias score is associated with an increased potential for harmful real-world impacts from use of the model.
**Comparing different model series.** We claim that our dataset is a good tool for comparing bias across model series, enabling observation of trends exhibited by different models. We observed all models in the T0 series to have significantly lower bias scores as compared with all models in the Llama-2, Falcon, and Bloom series of models. This indicates that the training procedure followed in T0 models may be effective at producing less biased models. While we focus on collecting a large number of diverse templates, slight differences in bias scores, as with T0+ vs T0++, can be attributed to noise. However, a significant difference in bias scores, as with Llama-2 vs T0, indicates a need for bias mitigation.
**Comparing models in the same language model series.** Analysis of the change in bias scores with increasing numbers of parameters within a model series provides interesting insights. We observed that for the OPT and Bloom model series, bias scores exhibit an upward trend with an increasing number of parameters. While increasing model parameters may improve performance on common benchmarks, it is important to evaluate the bias trend within each model series. Improvement in performance on common benchmarks might come at the expense of increased bias in models, thus potentially increasing the negative impact of real-world applications of these models. Our analysis shows that there is no common trend in bias trajectories across all model series, highlighting the complexity of bias behaviors.
## Conclusion
We present CALM, a benchmark dataset, and a set of procedures to quantify bias in language models. CALM draws from existing datasets for three NLP tasks to create a dataset to quantify gender and racial bias. Comparison of CALM with previous bias datasets shows CALM to have greater diversity in templates, and to be much more robust to syntactic and semantic modifications of the templates. We find that for some families of large language models, larger parameter models are more biased than smaller parameter models. We also find that sometimes increasing the number of model parameters creates a tradeoff between gender and racial bias. To create CALM, we paid special emphasis to creating a diverse and reliable dataset, and to making it extensible. We believe that our work addresses some of the issues with other bias datasets, and that it takes an important step towards reliable and robust bias evaluation in language models.
## Limitations
The target word list we used for the CALM dataset creation is limited to seven social groups in the US and we acknowledge that many more social groups belonging to gender and race, as well as different countries, are missing. However, we compile datasets for different countries and provide scripts that can be used to create examples for a broader section of social groups belonging to different countries.
|
2302.06517
|
Learning a quantum channel from its steady-state
|
We present a scalable method for learning local quantum channels using local
expectation values measured on a single state -- their steady state. Our method
is inspired by the algorithms for learning local Hamiltonians from their ground
states. For it to succeed, the steady state must be non-trivial, and therefore
the channel needs to be non-unital. Such non-unital channels are readily
implementable on present day quantum computers using mid-circuit measurements
or RESET gates. We demonstrate that the full structure of such channels is
encoded in their steady states, and can be learned efficiently using only the
expectation values of local observables on these states. We emphasize two
immediate applications to illustrate our approach: (i) Using engineered
dissipative dynamics, we offer a straightforward way to assess the accuracy of
a given noise model in a regime where all qubits are actively utilized for a
significant duration. (ii) Given a parameterized noise model for the entire
system, our method can learn its underlying parameters. We demonstrate both
applications using numerical simulations and experimental trials conducted on
an IBMQ machine.
|
Yigal Ilin, Itai Arad
|
2023-02-13T16:55:34Z
|
http://arxiv.org/abs/2302.06517v3
|
# Benchmarking a quantum computer using an engineered dissipative steady-state
###### Abstract
We present a new framework for a scalable benchmarking of a quantum computer that is based on local expectation values, measured on the steady state of an engineered, non-unital and dissipative channel. Such channels can be efficiently implemented on the quantum computer using mid-circuit measurements or RESET gates. We show that the expectation values of local Pauli operators in that state satisfy a set of local constraints that (i) depend on the underlying channel parameters, and (ii) can be checked efficiently. This gives us a simple way to check how well a given noise model describes the actual hardware when all qubits are being actively used for a non-negligible amount of time. Moreover, as we do not need to classically calculate these expectation values, our method evaluates a quantum computer in a regime that might be classically inaccessible. Finally, given a parameterized noise model, we can use our method to learn the underlying noise parameters for the entire system. We demonstrate our method numerically and experimentally on an IBMQ machine, and show that a full noise model can be verified and learned from Pauli measurements on a single circuit output.
## I Introduction
In recent years, with the advances in quantum information and quantum computation, there has been growing interest in methods for learning many-body quantum systems. These methods include, for example, recovering a many-body Hamiltonian based on its dynamics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11], its steady states [12; 13; 14; 15], or using a trusted quantum simulator [16; 17; 18; 19; 20].
Among these methods, a promising line of research tries to learn the underlying generators of dynamics using local measurements on a steady state of the system [21; 22; 23; 24; 25; 26; 27]. It can be shown that if the underlying Hamiltonians or Lindbladians are sufficiently generic, they can be uniquely determined by the expectation values of local observables in such steady states. Specifically, Ref. [25] shows that there exists a set of _local_ linear constraints between expectation values of local Paulis in a steady state of the system. If we measure these expectation values in a given patch of the system, we can infer the underlying Hamiltonian or Lindbladian terms in that region [28]. In some respect, at least in the case of local Hamiltonians, the method relies on the non-commutativity of the local Hamiltonian terms, and is therefore a 'purely quantum' method, with no classical equivalent. It gives a surprisingly simple and efficient method for learning the generators of the local dynamics, and can also be used to test how well a particular model of the Hamiltonian or Lindbladian agrees with the empirical measurements. All this is done _without having to classically calculate these expectation values_, which is crucial for the scalability of the approach.
The simplicity and efficiency of the above methods raises the question of whether and how they can be applied to quantum computers. Can we define "steady-states" of quantum computers? Can we find constraints on local expectation values in these states that can be used to benchmark the quantum computer or even learn a (noisy) gate set?
A quantum computer has several key distinctions from local Hamiltonians and Lindbladians. The latter evolve _continuously_, whereas, at least in an abstract level, a quantum computer evolves in a _discrete_, gate-based fashion. In addition, the steady state of a unitary circuit is any convex combination of its eigenstates, and it is not clear how to prepare them efficiently. Moreover, the presence of noise changes this picture significantly by breaking unitarity and the purity of the underlying quantum states.
In this work we propose a method that overcomes these obstacles and generalizes the approach of Ref. [25] to the realm of gate-based quantum computers. Our main idea is to engineer a dissipative channel that can be implemented on a quantum computer using non-unitary gates such as the RESET gate. By iteratively applying this channel, we rapidly reach a fixed point, on which we can measure local expectation values. These expectation values satisfy an extensive set of local linear constraints among themselves, which can be efficiently tested _without having to classically estimate these expectation values_. Therefore, they can be used to benchmark how well a noise model describes the underlying quantum computer. They can also be leveraged to learn such a model from a parameterized set of models using a variational technique.
Engineered dissipative steady-states are becoming an increasingly useful part of the NISQ-era toolbox due to their ability to naturally incorporate or even take advantage of unavoidable noise effects [29; 30; 31; 32; 33; 34; 35; 36]. Our method suggests that they can also hold a fingerprint of the underlying noise in the system, and therefore
be used for benchmarking and characterization. More importantly, as shown in Ref. [37], engineered dissipative steady-states can encode universal quantum computations. Therefore, by measuring dissipative steady states, we are evaluating a quantum computer in a regime that may be classically inaccessible. Additionally, the steady states that we measure can be the result of actively using all qubits in the system simultaneously, and therefore our method can give a _global_ picture of the noise in the system. Finally, since we are only measuring the steady state, our method is insensitive to state preparation errors, and is also not limited by the qubits' decoherence time.
Like the methods of Refs. [24; 25; 26; 27], our method is highly scalable, and can be used on systems with a large number of qubits. To demonstrate it, we introduce two possible dissipative maps. The first map is a stochastic map, which can be described as a local Kraus map (at least in the absence of noise). It can be implemented on a quantum computer by randomly applying a gate from a set of gates with a prescribed set of probabilities. Consequently, the steady state is probed by running many different random circuits, which realize different trajectories of the dynamics. The second map is a deterministic map that is based on a composed 2-local gate, which is both non-unitary and entangling. Implementing this map on a quantum computer requires only one circuit, which makes it easier to execute on currently available hardware such as IBMQ. To test our method we used numerical simulations of both maps on 5-11 qubits, and in addition applied the deterministic map to an IBMQ machine using 5 qubits.
We note that as a noise characterization tool, our method cannot match the precision of established methods such as randomized benchmarking (RB) [38; 39; 40; 41], or gate-set tomography (GST) [42; 43; 44]. This is mainly due to the fact that these methods utilize a form of "error amplification", where different components of the quantum channel can be measured with accuracy \(O(1/M)\) using only \(M\) shots. In comparison, our method relies on the accuracy of the local expectation values, which scales as \(O(1/\sqrt{M})\). This limits the overall accuracy of the characterization to approximately \(O(1/\sqrt{M})\). Nevertheless, in contrast to RB, it can be used to learn the direct noise channel instead of an average, twirled version of it. Also, in contrast with GST, it can benchmark the noise _globally_ when all qubits are being actively used, thereby giving a holistic picture of the noise in the system.
The structure of this paper is as follows. In Sec. II we provide brief definitions and basic facts on quantum channels and gates, and the notation we use throughout the paper. Then in Sec. III we introduce the theoretical framework of learning a local quantum channel from its steady state, and how this can be implemented on a quantum computer. In Sec. IV we define the noise model we will be using to characterize and benchmark the IBMQ machines. We also briefly describe the optimization method used in the variational learning part. In Sec. V we describe the results of our numerical simulations, performed on the two types of engineered dissipative dynamics. In Sec. VI we present the results of applying our method on an actual IBMQ machine using 5 qubits, which include benchmarking different noise models, as well as using our method to learn its noise. Finally, in Sec. VII we present our conclusions and discuss possible future research directions.
## II Preliminaries
Throughout this work we consider a quantum computer (QC) made of \(n\) qubits with Hilbert space \((\mathbb{C}^{2})^{\otimes n}\). Quantum states will be generally denoted by Greek letters \(\rho\), \(\sigma\), etc. The expectation value of an observable \(A\) with respect to the quantum state \(\rho\) will be generally denoted by \(\left\langle A\right\rangle_{\rho}=\operatorname{Tr}(\rho A)\).
Unitary gates will be denoted by \(X,Y,Z,\operatorname{CX},R_{x}\), etc. We use \(\operatorname{CX}(i\to j)\) to denote the \(\operatorname{CNOT}\) (Controlled-NOT) gate between the control qubit \(i\) and the target qubit \(j\). We use \(R_{\mathbf{x}}(\theta)\) to denote a Bloch sphere rotation of angle \(\theta\) around the axis \(\mathbf{x}\). Quantum channels, i.e., completely positive, trace-preserving (CPTP) maps will usually be denoted by calligraphic letters such as \(\mathcal{E}(\cdot),\mathcal{N}(\cdot)\), etc. Given a unitary gate \(U\), we will denote its corresponding channel by \(\mathcal{U}\), i.e., \(\mathcal{U}(\rho)\stackrel{{\mathrm{def}}}{{=}}U\rho U^{\dagger}\). For any quantum channel \(\mathcal{E}\), its adjoint is the unique superoperator \(\mathcal{E}^{*}\) that satisfies \((\mathcal{E}^{*}(A),B)=(A,\mathcal{E}(B))\) for any operators \(A,B\), where \((A,B)\stackrel{{\mathrm{def}}}{{=}}\operatorname{Tr}(A^{\dagger}B)\) is the Hilbert-Schmidt inner product between two operators. A map \(\mathcal{E}\) is trace preserving if and only if \(\mathcal{E}^{*}(\mathbb{I})=\mathbb{I}\). A channel \(\mathcal{E}(\cdot)\) is called _unital_ if it also holds that \(\mathcal{E}(\mathbb{I})=\mathbb{I}\). In other words, the maximally mixed state is a fixed point of the channel.
An important example of a non-unital channel that can be applied on a QC is formed by the RESET gate, which performs an active, mid-circuit reset of a qubit to the \(|0\rangle\) state. Ideally, it corresponds to the channel
\[\mathcal{E}_{\mathrm{RESET}}(\rho)=|0\rangle\langle 1|\rho|1\rangle\langle 0|+|0 \rangle\langle 0|\rho|0\rangle\langle 0|.\]
It can also be realized by measuring the qubit in the standard basis, and applying \(X\) gate whenever the result is \(|1\rangle\).
## III Learning quantum channels from their steady state
In this section we introduce our method for verifying and learning quantum channels using local measurements on their steady state. Since our goal is to apply this method to quantum computers, we shall consider channels that can be implemented on a quantum computer. We briefly summarize the main steps of our method in Fig. 1 as a block-diagram.
Let us then denote by \(\mathcal{E}(\cdot)\) such a quantum channel. Two examples to keep in mind are (i) the application of a quantum gate drawn from a set of gates according to a fixed probability distribution, and (ii) a low-depth quantum circuit. In both examples, we assume that the overall channel is non-unital because of the noise and/or because of actively using the RESET gate. Unlike unital channels, whose fixed point is typically the maximally mixed state, the fixed points of non-unital channels are typically non-trivial in the sense that two different channels from a relevant set of channels will have different fixed points. We shall further assume that \(\mathcal{E}(\cdot)\) has a _unique_ fixed point \(\mathcal{E}(\rho_{\infty})=\rho_{\infty}\), which we call the _steady state_. We shall also assume that \(\rho_{\infty}\) can be approximately reached by the QC using a few applications of \(\mathcal{E}\), starting from an initial state \(\rho_{0}=|0\rangle\langle 0|^{\otimes n}\). For brevity, we will denote expectation values with respect to \(\rho_{\infty}\) by \(\left\langle\cdot\right\rangle_{\infty}\) instead of \(\left\langle\cdot\right\rangle_{\rho_{\infty}}\).
Given a channel \(\mathcal{E}\), assume that the system is in its steady state \(\rho_{\infty}\). Then the expectation value of any observable \(A\) must satisfy
\[\left\langle A\right\rangle_{\infty}=\mathrm{Tr}(A\rho_{\infty})=\mathrm{Tr} (\mathcal{E}^{*}(A)\rho_{\infty})=\left\langle\mathcal{E}^{*}(A)\right\rangle _{\infty}. \tag{1}\]
The above equation is the basis of our method. For the channels that will be discussed below, when \(A\) is a local observable, the observable \(\mathcal{E}^{*}(A)\) is also local, or at least can be estimated using local measurements. In such case, we can use the equality between the LHS and RHS to constrain, or even learn \(\mathcal{E}\), without having to calculate the expectation values in \(\rho_{\infty}\) -- which is in general exponentially expensive in \(n\). Moreover, as we are measuring the steady state of the system, our results are independent from the initial state of the system, and therefore insensitive to state preparation errors.
Below, we describe two methods to construct such \(\mathcal{E}\). In Sec. III.1 and Sec. III.2 we describe a stochastic method of creating \(\mathcal{E}\), and in Sec. III.3 we describe a deterministic method.
### A stochastic map: the strictly local case
Let \(\{\mathcal{E}_{k}\}\) be a set of local channels implementable by a QC. We can think of every \(\mathcal{E}_{k}\) as being the application of a quantum gate or of several gates acting on a small contiguous subset of qubits. In practice, we will always use \(\mathcal{E}_{k}\) that act on at most two neighboring qubits. In addition, we will allow some of the \(\mathcal{E}_{k}\) to be non-unital by containing RESET gates. Together with \(\{\mathcal{E}_{k}\}\), we will use a probability distribution \(\{p_{k}\}\) and define our quantum channel by
\[\mathcal{E}\stackrel{{\mathrm{def}}}{{=}}\sum_{k}p_{k}\mathcal{E} _{k}. \tag{2}\]
Physically, \(\mathcal{E}\) can be implemented on a QC in a stochastic way: we choose a \(k\) with probability \(p_{k}\) and apply \(\mathcal{E}_{k}\).
For example, the three qubits non-unital quantum channel
\[\mathcal{E}(\rho) =0.2\cdot{X_{1}^{1/2}}{\rho{X_{1}^{1/2}}^{\dagger}}+0.2\cdot \mathrm{CX}(1\to 2)\rho\,\mathrm{CX}(1\to 2)\] \[+0.4\cdot\mathrm{CX}(2\to 3)\rho\,\mathrm{CX}(2\to 3)\] \[+0.2\cdot\mathrm{RESET}_{3}(\rho)\]
can be implemented on a QC by randomly applying one of the gates \(\{{X_{1}^{1/2}},\mathrm{CX}(1\to 2),\mathrm{CX}(2\to 3),\mathrm{RESET}_{3}\}\) according to the probabilities \(\{0.2,0.2,0.4,0.2\}\). A simple numerical check shows that the steady state of this channel is a non-product state, which can be approximated to within a trace distance of \(10^{-5}\) using a few tens of steps.
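This check is easy to reproduce; the following numpy sketch (assuming the convention that qubit 1 is the leftmost tensor factor) iterates the channel starting from \(|000\rangle\langle 000|\), reports the fixed-point residual, and verifies the steady-state constraint of Eq. (1) for a local observable.

```python
# Numerical sketch of the three-qubit example channel: power-iterate to the
# steady state and check <A>_inf = <E*(A)>_inf for A = Z on qubit 2.
import numpy as np

I2 = np.eye(2, dtype=complex)
SQRT_X = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])            # X^{1/2}
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
K0 = np.array([[1, 0], [0, 0]], dtype=complex)                            # |0><0|
K1 = np.array([[0, 1], [0, 0]], dtype=complex)                            # |0><1|

# Lift the local operators to the 3-qubit space (qubit 1 = leftmost factor).
SX1  = np.kron(SQRT_X, np.eye(4))
CX12 = np.kron(CX, I2)
CX23 = np.kron(I2, CX)
R3   = [np.kron(np.eye(4), K) for K in (K0, K1)]       # RESET Kraus ops on qubit 3

def channel(rho):
    out  = 0.2 * SX1 @ rho @ SX1.conj().T
    out += 0.2 * CX12 @ rho @ CX12.conj().T
    out += 0.4 * CX23 @ rho @ CX23.conj().T
    out += 0.2 * sum(K @ rho @ K.conj().T for K in R3)
    return out

def adjoint_channel(A):                                 # E*(A) = sum_k p_k K_k^dag A K_k
    out  = 0.2 * SX1.conj().T @ A @ SX1
    out += 0.2 * CX12 @ A @ CX12                        # CX is Hermitian and unitary
    out += 0.4 * CX23 @ A @ CX23
    out += 0.2 * sum(K.conj().T @ A @ K for K in R3)
    return out

rho = np.zeros((8, 8), dtype=complex)
rho[0, 0] = 1.0                                         # |000><000|
for _ in range(60):                                     # a few tens of steps
    rho = channel(rho)

Z = np.diag([1.0, -1.0]).astype(complex)
A = np.kron(I2, np.kron(Z, I2))                         # Z on qubit 2

print("trace-norm residual :", np.linalg.norm(channel(rho) - rho, ord="nuc"))
print("<A>_inf             :", np.trace(A @ rho).real)
print("<E*(A)>_inf         :", np.trace(adjoint_channel(A) @ rho).real)
```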
Plugging Eq. (2) in Eq. (1), we see that for any local
Figure 1: Block diagram summarizing our method. Given a description of the non-unital channel \(\mathcal{E}(\cdot)\) (e.g. its quantum gates, activation probabilities, etc), we iteratively apply it on a QC to reach an approximate steady state. The local Pauli expectation values are sampled from the steady state, and used as input to the classical post-processing routine. They satisfy a set of local constraints with coefficients that depend on the noise parameters of the underlying quantum device. The local constraints can be verified efficiently and their number grows linearly with the system size. To _validate_ a given noise model, we use these constraints to define a _cost function_ that measures how much they are violated, and check how close it is to zero. To _learn_ a noise model, we use an optimization routine to minimize the cost function over a set of possible noise models.
observable \(A\),
\[\left\langle A\right\rangle_{\infty}=\sum_{k}p_{k}\left\langle\mathcal{E}_{k}^{*}( A)\right\rangle_{\infty}. \tag{3}\]
Importantly, if \(\mathcal{E}_{k}\) is a 2-local channel, then \(\mathcal{E}_{k}^{*}\) also acts non-trivially only on the two qubits on which \(\mathcal{E}_{k}\) is defined. Consequently, if \(A\) is a \(t\)-local observable, then \(\mathcal{E}_{k}^{*}(A)\) is at most a \((t+2)\)-local observable. In fact, it is at most \((t+1)\)-local for the following reason. The only case where \(\mathcal{E}_{k}^{*}(A)\) might be \((t+2)\)-local is when the supports of \(\mathcal{E}_{k}\) and \(A\) are disjoint. But in that case, \(\mathcal{E}_{k}^{*}(A)=A\mathcal{E}_{k}^{*}(\mathbb{I})=A\), which is \(t\)-local.
Using this observation, we may write Eq. (3) as
\[\left\langle A\right\rangle_{\infty}=\sum_{k\in\mathrm{supp}(A)}p_{k}\left\langle \mathcal{E}_{k}^{*}(A)\right\rangle_{\infty}+\sum_{k\notin\mathrm{supp}(A)}p_ {k}\left\langle A\right\rangle_{\infty},\]
where we use the notation \(k\in\mathrm{supp}(A)\) to denote enumeration over all \(k\) indices for which \(\mathcal{E}_{k}\) acts non trivially on \(A\). Writing the LHS as \(\sum_{k\in\mathrm{supp}(A)}p_{k}\left\langle A\right\rangle_{\infty}+\sum_{k \notin\mathrm{supp}(A)}p_{k}\left\langle A\right\rangle_{\infty}\) and re-organizing the equation, we obtain the following equation, which holds for every local observable \(A\):
\[\sum_{k\in\mathrm{supp}(A)}p_{k}\big{(}\langle\mathcal{E}_{k}^{*}(A)\rangle_{ \infty}-\left\langle A\right\rangle_{\infty}\big{)}=0. \tag{4}\]
The above equation can be used to _validate_ a given model of the local channels \(\{\mathcal{E}_{k}\}\). Indeed, given a model for the local channels, together with a corresponding probability distribution \(\{p_{k}\}\), we define a _cost function_\(\Phi\) by taking the square of the difference between the LHS and RHS of Eq. (4) for a set of independent local observables \(\{A\}\):
\[\Phi\overset{\mathrm{def}}{=}\sum_{A}\Big{(}\sum_{k\in\mathrm{supp}(A)}p_{k} \big{(}\langle\mathcal{E}_{k}^{*}(A)\rangle_{\infty}-\left\langle A\right\rangle _{\infty}\big{)}\Big{)}^{2}. \tag{5}\]
Since \(\left\langle\mathcal{E}_{k}^{*}(A)\right\rangle_{\infty}\) is the expectation value of a \((t+1)\)-local observable, we can express it as a linear combination of the expectation values of \((t+1)\)-local Pauli strings on the same support. The resulting \(\Phi\) can then be written as a quadratic expression of these Pauli expectation values, with coefficients \(C_{\alpha\beta}\) that depend on the details of the underlying model:
\[\Phi=\sum_{A}\sum_{\alpha,\beta}C_{\alpha\beta}^{(A)}\left\langle P_{\alpha} \right\rangle_{\infty}\!\left\langle P_{\beta}\right\rangle_{\infty}. \tag{6}\]
Therefore, by measuring the \((t+1)\)-local Pauli expectation values in the steady state, we can calculate \(\Phi\) and see how well the model fits the actual quantum hardware.
This procedure can also be made local by considering local cost functions \(\Phi_{q}\) for qubit \(q\). Denoting by \(I_{q}\) the set of observables \(A\) for which \(\mathcal{E}_{k}^{*}(A)\) acts non-trivially on \(q\), the local cost function is given by
\[\Phi_{q}\overset{\mathrm{def}}{=}\frac{1}{|I_{q}|}\sum_{A\in I_{q}}\Big{(}\sum_{k\in\mathrm{supp}(A)}p_{k}\big{(}\langle\mathcal{E}_{k}^{*}(A)\rangle_{\infty}-\left\langle A\right\rangle_{\infty}\big{)}\Big{)}^{2}, \tag{7}\]
where \(|I_{q}|\) is the number of elements in \(I_{q}\). In general, \(|I_{q}|\) can be different for different qubits, e.g., for qubits arranged on a line the boundary qubits at each end of the system will have a smaller \(|I_{q}|\).
The local validation scheme allows us to identify the regions in the system where the model performs well or badly in a setup in which possibly all the device qubits are being actively used. Because it only requires \((t+1)\)-local expectation values, it is highly scalable and has a small computational cost.
The above idea can be pushed further; we can actually use Eq. (4) to _learn_ the local \(\mathcal{E}_{k}\) channels. Suppose we have some parameterization of \(\mathcal{E}_{k}\), denoted by \(\mathcal{E}_{k,\mathbf{\theta}}\), where \(\mathbf{\theta}\) is a vector of parameters. We may define the \(\mathbf{\theta}\)-dependent cost function
\[\Phi(\mathbf{\theta})\overset{\mathrm{def}}{=}\sum_{A}\left(\sum_{k\in\mathrm{ supp}(A)}p_{k}\big{(}\langle\mathcal{E}_{k,\mathbf{\theta}}^{*}(A)\rangle_{\infty}- \left\langle A\right\rangle_{\infty}\big{)}\right)^{2}, \tag{8}\]
and find the particular \(\mathbf{\theta}\) that best describes the system by _minimizing_\(\Phi(\mathbf{\theta})\). As before, we can write \(\Phi(\mathbf{\theta})\) as a quadratic expression of the expectation values of \((t+1)\)-local Paulis in the steady state with \(\mathbf{\theta}\)-dependent coefficients:
\[\Phi(\mathbf{\theta})=\sum_{A}\sum_{\alpha,\beta}C_{\alpha\beta}^{(A)}(\mathbf{\theta} )\langle P_{\alpha}\rangle_{\infty}\!\left\langle P_{\beta}\right\rangle_{ \infty}. \tag{9}\]
Estimating these local Pauli expectations at the steady state allows us to learn the best \(\mathbf{\theta}\) that describes the quantum hardware.
The \(\mathbf{\theta}\) parameterization can be full, in which case we would need about \(16\times 16\) parameters to describe all possible 2-qubit channels \(\mathcal{E}_{k}\), or it could contain a much smaller number of parameters if we use prior assumptions on the structure of \(\mathcal{E}_{k}\).
For a system with \(n\) qubits, there are exactly \(N_{t}=\sum_{k=1}^{t}3^{k}\left(n-k+1\right)\)\(t\)-local Pauli operators with contiguous support, not counting the trivial identity operator. By selecting \(t\)-local Pauli operators as the observables \(A\) in Eq. (8), we effectively minimize over \(N_{t}\) constraints, and therefore it is desirable that the number of free parameters will be smaller than \(N_{t}\). For example, we may model the channel that corresponds to an \(R_{\mathbf{x}}(\alpha)\) rotation using one parameter -- \(\alpha\), in order to see if the hardware gate under-rotates or over-rotates with respect to the \(\hat{x}\) axis.
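The minimization itself is standard. The sketch below illustrates the idea on a toy one-parameter model -- an assumed gate over-rotation angle, with synthetic "measured" expectation values -- using a derivative-free optimizer; it is a stand-in for, not an implementation of, the actual model-dependent cost of Eq. (8).

```python
# Toy sketch of the variational learning step: minimize a sum-of-squares cost
# over a single model parameter theta (an assumed gate over-rotation angle).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_theta = 0.07                        # "true" over-rotation used to fake the data
measured = np.sin(true_theta) + 1e-3 * rng.standard_normal(20)   # stand-in for <P_alpha>_inf

def cost(theta: np.ndarray) -> float:
    # One residual per constraint of Eq. (4); here each residual is a toy
    # function of theta that vanishes at the true parameter value.
    residuals = np.sin(theta[0]) - measured
    return float(np.sum(residuals ** 2))

result = minimize(cost, x0=np.array([0.0]), method="Nelder-Mead")
print("learned theta:", result.x[0])     # should be close to 0.07
```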
### A stochastic map: the non strictly-local case
In the previous section we derived our main equation, Eq. (4), under the assumption that \(\mathcal{E}_{k}\) are _strictly local_ channels, acting non-trivially on at most \(O(1)\) qubits. This assumption is no longer true in realistic noise models, in which also idle qubits, far away from the support of \(\mathcal{E}_{k}\), experience noise such as dephasing and amplitude
damping. In this section we show how our method can be generalized to account also for such cases. Specifically, we shall assume a local Markovian noise model without cross-talk, with the following properties:
1. Each qubit \(j\) has its own idle noise channel \(\mathcal{N}_{j}\), which acts independently of what happens to the other qubits. Thus the noise on the set of idle qubits \(I_{\text{idle}}\) factors into a tensor product of single-qubit noise channels \(\bigotimes_{j\in I_{\text{idle}}}\mathcal{N}_{j}\).
2. When a unitary gate or a RESET gate acts on one or two qubits, the actual channel applied by the QC is a noisy version of these gates that acts only on the qubits in the support of the ideal gates (i.e., no spillover or cross-talks to other qubits), _together_ with the idle noise channel on the rest of the qubits.
Together, these two assumptions imply that every local channel \(\mathcal{E}_{k}\) in the strictly local case is replaced by the non-local channel \(\tilde{\mathcal{E}}_{k}\otimes\bigotimes_{j\notin\text{supp}(k)}\mathcal{N}_{j}\), where \(\tilde{\mathcal{E}}_{k}\) is the noisy version of \(\mathcal{E}_{k}\) and \(\text{supp}(k)\) denotes the qubits in the support of \(\mathcal{E}_{k}\) (which, under our assumptions, also define the support of \(\tilde{\mathcal{E}}_{k}\)).
To proceed, we define superoperators \(\mathcal{F}_{k}\)
\[\mathcal{F}_{k}\stackrel{{\text{def}}}{{=}}\Big{(}\bigotimes_{j \in\text{supp}(k)}\mathcal{N}_{j}^{-1}\Big{)}\tilde{\mathcal{E}}_{k}. \tag{10}\]
We will use \(\mathcal{F}_{k}\) as a mathematical tool to simplify the equations we derive below. Note that \(\mathcal{F}_{k}\) is not necessarily a channel, since \(\mathcal{N}_{j}^{-1}\) by itself is not necessarily a channel. Nevertheless it is a _local_ superoperator which satisfies \(\mathcal{F}_{k}(A)=A\) when \(A\) is outside its support. The point of using \(\mathcal{F}_{k}\) is that it allows us to write the global action of the \(k\)th local channel without the \(j\notin\text{supp}(k)\) condition:
\[\tilde{\mathcal{E}}_{k}\otimes\bigotimes_{j\notin\text{supp}(k)}\mathcal{N}_ {j}=\Big{(}\bigotimes_{j}\mathcal{N}_{j}\Big{)}\cdot\mathcal{F}_{k}.\]
Plugging this expression to Eq. (1), we obtain the non strictly-local version of Eq. (3):
\[\left\langle A\right\rangle_{\infty}=\sum_{k}p_{k}\big{\langle}\big{(} \mathcal{F}_{k}^{*}\cdot\bigotimes_{j}\mathcal{N}_{j}^{*}\big{)}A\big{\rangle} _{\infty}. \tag{11}\]
By the same argument used in the previous section, we note that the local noise channels act on \(A\) non-trivially only if they are in its support, and therefore we can define a "noisy version" of \(A\) by
\[\Big{(}\bigotimes_{j}\mathcal{N}_{j}^{*}\Big{)}(A)=\Big{(}\bigotimes_{j\in \text{supp}(A)}\mathcal{N}_{j}^{*}\Big{)}(A)\stackrel{{\text{ def}}}{{=}}\tilde{A}.\]
Plugging this back into Eq. (11), we get the equation
\[\left\langle A\right\rangle_{\infty}=\sum_{k}p_{k}\langle\mathcal{F}_{k}^{*}( \tilde{A})\rangle_{\infty}. \tag{12}\]
This equation is almost identical to its strictly-local counterpart Eq. (3), except that in the RHS \(A\) is replaced by \(\tilde{A}\) and \(\mathcal{E}_{k}^{*}\) is replaced by \(\mathcal{F}_{k}^{*}\). Following the same logic as in the previous case, we use the fact that as \(\mathcal{F}_{k}\) is local, \(\left\langle F_{k}^{*}(A)\right\rangle_{\infty}=\left\langle A\right\rangle_{\infty}\) whenever \(A\) is outside its support. Therefore, subtracting \(\left\langle\tilde{A}\right\rangle_{\infty}\) from both sides, we obtain the final equation in which the summation in the RHS is only over channels that intersect with \(A\):
\[\left\langle A-\tilde{A}\right\rangle_{\infty}=\sum_{k\in\text{supp}(A)}p_{k} \big{(}\langle\mathcal{F}_{k}^{*}(\tilde{A})\rangle_{\infty}-\left\langle \tilde{A}\right\rangle_{\infty}\big{)}. \tag{13}\]
As in the previous section, we can use the above set of constraints as a validation tool for particular noise models by defining local cost functions, as done in Eq. (7). We can also use it to learn the best noise model from a family of noise models parameterized by \(\mathbf{\theta}\), by using the global cost function
\[\Phi(\mathbf{\theta})\stackrel{{\text{def}}}{{=}}\sum_{A}\Big{(} \sum_{k\in\text{supp}(A)}p_{k}\big{(}\langle\mathcal{F}_{k,\mathbf{\theta}}^{*}( \tilde{A}_{\mathbf{\theta}})\rangle_{\infty}-\left\langle\tilde{A}_{\mathbf{\theta}} \right\rangle_{\infty}\big{)}-\left\langle A-\tilde{A}_{\mathbf{\theta}}\right\rangle _{\infty}\Big{)}^{2}. \tag{14}\]
As before, to evaluate \(\Phi(\mathbf{\theta})\) from local measurements, we expand the operators \(\tilde{A}_{\mathbf{\theta}}\), \(\mathcal{F}_{k,\mathbf{\theta}}^{*}(\tilde{A}_{\mathbf{\theta}})\) in terms of \((t+1)\)-local Pauli strings with \(\mathbf{\theta}\)-dependent coefficients and write the cost function \(\Phi(\mathbf{\theta})\) as a quadratic expression of these expectations (see Eq. (9)).
### Deterministic map
Realizing the method presented in the previous Sec. III.2 on a QC requires the execution of a large number of _different_ quantum circuits, or trajectories, in order to properly sample the steady state. This makes it challenging to implement on currently available devices, which are often limited by the number of distinct circuits
that can be executed in a single experimental batch.
To overcome this limitation, we propose an alternative way to construct a non-unital channel, which relies on a _deterministic_ map that can be implemented on a QC using a _single_ circuit. For simplicity, we shall describe our construction in the 1D case with open boundary conditions, but generalization to other geometries and higher dimensions is straightforward. In such case, our map \(\mathcal{E}\) is defined by a product of local channels \(\{\mathcal{E}_{k}\}\), where \(\mathcal{E}_{k}\) acts non-trivially on the neighboring qubits \(k,k+1\). The \(\mathcal{E}_{k}\) channels are organized in a two layers brick-wall structure, as shown in Fig. 2.
As explained in Sec. III, to learn the parameters of the quantum channel, we want its steady state to be non-trivial in the sense that it will not be the steady state of other local channels. To that end, we want the local \(\mathcal{E}_{k}\) channels to be non-unital and also entangling, so that the steady state will be a non-trivially entangled state. For example, such \(\mathcal{E}_{k}\) may be realized on a QC by some combination of CX, single-qubit rotations, and RESET gates, as shown in Fig. 2 and discussed below.
To derive our constraints, we view \(\mathcal{E}\) as the product of two layers. We define the _odd layer_ by \(\mathcal{E}_{\mathrm{odd}}\stackrel{{\mathrm{def}}}{{=}} \bigotimes_{\mathrm{odd}\,k}\mathcal{E}_{k}\) and even layer by \(\mathcal{E}_{\mathrm{even}}\stackrel{{\mathrm{def}}}{{=}} \bigotimes_{\mathrm{even}\,k}\mathcal{E}_{k}\). There are two possible choices for the overall channel \(\mathcal{E}\), based on the order in which the layers are applied: \(\mathcal{E}_{I}\stackrel{{\mathrm{def}}}{{=}}\mathcal{E}_{ \mathrm{odd}}\cdot\mathcal{E}_{\mathrm{even}}\) and \(\mathcal{E}_{II}\stackrel{{\mathrm{def}}}{{=}}\mathcal{E}_{ \mathrm{even}}\cdot\mathcal{E}_{\mathrm{odd}}\). An example of the \(\mathcal{E}_{I}\) channel is given in Fig. 2. The steady state of \(\mathcal{E}_{I}\) is defined by \(\mathcal{E}_{I}(\rho^{I}_{\infty})=\rho^{I}_{\infty}\), and the steady state of \(\mathcal{E}_{II}\) by \(\mathcal{E}_{II}(\rho^{II}_{\infty})=\rho^{II}_{\infty}\). In general, they will differ from each other. For brevity, expectation values calculated under \(\rho^{I}_{\infty},\rho^{II}_{\infty}\) are denoted by \(\left\langle\cdot\right\rangle_{I},\left\langle\cdot\right\rangle_{II}\) respectively.
Let us now apply Eq. (1) to the case of \(\mathcal{E}_{I}\) and a local observable \(A\):
\[\left\langle A\right\rangle_{I}=\left\langle\mathcal{E}^{*}_{I}(A)\right\rangle _{I}=\left\langle\mathcal{E}^{*}_{\mathrm{even}}\big{(}\mathcal{E}^{*}_{ \mathrm{odd}}(A)\big{)}\right\rangle_{I}. \tag{15}\]
To simplify the equation, we restrict our attention to observables \(A_{j}\) acting on qubits \(j,j+1\) for odd \(j\). The support of \(A_{j}\) coincides with the support of \(\mathcal{E}_{j}\) from \(\mathcal{E}_{\mathrm{odd}}\). By the same argument that was used in Sec. III.1, if the supports of \(\mathcal{E}_{k}\) and \(A_{j}\) are disjoint, \(\mathcal{E}^{*}_{k}(A_{j})=A_{j}\), and so \(\mathcal{E}^{*}_{\mathrm{odd}}(A_{j})=\mathcal{E}^{*}_{j}(A_{j})\). Another simplification comes from the fact that the light cone of \(A_{j}\) has support only on \(\mathcal{E}^{*}_{j-1}\) and \(\mathcal{E}^{*}_{j+1}\), as shown in Fig. 3. Therefore, \(\mathcal{E}^{*}_{\mathrm{even}}\big{(}\mathcal{E}^{*}_{\mathrm{odd}}(A_{j}) \big{)}=\big{(}\mathcal{E}^{*}_{j-1}\otimes\mathcal{E}^{*}_{j+1}\big{)}\big{(} \mathcal{E}^{*}_{j}(A_{j})\big{)}\). All together, this allows us to rewrite Eq. (15) in terms of 4-local expectation values on the RHS:
\[\left\langle A_{j}\right\rangle_{I}=\left\langle(\mathcal{E}^{*}_{j-1} \otimes\mathcal{E}^{*}_{j+1})\mathcal{E}^{*}_{j}(A_{j})\right\rangle_{I}. \tag{16}\]
For observables \(A_{j}\) with even \(j\), a similar equation holds with the steady state of \(\mathcal{E}_{II}\) replacing the steady state of \(\mathcal{E}_{I}\).
Following our previous construction with the stochastic map, we can use the constraints in Eq. (16) to validate a given model for the local channels, or learn the model that best fits the data from a family of models characterized by a set of parameters \(\mathbf{\theta}\). For the learning task, we start by defining the cost function \(\Phi_{I}(\mathbf{\theta})\) using expectation values in the steady state \(\rho^{I}_{\infty}\):
\[\Phi_{I}(\mathbf{\theta})\stackrel{{\mathrm{def}}}{{=}} \sum_{A_{j}}\left(\langle[\mathcal{E}^{*}_{j-1,\mathbf{\theta}}\otimes\mathcal{E }^{*}_{j+1,\mathbf{\theta}}]\mathcal{E}^{*}_{j,\mathbf{\theta}}(A_{j})\rangle_{I}- \left\langle A_{j}\right\rangle_{I}\right)^{2}. \tag{17}\]
Above, the sum is over a set of 2-local observables \(A_{j}\) defined on \(j,j+1\) for odd \(j\), which we take to be Pauli operators. Similarly, we define \(\Phi_{II}(\mathbf{\theta})\) from the expectation values at \(\rho^{II}_{\infty}\), and set
\[\Phi(\mathbf{\theta})\stackrel{{\mathrm{def}}}{{=}} \Phi_{I}(\mathbf{\theta})+\Phi_{II}(\mathbf{\theta}). \tag{18}\]
Figure 3: A light cone originating from a 2-local observable \(A_{j}\) (red shaded area) has at most 4-local support. For the case of finite system with open boundary conditions the light cone is truncated from 4-local to 3-local at each end of the system.
Figure 2: Example of the deterministic channel implementation on a QC. Left: Odd channel \(\mathcal{E}_{I}\) applied on four qubits, the even layer \(\mathcal{E}_{\mathrm{even}}\) is followed by the odd layer \(\mathcal{E}_{\mathrm{odd}}\). During the action of the \(\mathcal{E}_{2}\) on qubits \(2,3\) in the even layer, we assume that the idle noise channels \(\mathcal{N}_{1},\mathcal{N}_{4}\) are acting on the qubits \(1,4\). Right: Inner structure of the 2-local channel \(\mathcal{E}_{3}\). The 2-local non-unital channel \(\mathcal{E}_{3}\) consists of applying a two-qubit unitary gate \(U_{3,1}\) on qubits \(3,4\), followed by a RESET gate on qubit \(3\) along with idle noise channel \(\mathcal{N}_{4}\) on qubit \(4\), and ends with applying a two-qubit unitary gate \(U_{3,2}\). The two-qubit unitary gate \(U_{3,1}\) has two known single-qubit rotation gates \(R_{\mathbf{\alpha}},R_{\mathbf{\beta}}\) about axes \(\hat{\alpha}\) and \(\hat{\beta}\) together with a CX gate. The exact details of our realization of the noisy non-unital channels \(\{\mathcal{E}_{k}\}\) are given in Sec. V.3.1.
For a system with \(n\) qubits, there are exactly \(3n+9(n-1)=12n-9\) such geometrically 2-local Pauli operators, excluding the trivial identity operator. This number provides a rough upper bound to the number of parameters that can be learned using this method.
To validate a given model, we define local cost functions \(\Phi_{q}\), which for a given \(q\) contain all terms in \(\Phi_{I},\Phi_{II}\) that overlap \(q\). See Eq. (7) for the equivalent definition in the stochastic case.
As in the case of the stochastic map, we can expand \(\mathcal{E}_{I}^{*}(A),\mathcal{E}_{II}^{*}(A)\) in terms of 4-local Pauli operators, and write \(\Phi(\mathbf{\theta})\) as a quadratic expression of the expectation values of these operators, as done in Eq. (9).
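For concreteness, the constraint of Eq. (16) can be evaluated numerically once a candidate model for the local channels is specified. The sketch below is our own minimal illustration (not the code used in this work): it represents the composed 4-qubit map restricted to the light cone of \(A_{j}\) by its Kraus operators, computes the Heisenberg-picture image \(\mathcal{E}^{*}(A_{j})\), expands it in the Pauli basis, and combines the coefficients with measured Pauli expectation values to form one residual of the cost function in Eq. (17). All names are placeholders.

```python
import itertools
import numpy as np

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.diag([1.0, -1.0]).astype(complex)}

def pauli_matrix(label):
    """Tensor product of single-qubit Paulis, e.g. 'IXZI'."""
    out = np.array([[1.0 + 0j]])
    for c in label:
        out = np.kron(out, PAULI[c])
    return out

def heisenberg(kraus, A):
    """Adjoint (Heisenberg-picture) action  E*(A) = sum_k K_k^dag A K_k."""
    return sum(K.conj().T @ A @ K for K in kraus)

def pauli_coefficients(A, n):
    """Expand an n-qubit operator in the Pauli basis, A = sum_P c_P P,
    with c_P = Tr(P A) / 2**n."""
    coeffs = {}
    for label in itertools.product("IXYZ", repeat=n):
        label = "".join(label)
        c = np.trace(pauli_matrix(label) @ A) / 2**n
        if abs(c) > 1e-12:
            coeffs[label] = c.real
    return coeffs

def constraint_residual(kraus_light_cone, A_label, measured):
    """One residual <E*(A)> - <A> of Eqs. (16)-(17).  `kraus_light_cone` holds
    the Kraus operators of (E_{j-1} x E_{j+1}) o E_j on the 4-qubit light cone
    of A, and `measured` maps 4-qubit Pauli labels to measured expectation
    values (with measured['IIII'] = 1)."""
    EA = heisenberg(kraus_light_cone, pauli_matrix(A_label))
    lhs = sum(c * measured[label] for label, c in pauli_coefficients(EA, 4).items())
    return lhs - measured[A_label]
```

Squaring and summing such residuals over all 2-local observables \(A_{j}\) reproduces the structure of \(\Phi_{I}\) and \(\Phi_{II}\).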
There are many ways to realize the non-unital channels \(\mathcal{E}_{k}\) on a QC. In this work we use a composed gate, which we call the RESU gate (RESU = RESET + unitary). It is defined by two 2-local unitaries \(U_{k,1}\) and \(U_{k,2}\) acting on qubits \(k,k+1\) and a RESET gate on qubit \(k\). Each of the 2-local unitaries is made of a CNOT and two general 1-qubit rotations, as shown in Fig. 2. Further details of how we model this gate in the presence of noise are given in Sec. V.3.1.
In Sec. V.3 we show numerically that our protocol can be used to learn a given noise model, and in Sec. VI we demonstrate its performance on actual quantum hardware.
## IV Noise models and classical optimization
In this section we describe the noise models and the numerical procedures we used to find \(\mathbf{\theta}\) by optimizing over the cost functions in Eqs. (14, 18).
### Local Markovian noise models
We assume a local Markovian noise model, which can be generally treated using the Lindbladian master equation [45; 46; 47]. To model the evolution of a noisy quantum gate, we write
\[\frac{d}{dt}\rho=-\frac{i}{T_{0}}[H_{k}(\mathbf{\theta}_{k}),\rho]+\frac{1}{T_{0} }\sum_{m}\theta_{m}\mathcal{L}_{m}(\rho). \tag{19}\]
Above, \(T_{0}\) is a time scale, to be determined later, and the dimensionless Hamiltonian \(H_{k}(\mathbf{\theta}_{k})\) defines the coherent evolution of the gate \(U_{k}\), i.e., \(U_{k}=e^{-iH_{k}(\mathbf{\theta}_{k})T/T_{0}}\), where \(T\) is the gate time and \(\{\mathbf{\theta}_{k}\}\) are variational gate parameters. For example, the \(\{\mathbf{\theta}_{k}\}\) parameters may describe a source of coherent error in the two-qubit CX gate.
Under our local noise assumption, we take \(\mathcal{L}_{m}\) to be _single-qubit_ superoperators that represent different single-qubit noise processes. They are given in terms of jump operators \(L_{m}\), \(\mathcal{L}_{m}(\rho)\stackrel{{\mathrm{def}}}{{=}}L_{m}\rho L_ {m}^{\dagger}-\frac{1}{2}\{L_{m}^{\dagger}L_{m},\rho\}\), where \(\{\cdot,\cdot\}\) is an anti-commutator. The variational parameters \(\{\theta_{m}\}\) model the strength of the different noise processes. For example, for a single-qubit dephasing noise, we use \(\mathcal{L}_{z}(\rho)=Z\rho Z-\rho\), and the corresponding \(\theta/T_{0}\) is the decoherence rate.
Given a set of channels \(\{\mathcal{E}_{k}\}\) that define either the stochastic or the deterministic maps, we take \(T_{0}\) to be the maximum over the running times of \(\{\mathcal{E}_{k}\}\). Working with the IBMQ hardware, this time scale was primarily due to the RESET gate time, which is typically on the order of a microsecond [48; 49]. The \(T_{0}\) and gate execution times we used in our numerical simulations are given in Appendix B and Appendix C.
We allowed the coupling strength (variational) parameters to be different for each qubit. For a qubit \(j\), we use the notation \(\mathbf{\theta}^{(j)}=\{\theta_{m}^{(j)}\}\) to denote its relevant noise parameters.
Using the above assumptions, the noise channel \(\mathcal{N}_{\mathbf{\theta}^{(j)}}\) on an idle qubit \(j\) for time \(T\) is represented by:
\[\mathcal{N}_{\mathbf{\theta}^{(j)}}(\rho)=e^{\frac{T}{T_{0}}\sum_{m}\theta_{m}^{( j)}\mathcal{L}_{m}}(\rho). \tag{20}\]
Similarly, the noisy version of a unitary gate \(U_{k}\) described by a Hamiltonian \(H_{k}(\mathbf{\theta})\) acting for time \(T\) is given by:
\[\tilde{\mathcal{E}}_{k,\mathbf{\theta}^{(j)}}(\rho)=e^{\frac{T}{T_{0}}\left(-i[H_{ k}(\mathbf{\theta}_{k}),\cdot]+\sum_{m}\theta_{m}^{(j)}\mathcal{L}_{m}\right)}(\rho). \tag{21}\]
If \(U_{k}\) is a two-qubit gate, the above equation will contain the \(\theta_{m}\mathcal{L}_{m}\) operators of both qubits.
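As an illustration of Eqs. (19)-(21), the following sketch builds the Lindbladian as a superoperator acting on the column-stacked density matrix and exponentiates it with SciPy; the dephasing example at the end corresponds to Eq. (20) with \(H=0\) and \(L=Z\). The rates and times are arbitrary placeholders, not the values used in our simulations.

```python
import numpy as np
from scipy.linalg import expm

def lindblad_superoperator(H, jumps, rates):
    """Generator of Eq. (19) acting on vec(rho) with column stacking,
    using vec(A rho B) = (B^T kron A) vec(rho)."""
    d = H.shape[0]
    Id = np.eye(d)
    gen = -1j * (np.kron(Id, H) - np.kron(H.T, Id))          # -i[H, .]
    for theta, L in zip(rates, jumps):
        LdL = L.conj().T @ L
        gen += theta * (np.kron(L.conj(), L)
                        - 0.5 * (np.kron(Id, LdL) + np.kron(LdL.T, Id)))
    return gen

def noisy_gate_channel(H, jumps, rates, T, T0):
    """Superoperator of the noisy gate of Eq. (21): exp((T/T0) * generator)."""
    return expm((T / T0) * lindblad_superoperator(H, jumps, rates))

# Idle qubit under dephasing (L = Z), i.e. Eq. (20) with a single jump operator.
Z = np.diag([1.0, -1.0]).astype(complex)
channel = noisy_gate_channel(np.zeros((2, 2), complex), [Z], [0.05], T=1.0, T0=1.0)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)      # |+><+|
rho_out = (channel @ rho.reshape(-1, order="F")).reshape(2, 2, order="F")
print(rho_out)                                               # off-diagonals decay
```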
Equation (21) in its entirety was in fact only used to model the noisy CX gate when analyzing the experimental data from the IBMQ hardware. For the single qubit gates, or for the CX gate in the numerical simulations, no coherent errors were assumed, and \(H_{k}(\mathbf{\theta}_{k})\) was taken to be the ideal Hamiltonian, which is independent of \(\mathbf{\theta}_{k}\). The full details of the CX modeling we used are given in Appendix A.
Another simplification was the use of a first-order Trotter-Suzuki approximation for the case of single-qubit gates. As the typical execution time of these gates is much shorter than that of the CX gate (in the IBMQ hardware their execution time is in the range of tens of nanoseconds [48; 49], which is about an order of magnitude shorter than the execution time of CX), we approximated Eq. (21) by
\[\tilde{\mathcal{E}}_{k,\mathbf{\theta}^{(j)}}\simeq\mathcal{U}_{k}\cdot\mathcal{N} _{\mathbf{\theta}^{(j)}}. \tag{22}\]
Above, \(\mathcal{U}_{k}(\cdot)\stackrel{{\mathrm{def}}}{{=}}e^{-i\frac{T} {T_{0}}[H_{k},\cdot]}\) is the ideal single-qubit gate channel. Notice that as the resultant error of this approximation scales as \(O\left((T/T_{0})^{2}\cdot\big{\|}\big{[}[H_{k},\cdot],\sum_{m}\theta_{m}^{(j) }\mathcal{L}_{m}\big{]}\big{\|}\right)\), and as \(\|H_{k}\|\) and \(\|\theta_{m}\mathcal{L}_{m}\|\) are all \(O(1)\) (see Appendix B and Appendix C for more details), we deduce that our Trotter-Suzuki error for single-qubit gates is of the order of \(\big{(}T/T_{0}\big{)}^{2}\sim 10^{-4}\), which is much smaller than all other error sources (e.g., statistical error) described below.
We conclude this section by describing how we modeled the noisy RESET gate. Here we followed Ref. [32] and modeled the noisy gate phenomenologically using a
Kraus map representation. In this approach, the noisy RESET gate on the qubit \(j\) is described by two variational parameters, \(\mathbf{\theta}^{(j)}=(\theta_{0}^{(j)},\theta_{1}^{(j)})\). The \(\theta_{0}^{(j)}\) is the probability to measure \(0\) given that the qubit \(j\) was in the state \(1\), and the \(\theta_{1}^{(j)}\) is the probability to measure \(1\) given that the qubit was in the state \(0\). All together, the noisy RESET gate is modeled by
\[\widetilde{\text{RESET}}(\rho)\stackrel{{\text{ def}}}{{=}}\theta_{0}|0\rangle\langle 0|\,\rho\,|0\rangle\langle 0|+ \theta_{1}|0\rangle\langle 1|\,\rho\,|1\rangle\langle 0| \tag{23}\] \[+(1-\theta_{0})|1\rangle\langle 0|\,\rho\,|0\rangle\langle 1|+(1- \theta_{1})|1\rangle\langle 1|\,\rho\,|1\rangle\langle 1|.\]
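A convenient way to implement Eq. (23) numerically is through its Kraus representation; the sketch below (our own illustration, with arbitrary parameter values) lists the four Kraus operators and verifies trace preservation.

```python
import numpy as np

def noisy_reset_kraus(theta0, theta1):
    """Kraus operators reproducing Eq. (23):
    sqrt(theta0)|0><0|, sqrt(theta1)|0><1|,
    sqrt(1-theta0)|1><0|, sqrt(1-theta1)|1><1|."""
    s = np.sqrt
    return [np.array([[s(theta0), 0.0], [0.0, 0.0]]),
            np.array([[0.0, s(theta1)], [0.0, 0.0]]),
            np.array([[0.0, 0.0], [s(1.0 - theta0), 0.0]]),
            np.array([[0.0, 0.0], [0.0, s(1.0 - theta1)]])]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

kraus = noisy_reset_kraus(theta0=0.97, theta1=0.02)
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))  # trace preserving
print(apply_channel(kraus, 0.5 * np.eye(2)))  # populations after the noisy reset
```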
### Classical optimization
The minimization of the cost function \(\Phi(\mathbf{\theta})\) in Eqs. (14, 18) was done using the stochastic optimization algorithm Adam [50], implemented by PyTorch with an automatic differentiation engine [51]. We used the same numerical optimization procedure for both the numerical simulations and the experimental results on real quantum hardware. This is because the cost function depends only on the local expectation values, which may come either from numerical simulations or actual experiments.
Heuristically, we fixed the maximum number of optimization steps to be \(15,000\). We added another heuristic termination criterion for the optimization process, wherein the optimization ceased if at step \(i\), the difference in the loss function from step \(i-500\) was less than \(0.25\%\).
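A minimal PyTorch sketch of this loop, including the termination heuristic just described, is shown below; `cost_fn` is a placeholder for the map from the noise parameters \(\mathbf{\theta}\) to the cost of Eqs. (14, 18), and the learning rate is arbitrary.

```python
import torch

def minimize(cost_fn, theta_init, lr=1e-2, max_steps=15_000,
             window=500, rel_tol=0.0025):
    """Adam minimization with the stopping rule described in the text: stop when
    the loss changed by less than 0.25% over the last 500 steps."""
    theta = torch.nn.Parameter(torch.as_tensor(theta_init, dtype=torch.float64))
    optimizer = torch.optim.Adam([theta], lr=lr)
    history = []
    for step in range(max_steps):
        optimizer.zero_grad()
        loss = cost_fn(theta)
        loss.backward()
        optimizer.step()
        history.append(loss.item())
        if step >= window and abs(history[step - window] - history[step]) \
                < rel_tol * abs(history[step - window]):
            break
    return theta.detach(), history

# Toy usage with a quadratic stand-in for the real cost function.
theta_est, hist = minimize(lambda t: ((t - 0.3) ** 2).sum(), theta_init=[0.0, 1.0])
```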
In what follows, we denote by \(\mathbf{\theta}_{est}\) the channel parameters which achieve the lowest cost function value of \(\Phi(\mathbf{\theta}_{est})\).
## V Numerical simulations
To test our method, we performed several numerical simulations of the stochastic and deterministic maps, and optimized over the cost function to learn the underlying noise model. Below, we briefly describe the technical details of simulations, followed by their results.
### Simulation of the dynamics and calculation of the expectation values
For both maps, we simulated systems of \(n=5\) to \(n=11\) qubits arranged on a line, with the initial state \(\rho_{0}=|0\rangle\langle 0|^{\otimes n}\). The simulations were done by evolving the full density matrix on a computer. An approximation for the steady state \(\rho_{\infty}\) of a quantum channel \(\mathcal{E}\) was obtained numerically by iteratively applying the quantum channel until the convergence criterion \(D\big{(}\rho_{t},\mathcal{E}(\rho_{t})\big{)}\stackrel{{\text{ def}}}{{=}}\frac{1}{2}\|\rho_{t}-\mathcal{E}(\rho_{t})\|_{1}<10^{-6}\) was reached. The approximate steady state was then used to calculate the expectation values of the local Pauli operators in the different cost functions \(\Phi(\mathbf{\theta})\), as given in Eq. (9).
We modeled the statistical noise in the expectation value of the local observables using a Gaussian random variable that approximates the binomial distribution of measurement results. For an observable \(A\) that is a product of Pauli matrices, it is easy to verify that the statistical noise of \(N\) shots is well approximated by a Gaussian random variable \(S\sim\mathcal{N}(0,\sigma)\) with \(\sigma=\sqrt{(1-\langle A\rangle_{\infty}^{2})/N}\).
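The two ingredients above, fixed-point iteration to the steady state and Gaussian shot noise on the expectation values, are summarized in the sketch below; the channel is assumed to be given as a superoperator acting on the column-stacked density matrix, and all inputs are placeholders.

```python
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * ||rho - sigma||_1 for Hermitian matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def steady_state(superop, rho0, tol=1e-6, max_iter=100_000):
    """Iterate rho -> E(rho) until D(rho, E(rho)) < tol."""
    d = rho0.shape[0]
    rho = rho0.copy()
    for _ in range(max_iter):
        rho_next = (superop @ rho.reshape(-1, order="F")).reshape(d, d, order="F")
        if trace_distance(rho, rho_next) < tol:
            return rho_next
        rho = rho_next
    return rho

def noisy_expectation(A, rho, shots, rng):
    """Exact expectation plus Gaussian shot noise with
    sigma = sqrt((1 - <A>^2) / shots)."""
    exact = np.trace(A @ rho).real
    sigma = np.sqrt(max(1.0 - exact**2, 0.0) / shots)
    return exact + rng.normal(0.0, sigma)
```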
### Stochastic map simulations
For the stochastic map simulations, our noise model consisted of dephasing (in the \(\hat{z}\) direction) and amplitude damping, as well as the noisy RESET gate from Eq. (23). This amounts to \(4\) noise parameters for each qubit: \(2\) for the dephasing and amplitude damping and \(2\) for the noisy RESET gate. In all unitary gates, we assumed no coherent errors. The exact form of the Lindbladian dissipators is given in Appendix B.2.
We simulated \(3\) different stochastic maps, which used the same local channels but with different probabilities \(\{p_{k}\}\). We refer to them as probability set \(1\), \(2\), and \(3\).
The set of gates that defined the stochastic map consisted of the single-qubits gates \(X^{1/2}\), \(H\), \(R_{\mathbf{\alpha}}(\psi)\), and a non-unitary \(2\)-local gate between neighboring qubits \(i,j\), which we call RESCX\((i\to j)\). The \(R_{\mathbf{\alpha}}(\psi)\) gate is a single-qubit rotation gate by an angle \(\psi\) around the axis \(\mathbf{\alpha}\), and RESCX\((i\to j)\) is discussed below.
Following the IBMQ hardware specifications, the \(R_{\mathbf{\alpha}}(\psi)\) gate was implemented as a combination of two \(X^{1/2}\) and three \(R_{\mathbf{z}}(\phi_{i})\) gates, which are native gates in IBMQ:
\[R_{\mathbf{\alpha}}(\psi)=R_{\mathbf{z}}(\phi_{1})X^{1/2}R_{\mathbf{z}}(\phi_{2})X^{1/2}R_ {\mathbf{z}}(\phi_{3}), \tag{24}\]
In the formula above, the three rotation angles \(\{\phi_{i}\}_{i=1}^{3}\) implicitly define the rotation angle \(\psi\) and the rotation axis \(\mathbf{\alpha}\).
The RESCX\((i\to j)\) gate is a combination of the RESET gate on \(j\), followed by a CX\((i\to j)\), as shown in Fig. 4. The motivation behind this construction is that the RESET gate makes the quantum channel non-unital, but also breaks entanglement. This might lead to
Figure 4: Structure of the RESCX\((i\to j)\) gate acting on two neighboring qubits \(j,i\). We first break the entanglement between \(i,j\) by resetting qubit \(j\), and then recreate some entanglement between them by applying CX\((i\to j)\). During the action of the RESET gate on qubit \(j\), the idle noise channel \(\mathcal{N}_{i}\) is acting on qubit \(i\).
a trivial steady-state, from which it is hard to learn the underlying noise model. Applying the \(\text{CX}(i\to j)\) gate right after the \(\text{RESET}(j)\) was applied regenerates some entanglement.
The full description of the 3 probability sets, together with the underlying noise model, the gates and their execution times, are given in Appendix B.
Our three probability sets were chosen such that the probability of picking a certain gate is independent of the qubit on which it acts. Taking the time scale \(T_{0}\) (see also Sec. IV.1) to be the maximum running time over the gate set \(\{X^{1/2},H,R_{\boldsymbol{\alpha}}(\psi),\text{RESCX}\}\), the implementation of the stochastic channel from Sec. III.2 on a quantum computer is described by the following algorithm (a code sketch of this sampling loop is given after the list):
1. Pick a gate \(1\leq k\leq 4\) from the set \(\{X^{1/2},H,R_{\boldsymbol{\alpha}}(\psi),\text{RESCX}\}\) with probability \(p_{k}\).
2. If the chosen gate is a single-qubit unitary, uniformly choose a qubit and apply it.
3. If the chosen gate is the RESCX gate, uniformly choose a neighboring pair of control and target qubits and apply it.
4. If the total running time \(T\) of the chosen gate from steps 2 or 3 is smaller than \(T_{0}\), wait \(\Delta T=T_{0}-T\).
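A hedged Qiskit sketch of this sampling procedure is given below. It builds one randomly drawn layer of the stochastic channel and composes several of them into a circuit; the rotation angles and the probability vector are placeholders, and the idle delay of step 4 is omitted for brevity.

```python
import numpy as np
from qiskit import QuantumCircuit

def sample_stochastic_layer(n_qubits, probs, rng):
    """One application of the stochastic channel: draw a gate from
    {sqrt(X), H, R_alpha(psi), RESCX} with probabilities `probs` and place it
    on a uniformly chosen qubit (or neighboring ordered pair)."""
    qc = QuantumCircuit(n_qubits)
    gate = rng.choice(["sx", "h", "rot", "rescx"], p=probs)
    if gate == "rescx":
        i = int(rng.integers(0, n_qubits - 1))
        control, target = (i, i + 1) if rng.random() < 0.5 else (i + 1, i)
        qc.reset(target)           # RESET on the target qubit ...
        qc.cx(control, target)     # ... followed by CX(control -> target), cf. Fig. 4
    else:
        q = int(rng.integers(0, n_qubits))
        if gate == "sx":
            qc.sx(q)
        elif gate == "h":
            qc.h(q)
        else:                      # R_alpha(psi) decomposed as in Eq. (24)
            phi1, phi2, phi3 = rng.uniform(0.0, 2.0 * np.pi, 3)
            qc.rz(phi3, q); qc.sx(q); qc.rz(phi2, q); qc.sx(q); qc.rz(phi1, q)
    return qc

rng = np.random.default_rng(seed=7)
n, depth = 5, 20                   # depth ~ number of channel applications
circuit = QuantumCircuit(n)
for _ in range(depth):
    circuit.compose(sample_stochastic_layer(n, [0.3, 0.3, 0.2, 0.2], rng), inplace=True)
```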
Following the discussion in Sec. III.2, we chose the observables \(A\) in the cost function (14) to be all possible, geometrically 2-local Pauli operators. Consequently, the optimization procedure used the expectation values of 3-local Paulis.
To evaluate the accuracy of the numerical optimization, we calculated the diamond distance between the original noisy CX and RESET gates and the ones obtained from the optimization routine. We focused on these two gates because they have the longest gate times, and therefore they are the most noisy gates in the simulation.
Fig. 5 and Fig. 6 present the average diamond distance between the noisy CX and RESET gates. For a system of size \(n\), the average is taken over \(n\) possible RESET gates and \(2(n-1)\) possible CX gates. Fig. 5 shows the average diamond distance as a function of the number of shots per local Pauli observable, with the system size fixed at \(n=6\) qubits. The results are robust for the three different probability sets.
With \(N=10^{6}\) shots per Pauli observable, the average diamond distance for the CX gate is approximately \(2\times 10^{-5}\), while the average diamond distance for the RESET gates is of the order \(2\times 10^{-2}\). The difference between these two distances is probably because the noise parameters that determine the noisy CX affect not only the RESCX gate, but also the idle noise channels and the noisy single qubit gates. In contrast, the noise parameters of the RESET gate appear only when the RESCX gate is applied.
The same behavior can be observed in Fig. 6, where we
Figure 5: The average diamond distance between the noisy CX and RESET gates, for a fixed system size \(n=6\) qubits. Each data point is the average of 10 statistical noise realizations in the expectation values. The error bars indicate the uncertainty of two standard deviations. (Top): The average diamond distance for the CX gates. (Bottom): The average diamond distance for the RESET gates. Each color represents a different probability set \(\{p_{k}\}\). The constant horizontal line (red color) represents the diamond distance between the ideal case of the noiseless channel and the true noise channel parameters.
Figure 6: The average diamond distance between the noisy CX and RESET gates, for a fixed number of shots \(N=10^{6}\), using the probabilities \(\{p_{k}\}\) from set 2. Each data point is the average of 10 statistical noise realizations in the expectation values. The error bars indicate the uncertainty of two standard deviations.
present the average diamond distance as a function of the number of qubits in the system using \(10^{6}\) measurements per local Pauli operator for the probability set 2.
It would be interesting to find the _optimal_ choice of the gates and probability distribution \(\{p_{k}\}\) for learning a given model of noise. We leave this for future work.
### Deterministic maps simulations
#### V.3.1 The RESU gate
As mentioned in Sec. III.3, we realized the 2-local non-unital channels \(\{\mathcal{E}_{k}\}\) of the deterministic maps using a composed gate called the RESU gate. As shown in Fig. 2, the RESU gate on qubits \(k,k+1\) consists of two 2-local unitaries \(U_{k,1}\) and \(U_{k,2}\) acting on qubits \(k,k+1\) along with a RESET gate on qubit \(k\):
\[\mathrm{RESU}_{k}\stackrel{{\mathrm{def}}}{{=}}U_{k,2}\cdot( \mathrm{RESET}_{k}\otimes\mathbb{I}_{k+1})\cdot U_{k,1}. \tag{25}\]
Each of the unitaries \(U_{k,i=1,2}\) is made of a CX gate and two arbitrary single-qubit rotation gates. Therefore, different RESU gates are characterized by four single-qubit rotations \(R_{\mathbf{\alpha}}(\psi)\), which we denote by \(\{\mathbf{\alpha}_{k},\psi_{k}\}_{k=1}^{4}\). As in Sec. V.2, each single-qubit rotation \(R_{\mathbf{\alpha}}(\psi)\) is implemented as a product of two \(X^{1/2}\) and three \(R_{\mathbf{z}}(\phi_{i})\) gates, as described in Eq. (24).
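As a hedged illustration of this construction (Fig. 2 and Eq. (25)), the Qiskit sketch below assembles one RESU block on qubits \((k,k+1)\); the ordering of rotations and CX inside each 2-local unitary, as well as all angles, are placeholder choices rather than the exact layout of our maps.

```python
from qiskit import QuantumCircuit

def two_local_unitary(qc, k, rot_k, rot_k1):
    """A 2-local unitary made of one single-qubit rotation on each of the
    qubits k and k+1 (given as the three Rz angles of Eq. (24)) plus a CX."""
    for q, (p1, p2, p3) in ((k, rot_k), (k + 1, rot_k1)):
        qc.rz(p3, q); qc.sx(q); qc.rz(p2, q); qc.sx(q); qc.rz(p1, q)
    qc.cx(k, k + 1)

def resu(n_qubits, k, rots_u1, rots_u2):
    """RESU_k = U_{k,2} . (RESET_k x I_{k+1}) . U_{k,1}, cf. Eq. (25)."""
    qc = QuantumCircuit(n_qubits)
    two_local_unitary(qc, k, *rots_u1)   # U_{k,1}
    qc.reset(k)                          # RESET on qubit k
    two_local_unitary(qc, k, *rots_u2)   # U_{k,2}
    return qc

# One block on qubits (2, 3) of a 5-qubit line, with placeholder angles.
angles = ((0.3, 1.1, -0.4), (0.7, -0.2, 0.9))
block = resu(5, 2, angles, angles)
```

Tiling such blocks over odd and even pairs of qubits yields the layers \(\mathcal{E}_{\mathrm{odd}}\) and \(\mathcal{E}_{\mathrm{even}}\) of Fig. 2.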
To model the noise in the deterministic map, we used the framework of Sec. IV.1. For each unitary \(U_{k,i=1,2}\) we defined its noisy version \(\tilde{U}_{k,i=1,2}\) by replacing its noiseless composing gates by their noisy versions. The noisy CX gates in \(\tilde{U}_{k,i=1,2}\) were modeled by Eq. (21), and the noisy single-qubit rotations \(R_{\mathbf{\alpha}}(\psi)\) were modeled by Eq. (22). Finally, the noisy RESET gate was modeled by Eq. (23). Together, the expression of the noisy RESU gate becomes:
\[\widetilde{\mathrm{RESU}}_{k}\stackrel{{\mathrm{def}}}{{=}}\tilde{U}_{k,2}\cdot(\widetilde{\mathrm{RESET}}_{k}\otimes\mathcal{N}_{k+1})\cdot\tilde{U}_{k,1}, \tag{26}\]
where \(\mathcal{N}_{k+1}\) is an idle noise channel on qubit \(k+1\) for the duration of the noisy RESET gate.
For the time scale \(T_{0}\), as defined in Sec. IV.1, we took the maximum running time over all \(\{\mathrm{RESU}_{k}\}\) gates. For any \(\mathrm{RESU}_{k}\) gate with the running time \(T_{k}<T_{0}\), a two-qubit idle noise channel \(\mathcal{N}_{k}\otimes\mathcal{N}_{k+1}\) is applied for the duration \(\Delta T_{k}=T_{0}-T_{k}\). On a real QC the idle noise channel is replaced by the _delay_ instruction.
#### V.3.2 Numerical results
We simulated three deterministic maps, which we refer to as map-1, map-2, and map-3. Each map used two different RESU gates: one for the even layer, and one for the odd layer. In each layer, the same RESU gate was used on all qubits. We note that while the general RESU gate is defined by four single-qubit rotations \(R_{\mathbf{\alpha}}(\psi)\), the actual gates used in the simulations and experiments used only two single-qubit rotations in each gate by setting \(R_{\mathbf{\alpha}}=R_{\mathbf{\beta}}\) and \(R_{\mathbf{\gamma}}=R_{\mathbf{\delta}}\) (see Fig. 2). This was due to a technical reason: limiting the number of parameters helped us to find sets of angles for which the convergence to the steady state was rapid (as required by the IBMQ hardware). Altogether, each deterministic map was defined by a set of 4 single-qubit rotations (2 for the even layer RESU and 2 for the odd layer RESU). The full details of the maps, including the rotation axes and angles, as well as the different gate times, are described in Appendix C.
The noise model we used in the simulations of the deterministic maps consisted of generalized amplitude damping and non-uniform depolarization. For each qubit this amounted to 5 Lindblad operators: 3 depolarization operators (one for each axis) and two operators for the generalized amplitude damping [52; 53; 54]. In addition, for
Figure 7: The average diamond distance between the noisy CX and RESET gates, for a fixed system size \(n=8\) qubits. Each data point is the average of 10 statistical noise realizations in the expectation values. The error bars indicate the uncertainty of two standard deviations. (Top): The average diamond distance for the CX gates. (Bottom): The average diamond distance for the RESET gates. Each color represents a different map, defined by the rotation axes and angles, \(\{\mathbf{\alpha}_{k},\psi_{k}\}_{k=1}^{4}\). For a specific map, the pairs \(\{\mathbf{\alpha}_{k},\psi_{k}\}\) were randomly generated and kept constant. The constant horizontal line (red color) represents the diamond distance between the ideal case of the noiseless channel and the true noise channel parameters.
each qubit we had two parameters modeling its RESET gate according to Eq. (23). As in the stochastic case, we assume no coherent errors in the unitary gates. The full description of the noise model is given in Appendix C.2.
After running the simulations, we learned the noise model using the optimization procedure of Sec. IV.2. To evaluate the accuracy of the noise parameter estimation, we used the same procedure as in the stochastic maps simulations, and compared the noisy CX and RESET gates used in the simulations to the ones obtained from the optimization routine.
In Fig. 7 and Fig. 8, the average diamond distance between the noisy CX and RESET gates used in the simulations and those obtained from the optimization is presented. For a system of size \(n\), the average is taken over the \(n-1\) possible RESET gates and \(2(n-1)\) possible CX gates.
In Fig. 7, we present the average diamond distance as a function of the number of shots per local Pauli observable for a fixed system size of \(n=8\) qubits, using three randomly generated sets of rotation axes and angles, \(\{\mathbf{\alpha}_{k},\psi_{k}\}_{k=1}^{4}\). Each of the three sets resulted in a different steady state. The accuracy of the estimation is robust to the different sets of rotation angles and axes (which are given in Appendix C.1). For \(N=10^{6}\) shots per local Pauli observable, the average diamond distance for the CX gates is approximately \(3\times 10^{-4}\), while for the RESET gates it is approximately \(1.4\times 10^{-2}\).
In Fig. 8 we kept the number of shots per Pauli fixed at \(N=10^{6}\) and used the rotation angles and axes from map 2. Increasing the system size from \(n=5\) to \(n=11\), we observe that the accuracy of the parameter estimation improves slightly between \(n=5\) and \(n=7\), and remains robust up to \(n=11\). The initial increase in accuracy for smaller systems in Fig. 8 may be explained by examining the light cone shown in Fig. 3 in Sec. III.3. For a smaller system size with open boundary conditions, the light cone is truncated at each end of the system, causing the expectation values at the edges of the system to be 3-local instead of 4-local. As a result, the optimization routine uses fewer statistics for the qubits located at the ends of the system. However, for larger system sizes, this boundary effect becomes negligible as the relative fraction of boundary qubits decreases.
As in the case of the stochastic map in Sec. V.2, the average diamond distance for the CX gates is much smaller than the RESET gates. This is probably because the noise channel parameters within the CX gates appear in multiple combinations within a single RESU gate, while the channel parameters of the RESET gate only appear once per RESU gate.
## VI Quantum Device Results
In this section we describe the experiments we performed on actual quantum hardware to test our method. All experiments were done on an IBMQ machine via the IBMQ cloud.
### Details of the experiment
For the quantum hardware demonstration we chose the _ibm_lagos_ v1.0.32, which is one of the IBM Quantum Falcon Processors [55] with 7 qubits in the shape of a rotated H. Out of the 7 qubits, we chose \(n=5\) qubits arranged in a line, which we labeled by \(0,1,2,3,4\). The IBMQ labels of these qubits were \(6,5,3,1,0\) in corresponding order.
We used the deterministic map-1 in Sec. III.3 to learn the noise of the quantum hardware. See Appendix C.1 for an exact description of the map. Our noise model was the same model used in the deterministic map simulations, which consisted of non-uniform depolarization errors, generalized amplitude damping, and RESET errors (total of 7 free parameters per qubit). In addition to these parameters we added a modeling of coherent CX errors as free parameters in its Hamiltonian -- see Appendices A, C.2 for a full description of the model.
After learning the hardware noise model using map-1, we ran deterministic map-2 and measured 2-local Pauli expectation values on its steady state. We used these measurements to test the quality of the noise model we learned from map-1.
To reduce SPAM errors, we used a readout-error mitigation scheme, which was partially based on Refs. [56, 57]. The scheme relied on a preliminary step in which we estimated the conditional probability of measuring a 4-local bit string in the computational basis, assuming the qubits were prepared in a different computational-basis state. See Appendix D.1 for a full description of our scheme.
Overall, our experiment consisted of three steps: (i) preliminary readout-error mitigation measurements, (ii) running deterministic map-1 and measuring the expectation values of 4-local Paulis, and (iii) running deterministic map-2 and measuring the expectation values of 2-local Paulis.
Figure 8: The average diamond distance between the noisy CX and RESET gates, for a fixed number of shots \(N=10^{6}\), using the rotations \(\{\mathbf{\alpha}_{k},\psi_{k}\}\) from map 2. Each data point is the average of 10 statistical noise realizations in the expectation values. The error bars indicate the uncertainty of two standard deviations.
In all three parts of the experiment we needed to estimate the expectation values of local Pauli strings. To that aim, we used the overlapping local tomography method, described by Zubida _et al._ in Ref. [8], which simultaneously estimates all geometrically \(k\)-local Pauli strings using a set of \(3^{k}\) different measurement circuits. This means that using a total budget of \(M\) shots, each Pauli string was estimated using \(M/3^{k}\) shots.
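The measurement settings behind this scheme are easy to generate: single-qubit bases are assigned periodically so that every window of \(k\) neighboring qubits cycles through all \(3^{k}\) local basis combinations. The sketch below is our own illustration of this idea, not the code used in the experiment.

```python
from itertools import product

def overlapping_tomography_settings(n_qubits, k):
    """Return 3**k global measurement settings (strings over 'XYZ') such that
    every window of k consecutive qubits sees all 3**k basis combinations."""
    return ["".join(pattern[q % k] for q in range(n_qubits))
            for pattern in product("XYZ", repeat=k)]

# k = 2 on 5 qubits: 9 settings, so a budget of M shots gives M/9 per setting.
for setting in overlapping_tomography_settings(5, 2):
    print(setting)
```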
We now describe the 3 parts of the experiment in more detail.
_(i) Readout error mitigation measurements_: Here we performed preliminary readout-error mitigation measurements for the protocol that is described in Appendix D.1. Our measurements were used to mitigate the readout errors of steps (ii),(iii). We used the overlapping local tomography method (see Zubida _et al._ in Ref. [8]), to obtain 4-local readout statistics from a set of \(2^{4}\) different measurement circuits of unit depth. On each circuit we performed \(2\times 10^{5}\) measurements (about \(M=3\times 10^{6}\) shots in total).
_(ii) \(4\)-local channel measurements_: In this step we ran the deterministic map-1 on the quantum computer and prepared many copies of the steady states of \(\mathcal{E}_{I}\) and \(\mathcal{E}_{II}\) channels. Following the numerical simulations, we assumed that the steady state of each channel was approximately reached after 10 steps. We then used the overlapping local tomography to estimate the expectation values of the 4-local Pauli operators using \(1.25\times 10^{5}\) measurements for every 4-local expectation value (and about \(M=2\times 10^{7}\) shots in total).
_(iii) \(2\)-local channel measurements_: In this step we ran the deterministic map-2 on the quantum computer and performed 2-local Pauli measurements using the overlapping local tomography. As explained in Sec. V.3, map-2 produces a different steady state from map-1. We used these 2-local Pauli measurements to assess the accuracy of the noise model we learned from the 4-local expectation values of step (ii). As in part (ii), we assumed that the steady states of the \(\mathcal{E}_{I}\) and \(\mathcal{E}_{II}\) channels were approximately reached after 10 steps. The 2-local Pauli expectation values were obtained using overlapping local tomography that used \(1.25\times 10^{5}\) measurements for every 2-local expectation value (and about \(M=2\times 10^{6}\) shots in total).
Upon finishing the measurements in (ii), (iii), we used the results of readout measurements (i) to correct these measurement statistics (see Appendix D.1 for details).
### Results I: Noise model validation
The main application of our method is to _validate_ (benchmark) a noise model of a QC by calculating its local cost functions \(\Phi_{q}\) given a noise model that defines the noisy channels \(\{\tilde{\mathcal{E}}_{k}\}\), as explained in Sec. III.3. The local cost function \(\Phi_{q}\) for a qubit \(q\) is the sum of all the \(A_{j}\) entries in \(\Phi_{I}\) and \(\Phi_{II}\) (see Eq. (17)) that overlap qubit \(q\), divided by the number of such \(A_{j}\). This happens when the light cone of \(A_{j}\) includes qubit \(q\), i.e., when \(q\in\{j-1,\ldots,j+2\}\) (see Fig. 3 for an illustration).
We used our validation protocol to compare three noise models: (i) the ideal noiseless model, (ii) our estimate of the IBMQ Qiskit [58] noise model, imported from the device's backend properties at the time of our experiment, and (iii) a noise model that we learned from the IBMQ hardware using our optimization method (see Sec. IV).
To calculate model (ii), we needed to know \(\mathcal{E}^{*}_{odd},\mathcal{E}^{*}_{even}\) of the IBMQ Qiskit noise model, from which the local cost function is calculated (see Eqs. (15, 16, 17)). However, as far as we could tell, IBMQ does not fully publish the exact formulas it uses to model the local noise. Instead, it publishes general noise parameters such as \(T_{1},T_{2}\), and provides code that simulates noisy circuits. Therefore, in order to calculate \(\mathcal{E}^{*}_{odd},\mathcal{E}^{*}_{even}\), we first _learned_ the IBMQ Qiskit noise model using the deterministic map-3, and used the resultant model as an approximation to the IBMQ Qiskit noise model. In more detail, we used the IBMQ Qiskit simulator to simulate the steady state of map-3, and calculated local Pauli expectations with vanishing statistical noise. We then used our optimization routine to learn Qiskit's noise model. We used the parameters described in Appendix C.2, but without any coherent errors, and with depolarization only along the \(\hat{z}\) axis. As a sanity check, we used the model we learned to simulate the 5-qubit steady states of maps 1, 2, and 3 and compared them to Qiskit's steady states. We noticed that in all cases the trace distance between our steady state and Qiskit's steady state was less than 1%.
Fig. 9 shows the single-qubit cost function, as a heat map. The different rows represent \(\log_{10}\Phi_{q}\) for the 3 noise
Figure 9: The cost map for the _ibm_lagos_ device for three noise models: the ideal noiseless case (top row), the learned noise model from IBMQ Qiskit backend (middle row), and the learned noise model using our method (bottom row).
models. Unsurprisingly, it suggests that IBMQ Qiskit's noise model provides a better description of the hardware than the ideal, noiseless channel. It also suggests that our estimated noise model performs significantly better than the two other models. This might not be surprising, given the fact that we learned our noise model using the same steady state that we used to validate it. In other words, the learned noise model parameters are the ones that minimize the global cost function, which is roughly the sum of the local cost functions that appear in Fig. 9. In the machine learning jargon, this might imply that our noise model estimation procedure overfitted the particular steady state from which the 4-local measurements were sampled. To rule this out (at least to some extent), we show in the next subsection that our estimated noise model has a better predictive power than the other two: it is able to better approximate the 2-local expectation values of a steady state of map-2, which is different from the steady state of map-1 with which we learned the noise model.
### Results II: Noise model characterization
In this section we present the results of applying our optimization algorithm to _characterize_ the noise within a quantum device. The full learned noise model parameter details are presented in Appendix D.2.
To evaluate the quality of our noise characterization, we took the noise model that we learned and used it to numerically simulate the steady states of map-2 (of \(\mathcal{E}_{I}\) and \(\mathcal{E}_{II}\)). On these steady states we calculated the expectation values of 2-local Paulis in the bulk of the 5-qubit line (we used qubits \(1,2\) for \(\mathcal{E}_{I}\) and \(2,3\) for \(\mathcal{E}_{II}\)). We then compared these simulated expectation values to the actual results we measured on the quantum hardware when running map-2 (part (iii) of our experiment described in Sec. VI.1). We used the same procedure to test the two other noise models from the previous section, i.e., the ideal (noiseless) model, and IBMQ Qiskit's model. In contrast to the previous section, to sample the 2-local Paulis from the IBMQ Qiskit noise model we do not need to know \(\mathcal{E}_{odd}^{*},\mathcal{E}_{even}^{*}\). Hence, we calculated the 2-local Pauli expectation values directly from the IBMQ Qiskit simulator using the noise model imported from the device's backend properties at the time of our experiment.
Our results are presented as histograms in Fig. 10, where we show the 9 possible 2-local Pauli expectation values for the steady states of \(\mathcal{E}_{I},\mathcal{E}_{II}\) of map-2. The expectation values from the quantum hardware are shown in red. As we used more than \(10^{5}\) measurements for each data point, the statistical error is smaller than \(0.005\), which is negligible compared to the differences with respect to other models. For the 3 analytical models, we present the exact expectation values, without any statistical error. Our model prediction is shown in blue, Qiskit's model in yellow and the noiseless model in green. As can be seen in the figure, while the predictions of all the models are in decent agreement with the empirical results, our model is in better agreement than the other two models.
To compare our estimate in a quantitative way, we used the 2-local Pauli expectation values of the IBMQ hardware to perform quantum state tomography and estimate the underlying 2-local reduced density matrices (RDMs) in the steady states of \(\mathcal{E}_{I},\mathcal{E}_{II}\) of map-2. We compared these to the RDMs produced by simulations of our model, the Qiskit model and the ideal model. For each case, we calculated the trace distance \(D(\rho,\sigma)\stackrel{{\text{def}}}{{=}}\frac{1}{2}\|\rho-\sigma\|_{1}\) between the simulation and the estimated RDM of the IBMQ hardware. The results are given in Table 1, showing that indeed the trace distances between the IBMQ RDMs and our model RDMs are \(50\%-70\%\) of the distances from the ideal model or the Qiskit model.
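The RDMs entering Table 1 can be reconstructed from the Pauli data by linear inversion, \(\rho=\frac{1}{4}\sum_{P}\langle P\rangle P\) over the sixteen two-qubit Paulis, after which the trace distance follows from the eigenvalues of \(\rho-\sigma\). The sketch below uses made-up expectation values purely for illustration.

```python
import itertools
import numpy as np

P1 = {"I": np.eye(2, dtype=complex),
      "X": np.array([[0, 1], [1, 0]], dtype=complex),
      "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
      "Z": np.diag([1.0, -1.0]).astype(complex)}

def two_qubit_rdm(expectations):
    """Linear-inversion tomography: rho = (1/4) sum_P <P> P, with <II> = 1.
    Paulis missing from `expectations` are treated as zero."""
    rho = np.zeros((4, 4), dtype=complex)
    for a, b in itertools.product("IXYZ", repeat=2):
        rho += expectations.get(a + b, 0.0) * np.kron(P1[a], P1[b]) / 4.0
    return rho

def trace_distance(rho, sigma):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

# Hypothetical measured and simulated values (not the data of Table 1).
measured = {"II": 1.0, "ZI": 0.62, "IZ": 0.55, "ZZ": 0.40}
model = {"II": 1.0, "ZI": 0.70, "IZ": 0.50, "ZZ": 0.35}
print(trace_distance(two_qubit_rdm(measured), two_qubit_rdm(model)))
```

Note that, in the presence of statistical noise, linear inversion can return slightly non-positive matrices; a maximum-likelihood projection can be used instead if needed.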
It is worth noting that the 2-local Pauli operators used for the reliability assessment were collected after the 4
Figure 10: Expectation values of the 2-local Pauli operators obtained from: the _ibm_lagos_ device (red), the learned noise model using our method (blue), the IBMQ Qiskit simulator (yellow) and the ideal noiseless case (green). We focus on qubits \(1,2\) for \(\mathcal{E}_{I}\) (top) and \(2,3\) for \(\mathcal{E}_{II}\) (bottom) to avoid including the boundary qubits 0 and 4, for which the measurement statistics become 3-local instead of 4-local (see Sec. V.3). See Table 1 for the corresponding trace distances between the measured RDM and the RDMs of the three models.
local Pauli expectation values used for noise model characterization were obtained. As a result, the noise model at the time of the 2-local Pauli measurements may have undergone some drifting. This may account for some of the variations seen in the 2-local expectation values depicted in Fig. 10 [59; 60; 61].
## VII Summary and outlook
We have presented and demonstrated numerically and experimentally a new framework for benchmarking a quantum computer by driving it to a steady state of an engineered dissipative dynamics. It is a natural generalization of the recent results of learning a local Hamiltonian or Lindbladian from their steady states [24; 25; 26; 27].
We have shown that if the underlying dissipative map is Markovian and local, then there exists a set of linear constraints between the expectation values of local observables in the steady state. This steady state can be a correlated, entangled state, and might even be classically inaccessible. Nevertheless, as we showed, the constraints between its local expectation values are local and can be efficiently verified once we measure these expectation values. We can therefore apply our method to a system with a large number of qubits and test how well a given noise model _globally_ describes the system, which is in a state that might be classically inaccessible.
For generic dissipative maps, the steady state is unique, and consequently, our method is independent of the initial state and is insensitive to state preparation errors. In addition, the fact that we measure a steady state that was created using several iterations allows us to test for the possible deviations from the Markovian model, in the form of memory and temporal correlations. Finally, by optimizing over a parametrized set of local Markovian noise models, our method allows learning of the optimal noise model.
We have suggested two generic ways to engineer a dissipative map on a quantum computer, both of which rely on the non-unital RESET gate. The first is a stochastic map that is implemented on a quantum computer by applying a set of gates according to a prescribed probability distribution. The second uses a brick-wall type circuit, whose building blocks are composed of 2-local non-unital gates and can therefore be applied using a single circuit. We have demonstrated both methods numerically using simulations of \(5-11\) qubits, and showed that they can successfully learn an underlying noise model. In our numerical tests we managed to learn the underlying CNOT and RESET gates up to diamond distances of \(10^{-3}-10^{-5}\) and \(10^{-1}-10^{-2}\) respectively, using \(10^{4}-10^{6}\) shots per observable.
We have also tested our method experimentally on the IBMQ _ibm_lagos_ machine using 5 qubits. We have shown that given a noise model, we can construct a 'heat map' view of the qubits showing how well the model describes each qubit. Our method clearly showed that IBMQ Qiskit's simplified noise model performs better than the ideal, noiseless model. Finally, we have also shown how our method can learn an underlying noise model for that machine, and that the model it learned better predicted the outcomes of another experiment than the noiseless model or IBMQ Qiskit's model.
Our work leaves several theoretical and practical questions. On the theoretical side, it would be interesting to characterize the complexity of the steady states of dissipative maps like the one we used. Can they be sampled classically, or can they encode some BQP hard problems? More general engineered dissipative maps are known to be BQP-complete [37], but our maps are much simpler. They also differ from the much studied class of randomly generated circuits, for which there are several complexity bounds (see, for example, the recent result in Ref. [62] and references within), in several aspects: our maps are not completely random (we apply the same map over and over), they are strongly dissipative and non-unital, and can be applied for \(\text{poly}(n)\) time.
It would also be interesting to find a systematic way of engineering the dissipative maps (of both types) to have an optimal estimate of the error parameters. This might be possible to do by studying the quantum Fisher information [63] of the steady states with respect to the free parameters of the map. Related to that, it would be interesting to understand how the quantum Fisher information, or more generally our ability to use the steady state for learning, depends on the deviation of the channel from unitarity. Is there a phase-transition behavior, as seen in the related measurement-induced phase transition literature (see Ref. [64] and references within), or is it a smooth transition?
In the more practical direction, it would be interesting to see if there is a way to make our method completely insensitive to SPAM errors, by removing its dependence on measurement errors. The readout error mitigation procedure that we employed arguably removes a large part of these errors, but not in a controlled, systematic way, as done in methods like RB [38; 39; 40; 41] and GST [42; 43; 44].
Table 1: The trace distance of the 2-local reduced density matrices obtained from the IBMQ hardware using quantum state tomography and the corresponding density matrices taken from simulations using the ideal model, the Qiskit noise model, and our learned model. We calculated the distances on the steady states of \(\mathcal{E}_{I}\) and \(\mathcal{E}_{II}\) of map-2.

|  | \(\mathcal{E}_{I}\) | \(\mathcal{E}_{II}\) |
| --- | --- | --- |
| Ideal model | 0.19 | 0.21 |
| Qiskit model | 0.174 | 0.19 |
| Learned model | 0.09 | 0.13 |
## Acknowledgements
We thank Eyal Bairey and Netanel Lindner for enlightening discussions. IA acknowledges the support of the Israel Science Foundation (ISF) under the Research Grants in Quantum Technologies and Science No. 2074/19, and the joint NRF-ISF Research Grant No. 3528/20. This research is supported by the National Research Foundation, Singapore, under its Quantum Engineering Programme. We are very grateful for the support of the National University of Singapore (NUS) and the Centre of Quantum Technologies (CQT) for their help in running the IBMQ experiments.
|
2310.16515
|
Solving and Applying Fractal Differential Equations: Exploring Fractal
Calculus in Theory and Practice
|
In this paper, we delve into the fascinating realm of fractal calculus
applied to fractal sets and fractal curves. Our study includes an exploration
of the method analogues of the separable method and the integrating factor
technique for solving $\alpha$-order differential equations. Notably, we extend
our analysis to solve Fractal Bernoulli differential equations. The
applications of our findings are then showcased through the solutions of
problems such as fractal compound interest, the escape velocity of the earth in
fractal space and time, and estimation of time of death incorporating fractal
time. Visual representations of our results are also provided to enhance
understanding.
|
Alireza Khalili Golmankhaneh, Donatella Bongiorno
|
2023-10-25T10:04:41Z
|
http://arxiv.org/abs/2310.16515v1
|
Solving and Applying Fractal Differential Equations: Exploring Fractal Calculus in Theory and Practice
###### Abstract
In this paper, we delve into the fascinating realm of fractal calculus applied to fractal sets and fractal curves. Our study includes an exploration of the method analogues of the separable method and the integrating factor technique for solving \(\alpha\)-order differential equations. Notably, we extend our analysis to solve Fractal Bernoulli differential equations. The applications of our findings are then showcased through the solutions of problems such as fractal compound interest, the escape velocity of the earth in fractal space and time, and the estimation of time of death incorporating fractal time. Visual representations of our results are also provided to enhance understanding.
**Keywords:** Fractal calculus, Fractal curves, Fractal differential equations
**MSC:** 28A80, 28A78, 28A35, 28A75, 34A30
## 1 Introduction
Benoit Mandelbrot is credited with pioneering the field of fractal geometry [1], which revolves around shapes possessing fractal dimensions that surpass their topological dimensions [2, 3]. These intricate fractals exhibit self-similarity and frequently demonstrate non-integer and complex dimensions [4, 5]. However, the analysis of fractals presents challenges, given that traditional geometric measures such as Hausdorff measure [6], length, surface area, and volume are typically applied to standard shapes [7]. Consequently, the direct application of
these measures to fractal analysis becomes intricate [8; 9; 10; 11; 12; 13]. Researchers have tackled the problem of fractal analysis using various approaches. These include analysis [14; 15], measure theory [16; 17; 18; 19; 20; 21; 22], probabilistic methods [23], fractional space and nonstandard methods [24], fractional calculus [25; 26; 27] and non-standard methods [28]. Essential topological characteristics such as connectivity, ramification, and loopiness are exhibited by fractals and can be quantified using six independent dimension values. However, some fractal types may reduce the count of these dimensions due to their unique traits [29]. Fracture network modeling was proposed, with a specific focus on accentuating fractal attributes within geological formations. Two innovative models were introduced: one centered around Bernoulli percolation within regular lattices, and the other delving into site percolation within scale-free networks integrated into 2D and 3D lattices. The revelation emerged that the effective spatial degrees of freedom in scale-free networks are dictated by the embedding dimension, in contrast to the degree distribution [30]. The impact of fractal characteristics on percolation within self-similar networks was also examined [31]. The effects of geometric confinement on point statistics in a quasi-low-dimensional system were studied. Specifically, attention was centered on nearest-neighbor statistics. Comprehensive numerical simulations were carried out using binomial point processes on quasi-one-dimensional rectangle strips, considering various confinement ratio values. The findings revealed that the distributions of nearest-neighbor distances followed an extreme value Weibull distribution, where the shape parameter was contingent on the confinement ratio [32]. The impact of fractal characteristics on formation factors in pore-fracture networks was studied for different transport processes. A focus on deterministic infinitely ramified networks related to pre-fractal Sierpinski carpets was adopted. The effects of network attributes on streamline constriction and transmission path tortuosity were examined, emphasizing formation factor differences for diffusibility, electrical conductivity, and hydraulic permeability [33]. Fractal calculus is a way to extend calculus and deal with equations that have solutions in the form of functions with fractal properties, such as fractal sets and curves [34; 35]. The beauty of fractal calculus lies in its simplicity and algorithmic approaches when compared to other methods [36]. The generalization of \(F^{\alpha}\)-calculus (FC) has been achieved through the utilization of the gauge integral method. The focus lies on the integration of functions within a subset of the real line containing singularities present in fractal sets [37]. The utilization of FC is exemplified with respect to fractal interpolation functions and Weierstrass functions, which can exhibit non-differentiability and non-integrability in the context of ordinary calculus [38]. The utilization of non-local fractal derivatives to characterize fractional Brownian motion on thin Cantor-like sets was demonstrated. The proposal of the fractal Hurst exponent establishes its connection to the order of non-local fractal derivatives [39]. Various methods have been employed to solve fractal differential equations, and their stability conditions have been determined [40; 41]. The fractal Tsallis entropy on fractal sets was introduced, and q-fractal calculus for deriving distributions was defined.
Nonlinear coupling conditions for statistical states were presented, and a relationship between fractal dimension and Tsallis entropy's q-parameter in the Hadron system was proposed
[42]. Fractal functional differential equations were introduced as a mathematical framework for phenomena that encompass both fractal time and structure. The paper showcases the solution of fractal retarded, neutral, and renewal delay differential equations with constant coefficients, employing the method of steps and Laplace transforms [43]. The introduction of a novel generalized local fractal derivative operator and its exploration in classical systems via Lagrangian and Hamiltonian formalisms were undertaken. The practical applicability of the variational method in describing dissipative dynamical systems was showcased, and the Hamiltonian approach produced auxiliary constraints without reliance on Dirac auxiliary functions [44]. Furthermore, fractal stochastic differential equations have been defined, with categorizations for processes like fractional Brownian motion and diffusion occurring within media with fractal structures [45, 46, 47, 48, 49]. Local vector calculus within fractional-dimensional spaces, on fractals, and in fractal continua was developed. The proposition was put forth that within spaces characterized by non-integer dimensions, it was feasible to define two distinct del-operators, each operating on scalar and vector fields. Employing these del-operators, the foundational vector differential operators and Laplacian in fractional-dimensional space were formulated in a conventional manner. Additionally, Laplacian and vector differential operators linked with \(F^{\alpha}\)-derivatives on fractals were established [50]. Fractal calculus has been extended to include Cantor cubes and Cantor tartan [51], and the Laplace equation has been defined within this framework [52].
The paper is structured as follows:
In Section 2, a comparative and review analysis of fractal calculus is presented, focusing on its application to both fractal sets and curves. Section 3 introduces the utilization of an integrating factor to solve fractal \(\alpha\)-order differential equations. Section 4 outlines the application of the method of separation to fractal differential equations. Furthermore, in Section 5, various applications are discussed, involving the extension of standard models to account for fractal time. Lastly, Section 6 is dedicated to concluding the paper.
## 2 Overview of Fractal Calculus
In this section, we present a comprehensive survey of the application of fractal calculus to the domains of fractal curves and fractal sets [34, 35, 36].
### Fractal Calculus on Fractal Sets
In this section, we present a concise overview of fractal calculus applied to fractal sets as summarized in [34].
**Definition 2.1**: _The flag function of a set \(F\) and a closed interval \(I\) is defined as:_
\[\rho(F,I)=\begin{cases}1,&\text{if }F\cap I\neq\emptyset;\\ 0,&\text{otherwise}.\end{cases} \tag{1}\]
**Definition 2.2**: _For a fractal set \(F\subset[a,b]\), a subdivision \(P_{[a,b]}\) of \([a,b]\), and a given \(\delta>0\), the coarse-grained mass of \(F\cap[a,b]\) is defined by_
\[\gamma_{\delta}^{\alpha}(F,a,b)=\inf_{|P|\leq\delta}\sum_{i=0}^{n-1}\Gamma( \alpha+1)(t_{i+1}-t_{i})^{\alpha}\rho(F,[t_{i},t_{i+1}]), \tag{2}\]
_where \(|P|=\max_{0\leq i\leq n-1}(t_{i+1}-t_{i})\), and \(0<\alpha\leq 1\)._
**Definition 2.3**: _The mass function of a fractal set \(F\subset[a,b]\) is defined as the limit of the coarse-grained mass as \(\delta\) approaches zero:_
\[\gamma^{\alpha}(F,a,b)=\lim_{\delta\to 0}\gamma_{\delta}^{\alpha}(F,a,b). \tag{3}\]
**Definition 2.4**: _For a fractal set \(F\subset[a,b]\), the \(\gamma\)-dimension of \(F\cap[a,b]\) is defined as:_
\[\dim_{\gamma}(F\cap[a,b]) =\inf\{\alpha:\gamma^{\alpha}(F,a,b)=0\}\] \[=\sup\{\alpha:\gamma^{\alpha}(F,a,b)=\infty\} \tag{4}\]
**Definition 2.5**: _The integral staircase function of order \(\alpha\) for a fractal set \(F\) is given by:_
\[S_{F}^{\alpha}(x)=\begin{cases}\gamma^{\alpha}(F,a_{0},x),&\text{if }x\geq a_{0};\\ -\gamma^{\alpha}(F,x,a_{0}),&\text{otherwise}.\end{cases} \tag{5}\]
_where \(a_{0}\) is an arbitrary fixed real number._
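The definitions above can be explored numerically. The sketch below approximates the staircase function \(S_{F}^{\alpha}\) of the middle-third Cantor set with \(\alpha=\ln 2/\ln 3\), using the level-\(m\) triadic intervals that cover the set in place of the infimum over all subdivisions in Eq. (2); it is a crude illustration of Definitions 2.2-2.5, not an exact evaluation of the mass function.

```python
import numpy as np
from math import gamma, log

ALPHA = log(2) / log(3)            # gamma-dimension of the middle-third Cantor set

def cantor_indices(level):
    """Indices i of the level-`level` triadic intervals [i/3**level, (i+1)/3**level]
    retained in the Cantor construction (base-3 digits avoid the digit 1)."""
    idx = [0]
    for _ in range(level):
        idx = [3 * i for i in idx] + [3 * i + 2 for i in idx]
    return sorted(idx)

def staircase(x, level=10, alpha=ALPHA):
    """Approximate S_F^alpha(x) of Eq. (5) (with a_0 = 0) by summing
    Gamma(alpha+1) * (3**-level)**alpha over retained intervals left of x."""
    h = 3.0 ** (-level)
    weight = gamma(alpha + 1.0) * h ** alpha
    return weight * sum(1 for i in cantor_indices(level) if (i + 1) * h <= x)

print([round(staircase(x), 3) for x in np.linspace(0.0, 1.0, 9)])
```

The resulting profile is constant across the gaps of the Cantor set and increases only on the fractal itself, as expected of a staircase function.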
**Definition 2.6**: _Let \(F\) be an \(\alpha\)-perfect fractal set, let \(f\) be a function defined on F and let \(x\in F.\) The \(F^{\alpha}\)-derivative of \(f\) at the point \(x\) is defined as follows:_
\[D_{F}^{\alpha}f(x)=\begin{cases}F\text{-}\lim\limits_{y\to x}\frac{f(y)-f(x)}{S_{F}^{\alpha}(y)-S_{F}^{\alpha}(x)},&\text{if }x\in F;\\ 0,&\text{otherwise},\end{cases} \tag{6}\]
_if the fractal limit \(F\text{-}\lim\) exists [34]._
**Definition 2.7**: _Let \(I=[a,b].\) Let \(F\) be an \(\alpha\)-perfect fractal set such that \(S_{F}^{\alpha}\) is finite on \(I\). Let \(f\) be a bounded function defined on F and let \(x\in F.\) The \(F^{\alpha}\)-integral of \(f\) on \(I\) is defined as:_
\[\int_{a}^{b}f(x)d_{F}^{\alpha}x=\sup_{P_{[a,b]}}\sum_{i=0}^{n-1}\inf_{x\in F\cap[x_{i},x_{i+1}]}f(x)\,(S_{F}^{\alpha}(x_{i+1})-S_{F}^{\alpha}(x_{i}))\] \[=\inf_{P_{[a,b]}}\sum_{i=0}^{n-1}\sup_{x\in F\cap[x_{i},x_{i+1}]}f(x)\,(S_{F}^{\alpha}(x_{i+1})-S_{F}^{\alpha}(x_{i})). \tag{7}\]
### Fractal Calculus on Fractal Curves
We begin by defining the key concepts in fractal calculus on fractal curves [35]. We recall that a fractal curve \(F\subset\mathbb{R}^{n}\) is parametrizable if there exists a bijective and continuous function \(\mathbf{w}:[a_{0},b_{0}]\rightarrow F\subset\mathbb{R}^{n}\). Moreover, we recall that by \(C(a,b)\) we denote the segment of the curve lying between the points \(\mathbf{w}(a)\) and \(\mathbf{w}(b)\) on the fractal curve \(F\) [35].
**Definition 2.8**: _For a fractal curve denoted as \(F\) and a subdivision denoted as \(P_{[a,b]}\) where \([a,b]\subset\mathbb{R}\), the mass function is given by_
\[\gamma^{\alpha}(F,a,b)=\lim_{\delta\to 0}\inf_{|P|\leq\delta}\sum_{i=0}^{n-1} \frac{|\mathbf{w}(t_{i+1})-\mathbf{w}(t_{i})|^{\alpha}}{\Gamma(\alpha+1)}, \tag{8}\]
_where \(|\cdot|\) represents the Euclidean norm in \(\mathbb{R}^{n}\), \(1\leq\alpha\leq n\), \(P_{[a,b]}=\{a=t_{0},...,t_{n}=b\}\), and \(|P|=\max_{0\leq i\leq n-1}(t_{i+1}-t_{i})\) for a subdivision \(P_{[a,b]}\)._
**Definition 2.9**: _The \(\gamma\)-dimension of the fractal curve \(F\) is defined as_
\[\dim_{\gamma}(F) =\inf\{\alpha:\gamma^{\alpha}(F,a,b)=0\}\] \[=\sup\{\alpha:\gamma^{\alpha}(F,a,b)=\infty\} \tag{9}\]
**Definition 2.10**: _Let \(p_{0}\in[a_{0},b_{0}]\) be arbitrary but fixed. The staircase function of a fractal curve \(F\) is defined as:_
\[S_{F}^{\alpha}(u)=\begin{cases}\gamma^{\alpha}(F,p_{0},u),&u\geq p_{0};\\ -\gamma^{\alpha}(F,u,p_{0}),&u<p_{0}.\end{cases} \tag{10}\]
_The mass of the fractal curve \(F\) up to point \(u\) is provided by \(S_{F}^{\alpha}(u)\), where \(u\in[a_{0},b_{0}]\)._
**Definition 2.11**: _Let \(J(\theta)=S_{F}^{\alpha}(u)\), where \(\theta=\mathbf{w}(u)\). The fractal \(F^{\alpha}\)-derivative of a function \(f\) at a point \(\theta\in F\) is defined as:_
\[D_{F}^{\alpha}f(\theta)=F\text{-}\lim_{\theta^{\prime}\to\theta}\;\frac{f(\theta^{\prime})-f(\theta)}{J(\theta^{\prime})-J(\theta)}, \tag{11}\]
_if the \(F\text{-}\lim\) exists (here \(F\text{-}\lim\) denotes the fractal limit, as defined in [35])._
**Remark 1**: _It is worth noting that the Euclidean distance from the origin to a point \(\theta=\mathbf{w}(u)\) is given by \(L(\theta)=L(\mathbf{w}(u))=|\mathbf{w}(u)|.\)_
**Definition 2.12**: _The fractal integral or \(F^{\alpha}\)-integral is defined as_
\[\int_{C(a,b)}f(\theta)d_{F}^{\alpha}\theta =\sup_{P[a,b]}\sum_{i=0}^{n-1}\inf_{\theta\in C(t_{i},t_{i+1})}f( \theta)(J(\theta_{i+1})-J(\theta_{i}))\] \[=\inf_{P[a,b]}\sum_{i=0}^{n-1}\sup_{\theta\in C(t_{i},t_{i+1})}f( \theta)(J(\theta_{i+1})-J(\theta_{i})), \tag{12}\]
_where \(t_{i}=\mathbf{w}^{-1}(\theta_{i})\) and \(f\) is a bounded function on a fractal curve \(F\)._
## 3 Solving Fractal Differential Equations by Method of Integrating Factor
In this section, we delve into the concept of differential equations on fractal curves and fractal sets. We start by considering an \(\alpha\)-order linear differential equation on a fractal curve \(F\subset\mathbb{R}^{n}\):
\[D_{F}^{\alpha}y(\theta)+p(\theta)y(\theta)=g(\theta),\quad\theta\in F, \tag{13}\]
where \(p\) and \(g\) are \(F\)-continuous functions defined on the fractal curve \(F\), with \(\varphi_{1}<\theta<\varphi_{2},\) and \(\varphi_{1},\varphi_{2}\in F\).
**Definition 1**: _Let \(\psi:F\subset\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a function. If \(\psi\) has a fractal \(F^{\alpha}\)-derivative at each point \(\theta\in F\) and, when substituted into Eq. (13), satisfies it, then \(\psi\) is called a solution of the \(\alpha\)-order differential equation._
**Theorem 1**: _(Method of the integrating factor)_
_Let \(F\subset\mathbb{R}^{n}\) be a fractal curve. Then there exists a fractal \(F^{\alpha}\)-differentiable function defined on \(F\), called the integrating factor, such that all the solutions of Eq. (13) are expressed by:_
\[y(\theta)=\frac{\int\mu(\theta)g(\theta)d_{F}^{\alpha}(\theta)\,+\,J(c)}{\mu( \theta)}. \tag{14}\]
_Here \(\mu(\theta)\) is the integrating factor and \(J(c)\) is an arbitrary constant._
**Proof 1**: _To solve Eq. (13), we introduce an integrating factor \(\mu(\theta)\) and multiply both sides of the equation by it:_
\[\mu(\theta)D_{F}^{\alpha}y(\theta)+\mu(\theta)p(\theta)y(\theta)=\mu(\theta)g( \theta). \tag{15}\]
_For this modified equation to hold, we require the following relationship:_
\[D_{F}^{\alpha}\mu(\theta)=p(\theta)\mu(\theta). \tag{16}\]
_Assuming \(\mu(\theta)>0\), we can express Eq. (16) as:_
\[\frac{D_{F}^{\alpha}\mu(\theta)}{\mu(\theta)}=p(\theta). \tag{17}\]
_By applying fractal integration, we arrive at the integral equation:_
\[\ln(\mu(\theta))=\int p(\theta)d_{F}^{\alpha}\theta+J(k), \tag{18}\]
_where \(J(k)\) is an arbitrary constant of integration. Setting \(J(k)=0\), we obtain the expression for the integrating factor:_
\[\mu(\theta)=\exp\left(\int p(\theta)d_{F}^{\alpha}\theta\right). \tag{19}\]
_After determining \(\mu(\theta)\), we substitute it back into Eq. (15), yielding:_
\[D_{F}^{\alpha}(\mu(\theta)y(\theta))=\mu(\theta)g(\theta). \tag{20}\]
_Integrating both sides of the equation using fractal integration, we arrive at the solution for the original differential equation (13):_
\[y(\theta)=\frac{\int\mu(\theta)g(\theta)d_{F}^{\alpha}\theta+J(c)}{\mu(\theta)}, \tag{21}\]
_which completes the proof._
**Remark 2**: _By the previous theorem it follows that there are infinitely many functions \(y(\theta)\) that satisfy the given \(\alpha\)-order fractal differential equation on the fractal curve \(F\)._
**Example 1**: _Consider an \(\alpha\)-order fractal differential equation on fractal curve \(F\) of the form:_
\[D_{F}^{\alpha}y(\theta)=ry(\theta)+k, \tag{22}\]
_where \(r\) and \(k\) are constants. By Equation (21), we can determine the infinite solutions for the given equations (22) as follows:_
\[y(\theta)=-\frac{k}{r}+c\exp(rJ(\theta)), \tag{23}\]
_where \(c\) is an integration constant, and \(J(\theta)\) arises from the fractal integration process._
**Theorem 2**: _Let \(p(\theta)\) and \(g(\theta)\) be two \(F\)-continuous functions defined on a fractal curve \(F\) with \(\varphi_{1}<\theta<\varphi_{2}\) and \(\varphi_{1},\varphi_{2}\in F.\) Let \(\theta_{0}\in(\varphi_{1},\varphi_{2})\). Then for each \(y_{0}\in\mathbb{R}\) there exists a unique solution \(y=\psi(\theta)\), defined at least in a neighborhood of \(\theta_{0}\), of the following \(\alpha\)-order fractal differential equation:_
\[D_{F}^{\alpha}y(\theta)+p(\theta)y(\theta)=g(\theta), \tag{24}\]
_with the initial condition \(y(\theta_{0})=y_{0}\)._
**Proof 2**: _Let us denote by \(\Lambda(\theta)\) a primitive of \(p(\theta)\). The existence of a solution of the given \(\alpha\)-order fractal differential equation was already established by the integrating factor method discussed above. Therefore, setting \(\mu(\theta)=e^{\Lambda(\theta)}\), we arrive at:_
\[y(\theta)=\,e^{-\Lambda(\theta)}\left(\int\,\,g(\theta)\,\mu(\theta)\,d_{F}^{ \alpha}(\theta)\,+\,c\right), \tag{25}\]
_where \(c\) is a constant of integration. Now, replacing the initial condition \(y(\theta_{0})=y_{0}\) in the previous equation and choosing the primitive \(\Lambda(\theta)\) such that \(\Lambda(\theta_{0})=0,\,\,(i.e.\,\,\Lambda(\theta)=\int_{C(\theta_{0},\theta)} \,p(\tau)d_{F}^{\alpha}\tau)\) we get the following solution:_
\[y(\theta)=\,\,\,e^{-\Lambda(\theta)}\left(\int_{C(\theta_{0},\theta)}\mu( \tau)g(\tau)d_{F}^{\alpha}\tau\,\,+\,y_{0}\right). \tag{26}\]
_To prove the uniqueness of the solution, let us suppose by contradiction that there are two different solutions \(y_{1}(\theta)\) and \(y_{2}(\theta)\) of the Eq.(24) with the same initial condition \(y_{1}(\theta_{0})=y_{2}(\theta_{0})=y_{0}\)._
_Let \(z(\theta)=y_{1}(\theta)-y_{2}(\theta)\). Therefore, by the linearity of the \(\alpha\)-order differential equation, substituting \(z(\theta)\) into Eq. (24), we obtain:_
\[D_{F}^{\alpha}z(\theta)\,+\,p(\theta)z(\theta)=0. \tag{27}\]
_Now, it is straightforward to observe that the identically null function \(z(\theta)=0\) is a solution of \(D_{F}^{\alpha}z(\theta)\,+\,p(\theta)z(\theta)=0\) with the initial condition \(z(\theta_{0})=0.\) So, by the conjugacy of \(F^{\alpha}\)-calculus and the ordinary calculus [35], we can conclude that \(z(\theta)=0\) is the unique solution of_
\[D_{F}^{\alpha}z(\theta)\,+\,p(\theta)z(\theta)=0, \tag{28}\]
_with the initial condition \(z(\theta_{0})=0.\) Therefore, by the definition of \(z(\theta)\) we have \(y_{1}(\theta)=y_{2}(\theta)\), which contradicts the assumption that Eq.(24), with the initial condition \(y(\theta_{0})=y_{0},\) admits two different solutions._
**Example 2**: _Consider the fractal differential equation on a fractal set, expressed as_
\[D_{t}^{\alpha}y(t)+\frac{1}{2}y(t)=10+5\sin(2S_{F}^{\alpha}(t)) \tag{29}\]
_where the initial condition is given as \(y(0)=0\). To solve Eq.(29), we introduce the integrating factor \(\mu(t)=\exp(S_{F}^{\alpha}(t)/2)\). By multiplying Eq.(29) with this factor, performing fractal integration, and applying the initial condition, the resulting solution is:_
\[y(t) =20-\frac{40}{17}\cos(2S_{F}^{\alpha}(t))+\frac{10}{17}\sin(2S_{ F}^{\alpha}(t))-\frac{300}{17}\exp\left(-\frac{S_{F}^{\alpha}(t)}{2}\right)\] \[\propto 20-\frac{40}{17}\cos(2t^{\alpha})+\frac{10}{17}\sin(2t^{ \alpha})-\frac{300}{17}\exp\left(-\frac{t^{\alpha}}{2}\right). \tag{30}\]
Figure 1: Plot of Eq.(30) for different values of \(\alpha\)
_In Figure 1, we have plotted Eq. (30) for various values of \(\alpha\). This plot illustrates that as the dimension of the fractal set supporting the function increases, the solution exhibits greater oscillations._
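For readers who wish to reproduce this behavior, a minimal numerical sketch of Eq. (30) is given below, using the conjugacy \(S_{F}^{\alpha}(t)\approx t^{\alpha}\) adopted in the second line of Eq. (30); the time grid and the chosen \(\alpha\) values are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

def y_fractal(t, alpha):
    """Evaluate Eq. (30) with the conjugacy S_F^alpha(t) ~ t**alpha."""
    s = t**alpha
    return (20.0
            - (40.0 / 17.0) * np.cos(2.0 * s)
            + (10.0 / 17.0) * np.sin(2.0 * s)
            - (300.0 / 17.0) * np.exp(-s / 2.0))

t = np.linspace(0.0, 20.0, 1000)
for alpha in (0.5, 0.75, 1.0):
    plt.plot(t, y_fractal(t, alpha), label=f"alpha = {alpha}")

plt.xlabel("t")
plt.ylabel("y(t)")
plt.legend()
plt.show()
```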
### Fractal Bernoulli Differential Equation
The fractal version of the Bernoulli differential equation, using the fractal derivative, is expressed as:
\[D_{F}^{\alpha}y(\theta)+q(\theta)y(\theta)=r(\theta)y(\theta)^{\beta},\quad \theta\in F,\ \beta\in\mathbb{R}, \tag{31}\]
where \(q(\theta)\) and \(r(\theta)\) are \(F\)-continuous functions defined on a fractal curve. This equation employs the fractal derivative to describe the behavior of the function \(y\) on a fractal curve, allowing for the analysis and solution of differential equations on fractal geometries. Before giving a technique to solve the fractal Bernoulli differential equation, observe first of all that if \(\beta=0\) then \(y^{\beta}(\theta)=y^{0}(\theta)=1\) and Eq.(31) reduces to Eq.(13) with \(r(\theta)=g(\theta).\) If instead \(\beta=1,\) then \(y^{\beta}(\theta)=y(\theta)\) and Eq.(31) again takes the form of Eq.(13), now with \(p(\theta)=q(\theta)-r(\theta)\) and \(g(\theta)=0.\) In all other cases, to solve the fractal version of the Bernoulli differential equation we apply the integrating factor method already discussed. We also note that if \(\beta>0,\) then \(y(\theta)=0\) is a solution of Eq.(31). The solution method is the following: first divide both sides of the equation by \(y^{\beta}(\theta),\) thus obtaining
\[y^{-\beta}(\theta)\,(D_{F}^{\alpha}y(\theta)\,+\,q(\theta)\,y(\theta))\,=\,r (\theta), \tag{32}\]
subsequently set \(z(\theta)=y^{1-\beta}(\theta)\) and apply the fractal differentiation rule for composite functions: \(D_{F}^{\alpha}z(\theta)=(1-\beta)\,y^{-\beta}(\theta)\,D_{F}^{\alpha}y(\theta),\) so that Eq.(32) becomes the linear fractal differential equation
\[D_{F}^{\alpha}z(\theta)\,+\,(1-\beta)\,q(\theta)\,z(\theta)\,=\,(1-\beta)\,r (\theta). \tag{33}\]
Therefore apply the integrating factor method to Eq.(33) and set \(y(\theta)=(z(\theta))^{\frac{1}{1-\beta}}\,,\) so the solution of the fractal Bernoulli differential equation is:
\[y(\theta)=\left(\frac{(1-\beta)\int r(\theta)\,\mu(\theta)\,d_{F}^{\alpha}(\theta)\,+\,c}{\mu(\theta)}\right)^{\frac{1}{1-\beta}}, \tag{34}\]
where \(\Lambda(\theta)\) is a primitive of \(q(\theta)\) and \(\mu(\theta)=e^{(1-\beta)\Lambda(\theta)}.\)
**Example 3**: _Let us consider the following Fractal Bernoulli differential equation on a given fractal curve \(F\):_
\[D_{F}^{\alpha}y(\theta)\,=\,2\,\frac{y(\theta)}{S_{F}^{\alpha}(\theta)}\,+\, 2\,S_{F}^{\alpha}(\theta)\,\sqrt{y(\theta)}\]
_here \(\beta=1/2,\) so \(y(\theta)=0\) is a solution of the given Fractal Bernoulli differential equation. Let us suppose, now, that \(y(\theta)\neq 0.\) Let us divide both sides of the
equation by \(\sqrt{y(\theta)}\) and let us set \(z(\theta)=\sqrt{y(\theta)}.\) Thus we obtain the following \(\alpha\)-order linear differential equation on a given fractal curve \(F\):_
\[D_{F}^{\alpha}z(\theta)=S_{F}^{\alpha}(\theta)+\frac{z(\theta)}{S_{F}^{\alpha}( \theta)} \tag{35}\]
_According to the formula of the integrating factor method we have:_
\[z(\theta)=S_{F}^{\alpha}(\theta)(S_{F}^{\alpha}(\theta)\,+\,c). \tag{36}\]
_Therefore the solutions of the assigned Fractal Bernoulli differential equation are: \(y(\theta)=\left((S_{F}^{\alpha}(\theta))^{2}+c\,S_{F}^{\alpha}(\theta)\right)^ {2}\) and \(y(\theta)=0.\)_
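The solution of Example 3 can be checked numerically. The sketch below works directly in the conjugate variable \(S=S_{F}^{\alpha}(\theta)\), in which the fractal derivative reduces to an ordinary derivative, and compares a finite-difference derivative of \(y=(S^{2}+cS)^{2}\) with the right-hand side \(2y/S+2S\sqrt{y}\); the constant \(c\) and the grid are illustrative choices.

```python
import numpy as np

c = 1.5                                   # illustrative integration constant
S = np.linspace(0.5, 5.0, 2001)           # conjugate variable S = S_F^alpha(theta)
y = (S**2 + c * S) ** 2                   # candidate solution from Example 3

dy_dS = np.gradient(y, S)                 # finite-difference derivative dy/dS
rhs = 2.0 * y / S + 2.0 * S * np.sqrt(y)  # right-hand side of the Bernoulli equation

print("max relative error:", np.max(np.abs(dy_dS - rhs) / np.abs(rhs)))
```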
## 4 Solving Fractal Differential Equations by Method of Separation
The equation representing a separable \(\alpha\)-order fractal differential equation is given as [53]:
\[D_{F}^{\alpha}y(\theta)=\frac{d_{F}^{\alpha}y}{d_{F}^{\alpha}\theta}=f(\theta,y),\quad\theta\in F, \tag{37}\]
with the initial condition
\[y(\theta_{0})=y_{0}. \tag{38}\]
Here, \(f(\theta,y)\) is in general nonlinear with respect to \(y\). Equation (37) can be rearranged as:
\[M(\theta,y)+N(\theta,y)\frac{d_{F}^{\alpha}y}{d_{F}^{\alpha}\theta}=0, \tag{39}\]
where \(M(\theta,y)=-f(\theta,y)\) and \(N(\theta,y)=1\). By considering \(M(\theta,y)=M(\theta)\) and \(N(\theta,y)=N(y)\), we arrive at:
\[M(\theta)d_{F}^{\alpha}\theta+N(y)d_{F}^{\alpha}y=0. \tag{40}\]
This equation is referred to as a separable fractal differential equation. To solve (40), we introduce functions \(H_{1}(\theta)\) and \(H_{2}(y)\) such that \(D_{F}^{\alpha}H_{1}(\theta)=M(\theta)\) and \(D_{F}^{\alpha}H_{2}(y)=N(y).\) Then Eq. (40) takes the following form:
\[D_{F}^{\alpha}H_{1}(\theta)+D_{F}^{\alpha}H_{2}(y)\frac{d_{F}^{\alpha}y}{d_{F }^{\alpha}\theta}=0. \tag{41}\]
Now by fractal chain rule, we have:
\[D_{F}^{\alpha}H_{2}(y)\frac{d_{F}^{\alpha}y}{d_{F}^{\alpha}\theta}=\frac{d_{F }^{\alpha}}{d_{F}^{\alpha}\theta}H_{2}(y). \tag{42}\]
Consequently, by Eq. (41) and Eq. (42) we have:
\[\frac{d_{F}^{\alpha}}{d_{F}^{\alpha}\theta}[H_{1}(\theta)+H_{2}(y)]=0, \tag{43}\]
and applying fractal integration, we obtain:
\[H_{1}(\theta)+H_{2}(y)=c \tag{44}\]
The Eq.(44) is the implicit solution of the Eq.(40). Now by substituting the initial condition into Eq.(44) we get:
\[c=H_{1}(\theta_{0})+H_{2}(y_{0}). \tag{45}\]
Finally by replacing (45) into (44), we arrive at:
\[H_{2}(y)-H_{2}(y_{0})=\int_{y_{0}}^{y}N(s)d_{F}^{\alpha}s,\quad H_{1}(\theta)- H_{1}(\theta_{0})=\int_{C(\theta_{0},\theta)}M(s)d_{F}^{\alpha}s \tag{46}\]
This leads to:
\[\int_{y_{0}}^{y}N(s)d_{F}^{\alpha}s+\int_{C(\theta_{0},\theta)}M(s)d_{F}^{ \alpha}s=0 \tag{47}\]
which is the implicit solution of (40), satisfying the initial condition.
**Example 4**: _Let's consider the fractal differential equation on a fractal curve given by_
\[D_{F}^{\alpha}y(\theta)=\frac{3J(\theta)^{2}+4J(\theta)+2}{2(y-1)},\quad y(0)= -1. \tag{48}\]
_We can rewrite Eq.(48) as follows:_
\[2(y-1)d_{F}^{\alpha}y=(3J(\theta)^{2}+4J(\theta)+2)d_{F}^{\alpha}\theta. \tag{49}\]
_Moreover, by fractal integration with respect to \(y\) on the left side and with respect to \(\theta\) on the right side, we obtain:_
\[y^{2}-2y=J(\theta)^{3}+2J(\theta)^{2}+2J(\theta)+c. \tag{50}\]
_Finally, by using the initial condition \(y(0)=-1\) in Eq. (50) we get:_
\[y^{2}-2y=J(\theta)^{3}+2J(\theta)^{2}+2J(\theta)+3. \tag{51}\]
_Solving for \(y\) and choosing the branch consistent with the initial condition \(y(0)=-1\), we obtain:_
\[y(\theta)=1-\sqrt{J(\theta)^{3}+2J(\theta)^{2}+2J(\theta)+4}. \tag{52}\]
_In Figure 2, we have depicted the graphical representation of the solution to equation (48)._
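A quick numerical check of Eq. (52), again using the conjugacy \(J(\theta)\approx\theta^{\alpha}\) so that the computation can be carried out in the variable \(J\), is sketched below; the grid is an illustrative choice.

```python
import numpy as np

J = np.linspace(0.0, 3.0, 3001)                        # conjugate variable J(theta)
y = 1.0 - np.sqrt(J**3 + 2.0 * J**2 + 2.0 * J + 4.0)   # Eq. (52)

print("initial condition y(0) =", y[0])                # expected: -1

dy_dJ = np.gradient(y, J)
lhs = 2.0 * (y - 1.0) * dy_dJ                          # left-hand side of Eq. (49)
rhs = 3.0 * J**2 + 4.0 * J + 2.0                       # right-hand side of Eq. (49)
print("max relative error:", np.max(np.abs(lhs - rhs) / rhs))
```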
## 5 Applications
In this section, we explore practical applications of fractal differential equations.
### Fractal Compound Interest
Consider a scenario where a sum of money is deposited in a bank, and both deposits and withdrawals occur at a constant rate \(k\)[54]. The value \(p(t)\) of the investment over time represents this situation. The rate of change of \(p(t)\) in a fractal time context is given by the equation:
\[D_{F}^{\alpha}p(t)=rp(t)+k,\hskip 28.452756ptt\in F,\hskip 14.226378ptr,k\in \mathbb{R}. \tag{53}\]
Here, \(r\) represents the annual interest rate and \(k\) the constant rate of deposits or withdrawals. The initial condition is \(p(0)=p_{0}\). The solution to Eq.(53) is derived as:
\[p(t) =p_{0}\exp(rS_{F}^{\alpha}(t))+\frac{k}{r}(\exp(rS_{F}^{\alpha}(t))-1) \tag{54}\] \[\propto p_{0}\exp(rt^{\alpha})+\frac{k}{r}(\exp(rt^{\alpha})-1). \tag{55}\]
This solution showcases how the investment's value changes over fractal time, with implications for compound interest calculations.
Figure 2: Graph of Eq.(52).
In Figure 3, we illustrate the impact of the fractal time dimension on the growth of investment.
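The influence of \(\alpha\) on Eq. (55) can be illustrated with the short sketch below; the initial capital, interest rate, deposit rate, and the set of \(\alpha\) values are illustrative assumptions.

```python
import numpy as np

def balance(t, alpha, p0=1000.0, r=0.05, k=100.0):
    """Eq. (55): investment value under fractal time, with S_F^alpha(t) ~ t**alpha."""
    s = t**alpha
    return p0 * np.exp(r * s) + (k / r) * (np.exp(r * s) - 1.0)

years = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
for alpha in (0.6, 0.8, 1.0):
    print(f"alpha = {alpha}:", np.round(balance(years, alpha), 2))
```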
### Escape Velocity in Fractal Space and Time
The concept of escape velocity, a fundamental aspect of physics, pertains to the minimum initial velocity required for an object to overcome a celestial body's gravitational pull [54]. By introducing the concept of fractal space and time, we extend the exploration of escape velocity to an innovative framework.
Assuming the absence of other forces and incorporating Newton's law, the equation of motion in fractal space and time can be expressed as:
\[mD_{t}^{\alpha}v=-\frac{mgR^{2}}{(R+x)^{2}},\quad v(0)=v_{0}. \tag{56}\]
Here, \(m\) represents the mass of the object, \(R\) is the radius of the celestial body (such as Earth), \(x\) is the distance between the object and the celestial body, and \(g\) signifies the acceleration due to gravity. The equation (56) is then transformed using the fractal chain rule to obtain:
\[vD_{x}^{\alpha}v=-\frac{gR^{2}}{(R+x)^{2}}. \tag{57}\]
Solving this equation involves separating variables and performing fractal integration, resulting in the equation:
\[\frac{S_{F}^{\alpha}(v)^{2}}{2}=\frac{gR^{2}}{R+S_{F}^{\alpha}(x)}+c. \tag{58}\]
Figure 3: Investment growth for different values of \(\alpha\)
Utilizing the initial conditions \(x=0\) and \(v=v_{0}\), the maximum altitude reached by the object can be determined as:
\[x_{\max}=\frac{S_{F}^{\alpha}(v_{0})^{2}R}{2Rg-S_{F}^{\alpha}(v_{0})^{2}}. \tag{59}\]
To find the initial velocity \(S_{F}^{\alpha}(v_{0})\) required to elevate the object to the altitude \(x_{\max}\), the equation \(S_{F}^{\alpha}(v)=0\) is employed, yielding:
\[S_{F}^{\alpha}(v_{0})=\sqrt{2gR\frac{x_{\max}}{R+x_{\max}}}. \tag{60}\]
As the concept of escape velocity extends to fractal space, the fractal escape velocity \(S_{F}^{\alpha}(v_{e})\) is determined by allowing \(x_{\max}\) to approach infinity:
\[S_{F}^{\alpha}(v_{e})=\sqrt{2gR}, \tag{61}\]
or equivalently,
\[v_{e}\propto(2gR)^{1/(2\alpha)}. \tag{62}\]
This study delves into the idea of escape velocity within the context of fractal space and time, offering fresh perspectives on the dynamics of entities in dimensions marked by intricacy and self-replication.
As illustrated in Figure 4, we observe a pattern where reducing the spatial dimension necessitates an increase in escape velocity.
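The trend shown in Figure 4 follows directly from Eq. (62); a short sketch with Earth-like constants (illustrative inputs) is given below.

```python
g = 9.81        # m/s^2
R = 6.371e6     # m, radius of the Earth

for alpha in (1.0, 0.9, 0.8, 0.7):
    v_e = (2.0 * g * R) ** (1.0 / (2.0 * alpha))   # Eq. (62)
    print(f"alpha = {alpha}: v_e ~ {v_e:.3e}")     # escape velocity grows as alpha decreases
```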
### Fractal Newton's Law of Cooling
Newton's Law of Cooling is a fundamental principle in thermodynamics and heat transfer that describes how the rate of heat transfer between an object and
Figure 4: Escape Velocity vs. \(\alpha\)
its surroundings changes over time. It's commonly used to model the cooling or heating of an object through conduction, convection, or radiation [55, 56].
The concept of the fractal Newton's Law of Cooling can be presented in the following manner:
\[D_{t}^{\alpha}T=-k(T-T_{s}). \tag{63}\]
where \(D_{t}^{\alpha}T\) signifies the fractal time derivative of the temperature, \(T\) is the temperature of the object at any given time, \(T_{s}\) is the temperature of the surrounding environment, and \(k>0\) is the cooling or heating rate coefficient.
In this context, the rate of temperature change with respect to time is described through a fractal time derivative. This approach provides a unique lens through which to comprehend how objects interact with their surroundings, considering the intricacies of fractal time.
### Estimation of Time of Death
As an example, let us estimate the time of death for a body. We assume that the body is discovered at time \(t=0\) with temperature \(T_{0}\), and that at the time of death \(t_{d}\) its temperature was \(T_{d}\). By utilizing the cooling law, we can determine \(t_{d}\) by solving Eq. (63), whose solution can be represented as:
\[T(t)=T_{s}+(T_{0}-T_{s})\exp(-kS_{F}^{\alpha}(t)). \tag{64}\]
Here, \(T(0)=T_{0}\). If we measure the temperature of the deceased body at time \(t=t_{1}\) and find \(T=T_{1}\), we can use Eq. (64) to derive the equation:
\[T_{1}-T_{s}=(T_{0}-T_{s})\exp(-kS_{F}^{\alpha}(t_{1})). \tag{65}\]
From this equation, we can deduce:
\[k=-\frac{1}{S_{F}^{\alpha}(t_{1})}\ln\frac{T_{1}-T_{s}}{T_{0}-T_{s}}\propto- \frac{1}{t_{1}^{\alpha}}\ln\frac{T_{1}-T_{s}}{T_{0}-T_{s}}. \tag{66}\]
By substituting \(t=t_{d}\) and \(T=T_{d}\) in the Eq. (64), we have:
\[S_{F}^{\alpha}(t_{d})=-\frac{1}{k}\ln\frac{T_{d}-T_{s}}{T_{0}-T_{s}}. \tag{67}\]
This equation can also be written as:
\[t_{d}=\bigg{|}-\frac{1}{k}\ln\frac{T_{d}-T_{s}}{T_{0}-T_{s}}\bigg{|}^{1/\alpha}. \tag{68}\]
This methodology provides a means for estimating the time of death based on temperature measurements and the principles of the fractal Newton's Law of Cooling.
Figure 5 illustrates that during the initial 2 hours, the cooling rate of the deceased body is faster in the fractal time model compared to the standard time case. However, this trend reverses beyond the 2-hour mark.
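The estimation procedure of Eqs. (66)-(68) can be summarized in the short sketch below; the ambient, discovery, later-measurement, and death temperatures, the measurement time, and the \(\alpha\) values are all illustrative assumptions.

```python
import math

T_s, T_0, T_1, T_d = 20.0, 30.0, 28.0, 37.0   # surroundings, discovery, later measurement, at death
t_1 = 2.0                                     # hours between the two measurements

for alpha in (1.0, 0.9, 0.8):
    k = -(1.0 / t_1**alpha) * math.log((T_1 - T_s) / (T_0 - T_s))                 # Eq. (66)
    t_d = abs(-(1.0 / k) * math.log((T_d - T_s) / (T_0 - T_s))) ** (1.0 / alpha)  # Eq. (68)
    print(f"alpha = {alpha}: k = {k:.4f}, estimated time since death = {t_d:.2f} h")
```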
## 6 Conclusion
In conclusion, this study presented a comprehensive exploration of solution methods in the realm of fractal calculus. By investigating fractal analogues of the separation-of-variables method and the integrating-factor technique, we addressed \(\alpha\)-order differential equations. An intriguing extension of our analysis led to the resolution of fractal Bernoulli differential equations, further broadening the scope of our inquiry. The practical implications of our findings were exemplified through their application to real-world problems. From fractal compound interest to the escape velocity of the Earth in fractal space and time, and the estimation of the time of death under fractal time, our research showcased the versatility of fractal calculus in tackling complex scenarios. To enhance the accessibility of our work, we provided visual representations of our results. These aids not only conveyed our findings more effectively but also supported a deeper understanding of the intricate concepts discussed. A holistic perspective on the applications of fractal calculus has been offered, demonstrating its adaptability and utility in solving diverse problems across various fields.
**Declaration of Competing Interest:**
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
**CRediT author statement:**
Alireza K. Golmankhaneh: Investigation, Methodology, Software, Writing - Original draft preparation. Donatella Bongiorno: Investigation, Writing - Reviewing
Figure 5: Estimation of Time of Death for Different \(\alpha\) Values
and Editing.
**Declaration of generative AI and AI-assisted technologies in the writing process.** During the preparation of this work the authors used GPT in order to correct grammar and writing. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
|
2304.05390
|
HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image
Models
|
In recent years, Text-to-Image (T2I) models have been extensively studied,
especially with the emergence of diffusion models that achieve state-of-the-art
results on T2I synthesis tasks. However, existing benchmarks heavily rely on
subjective human evaluation, limiting their ability to holistically assess the
model's capabilities. Furthermore, there is a significant gap between efforts
in developing new T2I architectures and those in evaluation. To address this,
we introduce HRS-Bench, a concrete evaluation benchmark for T2I models that is
Holistic, Reliable, and Scalable. Unlike existing benchmarks that focus on
limited aspects, HRS-Bench measures 13 skills that can be categorized into five
major categories: accuracy, robustness, generalization, fairness, and bias. In
addition, HRS-Bench covers 50 scenarios, including fashion, animals,
transportation, food, and clothes. We evaluate nine recent large-scale T2I
models using metrics that cover a wide range of skills. A human evaluation
aligned with 95% of our evaluations on average was conducted to probe the
effectiveness of HRS-Bench. Our experiments demonstrate that existing models
often struggle to generate images with the desired count of objects, visual
text, or grounded emotions. We hope that our benchmark helps ease future
text-to-image generation research. The code and data are available at
https://eslambakr.github.io/hrsbench.github.io
|
Eslam Mohamed Bakr, Pengzhan Sun, Xiaoqian Shen, Faizan Farooq Khan, Li Erran Li, Mohamed Elhoseiny
|
2023-04-11T17:59:13Z
|
http://arxiv.org/abs/2304.05390v2
|
# HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image Models
###### Abstract
In recent years, Text-to-Image (T2I) models have been extensively studied, especially with the emergence of diffusion models that achieve state-of-the-art results on T2I synthesis tasks. However, existing benchmarks heavily rely on subjective human evaluation, limiting their ability to holistically assess the model's capabilities. Furthermore, there is a significant gap between efforts in developing new T2I architectures and those in evaluation. To address this, we introduce HRS-Bench, a concrete evaluation benchmark for T2I models that is **H**olistic, **R**eliable, and Scalable. Unlike existing benchmarks that focus on limited aspects, HRS-Bench measures 13 skills that can be categorized into five major categories: accuracy, robustness, generalization, fairness, and bias. In addition, HRS-Bench covers 50 scenarios, including fashion, animals, transportation, food, and clothes. We evaluate nine recent large-scale T2I models using metrics that cover a wide range of skills. A human evaluation aligned with 95% of our evaluations on average was conducted to probe the effectiveness of HRS-Bench. Our experiments demonstrate that existing models often struggle to generate images with the desired count of objects, visual text, or grounded emotions. We hope that our benchmark helps ease future text-to-image generation research. The code and data are available at [https://eslambakr.github.io/hrsbench.github.io/](https://eslambakr.github.io/hrsbench.github.io/).
## 1 Introduction
Text-to-Image Synthesis (T2I), one of the essential multi-modal tasks, witnessed remarkable progress starting from conditional GANs [58, 44, 74, 29, 78], which are shown to work on simple datasets [46, 69, 72, 36], to recently diffusion models [20, 61, 18, 10, 57, 55, 77, 45, 59], which are trained on large-scale datasets, e.g., LAION [63, 62].
Despite the rapid progress, the existing models face several challenges, e.g., they cannot generate complex scenes with the desired objects and relationship composition [25, 37]. Furthermore, assessing the T2I models should include more than just fidelity, e.g., the ability to compose multiple objects and generate emotionally grounded or creative images. Therefore, some recent efforts are focusing on improving the existing metrics [23] or proposing new metrics that cover new aspects, such as bias [75], compositions [37, 25, 49]. Moreover, some other works propose new benchmarks, summarized in Table 1, that assess different aspects, e.g., counting [61, 12, 51], social-bias [12], and object fidelity [23, 51]. Even with various benchmarks available, they tend to only cover a limited range of aspects while overlooking crucial evaluation criteria such as robustness,
Figure 1: An overview of our proposed benchmark, HRS-Bench, measures 13 skills which could be grouped into five major categories; accuracy, robustness, generalization, fairness, and bias.
fairness, and creativity.
To bridge this gap, we propose our Holistic, Reliable, and Scalable benchmark, dubbed HRS-Bench. In contrast to existing benchmarks, we measure a wide range of different generative capabilities, precisely 13 skills which can be grouped into five major categories, as demonstrated in Figure 1; accuracy, robustness, generalization, fairness, and bias. Most of these skills have never been explored in the T2I context, such as creativity, fairness, anonymization, emotion-grounding, robustness, and visual-text generation. Even the other previously explored skills were studied from a limited perspective, for instance, DALL-EVAL [12] studied the social bias by generating limited template-based prompts; only 145 prompts. This limited evaluation scope may result in immature or sometimes misleading conclusions. In addition, to facilitate the evaluation process for existing and future architectures, we heavily rely on automatic evaluations, where a wide range of metrics are utilized in the evaluation criteria. Moreover, HRS-Bench covers 50 scenarios, e.g., fashion, animals, transportation, and food. Figure 2 demonstrates the top 15 applications and their object distribution. We evaluate nine recent large-scale T2I models, i.e., Stable-Diffusion V1 [59] and V2 [3], DALL-E 2 [55], GLIDE [45], CogView-V2 [21], Paella [57], minDALL-E [2], DALL-E-Mini [1], and Struct-Diff [25]. In addition, our benchmark is scalable with automatic evaluation, and thus can be extended for any new architectures. To probe the effectiveness of our HRS-Bench, we conduct a human assessment that aligns well with our evaluations by 95% on average. Our contributions can be summarized as follows:
* We develop a Holistic, Reliable, and Scalable T2I benchmark called HRS-Bench, depicted in Figure 1, which assesses 13 skills covering 50 scenarios.
* Propose a new T2I alignment metric, called AC-T2I, which overcomes the composition limitations of existing large Vision-Language Models (VLMs) [31, 73].
* Nine T2I models are assessed based on our benchmark, including commercial and open-sourced ones.
* We verify the effectiveness of our HRS-Bench metric by conducting a human evaluation for 10% of our data per skill that shows excellent alignment.
* Driven by these holistic evaluations, several conclusions and findings are discussed. For instance, existing models often struggle to generate images with the desired count of objects, visual text, or grounded emotions.
## 2 Revisiting Text-to-Image Benchmarks
Recently, there have been rapid efforts toward designing better image-generation models [20, 61, 18, 10, 57, 55, 77, 45, 59]. However, most existing metrics suffer from several limitations that make them unreliable [23, 7, 22, 6, 75, 49, 66, 73]. Table 1 summarizes the existing T2I benchmarks.
**-DrawBench.** Imagen [61] proposes DrawBench to evaluate the T2I models from other aspects along with the image quality. Whereas, DrawBench covers four skills; counting, compositions, conflicting, and writing, by collecting 200 prompts in total. Despite the simplicity and the limited scope of DrawBench, their efforts are appreciated as the first attempt to assess other aspects rather than the image quality.
**-DALL-EVAL. [12]** It proposes a toolkit called PAINTSKILLS, which assesses three simple visual
| Method | # Models | # Skills | # Metrics | Human Eval | Auto Eval | # Prompts | Template Prompts | Human Prompts | Hardness Levels | # Annotators |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DrawBench [61] | 5 | 4 | 0 | ✓ | ✗ | 200 | ✗ | ✓ | ✗ | 25 |
| DALL-EVAL [12] | 4 | 5 | 3 | ✓ | ✗ | 7330 | ✓ | ✗ | ✗ | 6 |
| HE-T2I [51] | 2 | 3 | 0 | ✓ | ✗ | 90 | ✗ | ✓ | ✓ | 20 |
| TISE [23] | 7 | 3 | 5 | ✗ | ✓ | N/A | ✗ | ✓ | ✗ | N/A |
| **HRS-Bench (Ours)** | **9** | **13** | **17** | ✓ | ✓ | **45000** | ✓ | ✓ | ✓ | **1000** |

Table 1: Comparison of text-to-image benchmarks in terms of: 1) the number of evaluated models; 2) the number of covered skills; 3) the number of utilized metrics; 4) the evaluation type, whether human-based, metric-based, or both; 5) the number of prompts; 6) the prompt generation type, whether template-based, human-based, or both; 7) whether different hardness levels are included; 8) the number of annotators contributing to the evaluation.
Figure 2: Pie chart demonstrates the wide range of covered scenarios by our proposed benchmark, termed HRS-Bench, and their object distribution.
reasoning skills; object recognition, object counting, and spatial relation understanding, alongside two social bias skills; gender and racial bias. To facilitate the automatic evaluation, they built a unity simulator including limited objects to collect the datasets, approximately 80 object classes. In contrast, we cover more than 700 object classes. DETR [9] is utilized for visual reasoning skills evaluation after being fine-tuned on the synthetic dataset collected from the simulator. However, DALL-EVAL evaluates only four models using only 7k prompts and six annotators. Therefore, there is a need for a more comprehensive benchmark that can take into account a broader range of models, prompts, and annotators to provide a more thorough evaluation.
**-HE-T2I. [51]** It proposes 32 possible aspects to benchmark T2I models. However, only three are evaluated; counting, shapes, and faces, and the rest are left unexplored. The three aspects are evaluated using human evaluations, where twenty annotators have contributed to the evaluation.
**-TISE. [23]** It introduces a bag of metrics to evaluate T2I models from three aspects; positional alignment, counting, and fidelity. In addition, three fidelity metrics are introduced, i.e., \(IS^{*}\), \(OIS\), and \(OFID\), and two alignment metrics PA for positional alignment, and CA for counting alignment.
## 3 HRS-Bench
In this section, we first dissect the skills definition, then demonstrate the prompts collection pipeline.
### Skills
#### 3.1.1 Accuracy
**-Counting.** A reliable T2I model should be able to ground the details in the prompt. One form of these details is objects binding with a specific frequency, e.g., "four cars are parked around five benches in a park".
**-Visual Text.** Another essential aspect of assessing the model is generating high-quality text in wild scenes, e.g., "a real ballroom scene with a sign written on it, "teddy bear on the dining table!". The importance of this skill stems from its relevance to many scenarios, e.g., educational applications, preparing illustration content, and designing billboards.
**-Emotion.** We measure to what extent the model can generate emotion-grounded images [5, 4, 41, 40], e.g., "a rainy scene about cake, which makes us feel excitement."
**-Fidelity.** Image fidelity indicates how accurately an image represents the underlying source distribution [64].
#### 3.1.2 Robustness
To assess the T2I model's robustness, we cover two types of transformations; invariance and equivariance.
**Invariance.** Two skills are introduced to measure the invariance robustness: consistency and typos.
_-Consistency._ We measure the sensitivity of different T2I models towards prompt variations while keeping the same meaning and semantics, i.e., paraphrasing. For instance, generated images from these prompts, "a woman is standing in front of a mirror, carefully selecting the perfect handbag" and "in front of a mirror, a woman is selecting the perfect handbag for the day" should hold the same semantics.
_-Typos._ Two natural perturbations are utilized to assess the models against plausible input noise that users could introduce during inference, i.e., typos and incorrect capitalization.
**Equivariance.** Three different compositions are explored for the equivariance robustness. Specifically, we study three types of compositions, i.e., spatial, attribute-specific, and action compositions.
_-Spatial composition._ In contrast to the counting skill which only measures the models' ability to compose multiple objects into a coherent scene, spatial composition additionally measures their ability to ground the detailed spatial relationship instructions mentioned in the input prompt, e.g., "a person and a dog in the middle of a cat and a chair".
_-Attribute-specific composition._ Two types of attributes are controlled to study the attribute binding ability, i.e., colors and size attributes. For instance, "an orange cat, a red dog, and a blue chair" and "a banana which is smaller than a person and bigger than a car", for colors and size attribute binding, respectively.
_-Action composition._ It incorporates different subjects that doing different actions, e.g., "a woman is playing in the water, and an elephant is walking through woods".
#### 3.1.3 Generalization
**-Creativity.** In this skill, models aim to generate images that not only represent the textual description but are also imaginative and novel. The creativity skill can be regarded as out-of-distribution generation [30]. Accordingly, we devised innovative text prompts that are conceptually plausible but may not be readily available in standard training data sources, detailed later in Section 3.2.
#### 3.1.4 Fairness
We define fairness as the performance disparity among different sub-groups [24, 52]. A fair model should achieve the same performance on an arbitrary metric since no correlation exists between the metric and the protected attribute. Two attributes have been studied, i.e., gender and style. Following [12], gender refers to sex [32, 54] not the gender identity [43, 17]. We use two gender categories; male and female. The styles are animation, real, sketch, black and white, and weather conditions; sunny, rainy, and cloudy.
#### 3.1.5 Bias
We assess the spurious correlation of the model towards pre-defined attributes, i.e., gender, race, and age. Using agnostic prompts towards a specific attribute, e.g., gender, the model should produce balanced generations of different classes of this attribute. For instance, the gender agnostic prompt could be, "Two persons are working on their laptops".
### Prompts Collection
For each skill, we collect 3k prompts. To ensure our benchmark is holistic enough, we split the prompts equally into three hardness levels, i.e., easy, medium, and hard. In addition, half of the prompts are human-based, and the other half is template-based, depicted in Figure 3. We filter human prompts manually from existing datasets [70, 33, 31]. We use the foundation model GPT-3.5 [47], text-davinci-003+ to facilitate prompts generation, which will be abbreviated as GPT-3.5 later for convenience.
**-Fidelity.** The human prompts are sifted from [70]. Whereas, the template-based prompts are created by defining a template that describes a styled scene that contains some objects, as shown in Figure 3. Then, we create the meta-prompt by sampling the styles from pre-defined styles and the objects from LVIS dataset [26]. Finally, GPT-3.5 [47] is utilized to generate the final prompts.
**-Consistency and Typos.** Consequently, the fidelity prompts are fed to Parrot [14] and NLU augmenter [19] to produce augmented prompts for consistency and typos, respectively. For consistency, we differentiate between the three hardness levels based on the similarity between the fidelity prompt and the augmented prompts using RoBERTa [38]. For typos, the number of introduced typos controls the three hardness levels, i.e., 1-2, 3-4, and 5-6, respectively.
**-Counting.** Given a meta-prompt, GPT-3.5 generates a realistic scenario that contains N objects. To generate the meta-prompts, we randomly sample the number of objects and the object classes from the LVIS dataset [26].
**-Visual Text.** We utilize GPT-3.5 to generate short descriptions which fit on a sign in a crowded scene. Then, we control the hardness levels by the text length and the surrounding scene complexity.
**-Emotion.** We sample random objects from LVIS [26], then append an emotion indicator word forming the meta-prompt. Finally, GPT-3.5 is utilized to generate the final prompts.
**-Creativity.** We craft text prompts that are challenging yet still within the realm of imagination. For the easy level, we obtain subjects, objects, and relationships from Visual Genome [33] and form triplets by different combinations. Then we sift through all the combinations against triplets extracted from the LAION [62] dataset and retain only the uncommon triplets. For the medium level, we feed the uncommon triplets to GPT-3.5 with the instruction: "Describe subject, relation and object in an imaginative way that will never be seen in the real world" and manually filter the undesirable sentences. Finally, to generate challenging prompts for the hard level, we experiment with various prompts to encourage GPT-3.5 to generate counterproductive sentences, as shown in Figure 3.
**-Compositionality.** We study three composition types, i.e., spatial, attribute-binding, and actions. The spatial prompts are collected using a pre-defined template, where a wide range of relations is utilized, e.g., "on the right of", "above", and "between." For attribute-binding, two attributes are exploited, i.e., colors and size. For each hardness level, the number of objects' compositions increased, ranging from 2 to 4. For the action-level compositional generation, we design prompts with multiple combinations of actions starting from ComCLIP [31]. We combine two sentences from ComCLIP [31] for the easy level. Then, for medium and hard levels, we randomly choose one sentence and feed 'Extend text to let the subject have at least three actions.' and 'Extend text with other subjects doing other actions' into GPT-3.5, respectively, to obtain the final prompt. Detailed examples are demonstrated in Figure 3.
**-Bias.** Random objects are sampled from LVIS datasets [26], combined with a pre-defined template, creating a meta-prompt. Then, the meta-prompt is fed to GPT-3.5 to produce the final prompt, as depicted in Figure 3. To ensure the prompts are agnostic towards the protected attributes, i.e., gender, race, and age, we manually validate them.
**-Fairness.** We adapt the bias prompts. For gender fairness, we replace gender-agnostic words, such as a person, with gender-specific words, such as man and woman. Whereas for style fairness, a style indicator is appended to the beginning of the bias prompt, as shown in Figure 3.
## 4 Evaluation for our Benchmark
As shown in Figure 4, we categorize the skills based on the evaluation criteria, one-to-many mapping. Thus the same skill may be assigned to several metrics.
### Detection-Based Metrics
We utilize UniDet [76] for counting, spatial and attribute compositions because it supports a wide range of object classes, i.e., exceeding 700.
**-Counting.** We adopt the traditional detection measures, Precision, Recall, and F1-score, where Precision penalizes additional (spurious) objects and Recall penalizes missing objects.
**-Spatial Compositions.** Using a simple geometry module, we use the predicted bounding boxes to validate whether the spatial relation is grounded correctly. For instance, given the prompt "A cat above a car.", the predicted bounding boxes will be \(\{x^{1}_{min},y^{1}_{min},x^{1}_{max},y^{1}_{max}\}\)
and \(\{x_{min}^{2},y_{min}^{2},x_{max}^{2},y_{max}^{2}\}\) for the cat and the car, respectively, and the grounded spatial relation is above. Then, our geometry module will assess whether the spatial relation, i.e., above, is grounded correctly based on the following condition: (\(y_{min}^{1}<y_{min}^{2}\)) or (\(y_{max}^{1}<y_{max}^{2}\)).
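A minimal sketch of such a geometry check is shown below; the box format and function name are illustrative, and only the 'above' relation used in the example is implemented.

```python
def is_above(box_a, box_b):
    """Check whether object A is grounded above object B.

    Boxes are (x_min, y_min, x_max, y_max) in image coordinates, where a smaller y
    means higher in the image. Mirrors the condition in the text:
    (y1_min < y2_min) or (y1_max < y2_max).
    """
    _, ya_min, _, ya_max = box_a
    _, yb_min, _, yb_max = box_b
    return ya_min < yb_min or ya_max < yb_max

# "A cat above a car."
cat_box = (120, 40, 220, 140)
car_box = (100, 160, 300, 320)
print(is_above(cat_box, car_box))  # True
```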
**-Attributes Compositions.** The predicted bounding boxes' sizes are used for the size composition to validate whether the size relation is grounded correctly. Whereas for color composition, first, we convert the image to the hue color space, then calculate the average hue value within the box and compare it to the pre-defined color space.
**-Visual Text.** We adopt Textsnake [39] and SAR [34] for text detection and recognition, respectively. The recognition accuracy is measured by the Character Error Rate (CER) [42] and the Normalized Edit Distance (NED) [65].
### Alignment-Based Metrics
Three alignment paradigms are explored; Text-to-Image (T2I) (Sec. 4.2.1), Text-to-Image-to-Text (TIT) (Sec. 4.2.3), and Image-to-Image I2I (Sec. 4.2.4). In addition, we introduce our novel Augmented Captioner-based **T2I** Alignment metric, termed **AC-T2I** (Sec. 4.2.2).
#### 4.2.1 T2I Alignment
One possible solution to assess the T2I model's grounding ability is measuring the text and image correlation, e.g., CLIPScore [27, 53]. While CLIP is widely used, its effectiveness has been repeatedly questioned, as it is not sensitive to fine-grained text-image alignment and fails to understand compositions [66, 73]. For instance, [73] shows that CLIP [53] can not distinguish between "the horse is eating the grass" and "the grass is eating the horse". This motivates us to propose our novel augmented captioner-based T2I alignment metric, termed AC-T2I, depicted in Figure 4.
#### 4.2.2 AC-T2I Alignment Metric.
We propose a new T2I alignment metric, called AC-T2I, which overcomes the compositional relationship's limitations of existing large Vision-Language Models (VLMs) [31, 73], by utilizing the n-grams based metric, e.g., CIDEr [68] and BLEU [48]. To this end, we decompose our metric into two steps; first, we transform the image embedding to text space using an image captioning model, then augment the generated caption to make the metric comprehensive enough for different perturbations.
**-Reformatting T2I as TIT.** We reformat T2I as a TIT alignment by transforming the image features to text feature space, using an arbitrary function \(G(\cdot)\). The function \(G(\mathcal{I})\) could be interpreted as an image captioner, e.g., BLIP2 [35]. As shown in Figure 4, given a text prompt \(P^{org}\), \(N_{i}\) images \(\mathcal{I}=\{I_{k}\}_{k=1}^{N_{i}}\) are generated, which are fed to an image captioner \(G(\mathcal{I})\) producing \(N_{c_{i}}\) captions \(\mathcal{C}=\{C_{k}\}_{k=1}^{N_{c_{i}}}\), where \(N_{c_{i}}\) is the number of generated captions per image. Finally, the \(N_{c_{i}}\) captions are automatically evaluated using CIDEr [68] and BLEU [48] against the input prompt \(P^{org}\).
**-Comprehensive TIT.** Instead of considering only the prompt \(P^{org}\) as the GT caption, \(N_{t}\) augmented prompts
Figure 4: On the left is our evaluation taxonomy. On the right, we demonstrate our metric Augmented Captioner-based T2I alignment metric.
Figure 3: Our prompt generation pipeline. First, we create a meta-prompt, which is a template-based prompt (in blue). Then, we sample the skill-related attributes (in orange). Finally, we generate the final prompt using ChatGPT (in green).
\(\mathcal{P}^{aug}=\{P_{k}^{aug}\}_{k=1}^{N_{t}}\) are generated using GPT-3.5, to measure the similarities comprehensively. To this end, we must ensure the GT is holistic enough; therefore, the rephrased version of the prompt \(P^{org}\) should be considered correct. Accordingly, the whole GT-prompt set for each image is defined as \(\mathcal{P}=\{P^{org},\mathcal{P}^{aug}\}=\{P_{k}\}_{k=1}^{N_{t}+1}\), i.e., one original prompt plus \(N_{t}\) augmented prompts.
Finally, for each prompt \(P^{org}\) we calculate the alignment score for each generated prompt-caption pair and select the highest score as the final alignment score, Eq. 1.
\[O_{t}=\frac{1}{N_{i}}\sum_{i=1}^{N_{i}}\max_{1\leq j\leq N_{c_{i}},1\leq k\leq N _{t}+1}S_{t}(C_{i,j},P_{k})), \tag{1}\]
where \(S_{t}(,)\) is the text similarity scoring function, e.g., CIDEr [68], BLEU [48].
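To make the aggregation in Eq. (1) concrete, a schematic implementation is sketched below. The captioning and text-similarity functions are placeholders for the captioner and CIDEr/BLEU scorers used in the paper; their names and signatures are assumptions introduced only for illustration.

```python
from typing import Callable, List, Sequence

def ac_t2i_score(
    images: Sequence[object],                    # N_i generated images for one prompt
    gt_prompts: Sequence[str],                   # original prompt plus N_t augmented prompts
    caption_fn: Callable[[object], List[str]],   # placeholder captioner: image -> N_c captions
    sim_fn: Callable[[str, str], float],         # placeholder scorer, e.g. CIDEr or BLEU
) -> float:
    """Eq. (1): average over images of the best caption/prompt similarity."""
    per_image = []
    for image in images:
        captions = caption_fn(image)
        per_image.append(max(sim_fn(c, p) for c in captions for p in gt_prompts))
    return sum(per_image) / len(per_image)

# toy usage with trivial placeholders
toy_caption_fn = lambda image: [str(image)]
toy_sim_fn = lambda caption, prompt: float(caption == prompt)
print(ac_t2i_score(["a red car"], ["a red car", "a crimson automobile"],
                   toy_caption_fn, toy_sim_fn))  # 1.0
```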
#### 4.2.3 TIT Alignment
**-Emotions.** We explore a visual emotion classifier as illustrated in Sec. 4.3. Moreover, we apply our proposed metric, AC-T2I (Eq. 1), to avoid the aforementioned CLIP limitations, detailed in Section 4.2.1 and Section 4.2.2. The number of generated images per prompt \(N_{i}\), generated captions per image \(N_{c_{i}}\), and the augmented prompts \(N_{t}\) are set to 3, 5, and 9, respectively.
**-Creativity.** We assessed the generated images to deviate from the training data while simultaneously adhering to the provided text prompts using our novel metric AC-T2I and the deviation metric (Sec. 4.2.4). We set \(N_{i}\) and \(N_{c_{i}}\) to 3 and 5, respectively. Since it is hard to rephrase our novel prompts while maintaining its creative intent correctly, there will be no augmented prompts for it (\(N_{t}=0\)).
**-Gender and styles fairness and action compositions.** The fairness score is defined as the disparities in subgroups' performance [24, 52], Eq. 2
\[Fairness_{score}=\frac{1}{N_{s}C_{2}}\sum_{i=1}^{N_{s}}\sum_{j=i+1}^{N_{s}} \frac{100\times|A(i)-A(j)|}{max(A(i),A(j))}, \tag{2}\]
where \(\frac{100}{N_{s}C_{2}\times max(A(i),A(j))}\) is a normalization factor, \(N_{s}\) is the number of sub-groups, e.g., two for gender, and \(A\) is the accuracy measure, e.g., AC-T2I or CLIP scores. A lower \(Fairness_{score}\) is better. Consequently, for the action composition, we exploit the AC-T2I metric, where \(N_{i}\), \(N_{c_{i}}\), and \(N_{t}\) are set to 3, 5, and 9, respectively.
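The disparity measure of Eq. (2) reduces to a normalized average of pairwise relative gaps between sub-group accuracies; a minimal sketch follows, where the example accuracy values are illustrative.

```python
from itertools import combinations

def fairness_score(accuracies):
    """Eq. (2): average normalized pairwise disparity between sub-group accuracies (lower is better)."""
    pairs = list(combinations(accuracies, 2))
    return sum(100.0 * abs(a - b) / max(a, b) for a, b in pairs) / len(pairs)

# e.g. AC-T2I (or CLIP) scores for the 'male' and 'female' sub-groups
print(fairness_score([0.62, 0.60]))  # ~3.2, a small disparity
```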
#### 4.2.4 I2I Alignment
**-Creativity.** In addition to AC-T2I, we measure the deviation from the training data to indicate creativity. Accessing large models' training data is challenging, however, most of them are trained on LAION [63]. Accordingly, we use LAION image-text retrieval tools [8] to fetch training data, which search among the dataset using CLIP [53] and a KNN index to seek top-100 nearest images, denoted as \(\mathcal{I}^{train}\) for each prompt. The deviation score is calculated based on Eq. 3.
\[\triangle(\mathcal{I}^{train},I_{i})=\frac{1}{2}-\frac{1}{2N_{i}}\sum_{i=1}^{ N_{i}}S_{v}(\mathcal{I}^{train},I_{i}), \tag{3}\]
where \(S_{v}(,)\) is the visual similarity scoring function, i.e., CLIP [53], \(N_{i}\) number of generated images per prompt. The similarity can be regarded as the Nearest Neighbour (NN) distance from the training dataset, similar to [30].
**-Consistency and typos.** Given a prompt \(P^{org}\), augmented prompt \(\mathcal{P}^{aug}\) are generated using Parrot [14] for consistency and NLU-augmenter [19] for typos. Simultaneously, \(N_{i}\) images \(\mathcal{I}\) and \(N_{i}\) augmented images \(\mathcal{I}^{aug}\) are generated for \(P^{org}\) and \(\mathcal{P}^{aug}\), respectively. Then the cosine similarity is calculated based on Eq. 4.
\[O_{v}=\frac{1}{2N_{i}}\sum_{i=1}^{N_{i}}\sum_{j=1}^{N_{i}}S_{v}(I_{i},\mathcal{ I}^{aug}_{j}), \tag{4}\]
where \(S_{v}(,)\) is visual similarity scoring function; CLIP [53].
### Miscellaneous
**-Emotion.** To comprehensively measure a T2I model's ability to generate images with grounded emotional tones, three evaluation metrics are proposed; AC-T2I (Sec. 4.2.3), T2I alignment (Sec. 4.2.1) and visual emotion classification accuracy. Regarding the visual emotion classifier, we train a ResNet-101 classifier based on combined datasets; FI [71] and ArtEmis [5], to ensure the model can handle diverse domains and scenarios.
**-Fidelity.** We rely on human evaluation, using Amazon Mechanical Turk (AMT) [13]. The annotators are asked to rate each image from 1-5, where 1 is the worst and 5 is the best. For a fair comparison, all the models' output images are shown in the same grid.
**-Bias.** Three bias attributes are assessed, i.e., gender, race, and age. First, the human faces are detected using ArcFace [15], and RetinaFace [16], then the facial attributes are detected using Dex [60]. Finally, the bias score is defined as the distribution skew, i.e., mean absolute deviation (MAD) [50]; Eq. 5, where the balanced case is \(\frac{1}{N_{b}}\).
\[MAD=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|\hat{N}_{i}-\frac{1}{N_{b}}\right|, \tag{5}\]
where \(N_{b}\) is the number of protected attribute groups, e.g., 2 genders, and \(\hat{N}_{i}\) is the normalized count of group \(i\) obtained from the Dex output.
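Since Eq. (5) is simply the mean absolute deviation of the detected attribute distribution from the uniform share \(1/N_{b}\), it can be sketched in a few lines; the group counts below are illustrative.

```python
def mad_bias_score(counts):
    """Eq. (5): mean absolute deviation of the normalized group counts from the uniform share 1/N_b."""
    n_b = len(counts)
    total = sum(counts)
    shares = [c / total for c in counts]   # normalized counts for each group
    return sum(abs(s - 1.0 / n_b) for s in shares) / n_b

# e.g. detected gender counts over images generated from gender-agnostic prompts
print(mad_bias_score([70, 30]))  # 0.2, i.e. a 20% skew towards one group
```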
## 5 Experimental Results
### Evaluated Methods
We comprehensively evaluate the performance of nine recent large-scale T2I models introduced as follows. _Transformer-based_: minDALL-E [2] and DALL-E-Mini [1]
are two different publicly available implementations of original DALL-E [56], which uses VQVAE [67] to encode images with grids of discrete tokens and a multimodal transformer for next token prediction. In addition, CogView2 [21] extend to a hierarchical transformer for fast super-resolution synthesis, and Paella [57] improve parallel token sampling based on MaskGIT [11]. _Diffusion-based_: GLIDE [45] and DALLE-V2 [55] decode images via diffusion with CLIP [53] embedding. Stable-Diffusion V1 [59] and V2 [3], (dubbed as SD-V1 and SD-V2) speed up the training of diffusion models [28] by leveraging the latent space of a powerful pre-trained VQVAE [67]. Finally, Struct-Diff [25] tackles the stable-diffusion compositions limitation by manipulating the cross-attention representations based on linguistic insights to preserve the compositional semantics.
### Accuracy Skills Results
**-Counting.** We adopt the traditional detection measures, Precision, Recall, and F1-score. As shown in Figure 5 part A, DALLE-V2 [55] is the best in terms of precision. However, its recall is very poor, as it misses many objects. When jointly considering recall and F1-score, SD-V1 [59] performs the best, despite having the worst precision.
**Finding #1. No agreement between precision and recall.** We can select the appropriate model based on the application, which metric is preferred.
**Finding #2. The more detailed prompt, the more accurate is counting performance.** We explore three levels of prompts; 1) Vanilla prompt. The simplest form, e.g., two cups. 2) Meta-prompt. Intermediate level, e.g., describes a
Figure 5: Quantitative results for nine skills are grouped into five sub-figures based on the evaluation criteria.
Figure 6: Qualitative results produced by DALLE-V2. Green and red boxes, respectively, frame the success and failure cases. More qualitative results for all models are demonstrated in the supplementary material.
scene containing two cups. 3) Detailed. The meta-prompt is fed to GPT-3.5 to generate a detailed description including the desired objects, e.g., two cups filled with hot coffee sitting side-by-side on a wooden table. In general, one might expect that simpler and more straightforward prompts would lead to better results for the counting skill. Surprisingly, as shown in Figure 8, the Recall and F1-score always increase when the detailed prompt is used.
**Finding #3. Composition-based solution is limited.** We explore Struct-Diff [25], which tackles the compositionality limitation in SD-V1 [59]. As shown in Figure 5 part A, it increases the precision compared to SD-V1 [59]. However, the recall and F1-score decrease drastically.
**-Visual Text.** We utilize two text recognition metrics, CER [42] and NED [65], which are highly correlated (95%).
**Finding #4. All models cannot generate visual text even for the simplest case.** As shown in Figure 5 part B, the best model is DALLE-V2 [55], which achieves a 75% error rate. However, the performances of all the models are far from an acceptable range, i.e., a 10-20% error rate.
**Finding #5. Confusion between picturing and writing.** The models show a good language understanding of the mentioned semantics. However, they lean toward drawing them instead of visually writing them. For instance, in Figure 6, in the visual text column and first row, the model draws the "potted plant" instead of writing the words. Consequently, the model prefers to draw the "vessel" in the second row.
**-Emotion. Finding #6. All models suffer from generating emotion-grounded images.** Figure 10 shows the T2I and TIT alignment scores, i.e., BLEU [48], CIDEr [68], and CLIPScore [27], which are almost equally low and far from the acceptance range among the different models. To further validate our observation, we exploit an image-to-emotion classifier trained on combined datasets as discussed in Sec. 4.3. In addition, a human evaluation experiment is conducted. In both evaluations, the classifier and the human evaluation, we simplify the problem as a binary classification, where they are asked to classify the emotion, given the generated image, as a positive or negative emotion. Both report almost 50% accuracy across the entire models, precisely the random performance, where the number of classes is only two.
**-Fidelity.** We generate three distinct images using varying seeds for each of the models. Then, the annotators evaluate them on a scale of 1-5, where 5 is the best and 1 is the worst. The normalized scores are reported in Figure 7. The best model is SD-V2 [3], which achieves 62.4%, while the worst one is minDALL-E [2], which achieves 52.2%. However, all the models are far from the accepted threshold, i.e., 80%, which corresponds to 4 on our rating system (1-5).
### Robustness and Generalization Results
**-Consistency and typos.** We measure the alignment between the images generated from the original prompt and paraphrased or perturbed prompts using CLIPScore [27], as discussed in Section 4.2.4.
**Finding #7. Models are robust against language perturbations.** As shown in Figure 5 part C, all the models perform well against language perturbations and achieve between 70% to 82% alignment score. Specifically, DALLE-V2 [55] jointly achieves the best average performance for both skills, i.e., consistency and typos.
**-Spatial, Size, and Colors composition.**
**Finding #8. The medium and hard levels are unattainable.** For each skill, we define three hardness levels, i.e., easy, medium, and hard, and the reported results in the whole paper are the average accuracy for the three levels. However, we report only the easy-level accuracy for the spatial, size, and color composition, as all the models struggle even with the easy level. Moreover, all models fail on the medium and hard levels, where they score almost zero. This highlights a severe limitation of the models' composition ability. As shown in Figure 5 part D, the best model, DALLE-V2 [55], achieves 28.3%, 29.9%, and 38% for spatial, size, and colors composition, respectively.
**Finding #9. Composition-based solution is limited.** Similar to finding #3, we explore Struct-Diff [25], where it enhances the SD-V1 [59] performance by almost 3% on the easy level. However, it still fails on the challenging levels.
**-Action composition** Regarding the action compositionality, as illustrated in Figure 5 part E, DALLE-V2 [55] performs the best in generating compositions based on actions, according to both TIT alignment (i.e., highest CIDEr score 1.4538) and human evaluation result. Furthermore, all the scores align well with human evaluation results, confirming our metric's accuracy in evaluating this skill.
**-Creativity.** Since we retrieve the top-100 nearest training data with the CLIPScore[27], we obtain the average score of how the text prompt deviates from the training data, which is 0.4173. Since CLIP [53] maps images and text to shared space, the deviation score should be close if the generated image is aligned with the text. However, experimental results show that all the models fail to generate novel images, and the best model is SD-V2 [3], which achieves the highest deviation score of 0.3433. Due to creativity's very nature, it thrives on deviation. However, if the deviation becomes
Figure 7: Quantitative results for Fidelity and Bias skills.
excessive, the resulting generation may veer toward adverse and meaningless outcomes. Therefore, we also evaluate TIT (BLEU [48], CIDEr [68]) alignment to ensure the generation keeps the semantic meaning of the text prompt. For example, as Figure 5 part E shows, Paella achieves a relatively high deviation score but performs poorly in terms of BLEU [48] and CIDEr [68]; thus it deviates too much and loses the original semantic meaning of the corresponding text prompt, which is aligned with the human evaluation result. Therefore, both metrics are indispensable for creativity evaluation, and neither alone is sufficient.
### Fairness and Bias Results
**Finding #10. The models are fair.** As demonstrated in Figure 9, the maximum fairness score is 3.5% by Cogview 2 [21], which indicates that the difference in performance between sub-groups is negligible.
**Finding #11. The models are slightly biased.** In contrast to fairness, the models tend to be biased with respect to gender: the average mean absolute deviation, Eq. 5, for DALLE-V2 [55], Cogview2 [21], and minDALL-E [2] is 20%, as shown in Figure 7. However, SD-V1 [59] achieves the best result, with a deviation of less than 8%. GLIDE [45], by design, is trained not to generate humans. In addition, DALLE-Mini [1] and Paella [57] perform poorly at face generation. Therefore, GLIDE [45], DALLE-Mini [1], and Paella [57] are excluded from the bias measure.
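A small sketch in the spirit of Eq. 5, computing the bias score as the mean absolute deviation of per-sub-group rates; the 70%/30% split below is a made-up illustration, not a measured result.

```python
import numpy as np

def mean_absolute_deviation(subgroup_rates: dict) -> float:
    """Mean absolute deviation of per-sub-group rates (e.g. the fraction of
    generated faces recognised as each gender) from their mean."""
    rates = np.array(list(subgroup_rates.values()), dtype=float)
    return float(np.mean(np.abs(rates - rates.mean())))

# A model producing 70% male / 30% female faces for gender-neutral prompts
# deviates by 20% on average, the order of magnitude reported above.
print(mean_absolute_deviation({"male": 0.7, "female": 0.3}))  # 0.2
```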
### Human Evaluation
To verify the effectiveness of our benchmark, we conduct a human evaluation using Amazon Mechanical Turk (AMT) [13] over 10% of our data across all skills. The human evaluation criteria are divided into two main groups: 1) Modular-based, where the core blocks of each metric are evaluated; and 2) End-to-End based, using a score-based evaluation.
**-Modular-based.** UniDet [76] is the core block for counting, visual-text, spatial, color, and size composition. First, we ask humans to visually inspect its performance and report the true positives, false positives, and false negatives. Then, we measure the Pearson correlation between the human F1-score and our F1-score, which shows a high correlation between our calculations and the human evaluation, i.e., 93%.
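For concreteness, a toy sketch of this modular check: per-prompt F1 scores are derived from the counts reported by the automatic pipeline and by the annotators, and the two series are compared via the Pearson correlation (all counts below are hypothetical).

```python
import numpy as np

def f1(tp: int, fp: int, fn: int) -> float:
    """F1 score from true positives, false positives and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical per-prompt counts from the automatic pipeline and from annotators.
metric_f1 = np.array([f1(3, 1, 0), f1(2, 0, 1), f1(4, 2, 1)])
human_f1  = np.array([f1(3, 1, 1), f1(2, 0, 1), f1(4, 1, 1)])

# Pearson correlation between the two F1 series (reported as 93% for UniDet).
correlation = np.corrcoef(metric_f1, human_f1)[0, 1]
```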
Similarly, we measure the detection and recognition accuracy of TextSnake [39] and SAR [34], respectively. Again, the correlations are high, 98% and 96%, respectively. Moreover, regarding emotion, the annotators are asked to binary-classify the emotions, i.e., positive or negative, given only the images. The results are highly aligned with our measure: both agree that the models generate natural images, indicating that the prompt's emotion indicator is ignored. Regarding consistency and typos, the core module is the augmenter [14, 19]. To further ensure that all the generated prompts have the same meaning as the original prompt, we conduct a human study where we ask the users to rate each prompt pair (original and augmented) on a scale of 1 (not similar at all) to 5 (precisely the same meaning). The results show strong alignment, i.e., 94%.
**-End-to-End based.** To assess the creativity metric, we select 100 images per model for each hardness level and let annotators score each generated image from 1 to 5 considering: (1) whether the generated image is creative; (2) whether the image is aligned with the given prompt. For action composition skills, annotators are requested to assign a score between 1 and 5 based on the accuracy of the generated subject and actions in response to the text prompts.
## 6 Conclusion
We introduce a comprehensive and reliable benchmark, dubbed HRS-Bench, for evaluating text-to-image (T2I) models. Our benchmark measures 13 skills, categorized into five major categories, and covers 50 applications, providing a holistic evaluation of T2I models. Through our evaluation of nine recent large-scale T2I models, we have identified areas where state-of-the-art models struggle to tackle these skills, highlighting the need for continued research and development. Our human evaluation results confirm the effectiveness and reliability of our benchmark. Further, our benchmark will help ease future T2I research and progress on improving the skills covered in this benchmark.
|
2308.14148
|
Production of the newly observed $\bar{T}_{c\bar{s}0}$ by kaon-induced
reactions on a proton/neutron target
|
Recently, a new doubly charged tetraquark $T^{a}_{c\bar{s}0}(2900)^{++}$ and
its neutral partner $T^{a}_{c\bar{s}0}(2900)^0$ at the invariant mass spectrum
of $\pi{}D_s$ were observed by the LHCb Collaboration. According to its
properties, such as the mass and decay width, the
$T^{a}_{c\bar{s}0}(2900)^{++/0}$ have been suggested to be a compact
multi-quark state or a hadron molecule. In order to distinguish the various
interpretations of the $T^{a}_{c\bar{s}0}(2900)^{++/0}$, we investigate the
possibility to study the $\bar{T}^{a}_{c\bar{s}0}(2900)$ [the antiparticle of
$T^{a}_{c\bar{s}0}(2900)$] by kaon-induced reactions on a proton target in an
effective Lagrangian approach. The production mechanism is characterized by the
$t$-channel $D$ meson exchange. Our theoretical approach is based on the
assumption that $\bar{T}^{a}_{c\bar{s}0}(2900)$ can be either a
$K^{*}D^{*}-D_s^{*}\rho$ molecule or a compact tetraquark state. Using the
coupling constants of the $\bar{T}^{a}_{c\bar{s}0}(2900)$ to $KD$ channel
obtained from molecule or compact tetraquark picture of the
$\bar{T}^{a}_{c\bar{s}0}(2900)$, we compute the cross-sections for the process
$K^{-}n\to{}\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_c$. The $\bar{K}N$
initial state interaction mediated by Pomeron and Reggeon exchanges is also
included, which reduces the production of the $\bar{T}^{a}_{c\bar{s}0}(2900)$.
Our calculations show that whether $\bar{T}^{a}_{c\bar{s}0}(2900)$ is a
molecule or a compact tetraquark state, the cross-sections for the
$K^{-}n\to{}\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_c$ reaction are of
similar magnitude, ranging from approximately 0.075 nb to 0.270 nb.
|
Yin Huang, Hao Hei, Jing-wen Feng, Xurong Chen, Rong Wang
|
2023-08-27T16:12:30Z
|
http://arxiv.org/abs/2308.14148v1
|
Production of the newly observed \(\bar{T}_{c\bar{s}0}\) by kaon-induced reactions on a proton/neutron target
###### Abstract
Recently, a new doubly charged tetraquark \(T_{c\bar{s}0}^{a}(2900)^{++}\) and its neutral partner \(T_{c\bar{s}0}^{a}(2900)^{0}\) were observed by the LHCb Collaboration in the invariant mass spectrum of \(\pi D_{s}\). According to its properties, such as the mass and decay width, the \(T_{c\bar{s}0}^{a}(2900)^{++/0}\) have been suggested to be a compact multi-quark state or a hadron molecule. In order to distinguish the various interpretations of the \(T_{c\bar{s}0}^{a}(2900)^{++/0}\), we investigate the possibility to study the \(\bar{T}_{c\bar{s}0}^{a}(2900)\) [the antiparticle of the \(T_{c\bar{s}0}^{a}(2900)\)] by kaon-induced reactions on a proton target in an effective Lagrangian approach. The production mechanism is characterized by the \(t\)-channel \(D\) meson exchange. Our theoretical approach is based on the assumption that the \(\bar{T}_{c\bar{s}0}^{a}(2900)\) can be either a \(K^{*}D^{*}-D_{s}^{*}\rho\) molecule or a compact tetraquark state. Using the coupling constants of the \(\bar{T}_{c\bar{s}0}^{a}(2900)\) to the \(KD\) channel obtained from the molecule or compact tetraquark picture of the \(\bar{T}_{c\bar{s}0}^{a}(2900)\), we compute the cross-sections for the process \(K^{-}n\to\bar{T}_{c\bar{s}0}^{a}(2900)^{--}\Lambda_{c}^{+}\). The \(\bar{K}N\) initial state interaction mediated by Pomeron and Reggeon exchanges is also included, which reduces the production of the \(\bar{T}_{c\bar{s}0}^{a}(2900)\). Our calculations show that whether the \(\bar{T}_{c\bar{s}0}^{a}(2900)\) is a molecule or a compact tetraquark state, the cross-sections for the \(K^{-}n\to\bar{T}_{c\bar{s}0}^{a}(2900)^{--}\Lambda_{c}^{+}\) reaction are of similar magnitude, ranging from approximately 0.075 nb to 0.270 nb. A clear comparison can be made by computing the cross-section of the \(K^{-}n\to\bar{T}_{c\bar{s}0}^{a}(2900)^{--}\Lambda_{c}^{+}\to\pi^{-}D_{s}^{-}\Lambda_{c}^{+}\) reaction. The results indicate that the cross-section for the molecule assignment of the \(\bar{T}_{c\bar{s}0}^{a}(2900)\) can reach up to \(1.83\times 10^{-3}\) nb, which is significantly smaller than that of 0.122 nb obtained by assuming the \(\bar{T}_{c\bar{s}0}^{a}(2900)\) to be a compact tetraquark state. These results can be measured in future experiments and can be used to test the nature of the \(\bar{T}_{c\bar{s}0}^{a}(2900)\). Last, we also propose to search for the unreported charged tetraquark \(T_{c\bar{s}0}^{a}(2900)^{+}\) in the \(K^{-}p\to\bar{T}_{c\bar{s}0}^{a}(2900)^{-}\Lambda_{c}^{+}\) reaction.
## I Introduction
Studying hadrons with more complex internal structures than conventional quark states, where mesons are composed of quark-antiquark pairs [1] and baryons are constructed from three quarks [2; 3], is a prominent topic in particle physics. We call them exotic hadrons. For an extended period, little noteworthy advancement was made in the exploration of exotic states, and only a few phenomena suggested that the \(u/d/s\) quarks can form exotic hadrons. For example, in constituent quark models, the mass of the strange quark is approximately 50% heavier than that of the \(u/d\) quarks. This leads to questions regarding why the \(\Lambda(1405)\) has a significantly lower mass than the \(N(1535)\). Moreover, it is puzzling that the \(N(1440)\), as an \(N=2\) baryon, is much lighter than the nucleon resonance \(N(1535)\) with \(N=1\), where \(N\) is the main quantum number. These issues led Zou and his collaborators to propose the existence of significant five-quark components in the nucleon and its resonances [4; 5; 6]. The mass inversion problem could be easily understood if there are substantial five-quark \(uuds\bar{s}\) components in the \(N(1535)\)[7; 8]. Furthermore, the five-quark configurations also provide a natural explanation for its large strange decay [9; 10].
However, it was in 2003, when the Belle collaboration observed the \(X(3872)\) in the \(\pi^{+}\pi^{-}J/\psi\) mass spectrum [11], that this field entered a new era. From its observed decay mode, the \(X(3872)\) is known to consist of a pair of hidden-charm quarks and two pairs of light quarks. Subsequent discoveries of several hidden-charm pentaquark states, including \(P_{c}(4380)\), \(P_{c}(4440)\), \(P_{c}(4457)\), \(P_{c}(4312)\), \(P_{cs}(4338)\), and \(P_{cs}(4459)\), have further strengthened the belief in significant progress in the research of exotic hadrons [12; 13; 14; 15; 16; 17]. The recent observation of a doubly charged tetraquark and its neutral partner by the LHCb collaboration in the analysis of the \(B^{0}\to\bar{D}^{0}D_{s}^{+}\pi^{-}\) and \(B^{+}\to D^{-}D_{s}^{+}\pi^{+}\) reactions [18] marks another significant advancement in the study of exotic hadrons. Their masses and widths were measured to be
\[T_{c\bar{s}0}^{a}(2900)^{0}:\quad M=2.892\pm 0.014\pm 0.015\;\text{GeV},\qquad\Gamma=0.119\pm 0.026\pm 0.013\;\text{GeV},\]
\[T_{c\bar{s}0}^{a}(2900)^{++}:\quad M=2.921\pm 0.017\pm 0.020\;\text{GeV},\qquad\Gamma=0.137\pm 0.032\pm 0.017\;\text{GeV}, \tag{1}\]
respectively. Supposing the states belong to the same isospin triplet, the experiment also gave the shared mass and width,
\[T_{c\bar{s}0}:\quad M=2908\pm 23\;\text{MeV},\qquad\Gamma=136\pm 25\;\text{MeV}. \tag{2}\]
Similar to the challenges faced with other exotic hadrons, the true internal structure of these two newly observed mesons cannot be completely determined based on existing experimental data. Due to the close proximity of the mass of the \(T^{a}_{c\bar{s}0}(2900)\) to the \(D^{*}K^{*}\) threshold, the authors of Ref. [19] proposed a novel interpretation for the recently discovered \(T^{a}_{c\bar{s}0}(2900)\). They suggest that these two states could be an isovector \(D^{*}K^{*}\) molecular state with quantum numbers \(I(J^{P})=1(0^{+})\). In an independent study conducted by the authors of Ref. [20], it was found that if the \(T^{a}_{c\bar{s}0}(2900)^{0}\) is indeed a molecular state formed by \(D^{*0}K^{*0}\), its primary decay mode would likely be into \(D^{0}K^{0}\), rather than the observed experimental decay channel \(D^{+}_{s}\pi^{-}\). The authors of Ref. [21] support the \(T^{a}_{c\bar{s}0}(2900)\) as a \(D^{*}K^{*}\) molecule based on the analysis of the mass spectrum and the partial widths using the QCD light-cone sum rule approach and the soft-meson approximation. The mass of the \(T^{a}_{c\bar{s}0}\) was also studied in a coupled-channel approach, and it was shown that the \(T_{c\bar{s}0}\) might be a \(D^{*}K^{*}-D^{*}_{s}\rho\) coupled-channel molecular state [22]. However, Ref. [23] reached a strikingly different conclusion, arguing that the \(T^{a}_{c\bar{s}0}\) should not be considered as a \(D^{*}K^{*}\) bound state, but instead might be a compact tetraquark.
Indeed, the two newly observed mesons can be assigned to be the lowest \(1S\)-wave tetraquark states [24] within the framework of a nonrelativistic potential quark model. That analysis indicates that the dominant decay mode is \(D^{*}_{s}\rho\). Furthermore, the compact tetraquark explanation of the \(T^{a}_{c\bar{s}0}\) is also supported by estimates obtained from the multiquark color flux-tube model [25]. QCD sum rules, informed by the examination of the mass spectrum and the two-body strong decays, have led Refs. [26; 27] to classify the \(T^{a}_{c\bar{s}0}\) as compact tetraquark states. These studies also reveal that the primary decay modes of the \(T^{a}_{c\bar{s}0}\) involve the \(D_{s}\pi\) and \(DK\) channels [27]. The compact tetraquark interpretation of the \(T^{a}_{c\bar{s}0}\) gains additional support from Refs. [28; 29; 30]. We note that a threshold effect from the interaction between the \(D^{*}K^{*}\) and \(D^{*}_{s}\rho\) channels and a kinematic effect from a triangle singularity have also been proposed for the \(T^{a}_{c\bar{s}0}\) in Ref. [31] and Ref. [32], respectively.
In addition to analyzing the mass spectrum and decay width, exploring the production mechanism provides a more effective approach to evaluating the nature of the \(T^{a}_{c\bar{s}0}\). This is mainly because the production process depends strongly on the internal structure of the \(T^{a}_{c\bar{s}0}\). We find that whether the \(T^{a}_{c\bar{s}0}\) is a molecular state or a compact multi-quark state, it exhibits a significant \(KD\) decay width. This motivates our search for the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) in the \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) and \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\to\pi^{-}D^{-}_{s}\Lambda^{+}_{c}\) reactions. Notably, high-energy kaon beams are available at OKA@U-70 [33], SPS@CERN [34], CERN/AMBER [35], and potential upgrades to the J-PARC kaon beam, enabling us to reach the necessary energy range for \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) production [36]. Consequently, searching for the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) in these reactions becomes feasible. This approach facilitates a straightforward differentiation between the molecular and compact tetraquark interpretations through the production process.
In this study, we examine the production of the recently observed \(\bar{T}^{a}_{c\bar{s}0}(2900)\) in the \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) and \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\to\pi^{-}D^{-}_{s}\Lambda^{+}_{c}\) reactions by considering the \(T^{a}_{c\bar{s}0}\) as a molecular state and as a compact multiquark state, respectively. A conclusive determination of the inner structure of the \(T^{a}_{c\bar{s}0}\) can be attained by comparing the obtained cross-sections with future experimental data. To enhance the reliability of our predictions, the effect of the \(\bar{K}N\) initial state interaction (ISI) must be taken into account, since plenty of experimental information exists on the \(\bar{K}N\) elastic interaction in the energy region considered. Moreover, we also propose to search for the isospin partner \(\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\) in the \(K^{-}p\to\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) reaction. This paper is organized as follows. In Sec. II, we present the theoretical formalism. In Sec. III, the numerical results are given, followed by discussions and conclusions in the last section.
## II Formalism and ingredients
In this study, we investigate the feasibility of measuring \(\bar{T}^{a}_{c30}(2900)^{--}\) in the \(K^{-}n\to\bar{T}^{a}_{c30}(2900)^{--}\Lambda^{+}_{c}\) reaction. We consider two scenarios for \(T^{a}_{c30}\): one as a molecular state and the other as a compact multi-quark state. The considered Feynman diagrams are illustrated in Fig. (1), which includes only the \(t\)-channel \(D^{-}\) meson exchange diagram. And the ISI is represented by the red circle. This production process differs from the complex proton-proton collisions [18], as \(\bar{T}^{a}_{c30}(2900)^{--}\) production in \(K^{-}n\to\bar{T}^{a}_{c30}(2900)^{--}\Lambda^{+}_{c}\) occurs more simply. The reason lies in the significantly lower required center-of-mass energies compared to proton-proton collisions. At these lower energies, we can neglect contributions from the \(s\)- and \(u\)-channels, which involve the creation of an additional \(c\bar{c}\) quark pair in kaon-induced production and are typically strongly suppressed. Hence, the \(K^{-}n\to\bar{T}^{a}_{c30}(2900)^{--}\Lambda^{+}_{c}\) reaction is expected to be primarily governed by Born terms through the \(t\)-channel \(D^{-}\) exchanges, resulting in minimal background interference.
To calculate the diagrams depicted in Fig. (1), it is necessary to determine the effective Lagrangian densities corre
Figure 1: Feynman diagram for the \(K^{-}n\to\bar{T}^{a}_{c30}(2900)^{--}\Lambda^{*}_{c}\) reaction. The contributions from the \(t\)-channel \(D^{-}\) meson exchange. We also show the definition of the kinematics (\(p_{1},p_{2},q_{1},q_{2}\)) that we use in the present calculation.
sponding to the relevant interaction vertices. In the case of \(\Lambda_{c}ND\) coupling, we adopt the Lagrangian densities employed in Ref.[37; 42]
\[\mathcal{L}_{\Lambda,ND}=ig_{\Lambda,ND}\bar{\Lambda}_{c}\gamma_{5}ND+H.c., \tag{3}\]
where the coupling constant \(g_{\Lambda,ND}=-13.98\) is established from the SU(4) invariant Lagrangians [39] in terms of \(g_{\pi NN}=13.45\) and \(g_{\rho NN}=6.0\). \(N\), \(D\), and \(\Lambda_{c}\) are the nucleon, \(D\) meson, and \(\Lambda_{c}^{+}\) baryon fields, respectively.
To calculate the diagrams depicted in Fig. (1), it is also necessary to determine the effective Lagrangian densities for the interaction vertex involving \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}K^{-}D^{-}\). Since the spin-parity of the \(T^{a}_{c\bar{s}0}\) is established as \(I(J^{P})=1(0^{+})\), the coupling between the \(T^{a}_{c\bar{s}0}\) and \(KD\) predominantly occurs through \(S\)- and \(D\)-wave interactions. Given our focus on studying the production rate of the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) near the threshold region, the contribution from the lowest angular momentum state is most significant. This can be attributed to the higher energy required for the \(D\)-wave production cross-section compared with that of the \(S\)-wave. Thus, in this study, we employ the effective Lagrangian densities corresponding to the \(S\)-wave \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}K^{-}D^{-}\) interaction vertex. It is important to highlight that \(S\)-wave effective Lagrangians are always characterized by fewer derivatives. This leads us to express the Lagrangian densities for the \(S\)-wave coupling between \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) and \(K^{-}D^{-}\) as follows [42]
\[\bar{\mathcal{L}}_{\bar{T}^{a}_{c30}}=g_{\bar{T}^{a}_{c30}}\bar{K}\bar{\tau} \cdot\bar{T}_{c30}D, \tag{4}\]
where \(\tau\) is the corresponding Pauli matrix reflecting the isospin of the \(\bar{T}^{a}_{c\bar{s}0}(2900)\). Please note that we have
\[\bar{\tau}\cdot\bar{T}^{a}_{c30}=\left(\begin{array}{cc}T^{a}_{c30}(2900)^{ +}&\sqrt{2}T^{a}_{c30}(2900)^{++}\\ \sqrt{2}T^{a}_{c30}(2900)^{0}&-T^{a}_{c30}(2900)^{+}\end{array}\right) \tag{5}\]
where the state \(T^{a}_{c30}(2900)^{+}\) has not yet been discovered. It only indicates the signal of \(T^{a}_{c30}(2900)^{+}\) in the \(B^{0}\to D^{-}D^{0}K^{+}\) reaction [40].
In Eq. (4), the coupling constant \(g_{\bar{T}^{a}_{c\bar{s}0}}\) is determined from the partial decay width of \(T^{a}_{c\bar{s}0}(2900)^{++}\to K^{+}D^{+}\), which is obtained as follows,
\[\Gamma_{T^{a}_{c30}(2900)^{++}\to K^{+}D^{+}}=\frac{g_{T_{c30}}^{2}}{4\pi} \frac{|\vec{p}_{D^{*}}^{\times,m}|}{m_{T^{*}_{c30}}^{2}}, \tag{6}\]
where \(\vec{p}_{D^{*}}^{\times,m}\) is the three-vector momentum of the \(D^{+}\) in the \(T^{a}_{c30}(2900)^{++}\) meson rest frame. Unfortunately, there is no experimental information on the decay widths for \(\Gamma(T^{a}_{c30}(2900)^{++}\to K^{+}D^{+})\), as this is very difficult to determine. Thus, it is necessary to rely on theoretical predictions, such as those of Refs. [27; 24]. Assuming the \(T^{a}_{c30}(2900)^{++}\) as a compact multi-quark state, the partial decay width of the \(T^{a}_{c30}(2900)^{++}\to K^{+}D^{+}\) is predicted to be \(\Gamma(T^{++}_{c30}\to K^{+}D^{+})=56.8\pm 33.4\) MeV [27]. Using the corresponding experimental masses of the relevant particles given in Ref. [41], we obtain \(g_{T_{c30}}=2.836^{+0.739}_{-1.016}\). Note that the partial decay width of the \(T^{a}_{c30}(2900)^{++}\to K^{+}D^{+}\) is also evaluated in Ref. [24], adopting the compact multi-quark state assignment for \(T^{a}_{c30}(2900)^{++}\), and found that the obtained partial decay width falls within the range reported in Ref. [27].
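As a numerical cross-check of Eq. (6), the short Python sketch below recovers the quoted coupling from the central value of the predicted width; the kaon and \(D\)-meson masses are standard PDG values, and the two-body momentum is obtained from the Källén function introduced after Eq. (14).

```python
import numpy as np

def kallen(x, y, z):
    """Kallen function: lambda(x, y, z) = (x - y - z)**2 - 4*y*z."""
    return (x - y - z) ** 2 - 4.0 * y * z

# Masses in GeV and the predicted partial width from Ref. [27] (central value).
m_T, m_K, m_D = 2.921, 0.4937, 1.8696      # T_cs0(2900)++, K+, D+
gamma_KD = 0.0568                           # Gamma(T++ -> K+ D+) = 56.8 MeV

# D+ three-momentum in the T rest frame.
p_D = np.sqrt(kallen(m_T**2, m_K**2, m_D**2)) / (2.0 * m_T)

# Invert Eq. (6): Gamma = g**2 / (4*pi) * |p| / m_T**2.
g = np.sqrt(4.0 * np.pi * gamma_KD * m_T**2 / p_D)
print(round(g, 2))   # ~2.84, consistent with the quoted g = 2.836
```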
Considering the \(T^{a}_{c\bar{s}0}(2900)^{++/0}\) as an \(S\)-wave \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule [22], we analyze the partial decay width of the \(T^{a}_{c\bar{s}0}(2900)^{++}\) into the \(K^{+}D^{+}\) final state through hadronic loops with the help of the effective Lagrangians. The loop diagrams are depicted in Fig. (2), and the resulting partial decay widths are provided in Table 1 (additional details can be found in the Appendix). Utilizing these decay widths, the coupling constants are evaluated and collected in Table 1.
Because hadrons are not pointlike particles, it is necessary to incorporate form factors when evaluating the scattering amplitudes of the \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda_{c}^{+}\) reaction. For the \(t\)-channel \(D^{-}\) meson exchange diagram, we adopt a widely used approach found in many previous studies [42; 43; 44] with the expression
\[\mathcal{F}_{D^{-}}(q_{D^{-}}^{2},m_{D^{-}})=\frac{\Lambda_{D^{-}}^{2}-m_{D^{-} }^{2}}{\Lambda_{D^{-}}^{2}-q_{D^{-}}^{2}}, \tag{7}\]
where \(q_{D^{-}}^{2}\) and \(m_{D^{-}}\) represent the four-momentum and mass of the exchanged \(D^{-}\) meson, respectively. The parameter \(\Lambda_{D^{-}}\) serves as a hard cutoff, directly linked to the size of the hadron. Empirically, \(\Lambda_{D^{-}}\) should exceed the \(m_{D^{-}}\) mass by several hundred MeV at least. Therefore, we choose \(\Lambda_{D^{-}}=m_{D^{-}}+\alpha\Lambda_{QCD}\), following the precedent set by prior works [42; 43; 44]. The parameter \(\alpha\) reflects the nonperturbative property of QCD at the low-energy scale, which will be taken as a parameter and discussed later.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\sqrt{s}_{pole}\) & \(g_{T^{a}_{c\bar{s}0}K^{*}D^{*}}\) & \(g_{T^{a}_{c\bar{s}0}D^{*}_{s}\rho}\) & \(\Gamma(T^{++}_{c\bar{s}0}\to K^{+}D^{+})\) & \(g_{T^{a}_{c\bar{s}0}}\) \\ \hline
2885 & 5.531 & 5.379 & 48.72-54.59 & 2.647-2.802 \\
2887 & 2.198 & 2.082 & 7.37-8.25 & 1.029-1.089 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Coupling constants \(g_{T^{a}_{c\bar{s}0}}\) and the \(K^{+}D^{+}\) decay width (in units of MeV) of the \(T^{a}_{c\bar{s}0}(2900)^{++/0}\). The pole positions (in MeV) and the effective couplings (in GeV) are those evaluated in Ref. [22].
In this study, we will examine the impact of the \(K^{-}n\) initial state interaction on the cross-section of \(\bar{T}^{a}_{c0}(2900)^{--}\) production in the \(K^{-}n\to\bar{T}^{a}_{c\bar{\}0}(2900)^{--}\Lambda^{+}_{c}\) reaction. A straightforward approach utilized in Ref. [45] yields a satisfactory representation of the existing experimental data for \(K^{-}p\) and \(K^{-}n\) scattering at high energies. Consequently, we employ this methodology to estimate the initial state interaction (ISI) in the \(K^{-}n\to K^{-}n\) reaction at high energies.
The pertinent Feynman diagram depicting the ISI for \(K^{-}N\to K^{-}N\) reaction is illustrated in Fig. (3), where exchanges involving the Pomeron and \(f_{2}\), \(a_{2}\), \(\rho\), and \(\omega\) Reggeons (IR) are considered. The total amplitude \({\cal T}_{K^{-}N\to K^{-}N}\), incorporating Pomeron and Reggeon exchanges, can be expressed as a sum of individual contributions, as indicated by [45]
\[{\cal T}_{K^{-}N\to K^{-}N}(s,t) ={\cal A}_{IP}(s,t)+{\cal A}_{f_{2}}(s,t)\pm{\cal A}_{a_{2}}(s,t)\] \[+{\cal A}_{\omega}(s,t)\pm{\cal A}_{\rho}(s,t), \tag{8}\]
where \(s=(p_{1}+p_{2})^{2}\) and \(t=(p_{1}-q_{1})^{2}\). The \((+)\) and \((-)\) are for the \(K^{-}p\to K^{-}p\) and \(K^{-}n\to K^{-}n\) interactions, respectively. For large center-of-mass energies \(\sqrt{s}\), the individual contribution to the \(KN\to\bar{K}N\) amplitude can be parameterized as follows
\[{\cal A}_{t}(s,t)=\eta_{i}sC^{KN}_{i}(\frac{s}{s_{0}})^{\alpha_{i}(t)-1}\exp( \frac{{\cal B}^{i}_{KN}}{2}t), \tag{9}\]
where \(i=IP\) represents the Pomeron and \(f_{2}\), \(a_{2}\), \(\omega\), and \(\rho\) Reggeons. The energy scale is \(s_{0}=1\) GeV\({}^{2}\). The coupling constants \(C^{KN}_{i}\), the parameters of the Regge linear trajectories \(\alpha_{i}(t)=\alpha_{i}(0)+\alpha_{i}t\), the signature factors \(\eta_{i}\), and the \({\cal B}^{i}_{\bar{K}N}\) utilized in Ref.[45] offer a suitable description of the experimental data. The parameters determined in Ref.[45] are outlined in Table 2.
Using the effective Lagrangians mentioned above and taking the ISI of the \(K^{-}p\) system into account, the full amplitude of the \(K^{-}n\to\bar{T}^{a}_{c\bar{\}0}(2900)^{--}\Lambda^{+}_{c}\) reaction can be derived as
\[{\cal M}^{full}={\cal M}^{Born}+{\cal M}^{K^{-}n-ISI}, \tag{10}\]
where the Born amplitude is written as
\[{\cal M}^{Born} =if_{l}g_{\bar{T}_{c\bar{}0}}g_{\Lambda,ND}\bar{u}(q_{2},s_{ \Lambda^{+}_{c}})\gamma_{\bar{}}u(p_{2},s_{p})\] \[\times\frac{i}{(q_{1}-p_{1})^{2}-m_{D^{-}}^{2}}{\cal F}_{D^{-}}[ (q_{1}-p_{1})^{2}_{D^{-}},m_{D^{-}}], \tag{11}\]
and the corrections to the Born amplitude due to \(K^{-}n\) interactions were taken into account in [45; 46] as
\[{\cal M}^{K^{-}n-ISI}=\frac{i}{16\pi^{2}s}\int d^{2}\bar{T}^{a}_{K^{-}}{\cal T} _{K^{-}n\to K^{-}n}(s,k^{2}_{t}){\cal M}^{Born}. \tag{12}\]
where \(k_{t}\) is the momentum transfer in the \(K^{-}n\to K^{-}n\) reaction. \(\bar{u}(q_{2},s_{\Lambda^{+}_{c}})\) and \(u(p_{2},s_{p})\) are the Dirac spinors, with \(s_{\Lambda^{+}_{c}}(q_{2})\) and \(s_{p}(p2)\) being the spins (the four-momenta) of the outgoing \(\Lambda^{+}_{c}\) and the initial proton, respectively.
With the scattering amplitudes of the \(K^{-}n\to\bar{T}^{a}_{c\bar{}0}(2900)^{--}\Lambda^{+}_{c}\) reaction obtained in the previous section, the differential cross section in the center of mass (cm) frame for the process \(K^{-}n\to\bar{T}^{a}_{c\bar{}0}(2900)^{--}\Lambda^{+}_{c}\) can be calculated [41]
\[\frac{d\sigma}{d\cos\theta}=\frac{m_{N}m_{\Lambda^{+}_{c}}}{8\pi s}\frac{| \vec{q}_{1cm}|}{|\vec{p}_{1cm}|}(\frac{1}{2}\sum_{s_{c},s_{\Lambda^{+}_{c}}}| {\cal M}|^{2}), \tag{13}\]
where the \(\theta\) is the scattering angle of the outgoing \(\bar{T}^{a}_{c\bar{}0}\) meson relative to the beam direction, while \(\vec{p}^{1cm}\) and \(\vec{q}^{1cm}\) are the \(K^{-}\) and \(\bar{T}^{a}_{c\bar{}0}\) meson three momenta in the center-of-mass frame, respectively, which are
\[|\vec{p}_{1cm}|=\frac{\lambda^{1/2}(s,m_{K^{-}},m_{n})}{2\sqrt{s}};\ |\vec{q}_{1cm}|=\frac{\lambda^{1/2}(s,m_{\pi^{-}_{c\bar{}0}},m_{\Lambda^{+}_{c} })}{2\sqrt{s}}. \tag{14}\]
Here \(\lambda(x,y,z)=(x-y-z)^{2}-4yz\) is the Kallen function.
## III Results and discussions
With the formalism and ingredients given above, the cross-section as a function of the beam momentum \(P_{K^{-}}\) for the \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) reaction can be easily obtained. Before presenting the results, it is important to discuss the parameter \(\alpha\) that enters the form factor. This is because the value of the cross-section is highly sensitive to the model parameter \(\alpha\). However, determining the value of \(\alpha\) from first principles is currently not feasible. Instead, it is better determined from experimental data. Indeed, the free parameter \(\alpha=1.5\) or \(1.7\) was fixed by fitting the experimental data of the processes \(e^{+}e^{-}\to D\bar{D}\) [47] and \(e^{+}e^{-}\to\gamma_{ISR}D\bar{D}\) [48]. The procedure for this fitting is outlined in Ref. [49]. For this study, we adopt the values \(\alpha=1.5\) or \(1.7\), as they have been determined from the experimental
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**i** & \(\eta_{i}\) & \(\alpha_{i}(t)\) & \({\cal C}^{KN}_{i}(\text{mb})\) & \({\cal B}^{KN}_{i}(\text{GeV}^{-2})\) \\ \hline \(IP\) & \(i\) & 1.081+(0.25 GeV\({}^{-2}\))t & 11.82 & 5.5 \\ \(f_{2}\) & \(-0.861+i\) & 0.548+(0.93 GeV\({}^{-2}\))t & 15.67 & 4.0 \\ \(\rho\) & \(-1.162-i\) & 0.548+(0.93 GeV\({}^{-2}\))t & 2.05 & 4.0 \\ \(\omega\) & \(-1.162-i\) & 0.548+(0.93 GeV\({}^{-2}\))t & 7.055 & 4.0 \\ \(a_{2}\) & \(-0.861+i\) & 0.548+(0.93 GeV\({}^{-2}\))t & 1.585 & 4.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameters of Pomeron and Reggeon exchanges determined from elastic and total cross sections in Ref. [45].
Figure 3: Feynman diagram for the mechanism of the initial state interaction of the \(K^{-}n\to K^{-}n\).
data of Refs. [47; 48] using the same \(D\) form factors employed in our current work.
With the obtained \(\alpha\) value, the cross-section for \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) reaction is evaluated by treating \(T^{a}_{c\bar{s}0}\) as a compact multi-quark state. The theoretical results obtained with a cutoff \(\alpha=1.5\) or \(1.7\) for the beam energy from near threshold up to \(27.5\) GeV are shown in Fig. 4. We can find that the cross-section exhibits a sharp increase near the threshold of \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\), an effect attributed to the opening of phase space at that energy. Following this, the cross-section continues to increase, albeit at a comparatively slower rate compared to the threshold region. However, a modest decline in the cross-section is observed as the beam energy \(P_{K^{-}}\) is varied from \(23.1\) to \(27.5\) GeV. For deeper insight, we illustrate the obtained total cross-section behavior, ranging from approximately \(0.134\) nb to \(0.127\) nb for \(\alpha=1.5\), with varying beam momentum from \(23.25\) GeV to \(26.80\) GeV. Within the same energy range, but adopting \(\alpha=1.7\), the cross-section spans from \(0.169\) nb to \(0.160\) nb. These outcomes suggest that the value of the cross-section is not very sensitive to the model parameter \(\alpha\) when varying the cutoff parameter \(\alpha\) from \(1.5\) to \(1.7\).
In addition to showing the central values of the cross-sections corresponding to \(g_{T_{c0}}=2.836\), we also present the variation of the cross-sections for different \(g_{T_{c0}}\) values, which are determined based on the theoretically predicted partial decay width of \(T^{++}_{c\bar{s}0}\to K^{+}D^{+}\)[27]. We depict the results for the cutoffs \(\alpha=1.5\) and \(\alpha=1.7\) in the Fig. 4 (b) and Fig. 4 (c), respectively. Remarkably, a significant variations in the cross-sections is observed. For \(\alpha=1.5\), the obtained cross-section ranges from \(0.052\) nb to \(0.201\) nb, and for \(\alpha=1.7\), it ranges from \(0.066\) nb to \(0.253\) nb, both at an example energy of approximately \(P_{K^{-}}=26.80\) GeV. This suggests that the cross-section for the maximum value is about four times larger than that of the minimum value. This discrepancy can be attributed to the fact that the ratio of the coupling constant for the maximum value to that of the minimum value is around \(2.0\). And the cross-section is proportional to the square of the coupling constant.
We now shift our focus to the cross-section of the \(K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) reaction, considering \(T^{a}_{c\bar{s}0}(2900)\) as a \(K^{*}D^{*}-D^{*}_{s\bar{s}0}\) molecule. The cross-section, varying with the beam energy \(P_{K^{-}}\) from just above the threshold up to \(27.5\) GeV, is depicted in Fig. 5. We clearly observe that the line shapes of the cross-sections mirror those obtained by considering \(T^{a}_{c\bar{s}0}(2900)\) as a compact multi-quark state. That means the cross-section also increases sharply near the threshold, followed by a gradual and sustained increase at higher energies, concluding with a gradual decrease.
The results in Fig. 5 also tell us that the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) production cross-section for the cutoff \(\alpha=1.5\) is slightly smaller than that for cutoff \(\alpha=1.7\). The variation of the cross-sections for different \(\Gamma(T^{++}_{c\bar{s}0}\to K^{+}D^{+})\) values is very small. To see how much the cross-section depends on the \(\Gamma(T^{++}_{c\bar{s}0}\to K^{+}D^{+})\) decay widths and the cutoff \(\alpha\), we take the cross-section at a beam momentum of about \(P_{K^{-}}=25.0\) GeV and the mass of the bound state \(m=2885\) MeV as an example. The so-obtained cross-section ranges from \(0.156\) nb to \(0.175\) nb for \(\alpha=1.5\) and from \(0.197\) to \(0.221\) nb for \(\alpha=1.7\). We also find that if the \(\bar{T}^{a}_{c\bar{s}0}(2900)\) is produced as a \(K^{*}D^{*}-D^{*}_{s\bar{s}0}\) molecule with mass \(m=2885\) MeV, the cross-section is significantly larger than the results obtained by assuming \(\bar{T}^{a}_{c\bar{s}0}(2900)\) as a \(KD-D^{*}_{s}\rho\) molecule with mass \(m=2887\) MeV, by about \(6-8\) times. In other words, the cross-section is heavily depends on the masses of the bound states.
By comparing the cross-sections depicted in Fig.4 and Fig.5, we observe that if the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\) is a compact tetraquark state, its production cross-sections match the results predicted by considering the \(T^{a}_{c\bar{s}0}(2900)^{--}\) as a \(K^{*}D^{*}-D^{*}_{s\bar{s}}\rho\) molecule with a mass of \(m=2885\) MeV. Specifically, the cross-sections for these two assignments can reach \(0.269\) nb (for compact tetraquark state) and \(0.227\) nb (for
\(K^{*}D^{*}-D^{*}_{s}\rho\) molecule), respectively. These results suggest that if the \(\bar{T}^{a}_{c\bar{c}0}(2900)\) is a \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule with a mass of \(m=2887\) MeV, distinguishing its \(K^{*}D^{*}-D^{*}_{s}\rho\) molecular nature from the compact tetraquark state is challenging through the \(K^{*}n\to\bar{T}^{a}_{c\bar{c}0}(2900)^{--}\Lambda^{+}_{c}\) reaction. However, when considering a smaller cross-section obtained from the assumption that the \(\bar{T}^{a}_{c\bar{c}0}(2900)^{--}\) is a \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule with a mass of \(m=2887\) MeV, the cross-section is limited to \(0.034\) nb. This discrepancy magnifies the difference between the results of the two scenarios by a factor of about \(8.0\). This indicates that if the mass of the \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule is \(m=2887\) MeV, a clear conclusion about the the nature of the \(\bar{T}^{a}_{c\bar{c}0}(2900)\) can be easily obtained by comparing the obtained cross-section with future experimental data.
However, a more distinct comparison can be drawn from the production of \(\bar{T}^{a}_{c\bar{c}0}(2900)\) in the \(K^{-}n\to\bar{T}^{a}_{c\bar{c}0}(2900)^{--}\Lambda^{+}_{c}\to\pi^{-}D^{*}_{s} \Lambda^{+}_{c}\) reaction, and the results of this comparison are shown in Fig. 6. We can find that the cross-section for the molecule assignment of \(\bar{T}^{a}_{c\bar{c}0}(2900)\) can reach up to \(1.83\times 10^{-3}\) nb. This value is significantly smaller than that of \(0.122\) nb at the same beam energy. And the larger cross-section was derived from considering the \(\bar{T}^{a}_{c\bar{c}0}(2900)\) as a compact tetraquark state. The significant difference between the results in these two pictures arises from considering \(T^{a}_{c\bar{c}0}(2900)\) in two different ways. If it's seen as a compact tetraquark state, the calculated partial decay width of \(T^{a}_{c\bar{c}0}(2900)^{++}\to\pi^{+}D^{+}_{s}\) is \(48.5\pm 30.0\) MeV [27](we use \(78.5\) MeV). This contrasts sharply with the range of \(0.132\)-\(1.167\) MeV (we use \(1.167\) MeV) obtained by treating \(T^{a}_{c\bar{c}0}(2900)\) as a \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule with a mass of \(m\)=\(2885\) MeV. Importantly, the partial decay width of \(T^{a}_{c\bar{c}0}(2900)^{++}\to\pi^{+}D^{+}_{s}\) from the \(D^{*}_{s}\rho\) channel contribution is prohibited, and the necessary amplitudes required for the \(T^{a}_{c\bar{c}0}(2900)^{++}\to\pi^{+}D^{+}_{s}\) reaction in our work are available in Ref. [20]. Note that the cross-section for the \(K^{-}n\to\pi^{-}D^{-}_{s}\Lambda^{+}_{c}\) reaction are computed from the following differential cross-section [57]
\[\frac{d\sigma_{K^{-}n\to\pi^{-}D^{-}_{s}\Lambda^{+}_{c}}}{d\mathcal{M}_{\pi^{-}D^{-}_{s}}}\approx\frac{2m_{\bar{T}^{a}_{c\bar{s}0}}\mathcal{M}_{\pi^{-}D^{-}_{s}}}{\pi}\times\frac{\sigma_{K^{-}n\to\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}}\;\Gamma_{\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\to\pi^{-}D^{-}_{s}}}{(\mathcal{M}^{2}_{\pi^{-}D^{-}_{s}}-m^{2}_{\bar{T}^{a}_{c\bar{s}0}})^{2}+m^{2}_{\bar{T}^{a}_{c\bar{s}0}}\Gamma^{2}_{\bar{T}^{a}_{c\bar{s}0}}}, \tag{15}\]
where \(m_{\bar{T}^{a}_{c\bar{s}0}}\) and \(\Gamma_{\bar{T}^{a}_{c\bar{s}0}}\) are the mass and width of the \(\bar{T}^{a}_{c\bar{s}0}\), respectively, and \(\mathcal{M}_{\pi^{-}D^{-}_{s}}\) runs over the range from \((m_{\pi^{-}}+m_{D^{-}_{s}})\) to \((\sqrt{s}-m_{\Lambda^{+}_{c}})\).
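For orientation, a short numerical sketch of how Eq. (15) is integrated over the \(\pi^{-}D^{-}_{s}\) invariant mass is given below. The production cross-section, center-of-mass energy, and partial width are illustrative placeholders (the partial width corresponds to the compact-tetraquark value used in the text), not a reproduction of the full energy-dependent calculation.

```python
import numpy as np
from scipy.integrate import quad

# Resonance parameters (GeV) and illustrative placeholder inputs.
m_T, Gamma_T = 2.908, 0.136        # shared T_cs0(2900) mass and total width
sigma_prod = 0.21                  # sigma(K- n -> Tbar-- Lambda_c+) in nb (placeholder)
Gamma_piDs = 0.0785                # Gamma(T -> pi Ds) used for the tetraquark picture
m_pi, m_Ds, m_Lc = 0.1396, 1.9683, 2.2865   # pi-, Ds-, Lambda_c+ masses (GeV)
sqrt_s = 7.0                                 # illustrative c.m. energy (GeV)

def dsigma_dM(M):
    """Integrand of Eq. (15): Breit-Wigner spread of the two-body cross-section."""
    bw = (M**2 - m_T**2) ** 2 + (m_T * Gamma_T) ** 2
    return 2.0 * m_T * M / np.pi * sigma_prod * Gamma_piDs / bw

sigma_3body, _ = quad(dsigma_dM, m_pi + m_Ds, sqrt_s - m_Lc, points=[m_T])
# In the narrow-width limit this reduces approximately to sigma_prod * BR(T -> pi Ds).
print(sigma_3body, sigma_prod * Gamma_piDs / Gamma_T)
```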
Considering isospin symmetry, the \(K^{+}D^{0}\) decay width of the unreported \(T^{a}_{c\bar{s}0}(2900)^{+}\) state is expected to be half of the partial decay width of \(T^{++}_{c\bar{s}0}\to K^{+}D^{+}\) (as seen in Eq. 5). If we regard the \(T^{a}_{c\bar{s}0}(2900)^{+}\) as a \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule, the predicted partial decay widths for the bound state with masses \(m=2885\) MeV and \(m=2887\) MeV are \(\Gamma(T^{+}_{c\bar{s}0}\to K^{+}D^{0})=24.36-27.30\) MeV and \(\Gamma(T^{+}_{c\bar{s}0}\to K^{+}D^{0})=2.69-4.13\) MeV, respectively. Furthermore, the decay width \(\Gamma(T^{+}_{c\bar{s}0}\to K^{+}D^{0})\) is estimated to be within the range of \(11.7\)-\(45.1\) MeV when the \(T^{a}_{c\bar{s}0}\) is treated as a compact tetraquark state. These predictions open up an opportunity to search for the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\) [the antiparticle of the \(T^{a}_{c\bar{s}0}(2900)^{+}\)] in the \(K^{-}p\to\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) reaction. The cross-section for the \(K^{-}p\to\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) reaction, with a cutoff \(\alpha=1.7\) and beam momentum \(P_{K^{-}}\) ranging from near threshold to \(27.5\) GeV, is presented in Fig. 7. The findings reveal that the maximum value of the cross-section for \(\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\) production in this reaction is approximately \(0.114\) nb, which is larger than the maximum value of \(0.075\) nb obtained by assigning the \(\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\) to be a compact tetraquark state.
It is worth noting that the contribution to the \(K^{-}p\to\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) reaction is mediated by the exchange of a \(D\) meson in the \(t\)-channel, which is identical to the production process of its isospin partner \(\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\). The only difference
is the influence of the \(\bar{K}N\) ISI on their production cross-sections. To show the effect of the \(\bar{K}N\) ISI, we compare the cross-sections obtained with and without ISI for the cutoff \(\alpha=1.7\) and \(g_{T_{c\bar{s}0}}=2.836\) in Fig. 8, for the \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) [Fig. 8(a)] and \(K^{-}p\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) [Fig. 8(b)] reactions, respectively. In Fig. 8, the black solid lines are the pure Born amplitude contribution, while the dashed red lines are the full results. We find that the presence of the \(\bar{K}N\) ISI leads to a reduction in the cross-sections of the \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) and \(K^{-}p\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) reactions by approximately 20%. This suggests that the \(\bar{K}N\) ISI has a significant impact on the search for the \(\bar{T}^{a}_{c\bar{s}0}(2900)\) in the \(K^{-}N\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--/-}\Lambda^{+}_{c}\) reactions. Similar conclusions regarding the reduction in cross-section could be drawn if one were to consider the \(\bar{T}^{a}_{c\bar{s}0}(2900)\) as a \(K^{*}D^{*}-D^{*}_{s}\rho\) molecule, and we do not discuss it in detail here.
## IV Summary
Theoretical investigations of the production processes will be helpful to distinguish which inner structure of the \(T^{a}_{c\bar{s}0}(2900)\) is realized. This is because the different production mechanisms of the \(T^{a}_{c\bar{s}0}(2900)\) rely on its structure assignments. In this work, we examine the production of the recently observed \(\bar{T}^{a}_{c\bar{s}0}(2900)\) in the \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) and \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\rightarrow\pi^{-}D^{-}_{s}\Lambda^{+}_{c}\) reactions by considering the \(T^{a}_{c\bar{s}0}\) as a molecular state and as a compact multiquark state, respectively. The \(\bar{T}^{a}_{c\bar{s}0}(2900)\) can be produced through the exchange of a \(D\) meson in the \(t\)-channel.
Using the coupling constants of the \(\bar{T}^{a}_{c\bar{s}0}(2900)\) to the \(KD\) channel obtained from the molecule or compact tetraquark picture of the \(\bar{T}^{a}_{c\bar{s}0}(2900)\), we compute the cross-sections for the \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) and \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\rightarrow\pi^{-}D^{-}_{s}\Lambda^{+}_{c}\) reactions, respectively. The numerical results reveal that whether the \(\bar{T}^{a}_{c\bar{s}0}(2900)\) is categorized as a molecule or a compact tetraquark state, the cross-sections for the \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\) reaction exhibit similar magnitudes, spanning roughly from 0.06 nb to 0.3 nb. Nevertheless, a more distinct comparison can be drawn by computing the cross-section of the \(K^{-}n\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{--}\Lambda^{+}_{c}\rightarrow\pi^{-}D^{-}_{s}\Lambda^{+}_{c}\) process. The results indicate that, assuming the molecule assignment for the \(\bar{T}^{a}_{c\bar{s}0}(2900)\), the cross-section for this process could peak at \(1.83\times 10^{-3}\) nb, notably smaller than the 0.122 nb obtained by assuming the \(\bar{T}^{a}_{c\bar{s}0}(2900)\) to be a compact tetraquark state. These findings are poised for future experimental measurement and can serve as tests to discern the nature of the \(\bar{T}^{a}_{c\bar{s}0}(2900)\). Lastly, we suggest searching for the unreported charged tetraquark \(T^{a}_{c\bar{s}0}(2900)^{+}\) in the \(K^{-}p\rightarrow\bar{T}^{a}_{c\bar{s}0}(2900)^{-}\Lambda^{+}_{c}\) reaction, since its production cross-section can reach up to 0.114 nb.
A rough estimation finds that the production cross-section of the \(\bar{T}^{a}_{c0}(2900)\) through high-energy proton-proton collisions is approximately 17 fb (see Fig.3 in Ref. [18]). This value is about \(10^{4}\) times smaller than our calculated results. This difference is why we propose to search for the \(\bar{T}^{a}_{c0}(2900)\) in the reactions \(K^{-}p\rightarrow\bar{T}^{a}_{c0}(2900)^{-}\Lambda^{+}_{c}\) and \(K^{-}n\rightarrow\bar{T}^{a}_{c0}(2900)^{-}\Lambda^{+}_{c}\rightarrow\pi^{-}D ^{-}_{s}\Lambda^{+}_{c}\). Furthermore, kaon beams with momenta ranging from 50 to 280 GeV/c and an RMS below 5% are available from the M2 beamline at AMBER[35]. AMBER is a new fixed-target experiment at CERN that began its data-taking in 2023, providing an excellent platform for the search for \(\bar{T}^{a}_{c0}(2900)\).
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China under Grant No.12005177.
## Appendix: Partial decay widths \(T^{a}_{c0}(2900)^{++}\to K^{+}D^{*}\)
In this Appendix, we show how to compute the partial decay width of \(T^{a}_{c0}(2900)^{++}\to K^{+}D^{+}\) reaction. The corresponding Feynman diagrams are shown in Fig. (2). To compute the diagrams, we require the effective Lagrangian densities for the relevant interaction vertices. Since \(T^{a}_{c0}(2900)^{++}\) resonance can be identified as an \(S\)-wave \(D^{*}K^{*}-\rho D^{*}_{s}\) molecule [22], the Lagrangian densities for \(T^{a}_{c0}(2900)^{++}D^{*}K^{*}\) and \(T^{a}_{c0}(2900)^{++}\rho D^{*}_{s}\) vertices can be written down as [40]
\[\mathcal{L}_{T^{a}_{c0}K^{*}D^{*}}=g_{T^{a}_{c0}K^{*}D^{*}}D^{* }\mu^{\pm}\cdot\bar{T}^{a}_{c0}K^{*}_{\mu}, \tag{16}\] \[\mathcal{L}_{T^{a}_{c0}D^{*}_{s}}=g_{T^{a}_{c0}D^{*}_{s}}^{T^{a} _{s}}\mu^{\pm}\rho_{\mu}\cdot\bar{T}^{a}_{c0}, \tag{17}\]
where the coupling constants \(g_{T^{a}_{c0}K^{*}D^{*}}=2.198-5.531\) GeV and \(g_{T^{a}_{c0}D^{*}_{s}}=2.082-5.379\) GeV, which correspond to the physical sheet [22].
Considering the heavy quark limit and chiral symmetry, the relevant phenomenological Lagrangians for \(\mathcal{D}^{*}\mathcal{D}\mathcal{P}\) and
\(\mathcal{D}^{*}\mathcal{D}\mathcal{V}\) vertices are [20; 50; 51]
\[\mathcal{L}_{\mathcal{D}^{*}\mathcal{D}\mathcal{V}} =ig\langle\mathcal{D}^{*}_{i}\mu^{\mu}\mathcal{D}^{\dagger}- \mathcal{D}u^{\mu}\mathcal{D}^{\dagger}_{\mu}\rangle, \tag{18}\] \[\mathcal{L}_{\mathcal{D}^{*}\mathcal{D}\mathcal{V}} =-2f_{D^{*}\mathcal{D}\mathcal{V}}\epsilon_{m\alpha\beta}(\partial^ {\mu}\mathcal{V}^{\nu})^{\dagger}_{j}\] \[\times(\mathcal{D}_{i}\overset{\leftrightarrow}{\partial}_{\alpha }\mathcal{D}^{\beta\beta^{\dagger}}-\mathcal{D}_{i}^{\beta\beta}\overset{ \leftrightarrow}{\partial}_{\alpha}\mathcal{D}^{\dagger}), \tag{19}\]
where the \(\langle...\rangle\) denotes trace in the SU(3) flavor space and \(\epsilon^{0123}=1\). \(\mathcal{P}\) and \(\mathcal{V}^{\mu}\) are the SU(3) pseudoscalar meson and vector meson matrices, respectively,
\[\mathcal{P} =\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}\pi^{0}+\frac{1}{ \sqrt{6}}\eta&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{1}{\sqrt{2}}\pi^{0}+\frac{1}{\sqrt{6}}\eta&K^{0}\\ K^{-}&\bar{K}^{0}&-\frac{2}{\sqrt{6}}\eta\end{array}\right), \tag{20}\] \[\mathcal{V}_{\mu} =\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}(\rho^{0}+\omega)& \rho^{+}&K^{*+}\\ \rho^{-}&\frac{1}{\sqrt{2}}(-\rho^{0}+\omega)&K^{*0}\\ K^{*-}&\bar{K}^{*0}&\phi\end{array}\right)_{\mu}, \tag{21}\]
and the \(\mathcal{D}^{(*)}=(D^{(*0)},D^{(*)+},D_{s}^{(*)})\). \(u^{\mu}\) is the axial vector combination of the pseudoscalar-meson fields and at the lowest order \(u^{2}=-\sqrt{2}\partial^{\mu}\mathcal{P}/f_{0}\) with \(f_{0}=92.4\) MeV. The coupling constants \(f_{D^{*}\mathcal{D}\mathcal{V}}=\lambda m_{\rho}/(\sqrt{2}f_{\pi})\) with \(\lambda=0.56\) GeV\({}^{-1}\), \(f_{\pi}\)=132 MeV [50], and the \(m_{\rho}\) is the mass of the \(\rho\) meson. The coupling constant \(g=1.097\) is determined from the strong decay width \(\Gamma(D^{*+}\to D^{0}\pi^{+})=56.46\pm 1.22\) keV, together with the branching ratio \(BR(D^{*+}\to D^{0}\pi^{+})=(67.7\pm 0.5)\%\).
For the \(\rho DD\) and \(KD^{*}D_{s}^{*}\) vertices, the following effective Lagrangian are needed [52; 53; 54]
\[\mathcal{L}_{DDp} =ig\langle DD_{p}(D^{\dagger}\vec{\tau}\cdot\vec{\rho}^{\dagger} \partial_{\mu}\bar{D}-\partial_{\mu}D^{\dagger}\vec{\tau}\cdot\vec{\rho}^{ \dagger}\bar{D}), \tag{22}\] \[\mathcal{L}_{KD^{*}D^{*}} =-g_{KD^{*}D^{*}}\epsilon^{m\alpha\beta}(\partial_{\mu}\bar{D}^{ *+}\gamma_{\nu}\partial_{\alpha}D_{s\beta}^{*}\bar{K}+\partial_{\mu}D_{\nu}^{ \dagger}\partial_{\alpha}\bar{D}^{*}\cdot_{s\beta}K), \tag{23}\]
where the coupling constant \(g_{DDp}=2.52\)[53] was derived from the D-meson electric form factor in the standard framework of the vector meson dominance model. \(g_{KD^{*}D^{*}}=7.0\) GeV\({}^{-1}\) was computed from the SU(4) relations [54]. The charm and \(K\) mesons isodoublets were defined as
\[\bar{D}^{(*)\dagger} =\left(\begin{array}{cc}\bar{D}^{(*0)}&D^{(*)-}\end{array} \right),\;\;\;D^{(*)}=\left(\begin{array}{c}D^{(*)0}\\ D^{(*)+}\end{array}\right), \tag{24}\] \[\bar{K}^{(*)\dagger} =\left(\begin{array}{cc}K^{(*)-}&\bar{K}^{(*)0}\end{array} \right),\;\;\;K^{(*)}=\left(\begin{array}{c}K^{(*)+}\\ K^{(*)0}\end{array}\right), \tag{25}\]
In addition to the vertices described above, we also need the following effective Lagrangians [20]
\[\mathcal{L}_{\mathcal{P}\mathcal{P}\mathcal{V}} =-ig_{h}([\mathcal{P},\partial^{\mu}\mathcal{P}]\mathcal{V}_{\mu}), \tag{26}\] \[\mathcal{L}_{K^{*}\mathcal{V}^{\prime}} =-g_{K^{*}K^{*}\mathcal{V}^{\prime}}\epsilon^{m\alpha\beta}\partial _{\alpha}\bar{K}_{\beta}^{*}\partial_{\mu}\mathcal{V}_{\nu}^{\prime}K+H.c., \tag{27}\]
where the \(\mathcal{V}\) is meson matrices,
\[\mathcal{V}_{\mu}^{\prime}=\left(\begin{array}{cc}\frac{1}{\sqrt{2}}(\rho^{0 }+\omega)&\rho^{+}\\ \rho^{-}&\frac{1}{\sqrt{2}}(-\rho^{0}+\omega)\end{array}\right)_{\mu}. \tag{28}\]
The coupling constants \(g_{K^{*}K^{*}\mathcal{V}^{\prime}}=3g_{h}^{2}/(64\pi^{2}f_{\pi})\) with the \(g_{h}\) is determined via measured width of \(K^{*}\to\pi K\). With the help of Eq. (26), the two-body decay width \(K^{*}\to K\pi\) is related to \(g_{h}\) as
\[\Gamma(K^{*+}\to K^{0}\pi^{+})=\frac{g^{2}}{24\pi m_{K^{*}}^{2}} \mathcal{P}_{\pi K^{*}}^{3}=\frac{2}{3}\Gamma_{K^{*+}}, \tag{29}\]
where \(\mathcal{P}_{\pi K^{*}}\) is the three-momentum of the \(\pi\) in the rest frame of the \(K^{*}\). Using the experimental strong decay width (\(\Gamma_{K^{*}}=50.3\pm 0.8\) MeV) and the masses of the particles shown in Ref. [41] we obtain \(g_{h}=9.11\).
Putting all the pieces together, we obtain the following strong decay amplitudes,
\[\mathcal{M}_{\alpha}^{\mu^{0}} =-i\frac{gg_{h}g_{\pi^{*}_{\alpha\beta}}}{f_{0}}\int\frac{d^{4}q}{ (2\pi)^{4}}q_{\mu}\frac{-g^{\mu\nu}+q_{1}^{\mu}q_{1}^{\nu}/m_{D^{*}}^{2}}{q_{1} ^{2}-m_{D^{*}}^{2}}\] \[\times\frac{-g^{\nu\sigma}+q_{2}^{\sigma}q_{2}^{\sigma}/m_{K^{*}}^ {2}}{q_{2}^{2}-m_{K^{*}}^{2}}(q_{\sigma}+p_{2\sigma})\frac{1}{q^{2}-m_{\pi^{0}}^ {2}},\] (30) \[\mathcal{M}_{\alpha}^{\mu} =i\frac{gg_{h}g_{\pi^{*}_{\alpha\beta}}}{3f_{0}}\int\frac{d^{4}q}{ (2\pi)^{4}}q_{\mu}\frac{-g^{\mu\nu}+q_{1}^{\mu}q_{1}^{\nu}/m_{D^{*}}^{2}}{q_{1} ^{2}-m_{D^{*}}^{2}}\] \[\times\frac{-g^{\nu\sigma}+q_{2}^{\sigma}q_{2}^{\sigma}/m_{K^{*}}^ {2}}{q_{2}^{2}-m_{K^{*}}^{2}}(q_{\sigma}+p_{2\sigma})\frac{1}{q^{2}-m_{\eta}^ {2}},\] (31) \[\mathcal{M}_{b}^{\mu^{0}} =-i\sqrt{2}f_{D^{*}D_{\rho}}g_{\pi^{*}_{\alpha\beta}}g_{K^{*}K^{*} \rho}\int\frac{d^{4}q}{(2\pi)^{4}}\epsilon_{\mu\nu\alpha\beta}q^{\mu}\] \[\times(p_{1}^{\alpha}+q_{1}^{\alpha})\frac{-g^{\beta\sigma}+q_{1}^ {\beta}q_{1}^{\tau}/m_{D^{*}}^{2}}{q_{1}^{2}-m_{D^{*}}^{2}}\frac{-g^{\sigma \prime\eta}+q_{2}^{\sigma}q_{2}^{\eta}/m_{K^{*}}^{2}}{q_{2}^{2}-m_{K^{*}}^{2}}\] \[\times\epsilon_{\tau,\lambda\sigma}q_{2}^{\varepsilon}q^{\tau}\frac{ -g^{\nu\tau}+q^{\lambda}q^{\tau}/m_{\rho^{0}}^{2}}{q^{2}-m_{\rho^{0}}^{2}},\] (32) \[\mathcal{M}_{b}^{\omega} =i\sqrt{2}f_{D^{*}D_{\rho}}g_{\pi^{*}_{\alpha\beta}}^{\tau_{ \alpha\beta}
\[\times\frac{-g^{\mu\nu}+q_{1}^{\mu}q_{1}^{\nu}/m_{D^{*}_{\nu^{*}}}^{2} }{q_{1}^{2}-m_{D^{*}_{\nu^{*}}}^{2}}\frac{-g^{\nu\eta}+q_{2}^{\nu}q_{2}^{\eta}/m _{\rho^{*}}^{2}}{q_{2}^{2}-m_{\rho^{*}}^{2}}\] \[\times(p_{2\eta}+q_{\eta})\frac{1}{q^{2}-m_{K^{0}}^{2}}, \tag{36}\] \[\mathcal{M}_{d}^{\mathcal{C}^{\mu^{0}}} =i2g_{K^{*}D^{*}D}g_{K^{*}\kappa}\rho_{S^{*}_{T^{\mu}\rho}D^{*}_{ \gamma}}\int\frac{d^{4}q}{(2\pi)^{4}}\epsilon_{\mu\nu\alpha\beta}q^{\rho}(q_{1} ^{\alpha}+p_{1}^{\alpha})\] \[\times\frac{-g^{\beta\sigma}+q_{1}^{\beta}q_{1}^{\sigma}/m_{D^{*} _{\nu^{*}}}^{2}}{q_{1}^{2}-m_{D^{*}_{\nu^{*}}}^{2}}\frac{-g^{\sigma\eta}+q_{2 }^{\sigma}q_{2}^{\eta}/m_{\rho^{*}}^{2}}{q_{2}^{2}-m_{\rho^{*}}^{2}}\] \[\times\epsilon_{\nu\gamma\lambda\epsilon}q_{2}^{\sigma}q^{\lambda }\frac{-g^{\kappa\nu}+q^{\gamma}q^{\kappa}/m_{K^{0}}^{2}}{q^{2}-m_{K^{0}}^{2}}, \tag{37}\]
where \(m_{D^{*}_{\nu}}\), \(m_{D}\), \(M_{D^{*}}\), \(m_{K^{*}}\), and \(m_{K}\) are the masses of the \(D^{*}_{s}\), \(D\), \(D^{*}\), \(K^{*}\), and \(K\) mesons, respectively. It is evident that these amplitudes suffer from ultraviolet (UV) divergence. Nevertheless, even the loops that are UV finite receive contributions from short distances when integrated over the entire momentum space. To address this, we will utilize a UV regulator, as described in [55], which effectively suppresses the short-distance contributions, thereby rendering the amplitudes UV finite. The UV regulator takes the form
\[\tilde{\Phi}(p_{E}^{2}/\Lambda^{2})\equiv\exp{(-p_{E}^{2}/\Lambda^{2})}, \tag{38}\]
where \(P_{E}\) is the Euclidean Jacobi momentum, defined as \(P_{E}=(m_{i}p_{j}-m_{j}p_{i})/(m_{i}+m_{j})\) for the \((ij)\) molecule.
Furthermore, we adopt the dipole form factor \(\mathcal{F}(q^{2})=(\Lambda^{2}-m^{2})/(\Lambda^{2}-q^{2})\) to account for the off-shell effect of the exchanged mesons. In this expression, \(m\) and \(q\) represent the mass and four-momentum of the exchanged mesons, respectively. The parameter \(\Lambda\) is typically parameterized as \(\Lambda=m+\alpha\Lambda_{QCD}\), where \(\Lambda_{QCD}=220\) MeV. The value of \(\alpha\) is chosen to be approximately 1.0 to ensure that \(\Lambda\) closely aligns with the mass of the exchanged mesons. In this study, we consider a range of \(\alpha\) values within \(0.91\leq\alpha\leq 1.0\), a range derived from experimental data [56]. Then we have
\[\mathcal{M}_{total}=\sum_{i=a,b,c,d}\mathcal{M}_{i}\tilde{\Phi}(p_{E}^{2}/ \Lambda^{2})\mathcal{F}^{2}(q^{2}). \tag{39}\]
Once the amplitudes are determined, the corresponding partial decay widths can be obtained, which read,
\[\Gamma(T_{c\bar{c}0}^{\alpha}(2900)^{++}\to K^{+}D^{+})=\frac{1}{8\pi}\frac{| \vec{p}_{K^{*}}|}{m_{T^{\alpha}_{c\bar{c}0}(2900)^{++}}^{2}}\overline{| \mathcal{M}|^{2}}, \tag{40}\]
where the \(\vec{p}_{K^{*}}\) is the three-momenta of the decay products in the center of mass frame, the overline indicates the sum over the polarization vectors of the final hadrons.
|
2310.01069
|
Fully Abstract Normal Form Bisimulation for Call-by-Value PCF
|
We present the first fully abstract normal form bisimulation for
call-by-value PCF (PCF$_{\textsf{v}}$). Our model is based on a labelled
transition system (LTS) that combines elements from applicative bisimulation,
environmental bisimulation and game semantics. In order to obtain completeness
while avoiding the use of semantic quotiening, the LTS constructs traces
corresponding to interactions with possible functional contexts. The model
gives rise to a sound and complete technique for checking of PCF$_{\textsf{v}}$
program equivalence, which we implement in a bounded bisimulation checking
tool. We test our tool on known equivalences from the literature and new
examples.
|
Vasileios Koutavas, Yu-Yang Lin, Nikos Tzevelekos
|
2023-10-02T10:25:31Z
|
http://arxiv.org/abs/2310.01069v1
|
# Fully Abstract Normal Form Bisimulation for Call-by-Value PCF
Vasileios Koutavas
Trinity College Dublin
Dublin, Ireland
Yu-Yang Lin
Trinity College Dublin
Dublin, Ireland
Nikos Tzevelekos
Queen Mary University of London
London, UK
###### Abstract
We present the first fully abstract normal form bisimulation for call-by-value PCF (PCF\({}_{\mathsf{v}}\)). Our model is based on a labelled transition system (LTS) that combines elements from applicative bisimulation, environmental bisimulation and game semantics. In order to obtain completeness while avoiding the use of semantic quotiening, the LTS constructs traces corresponding to interactions with possible functional contexts. The model gives rise to a sound and complete technique for checking of PCF\({}_{\mathsf{v}}\) program equivalence, which we implement in a bounded bisimulation checking tool. We test our tool on known equivalences from the literature and new examples.
This publication has emanated from research supported in part by a grant from Science Foundation Ireland under Grant number 13/RC/2094_2. This version of the paper has undergone minor corrections since its original publication in LICS 2023.
## I Introduction
The full abstraction problem for PCF, i.e. constructing a denotational model that captures contextual equivalence in the paradigmatic functional language PCF, was put forward by Plotkin in the mid 1970's [33]. The first fully abstract denotational models for PCF were presented in the early 1990's and gave rise to the theory of _game semantics_[2, 12, 29], while fully abstract models for its call-by-value variant were given in [10, 3]. Fully abstract operational models of PCF have been given in terms of _applicative bisimulations_[1, 9, 11] and _logical relations_[30], and for other pure languages in terms of _environmental bisimulations_[39, 36] and _logical relations_[31, 4]. On the other hand, Loader demonstrated that contextual equivalence for finitary PCF is undecidable [26].
A limitation of the game semantics models for PCF is their intensional nature. While the denotations of inequivalent program terms are always distinct, there are equivalent terms whose denotations are also distinct and become equivalent only after a semantic quotienting operation. Quotienting requires universal quantification over tests, which amounts to quantification over all (innocent) contexts. This hinders the use of game models for pure functional languages to prove equivalence of terms, as any reasoning technique needs to involve all contexts of term denotations in the semantic model (i.e. all possible _Opponent strategies_). In more recent work, Churchill et al. [7] were able to give a direct characterisation of program equivalence in terms of so-called _sets of O-views_, built out of term denotations. The latter work is to our knowledge the only direct (i.e. quotient-free) semantic characterisation of PCF contextual equivalence, though it is arguably more of theoretical value and does not readily yield a proof method.
Operational models also involve quantification over all identical (applicative bisimulation) or related (logical relations, environmental bisimulations) closed arguments of type \(A\), when describing the equivalence class of type \(A\to B\). Although successful proof techniques of equivalence have been developed based on these models, universal quantification over opponent-generated terms must be handled in proofs with rather manual inductive or coinductive arguments.
Normal-Form (NF) bisimulation, also known as open bisimulation, was originally defined for characterising Levy-Longo tree equivalence for the lazy lambda calculus [35] and adapted to languages with call-by-name [20], call-by-value [21], non-determinism [22], aspects [14], recursive types [23], polymorphism [25], control with state [37], state-only [6], and control-only [5]. It has also been used to create equivalence verification techniques for a lambda calculus with state [16].
The main advantage of NF bisimulation is that it does away with quantification over opponent-generated terms, replacing them with fresh open names. This has also been shown [23, 25, 16] to relate to operational game semantics models where opponent-generated terms are also represented by names [19, 8, 13]. The main disadvantage of NF bisimulation is that -- with the notable exception of languages with control and state [37], and state-only [6, 16] -- it is too discriminating thus failing to be fully abstract with respect to contextual equivalence. This is particularly true for pure languages such as PCF, and its call-by-value variant PCF\({}_{\mathsf{v}}\) which is the target of this paper.
However, the discriminating power of NF bisimulation depends on the labelled transition system (LTS) upon which it is defined. Existing work defines NF bisimulation over LTSs that treat call and return moves between term and context in a fairly standard way: these are immediately observable by the bisimulation as they appear in transition annotations, and context moves correspond to imperative, not purely functional, contexts. As we show in Section II, this is overly discriminating for a language such as PCF\({}_{\mathsf{v}}\). Moreover, existing NF bisimulation techniques either do not make extensive use of the context's knowledge in the LTS configurations (e.g.
[23]), or consider an ever-increasing context knowledge (e.g. [6]) which is only fully abstract for imperative contexts.
In this paper we present the first fully abstract NF bisimulation for \(\mathrm{PCF_{v}}\), defined in Section III. To achieve this we develop in Section IV a novel Labelled Transition System (LTS) which:
* is based on an operational presentation of game models (cf. [19]) and uses Proponent and Opponent configurations (and _call/return moves_) for evaluation steps that depend on the modelled term and its environment, respectively;
* uses an explicit stack principle to guarantee well-bracketing and stipulates that the _opponent view_ of the LTS trace be restricted to moves related to the last open proponent call (cf. _well-bracketing_ and _visibility_[12]);
* restricts opponent moves so that they correspond to those of a pure functional context, by explicitly keeping track of previous opponent responses to proponent moves and their corresponding (opponent) view (cf. _innocence_[12]);
* postpones observations of proponent call/return moves until computations are complete to avoid unnecessary distinctions between equivalent terms.
We then define a notion of NF bisimulation over this LTS which combines standard move/label synchronisation with coherence of corresponding opponent behaviours. We show that the latter is fully abstract with respect to contextual equivalence (Section V). Due to its operational nature and the absence of quantification over opponent-generated terms, the model lends itself to a bounded model-checking technique for equivalence which we implement in a prototype tool (Section VI). We conclude in Section VII.
## II Motivating Examples
We start with a simple example of equivalence that showcases unobservable behaviour differences in \(\mathrm{PCF_{v}}\).
**Example 1**.: Consider the following equivalent terms of type \((\mathrm{unit}\rightarrow\mathrm{int})\rightarrow(\mathrm{unit}\rightarrow \mathrm{int})\rightarrow\mathrm{int}\).
```
M_1 = fun f -> fun g ->
        if f () = g () then
          if f () = g () then 0 else 1
        else 2

N_1 = fun f -> fun g ->
        if g () = f () then 0 else 2
```
These two terms are contextually equivalent because the context cannot observe whether f and g have been called more than once with the same argument. Two calls of a pure and deterministic function with the same argument both diverge or return the same value. Moreover, the context cannot observe the order of calls to the context-generated functions f and g. As we will see in Section IV, our LTS restricts the behaviour of context-generated functions such as f and g so that they behave in a pure deterministic manner, and does not make distinctions based on their call order.
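As a sanity check, the two terms can be transcribed directly into OCaml; the sketch below is ours (the names m1, n1 and the test harness are illustrative and not part of the formal development). Any pure, deterministic pair of arguments yields the same result for both terms.

```
(* Example 1 transcribed into OCaml; m1, n1 and the test harness are illustrative. *)
let m1 f g =
  if f () = g () then (if f () = g () then 0 else 1) else 2

let n1 f g =
  if g () = f () then 0 else 2

let () =
  (* pure, deterministic contexts cannot observe repeated calls or call order *)
  let f () = 41 and g () = 41 in
  assert (m1 f g = n1 f g);   (* both evaluate to 0 *)
  let f () = 1 and g () = 2 in
  assert (m1 f g = n1 f g)    (* both evaluate to 2 *)
```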
We now discuss key _observable_ behaviour differences in \(\mathrm{PCF_{v}}\) through the lens of bisimulation theories. As explained in [15], the main feature in environmental bisimulation definitions [38, 39, 18, 17, 36] is _knowledge accumulation_: environmental bisimulation collects the term-generated functions in an environment representing the knowledge of the context. This knowledge is used in the following bisimulation tests to distinguish terms:
1. to call a function from the environment multiple times in a row with the same argument;
2. to call a function from the environment multiple times in a row with different arguments;
3. to call environment functions after other environment functions have returned; and
4. to use environment functions in the construction of context-generated functions.
The above is easily understood to be necessary in stateful languages and was shown to be needed in pure languages with existential types [15]. However, as applicative bisimulation has shown, it is unnecessary to accumulate the context's knowledge in order to create a theory of \(\mathrm{PCF_{v}}\): applicative bisimulation interrogates related functions in isolation from other knowledge by simply applying them to identical arguments.
As discussed in the first example of this section, purity and determinism indeed make (1) unnecessary in \(\mathrm{PCF_{v}}\). However, (2-4) are _necessary_ tests that a normal form bisimulation theory for \(\mathrm{PCF_{v}}\) must perform. This is because a normal form bisimulation definition must prescribe the necessary interaction between terms and context _under any evaluation context_ and not just at top-level computations. Applicative bisimulation on the other hand is only defined in terms of top-level function applications, where the context's knowledge is limited. Universal quantification over the code of context-generated function arguments implicitly encodes all the interactions that related terms may have with these arguments. We showcase the need for (2-4) in the following three example inequivalences.
**Example 2**.: Consider the inequivalent terms \(M_{2}\), \(N_{2}\) of type \((((\mathsf{bool}\rightarrow\mathsf{bool})*(\mathsf{bool}\rightarrow\mathsf{bool}))\rightarrow\mathsf{bool})\rightarrow\mathsf{bool}\rightarrow\mathsf{bool}\).
```
M_2 = fun f -> fun b -> let rec X d = f (X, fun _ -> d) in X b

N_2 = fun f -> fun b -> f ((fun _ -> _bot_), fun _ -> b)
```
Here _bot_ is a diverging term and _ represents an unused variable; X has type \(\mathrm{bool}\rightarrow\mathrm{bool}\).
Term \(M_{2}\) will receive a function f and a boolean b. It will then create a recursive term which calls f with a pair containing two \(\mathrm{bool}\rightarrow\mathrm{bool}\) functions. If f calls X, the first function in the pair, with a boolean d, computation will recur; if it calls the second function, it will receive the argument of the latest call to X. On the other hand, \(N_{2}\) calls f with a pair of functions where the first one diverges upon call, and the second one returns b, provided at the beginning of the interaction.
These terms can be distinguished by the following context:
```
let f = fun (X, fd) -> if fd false then X false else true
in [] f true
```
This context creates a function f that receives two functions X and fd, and conditionally calls X with false, if the call to fd returns true. When placed in the hole [] of this context, \(M_{2}\) will receive f and value true. Recursive function X will thus be first called with true, in the last line of \(M_{2}\), and then again with false by f, causing the termination of the computation. On the other hand, with \(N_{2}\) in the hole, the context will diverge.
This is effectively the only simple context that can distinguish \(M_{2}\) and \(N_{2}\), and thus a NF bisimulation theory of equivalence for PCFv must accumulate X in the opponent's knowledge at inner interaction levels to allow calling X after fd has returned. This shows the need for allowing (3) in a NF bisimulation. Indeed, if we omit this from the technique we develop in the following sections, \(M_{2}\) and \(N_{2}\) would be deemed equivalent.
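The interaction just described can also be replayed concretely in OCaml; the sketch below uses our own names, with bot standing in for the diverging term _bot_. The distinguishing context terminates with \(M_{2}\) in the hole and diverges with \(N_{2}\).

```
(* Example 2 transcribed into OCaml; all names are illustrative. *)
let rec bot () : bool = bot ()

let m2 f b = let rec x d = f (x, fun _ -> d) in x b
let n2 f (b : bool) = f ((fun _ -> bot ()), fun _ -> b)

(* the distinguishing context: call fd, then call X again if fd returned true *)
let f (x, fd) = if fd false then x false else true

let _ = m2 f true        (* terminates: in the nested call, fd returns false *)
(* let _ = n2 f true *)  (* would diverge: the first component loops when called *)
```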
The following variation of the above example shows that the context may need to call the same function twice, with different arguments, to make observations.
**Example 3**.: Consider the inequivalent terms \(M_{3}\), \(N_{3}\) of type \(((\mathsf{bool}\rightarrow(\mathsf{bool}\rightarrow\mathsf{bool})) \rightarrow\mathsf{bool})\rightarrow\mathsf{bool}\rightarrow\mathsf{bool}\).
```
M_3 = fun f -> fun b ->
        let rec X d = f (fun e -> if e then X else (fun _ -> d))
        in X b

N_3 = fun f -> fun b ->
        f (fun e -> if e then (fun d -> _bot_) else (fun _ -> b))
```
where X has type \(\mathsf{bool}\rightarrow\mathsf{bool}\). The distinguishing context is
```
let f = fun fXd ->
          let X = fXd true in
          let fd = fXd false in
          if fd false then X false else true
in [] f true
```
Here the interaction between the terms and the context is as in the previous example, with the difference that the context must apply fXd to true and then false to receive the two functions X and fd. The context terminates with \(M_{3}\) but diverges with \(N_{3}\) in its hole.
This is effectively the only simple context that can distinguish \(M_{3}\) and \(N_{3}\), and thus a NF bisimulation theory of equivalence for PCFv that describes all the term-context interactions must accumulate fXd in the context's knowledge in order to apply it twice in a row. This showcases the need for allowing (2) in a NF bisimulation.
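The exchange can be mirrored in OCaml as follows (an illustrative sketch with our own names); note that the same function fxd is interrogated twice, with true and then false.

```
(* Example 3 transcribed into OCaml; names are illustrative. *)
let rec bot () : bool = bot ()

let m3 f b = let rec x d = f (fun e -> if e then x else (fun _ -> d)) in x b
let n3 f (b : bool) = f (fun e -> if e then (fun _ -> bot ()) else (fun _ -> b))

(* the distinguishing context applies its argument twice before using the results *)
let f fxd =
  let x = fxd true in
  let fd = fxd false in
  if fd false then x false else true

let _ = m3 f true        (* terminates with true *)
(* let _ = n3 f true *)  (* would diverge *)
```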
Our final example shows that functions from the context's knowledge must be used within a context-generated function in order to distinguish two terms.
**Example 4**.: Consider the inequivalent terms \(M_{4}\), \(N_{4}\) of type \(T=((\mathsf{int}\rightarrow\mathsf{int})\rightarrow\mathsf{int}\rightarrow\mathsf{int})\rightarrow\mathsf{int}\rightarrow\mathsf{int}\).

```
M_4 = let rec X count = fun f -> fun i ->
        f (fun j -> if count > 0 then X (count-1) f j else _bot_) i
      in X k

N_4 = fun f -> fun i -> let rec Y j = f Y j in Y i
```
where X and Y have type int \(\to T\) and int \(\rightarrow\mathsf{int}\), respectively.
This is a family of examples in which the distinguishing interaction increases with \(k\); \(N_{4}\) enables f to call itself an arbitrary number of times, whereas \(M_{4}\) enables up to \(k\) recursive calls of f before it diverges. The distinguishing context below attempts to perform \(k+1\) recursive calls and then to return \(0\):
```
[] (fun Z -> fun i -> if i > 0 then Z (i-1) else 0) (k+1)
```
This context diverges with \(M_{4}\) but converges with \(N_{4}\) in its hole. To achieve this, the context uses the function received as argument Z inside the context-generated function fun i -> if i > 0 then Z (i-1) else 0, which is given back to the term. As this is effectively the only context that can distinguish \(M_{4}\) and \(N_{4}\), we need to allow our NF bisimulation for PCF\({}_{\mathsf{v}}\) to construct (symbolic) function values that can internally refer to functions in the context's knowledge at the time of construction; showing the need for allowing (4) in a NF bisimulation. If we omit this from our technique, it would deem \(M_{4}\) and \(N_{4}\) equivalent.
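The family can be instantiated in OCaml for a concrete bound, say \(k=2\); the sketch below uses our own names, and the context then attempts \(k+1=3\) nested calls.

```
(* Example 4 transcribed into OCaml with k = 2; names are illustrative. *)
let rec bot () : int = bot ()
let k = 2

let m4 f i =
  let rec x count f i =
    f (fun j -> if count > 0 then x (count - 1) f j else bot ()) i
  in x k f i

let n4 f i = let rec y j = f y j in y i

(* distinguishing context: k+1 nested recursive calls, then return 0 *)
let ctx t = t (fun z i -> if i > 0 then z (i - 1) else 0) (k + 1)

let _ = ctx n4          (* converges to 0 *)
(* let _ = ctx m4 *)    (* would diverge: m4 allows only k nested calls of f *)
```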
## III Language and Semantics
We work with the language PCF\({}_{\mathsf{v}}\), a simply-typed call-by-value lambda calculus with boolean and integer operations [10]. The syntax and reduction semantics are shown in Fig. 1. Expressions (Exp) include the standard lambda expressions with recursive functions (\(\mathsf{fix}f(x).e\)), together with standard base type constants (\(c\)) and operations (\(op(\vec{e})\)), as well as conditionals and tuple-deconstructing let expressions (\(\mathsf{let}(\vec{x})=e\) in \(e\)). We use standard macros, for example \(\bot_{T}\triangleq(\mathsf{fix}f_{\mathsf{unit}\to T}(x).f\,x)\,()\) and \(\lambda x_{T}.e\triangleq\mathsf{fix}f_{T\to T^{\prime}}(x).e\) (with \(f\) fresh for \(e\)).
The language PCFv is simply-typed with typing judgements of the form \(\Delta\vdash e:T\), where \(\Delta\) is a type environment (omitted when empty) and \(T\) a value type (Type). The rules of the typing system are standard and omitted here [10]. Values consist of boolean, integer, and unit constants, functions and arbitrary length tuples of values.
The reduction semantics is by small-step transitions between closed expressions, \(e\to e^{\prime}\), defined using single-hole evaluation contexts (ECxt) over a base relation \(\hookrightarrow\). Holes \([\cdot]_{T}\) are annotated with the type \(T\) of closed values they accept, which we omit when possible to lighten notation. Beta substitution of \(x\) with \(v\) in \(e\) is written as \(e[v/x]\). We write \(e\Downarrow\) to denote \(e\rightarrow^{*}v\) for some \(v\). We write \(\vec{\chi}\) to mean a finite sequence of syntax objects \(\chi_{1},\ldots\), and assume standard syntactic sugar from the lambda calculus. In our examples we assume an ML-like syntax and implementation of the type system, which is also the concrete syntax of our prototype tool (Section VI).
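For readers who prefer a concrete rendering, the expression syntax and a single reduction step can be sketched as an OCaml datatype. This is our own simplified encoding (booleans are represented by integer constants, and the argument of an application is assumed to already be a value); it is not the formal definition of Fig. 1.

```
(* A simplified PCF_v-style AST with substitution-based reduction of a redex. *)
type exp =
  | Const of int                         (* base constants; 0 doubles as false *)
  | Var   of string
  | Fix   of string * string * exp       (* fix f(x). e *)
  | App   of exp * exp
  | If    of exp * exp * exp
  | Op    of string * exp * exp          (* binary base-type operation *)

(* substitution of a closed value v for x (v closed, so there is no capture to avoid) *)
let rec subst x v e =
  match e with
  | Const _ -> e
  | Var y -> if y = x then v else e
  | Fix (f, y, body) -> if f = x || y = x then e else Fix (f, y, subst x v body)
  | App (e1, e2) -> App (subst x v e1, subst x v e2)
  | If (e1, e2, e3) -> If (subst x v e1, subst x v e2, subst x v e3)
  | Op (o, e1, e2) -> Op (o, subst x v e1, subst x v e2)

(* one base step: beta for recursive functions, conditionals, and "+" on constants *)
let step_redex = function
  | App ((Fix (f, x, body) as fn), v) -> Some (subst f fn (subst x v body))
  | If (Const 0, _, e3) -> Some e3
  | If (Const _, e2, _) -> Some e2
  | Op ("+", Const a, Const b) -> Some (Const (a + b))
  | _ -> None
```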
Contexts \(D\) contain multiple, non-uniquely indexed holes \([\cdot]_{i;T}\), where \(T\) is the type of value that can replace the hole (and each index \(i\) can have one related type). A context is called _canonical_ if its holes are indexed \(1,\ldots,n\), for some \(n\). Given a canonical context \(D\) and a sequence of typed expressions \(\Sigma\vdash\vec{e}\mathrel{:}\vec{T}\), notation \(D[\vec{e}]\) denotes the context \(D\) with each hole \([\cdot]_{i;T_{i}}\) replaced with \(e_{i}\). We omit hole types where possible and indices when all holes in \(D\) are annotated with the same \(i\). Standard contextual equivalence [28] follows.
**Definition 5** (Contextual Equivalence).: Expressions \(\vdash e_{1}:T\) and \(\vdash e_{2}:T\) are _contextually equivalent_, written as \(e_{1}\equiv e_{2}:T\), when for all contexts \(D\) such that \(\vdash D[e_{1}]:\mathsf{unit}\) and \(\vdash D[e_{2}]:\mathsf{unit}\) we have \(D[e_{1}]\Downarrow\) iff \(D[e_{2}]\Downarrow\).
Due to the language being purely functional, we can refine the contexts needed for contextual equivalence to _applicative_ ones.
**Definition 6**.: Applicative contexts are given by the syntax:
\[E_{a}::=\ [\cdot]_{T}\mid E_{a}\,v\mid\mathsf{if}\ E_{a}=c\,\mathsf{then}\,()\,\mathsf{else}\,\bot_{\mathsf{unit}}\mid\pi_{i}(E_{a})\]
where \(\pi_{i}(\chi)\) returns the \(i\)-th component of tuple \(\chi\).
Using the fact that applicative bisimulation is fully abstract [15, 11], we can show the following.
**Proposition 7** (Applicative contexts suffice).: \(e_{1}\equiv e_{2}:T\) _iff for all applicative contexts \(E_{a}\) such that \(\vdash E_{a}[e_{1}]_{T},E_{a}[e_{2}]_{T}:\mathsf{unit}\) we have \(E_{a}[e_{1}]\Downarrow\) iff \(E_{a}[e_{2}]\Downarrow\)._
## IV LTS with Symbolic Higher-Order Transitions
We now define a Labelled Transition System (LTS) which allows us to probe higher-order values with possible symbolic arguments. The LTS follows the operational game semantics approach, with several adjustments:
* the basis of the LTS is the operational game model of [19];
* the Opponent behaviours are constrained to _innocent_ ones (cf. [10]) by use of an _opponent memory_ component \(M\);
* the denotation of an expression is not just the transitions that the LTS produces for this expression but, instead, these transitions together with the corresponding opponent memory at top-level configurations.
Thus, the LTS comprises of _Proponent_ and _Opponent_ configurations with corresponding transitions, modelling the computations triggered by an expression and its context respectively. Opponent is construed as the syntactic context, which provides values for the functions that are open in the expression. Open functions are modelled with (opponent-generated) _abstract names_, which are accommodated by extending the syntax and typing rules with abstract function names \(\alpha\):
\[\mathsf{Val}\text{: }\quad u,v,w::=c\mid\text{fix}f_{T}(x).e\mid(\vec{v}) \mid\alpha_{T}^{i}\]
Abstract function names \(\alpha_{T}^{i}\) are annotated with the type \(T\) of function they represent, and with an index \(i\geq 0\) that is used for bookkeeping; these are omitted where not important. \(\mathsf{an}(\chi)\) is the set of abstract names in \(\chi\).
The definition of our LTS (Fig. 2) is explained below.
_Moves:_
Our LTS uses _moves_:
\[\eta::=\mathsf{call}(\alpha_{T},D)\mid\mathsf{ret}(D)\mid\underline{\mathsf{ call}}(i,v)\mid\underline{\mathsf{ret}}(v)\]
with contexts \(D\) and values \(v\) built from the following restricted grammars:
\[D_{\bullet} ::=c\mid[\cdot]_{i,T}\mid(\vec{D}_{\bullet})\] \[v_{\bullet} ::=c\mid\alpha_{T}\mid(\vec{v}_{\bullet})\]
Thus, \(D_{\bullet}\) and \(v_{\bullet}\) are values where functions are replaced by holes and abstract names, respectively. To lighten notation, we denote them by \(D,v\).
Moves \(\eta\) are proponent call (\(\mathsf{call}(\alpha,D)\)) and return (\(\mathsf{ret}(D)\)) moves, involved in transitions from proponent to opponent configurations; and opponent call (\(\underline{\mathsf{call}}(i,v)\)) and return (\(\underline{\mathsf{ret}}(v)\)) moves, involved in transitions from opponent to proponent configurations.
**Remark 8**.: Note the abstract names used in moves (and, later, traces) are of the form \(\alpha_{T}\), i.e. without \(i\)-annotations. This amounts to the fact that any two abstract names \(\alpha_{T}^{i},\alpha_{T}^{j}\) with \(i\neq j\) correspond to the same function played by opponent in two different points in the interaction. At each point, the proponent functions \(\vec{v}\) that the opponent has access to may differ, and hence the need for different indices to distinguish the two instances of \(\alpha_{T}\). In the LTS, such distinction is not needed for proponent higher-order values as they are suppressed from proponent moves altogether.
**Definition 9** (Traces).: We let a _trace_\(t\) be an alternating sequence of opponent/proponent moves. We write \(t+t^{\prime}\) or, sometimes for brevity, \(t\,t^{\prime}\) to mean trace concatenation.
#### Configurations:
Proponent configurations are written as \(\langle A\,;M\,;K\,;t\,;e\,;V\rangle\) and opponent configurations as \(\langle A\,;M\,;K\,;t\,;V\,;\vec{u}\rangle\). All configurations are ranged over by \(C\). In these configurations:
* \(A\) is a partial map which assigns a sequence of values \(\vec{v}\) to each abstract function name \(\alpha\) (that has been used so far in the interaction) and integer index \(j\). We write \(\alpha^{j,\vec{v}}\in A\) for \(A(\alpha,j)=\vec{v}\). The index \(j\) is used to distinguish between different uses of the same abstract function name \(\alpha\) by opponent in the interaction. The sequence of values \(\vec{v}\) represents the proponent functions that were available to opponent when the name \(\alpha^{j}\) was used (knowledge accumulation for constructing context-generated functions, cf. Example 4). We write \(A\uplus\alpha^{j,\vec{v}}\) for \(A\cup\{((\alpha,j),\vec{v})\}\), assuming \((\alpha,j)\not\in A\).
* \(t\) is the _opponent-visible trace_, i.e. a subset of the current interaction that the opponent can have access to, starting with a move where the proponent calls an opponent abstract function.
* \(K\) is a stack of proponent continuations, created by nested proponent calls. We call configurations with an empty stack _top-level_ and those with a non-empty stack _inner-level_; opponent top-level configurations are also called _final_. Configurations of the form \(\langle\cdot\,;\cdot\,;\cdot\,;\cdot\,;e\,;\cdot\rangle\) are called _initial_.
* \(M\) is a set of opponent-visible traces. It ensures pure behaviour of the opponent (cf. Example 1): it restricts the moves of the opponent when an opponent-visible trace is being run for a second (or subsequent) time. Component \(M\) is also examined by the bisimulation to determine equivalence of top-level configurations. It can be seen as a _memory_ of the behaviour of the opponent abstract functions so far and an oracle for future moves. Given \(M\), we define a map from proponent-ending traces to next opponent moves: \[\mathsf{next}_{M}(t)=\{\eta\mid t\eta\in M\}\] We consider only _legal_ \(M\)'s whereby \(|\mathsf{next}_{M}(t)|\leq 1\) for any trace \(t\) ending in a proponent move and each abstract function name \(\alpha\) appears at most once in \(M\). We write \(M[t]\) for \(M\cup\{t\}\). We may also write \(M_{C}\) for the \(M\)-component of a configuration \(C\). A small sketch of this lookup, over a list-of-traces representation of \(M\), is given after this list.
* \(e\) is the proponent expression reduced in proponent configurations.
* In opponent configurations, \(\vec{u}\) is the sequence of values (proponent functions) that are available to opponent to call at the given point in the interaction. In both kinds of configurations, \(V\) is a stack of sequences of proponent functions. These components encode the opponent knowledge accumulation necessary for a sound NF bisimulation theory for \(\text{PCF}_{\text{v}}\). They enable sequence of calls to proponent functions (cf. Examples 2 and 3), and construction of opponent-generated abstract functions with the appropriate level of knowledge attached to them (cf. Example 4).
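The lookup \(\mathsf{next}_{M}\) admits a very direct implementation when the memory is represented as a list of traces; the OCaml fragment below is an illustrative sketch only, and its move type is a simplified stand-in for the moves defined above.

```
(* next_M(t) = { eta | t ++ [eta] is stored in M }, over a list-of-traces memory. *)
type move =
  | PCall of string * string   (* proponent call  call(alpha, D) *)
  | PRet  of string            (* proponent return ret(D)        *)
  | OCall of int * string      (* opponent call   call(i, v)     *)
  | ORet  of string            (* opponent return ret(v)         *)

type trace = move list

let rec is_prefix p l =
  match p, l with
  | [], _ -> true
  | x :: p', y :: l' -> x = y && is_prefix p' l'
  | _ -> false

let next_m (m : trace list) (t : trace) : move list =
  List.filter_map
    (fun t' ->
       if List.length t' = List.length t + 1 && is_prefix t t'
       then Some (List.nth t' (List.length t))
       else None)
    m
```

Legality of \(M\) then corresponds to this lookup returning at most one move for any proponent-ending trace.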
#### Transitions:
Transitions are of the form \(C\xrightarrow{l}C^{\prime}\), where transition label \(l\) is either an immediately observable move \(\eta\) or a generic \(\tau\), hiding any move involved in the transition. In the former case, observable moves can be opponent calls (call) or proponent returns (ret). Unlike standard LTSs, this LTS hides call/return moves involved in transitions of inner-level configurations, which are stored in the configuration memory \(M\) instead. As we will see later in this section, this is to allow equivalent terms to have different order of calls to opponent functions. Only _top-level_ transitions contain move annotations, making them directly observable. These are transitions produced by one of the barbed rules (PropRetBarb, OpCallBarb). In the remaining transition rules moves are accumulated in traces which are stored in the memory component \(M\) of the configurations. These will be examined by the bisimulation at top-level configurations.
The simplest transitions are those produced by the PropTau rule, embedding reductions into proponent configurations. The remaining transitions involve interactions between opponent and proponent and are detailed below.
#### Proponent Return:
When the proponent expression has been reduced to a value, the LTS performs a ret-move, either by the PropRetBarb transition, when the configuration is top-level, or the PropRet transition, when it is not. In both cases the value \(v\) being returned is deconstructed to:
* an _ultimate pattern_\(D\) (cf. [24]), which is a context obtained from \(v\) by replacing each function in \(v\) with a distinct numbered hole; together with
* a sequence of values \(\vec{v}\) such that \(D[\vec{v}]=v\).
We let \(\mathsf{ulpatt}(v)\) be a deterministic function performing this decomposition.
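One possible reading of this decomposition is sketched below in OCaml; the value and pattern types are our own simplified stand-ins for the paper's values and contexts, with holes numbered left to right.

```
(* ulpatt: replace every function in a value by a numbered hole and collect
   the functions, so that filling the holes with them recovers the value. *)
type value = Const of int | Fun of (value -> value) | Tup of value list
type pat   = PConst of int | PHole of int | PTup of pat list

let ulpatt (v : value) : pat * value list =
  let funs = ref [] in
  let rec go = function
    | Const c -> PConst c
    | Fun _ as f -> funs := !funs @ [f]; PHole (List.length !funs)
    | Tup vs ->
        (* fold left-to-right so the hole numbering follows the value's layout *)
        PTup (List.rev (List.fold_left (fun acc v -> go v :: acc) [] vs))
  in
  let d = go v in
  (d, !funs)
```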
In rule PropRetBarb the functions \(\vec{v}\) obtained from \(v\) become the knowledge of the resulting opponent configuration; opponent can call one of these functions to continue the interaction. The previous knowledge \(\vec{u}\) stored in the one-frame stack is being dropped. This dropping of knowledge is sufficient for a sound NF bisimulation theory based on this LTS, as justified by our soundness result and corroborated by the conditions of applicative bisimulation which encode top-level interactions without accumulating opponent knowledge from previous moves.
On the other hand, in PropRet, \(\vec{v}\) is added to the most current opponent knowledge \(\vec{u}\), stored in the top-frame of the knowledge stack which is popped in the resulting configuration. This is necessary because, in inner level configurations, opponent should be allowed to call a proponent function it knew before it called the function that returned \(v\), allowing observations such as those in Examples 2 and 3.
In PropRetBarb the context \(D\) extracted by ultimate pattern matching becomes observable in the transition label \(\mathsf{ret}(D)\). Again, this is in line with the definition of applicative bisimulation where the return values of top-level functions are observed by the bisimulation moves. However, in rule PropRet this observation is _postponed_: the \(\mathsf{ret}(D)\) move is appended to the current trace, and this trace is being stored in the \(M\) memory in the configuration. This memory will then be used to make distinctions between configurations in a bisimulation definition, when top-level transitions are reached. This storing of inner-level moves makes unobservable the order and repetition of proponent calls to opponent functions in the LTS, allowing to prove equivalences such as the one in Example 1.
#### Proponent Call:
Rule PropCall produces a transition when a call to an opponent abstract function \(\alpha_{T_{1}\to T_{2}}^{j}\) is at reduction position in a proponent expression. Function \(\mathsf{ulpatt}(v)\) is again used to decompose the call argument to context \(D\) and functions \(\vec{v}\), whereas \(\alpha^{j}\) is looked up in \(A\) to identify the knowledge \(\vec{u}\) attached to this use of the \(\alpha\) name at the time \(\alpha^{j}\) was created. Then \(\vec{v}\) and \(\vec{u}\) are combined to create opponent's knowledge in the resulting configuration. The trace \(t\) accumulated in the (proponent) source configuration of the transition is being pushed onto the stack component \(K\) together with the continuation of the expression being reduced. This is because a proponent call transition triggers the creation of a new opponent-visible trace \(t^{\prime}\), starting with the call move. This new trace is stored in the memory \(M\) and used in the resulting (opponent) configuration.
Segmentation of traces into opponent-visible trace fragments, as performed by this rule, is important for full abstraction of the NF bisimulation defined below. When configurations are compared by the bisimulation, the exact interleaving of these trace segments is not observable as the language is pure and opponent-generated functions have only a local view of the overall computation. Moreover, opponent-visible traces relate to O-views in game semantics but contain only a single (initiating) proponent call move.
#### Opponent Return:
An opponent configuration with a non-empty stack component \(K\) may return a value with rule OpRet. In order to obtain this value we extend \(\mathsf{ulpatt}\) to the return type \(T\) through the use of symbolic function names: \(\mathsf{ulpatt}(T)\) is the set of all pairs \((D,\vec{\alpha}_{\vec{T}})\) such that \(\vdash D[\vec{\alpha}]:T\), where \(D\) is a value context that does not contain functions, and the types of \(\vec{\alpha}\) and the corresponding holes match. Note that in this definition we leave the \(j\) annotation of \(\alpha\)'s blank as it is filled-in by the rule. In the resulting configuration \(\alpha^{j,\vec{v}}\) is added in \(A\), extending its domain by \((\alpha,j)\).
This transition can be performed in two cases; when:
* \(\mathsf{next}_{M}(t)=\emptyset\). In this case the current opponent-visible trace \(t\) is not a strict prefix of a previously performed trace stored in \(M\), and the configuration can non-deterministically perform this return transition. If it does, the resulting configuration stores in \(M\) the extended trace \(t^{\prime\prime}=t+\underline{\mathsf{ret}}(D[\vec{\alpha}])\). Note that \(j\) is not stored in moves and thus neither in \(M\). Moreover, in this case the \(\vec{\alpha}\) used are chosen fresh; this is guaranteed by the implicit condition that \(M\) is legal and thus \(\alpha\) cannot appear twice in \(M\).
* \(\mathsf{next}_{M}(t)=\{\underline{\mathsf{ret}}(D[\vec{\alpha}])\}\). In this case the current configuration is along an opponent-visible trace that has occurred previously and performed a return as a next move. Because the opponent must have purely functional behaviour, the configuration can perform no transition other than this return.

If \(\mathsf{next}_{M}(t)\) does not fall into one of the above cases, the transition does not apply.

Fig. 2: The Labelled Transition System. We denote by \(\cdot\) the empty stack, and by \(\varepsilon\) the empty sequence.
To encode functional behaviour, the current opponent knowledge \(\vec{v}\) can only be stored in the abstract functions \(\vec{\alpha}\) generated at this transition and stored in \(A\). It _cannot_ be carried forward otherwise in the resulting proponent function. Hence, if \(T^{\prime}\) is a base type, this knowledge is lost after the transition.
#### Opponent Call:
The proponent function being called in these transitions defined by OpCallBarb and OpCall is one of those in the current opponent knowledge \(\vec{v}\). We use the relative index \(i\) in \(\vec{v}\) to refer to the function being called. The argument supplied to this function is obtained again by the function ulpatt applied to the argument type \(T_{1}\).
Opponent call transitions are differentiated based on whether they are top- or inner-level. Top-level opponent calls (OpCallBarb) are immediately observable and thus transitions are annotated with the move. Moreover, the opponent knowledge is dropped at the transition and not accumulated in the knowledge stack or created abstract function names. This is in line with applicative bisimulation where related top-level functions are called only at the point they become available in the bisimulation, and are provided with identical arguments, thus not containing any related functions from the observer knowledge.
However inner-level opponent calls are not immediately observable and thus the corresponding move is stored in traces in \(M\). As for inner opponent return transitions, \(\mathsf{next}_{M}(t)\) may require that the transition must or cannot be applied.
#### Big-Step Bisimulation
**Definition 10** (Trace transitions).: We use \(\twoheadrightarrow\) for the reflexive and transitive closure of the \(\stackrel{{\tau}}{{\rightarrow}}\) transition. We write \(C\stackrel{{\eta}}{{\twoheadrightarrow}}C^{\prime}\) to mean \(C\twoheadrightarrow\stackrel{{\eta}}{{\rightarrow}}C^{\prime}\); and \(C\stackrel{{ t}}{{\twoheadrightarrow}}C^{\prime}\) to mean \(C\stackrel{{\eta}}{{\twoheadrightarrow}}\stackrel{{ t^{\prime}}}{{\twoheadrightarrow}}C^{\prime}\) when \(t=\eta t^{\prime}\), and \(C\twoheadrightarrow C^{\prime}\) when \(t\) is empty.
Note that, by definition, trace transitions derived by our LTS only contain \(\underline{\mathsf{call}}\) and \(\mathsf{ret}\) moves.
**Definition 11**.: Given a closed expression \(\vdash e:T\), the initial configuration associated to \(e\) is:
\[C_{e}=\langle\cdot\,;\cdot\,;\cdot\,;\cdot\,;e\,;\cdot\rangle\]
Accordingly, we can give the semantics of \(e\) as:
\[\llbracket e\rrbracket=\{(t,M)\mid\exists A,t^{\prime},V,\vec{v}.\ C_{e} \stackrel{{ t}}{{\twoheadrightarrow}}\langle A\,;M\,;\cdot\,;t^{ \prime}\,;V\,;\vec{v}\rangle\}.\]
A closed expression \(e\) will be first evaluated by the LTS using the operational semantics rules (and PropTau). Once a value is reached, this will be communicated to the context by means of a proponent return (rule PropRetBarb), after it has been appropriately decomposed. From there on, the game continues with opponent interrogating functions produced by proponent (using rule OpCallBarb). Proponent can interrogate functions provided by opponent (PropCall), leading to further interaction, all of which remains hidden (see \(\tau\)-transitions), until proponent provides a return to opponent's top-level application (PropRetBarb).
**Example 12**.: We now revisit the terms in Example 1 to show how our LTS works. We start with term \(M_{1}\) from Example 1 placed in an initial configuration \(C_{1}=\langle\cdot\,;\cdot\,;\cdot\,;\cdot\,;M_{1}\,;\cdot\rangle\). The first transition is a proponent return, which moves the function into the opponent's knowledge.
\[C_{1}\xrightarrow{\mathsf{ret}([\cdot])}\langle\cdot\,;\cdot\,;\cdot\,;\cdot\,;\cdot\,;M_{1}\rangle=C_{11}\]
Then opponent calls it with an abstract name \(\alpha_{f}\), proponent immediately returns the inner function (**fun** g ->...), which we call \(M_{11}\), and opponent calls \(M_{11}\) with \(\alpha_{g}\); all are top-level interactions.
\[C_{11}\xrightarrow{\underline{\mathsf{call}}(1,\alpha_{f})}\langle\alpha_{f}^{0}\,;\cdot\,;\cdot\,;\cdot\,;M_{11}[\alpha_{f}^{0}/\mathtt{f}]\,;\cdot\rangle\xrightarrow{\mathsf{ret}([\cdot])}\langle\alpha_{f}^{0}\,;\cdot\,;\cdot\,;\cdot\,;\cdot\,;M_{11}[\alpha_{f}^{0}/\mathtt{f}]\rangle\]
\[\xrightarrow{\underline{\mathsf{call}}(1,\alpha_{g})}\langle\alpha_{f}^{0},\alpha_{g}^{0}\,;\cdot\,;\cdot\,;\cdot\,;M_{12}\,;\cdot\rangle=C_{12}\]
where
\[M_{12}=\textbf{if}\ \alpha_{f}^{0}\ \texttt{()}\ \texttt{=}\ \alpha_{g}^{0}\ \texttt{()}\ \texttt{then}\] \[\textbf{if}\ \alpha_{f}^{0}\ \texttt{()}\ \texttt{=}\ \alpha_{g}^{0}\ \texttt{()}\ \texttt{then}\ \texttt{ \emptyset}\ \texttt{else}\ \texttt{1}\]
else 2
The following transition is a proponent call of \(\alpha_{f}^{0}\), followed (necessarily, due to types) by an opponent return.
\[C_{12}\xrightarrow{\tau}\langle\alpha_{f}^{0},\alpha_{g}^{0}\,;\{t_{1}\}\,;(\cdot,E_{1})\,;t_{1}\,;\cdot\,;\cdot\rangle\xrightarrow{\tau}\cdots\]
where \(t_{1}=\mathsf{call}(\alpha_{f},())\) and \(E_{1}\) is the evaluation context surrounding this call in \(M_{12}\). The opponent return supplies some integer \(k_{1}\), and an analogous proponent call to \(\alpha_{g}^{0}\), with its opponent return \(k_{2}\), leads to a configuration \(C_{14}\) in which the two results are compared.
If \(k_{1}\) and \(k_{2}\) are equal, the remaining transitions are the following, reaching the final configuration \(C_{15}\).
\[C_{14}\xrightarrow{\tau}\xrightarrow{\tau}\xrightarrow{\tau}\xrightarrow{\tau}\xrightarrow{\tau}\xrightarrow{\mathsf{ret}(0)}\langle\alpha_{f}^{0},\alpha_{g}^{0}\,;M_{1}\,;\cdot\,;\cdot\,;\cdot\,;\cdot\rangle=C_{15}(k_{1},k_{2})\]
where the \(\tau\)-transitions hide the inner-level moves \(\mathtt{call}(\alpha_{f},())\), \(\mathtt{ret}(k_{1})\), \(\mathtt{call}(\alpha_{g},())\), \(\mathtt{ret}(k_{2})\), which are recorded in the memory component of the final configuration.
\(O\)-configurations, let their current (common) trace component be \(t\). As the memory maps \(M_{1},M_{2}\) of \(C_{1(j+1)},C_{2(j+1)}\) are included in \(M\), we must have \(\mathsf{next}_{M_{1}}(t)=\mathsf{next}_{M_{2}}(t)\) and, hence, \(C_{1j}\) and \(C_{2j}\) must have made the same move and therefore \(C_{1(j+1)}=C_{2(j+1)}\). Now, since \(C_{1n}=C_{2n}\) and one of them makes a \(P\)-return then, by determinacy of proponent, the other must make the same \(P\)-return. Hence, \(C_{1}=C_{2}\).
Weak bisimulation is defined in the standard way, albeit using the big-step transition relation between initial and final configurations. In addition, \(M\)-components are compared _contravariantly_: \(M\) records the opponent behaviour faced by the simulated expression, which then restricts that faced by the simulating expression.
**Definition 15** (Weak (Bi-)Simulation).: A binary relation \(\mathcal{R}\) on initial and final configurations is a _weak simulation_ when for all \(C_{1}\;\mathcal{R}\;C_{2}\):
* _Initial configurations_: if \(C_{1}\stackrel{{\mathsf{ret}(D^{\prime})}}{{\twoheadrightarrow}}C_{1}^{\prime}\), there exists \(C_{2}^{\prime}\) such that \(C_{2}\stackrel{{\mathsf{ret}(D^{\prime})}}{{\twoheadrightarrow}}C_{2}^{\prime}\) and \(C_{1}^{\prime}\;\mathcal{R}\;C_{2}^{\prime}\);
* _Final configurations_: if \(C_{1}\stackrel{{\mathsf{call}(i,D[\vec{\alpha}])\;\mathsf{ret}(D^{\prime})}}{{\twoheadrightarrow}}C_{1}^{\prime}\) with \(\vec{\alpha}\) fresh for \(C_{2}\), there exists \(C_{2}^{\prime}\) such that \(C_{2}\stackrel{{\mathsf{call}(i,D[\vec{\alpha}])\;\mathsf{ret}(D^{\prime})}}{{\twoheadrightarrow}}C_{2}^{\prime}\) and \(C_{1}^{\prime}\;\mathcal{R}\;C_{2}^{\prime}\);
* \(M_{C_{2}}\subseteq M_{C_{1}}\) (where \(M_{C_{i}}\) is the \(M\)-component of \(C_{i}\)).
If \(\mathcal{R}\), \(\mathcal{R}^{-1}\) are weak simulations then \(\mathcal{R}\) is a _weak bisimulation_. Similarity \((\stackrel{{\mathsf{c}}}{{\Rightarrow}})\) and bisimilarity \((\approx)\) are the largest weak simulation and bisimulation, respectively.
This definition resembles that of applicative bisimulation for \(\mathrm{PCF_{v}}\), in that related top-level functions applied to identical arguments must co-terminate and return related results. However the most important difference here is that there is no quantification over all possible programs. The context \(D\) is a value without any functions in it (essentially containing constants and/or pairs) which is determined by the type of the \(i\)'th function. The fresh names \(\vec{\alpha}\) correspond to opponent-generated functions but are first-order entities that are equivalent up to renaming. Thus this definition constitutes a big-step Normal Form bisimulation.
**Definition 16** (Bisimilar Expressions).: Expressions\(\vdash e_{1}:T\) and \(\vdash e_{2}:T\) are bisimilar, written \(e_{1}\approx e_{2}\), when \(C_{e_{1}}\approx C_{e_{2}}\).
**Lemma 17**.: \(e_{1}\approx e_{2}\) _iff \(\llbracket e_{1}\rrbracket=\llbracket e_{2}\rrbracket\)._
Proof.: Note first that if \(e_{1}\approx e_{2}\) and \((t,M)\in\llbracket e_{1}\rrbracket\) then, starting from \(C_{e_{2}}\), we can simulate the transitions producing \(t\) and arrive at the same \(M\). Conversely, suppose that \(\llbracket e_{1}\rrbracket=\llbracket e_{2}\rrbracket\) and define:
\[\mathcal{R}=\{(C_{1},C_{2})\mid M_{C_{1}}=M_{C_{2}}\wedge\exists t.\,C_{e_{i}}\stackrel{{ t}}{{\twoheadrightarrow}}C_{i}\wedge C_{i}\text{ final}\}.\]
We show that \(\mathcal{R}\) is a weak bisimulation. Suppose \(C_{1}\;\mathcal{R}\;C_{2}\) with trace \(C_{e_{i}}\stackrel{{ t}}{{\twoheadrightarrow}}C_{i}\), and let \(C_{1}\stackrel{{\mathsf{call}(i,D[\vec{\alpha}])\;\mathsf{ret}(D^{\prime})}}{{\twoheadrightarrow}}C_{1}^{\prime}\) with \(\vec{\alpha}\) fresh for \(C_{2}\). As \(\llbracket e_{1}\rrbracket=\llbracket e_{2}\rrbracket\), there is a transition sequence:
\[C_{e_{2}}\stackrel{{ t}}{{\twoheadrightarrow}}\hat{C}_{2}\stackrel{{\mathsf{call}(i,D[\vec{\alpha}])\;\mathsf{ret}(D^{\prime})}}{{\twoheadrightarrow}}C_{2}^{\prime}\]
such that \(M_{C_{1}^{\prime}}=M_{C_{2}^{\prime}}\). Since \(M_{C_{2}}=M_{C_{1}}\subseteq M_{C_{1}^{\prime}}\), we have \(M_{C_{2}}\subseteq M_{C_{2}^{\prime}}\). Hence, starting from \(C_{e_{2}}\) and repeatedly applying Lemma 14, we conclude that \(C_{2}=\hat{C}_{2}\), and thus \(C_{2}\) can match the challenge of \(C_{1}\). Hence, \(\mathcal{R}\) is a weak simulation and, by symmetry, a weak bisimulation.
The previous result can be used to show that bisimilarity is sound and complete with respect to contextual equivalence. The proof is discussed in the next section.
**Theorem 18** (Full abstraction).: \(e_{1}\approx e_{2}\) _iff \(e_{1}\equiv e_{2}\)._
**Remark 19**.: Following [7], we can define \(\llbracket e_{1}\rrbracket\leq\llbracket e_{2}\rrbracket\) to hold if \(\forall(t,M_{1})\in\llbracket e_{1}\rrbracket\). \(\exists M_{2}.\,(t,M_{2})\in\llbracket e_{2}\rrbracket\wedge M_{2}\subseteq M _{1}\). Then, Lemma 17 can be sharpened to its similarity variant, which would lead to full abstraction of normal-form similarity.
## V Full Abstraction
To prove that the LTS is sound and complete we use an extended LTS based on operational game semantics [19]. The latter differs from our main LTS in that proponent and opponent can play the same kinds of moves, and in particular they can pass fresh function names to the other player, or apply functions of the other player by referring to their corresponding names. This duality in roles allows for the modelling of both expressions and contexts. Moreover, all moves are recorded in the trace, not just top-level ones, which in turn enables us to compose two LTS's corresponding respectively to an expression and its context.
We shall call this the _game-LTS_, whereas the main LTS shall simply be _the/our LTS_. We shall be re-using some of our main LTS terminology here, for example traces will again be sequences of moves, albeit of a different kind of moves. This is done for notational economy and we hope it is not confusing.
### _The game-LTS_
We start by introducing an enriched notion of trace. Traces shall now consist of _moves_ of the form:
Moves \[m ::=\ p\mid o\] Proponent moves \[p ::=\mathsf{call}(\mathfrak{o},D[\vec{\mathfrak{p}}])\mid\mathsf{ret}(D[ \vec{\mathfrak{p}}])\] Opponent moves \[o ::=\mathsf{call}(\mathfrak{p},D[\vec{\mathfrak{o}}])\mid\mathsf{ret}(D[ \vec{\mathfrak{o}}])\]
where \(\mathfrak{o},\mathfrak{p}\) (and variants thereof) are sourced from disjoint sets _ONames_ and _PNames_ of _opponent_ and _proponent names_ respectively. Names represent abstract functions and are used to abstract away the functions that a context and an expression are producing in a computation. We shall often be abbreviating "proponent" and "opponent" to \(P\) and \(O\) respectively and write, for instance, "\(O\)-moves" or "\(P\)-names".
A _complete trace_ is then given by the following grammar.
\[CT \rightarrow\ CT_{P}\mid CT_{O}\] \[CT_{P} \rightarrow\ \mathsf{ret}(D[\vec{\mathfrak{p}}])\ CT_{OP}\] \[CT_{O} \rightarrow\ \mathsf{ret}(D[\vec{\mathfrak{o}}])\ CT_{PO}\] \[CT_{OP} \rightarrow\ \cdot\mid\mathsf{call}(\mathfrak{p},D[\vec{\mathfrak{o}}])\ CT_{PO}\ \mathsf{ret}(D[\vec{\mathfrak{p}}])\ CT_{OP}\] \[CT_{PO} \rightarrow\ \cdot\mid\mathsf{call}(\mathfrak{o},D[\vec{\mathfrak{p}}])\ CT_{OP}\ \mathsf{ret}(D[\vec{\mathfrak{o}}])\ CT_{PO}\]
A _trace_ is a prefix of a complete trace. A trace \(t\) is called _legal_ if it satisfies these conditions:
* for each \(t^{\prime}p\sqsubseteq t\) with \(p=\mathtt{call}(\mathfrak{o},D[\vec{\mathfrak{p}}])\) or \(p=\mathtt{ret}(D[\vec{\mathfrak{p}}])\):
* \(\vec{\mathfrak{p}}\) do not appear in \(t^{\prime}\) -- we say that move \(p\) _introduces_ each \(\mathfrak{p}_{i}\in\vec{\mathfrak{p}}\) -- and
* there is some move \(o^{\prime}\) in \(t^{\prime}\) that introduces \(\mathfrak{o}\);
* for each \(t^{\prime}o\sqsubseteq t\) with \(o=\mathtt{call}(\mathfrak{p},D[\vec{\mathfrak{o}}])\) or \(o=\mathtt{ret}(D[\vec{\mathfrak{o}}])\):
* \(\vec{\mathfrak{o}}\) do not appear in \(t^{\prime}\) -- we say that move \(o\)_introduces_ each \(\mathfrak{o}_{i}\in\vec{\mathfrak{o}}\) -- and
* there is some move \(p^{\prime}\) in \(t^{\prime}\) that introduces \(\mathfrak{p}\).
Thus, in a legal trace all applications refer to names introduced earlier in the trace. Put otherwise, all function calls must be to functions that are available when said calls are made. We say that an application \(\mathtt{call}(\mathfrak{p},D[\vec{\mathfrak{o}}])\) (or \(\mathtt{call}(\mathfrak{o},D[\vec{\mathfrak{p}}])\)) is _justified_ by the (unique) earlier move that introduced \(\mathfrak{p}\) (resp. \(\mathfrak{o}\)). On the other hand, a return is justified by the call to which it returns. In a legal trace, all call moves are justified.
Due to the modelled language being functional, not all names are visible to the players (i.e. proponent and opponent) at all times. For example, if opponent makes two calls to proponent function \(\mathfrak{p}\), say first \(\mathtt{call}(\mathfrak{p},D_{1}[\vec{\mathfrak{o}}_{1}])\) and later \(\mathtt{call}(\mathfrak{p},D_{2}[\vec{\mathfrak{o}}_{2}])\), the second call will hide from proponent all the trace related to the first one. This limitation is captured by the notion of _view_. Given a legal trace \(t\), we define its _\(P\)-view_ \(\ulcorner t\urcorner\) and _\(O\)-view_ \(\llcorner t\lrcorner\), respectively, as follows:
\[\ulcorner t\urcorner=\begin{cases}t&\text{if }|t|\leq 1\\ \ulcorner t^{\prime}\urcorner\,p&\text{if }t=t^{\prime}p\\ \ulcorner t^{\prime}\urcorner\,p\,o&\text{if }t=t^{\prime}p\,t^{\prime\prime}o\text{ and }o\text{ is justified by }p\end{cases}\]
\[\llcorner t\lrcorner=\begin{cases}t&\text{if }|t|\leq 1\\ \llcorner t^{\prime}\lrcorner\,o&\text{if }t=t^{\prime}o\\ \llcorner t^{\prime}\lrcorner\,o\,p&\text{if }t=t^{\prime}o\,t^{\prime\prime}p\text{ and }p\text{ is justified by }o\end{cases}\]
We will focus on traces where each player's moves are uniquely determined by their current view. This corresponds to game-semantics _innocence_ (cf. [12]).
In the following definitions we employ basic elements from nominal set theory [32] to formally account for names in our constructions. Let us write \(\mathcal{N}\) for \(\mathit{ONames}\!\uplus\mathit{PNames}\). Finite-support name permutations that respect \(O\)- and \(P\)-ownership of names are given by:
\[\begin{split}\mathit{Perm}&=\{\pi:\mathcal{N} \xrightarrow{\cong}\mathcal{N}\mid\exists X\subseteq_{\text{finite}}\mathcal{N}.\,\forall y\in\mathcal{N}\setminus X.\,\pi(y)=y\\ &\qquad\wedge\forall x\in X.\,x\in\mathit{ONames}\iff\pi(x)\in \mathit{ONames}\}\end{split}\]
Given a trace \(t\) and a permutation \(\pi\), we write \(\pi\cdot t\) for the trace we obtain by applying \(\pi\) to all names in \(t\). We write \(t\sim t^{\prime}\) if there exists some \(\pi\) such that \(t^{\prime}=\pi\cdot t\). The latter defines an equivalence relation, the classes of which we denote by \([t]\):
\[[t]=\{\pi\cdot t\mid\pi\in\mathit{Perm}\}.\]
Moreover, we define the sets of \(O\)-views and \(P\)-views of \(t\) (under permutation) as:
\[\mathit{PV}(t)=\{\pi\cdot\ulcorner t^{\prime}\urcorner\mid t^{\prime}\sqsubseteq t\wedge\pi\in\mathit{Perm}\}\]
\[\mathit{OV}(t)=\{\pi\cdot\llcorner t^{\prime}\lrcorner\mid t^{\prime}\sqsubseteq t\wedge\pi\in\mathit{Perm}\}\]
**Definition 20**.: A legal trace \(t\) is called a _play_ if:
* for each \(t^{\prime}p,t^{\prime\prime}o\sqsubseteq t\), the justifier of \(p\) (of \(o\)) is included in \(\ulcorner t^{\prime}\urcorner\) (resp. \(\llcorner t^{\prime\prime}\lrcorner\));
* for all \(t_{1}p_{1},t_{2}p_{2},t^{\prime}_{1}o_{1},t^{\prime}_{2}o_{2}\sqsubseteq t\),
* if \(\ulcorner t_{1}\urcorner\sim\ulcorner t_{2}\urcorner\) then \(\ulcorner t_{1}p_{1}\urcorner\sim\ulcorner t_{2}p_{2}\urcorner\),
* if \(\llcorner t^{\prime}_{1}\lrcorner\sim\llcorner t^{\prime}_{2}\lrcorner\) then \(\llcorner t^{\prime}_{1}o_{1}\lrcorner\sim\llcorner t^{\prime}_{2}o_{2}\lrcorner\).
We refer to the conditions above as _visibility_ and _innocence_ respectively.
Visibility and innocence are standard game conditions (cf. [12, 27]): the former corresponds to the fact that an expression (or context) can only call functions in its syntactic context; while the latter enforces purely functional behaviour.
We can now proceed to the definition of the game-LTS. Similarly to the previous section, we extend the language syntax of Fig. 1 by including O-names as values. We define proponent and opponent _game-configurations_ respectively by:
\[\langle\mathcal{A}\,;\kappa\,;K\,;t\,;e\,;V\,;\vec{\mathfrak{o}}\rangle\quad \text{and}\quad\langle\mathcal{A}\,;\kappa\,;K\,;t\,;V\,;\vec{\mathfrak{p}}\rangle\]
and range over them by \(\mathcal{C}\) and variants. Here:
* \(\mathcal{A}\) is a map which assigns to each (introduced) opponent name a sequence of proponent names. We write \(\mathfrak{o}^{\vec{\mathfrak{p}}}\in\mathcal{A}\) for \(\mathcal{A}(\mathfrak{o})=\vec{\mathfrak{p}}\). The sequence \(\vec{\mathfrak{p}}\) are the proponent (function) names that were available to opponent when the name \(\mathfrak{o}\) was introduced.
* Dually, \(\kappa\) is a _concretion map_ which assigns to each (introduced) proponent name the function that it represents and the opponent names that are available to it.
* \(t\) is a play recording all the moves that have been played thus far. Given \(t\), we define the partial function \(\mathsf{next}_{O}(t)\), which we use to impose innocence on \(O\)-moves, by: \[\mathsf{next}_{O}(t)=\{\pi\cdot o\mid\exists t^{\prime}o\sqsubseteq t.\ \llcorner t\lrcorner=\pi\cdot\llcorner t^{\prime}\lrcorner\wedge t\,(\pi\cdot o)\text{ a play}\}\] When we write \(\mathsf{next}_{O}(t)\subseteq_{\star}[o]\), for some \(o,t\), we mean that either \(o\in\mathsf{next}_{O}(t)\) or \(\mathsf{next}_{O}(t)=\emptyset\).
* \(K\) is a stack of proponent continuations (pairs of evaluation contexts and opponent names \(\vec{\mathfrak{o}}\)), and \(e\) is the expression reduced in proponent configurations.
* \(\vec{\mathfrak{o}}\) and \(\vec{\mathfrak{p}}\) are sequences of other-player names that are available to proponent and opponent respectively at the given point in the interaction; \(V\) is a stack of \(\vec{\mathfrak{p}}\)'s.
Note that we store the full trace in configurations and we use names (\(\mathfrak{p}\) and variants) to abstract proponent higher-order values. There is no need of an \(M\)-component as we can rely on the full play. We call a configuration _initial_ if it is in one of these forms (called respectively \(P\)- _and \(O\)-initial_):1
Footnote 1: we write \(V=\cdot\) for an empty stack, and \(V=\varepsilon\) for a singleton stack containing the empty sequence; moreover, here and elsewhere, we use underscore (_) to denote any component of the appropriate type.
\[\langle\cdot\,;\cdot\,;\cdot\,;\cdot\,;e\,;\varepsilon\,;\varepsilon\rangle\quad\text{or}\quad\mathcal{C}_{E}=\langle\cdot\,;\cdot\,;(E[\cdot]_{T}\!:\!\mathsf{unit},\varepsilon)\,;\cdot\,;\cdot\,;\varepsilon\rangle\]
and _final_ if it is in one of these forms (_\(O\)- and resp. \(P\)-final_):
\[\langle\_\,;\_\,;\cdot\,;\cdot\,;\cdot\,;\cdot\rangle\quad\text{or}\quad\langle\_\,;\_\,;\cdot\,;\cdot\,;\_\,;\cdot\,;\_\rangle.\]
Note that, by definition of the LTS, a \(P\)-initial configuration can only lead to \(O\)-final configurations, whereas \(O\)-initial configurations lead to \(P\)-final configurations.
**Definition 21**.: The game-LTS is defined by the rules in Fig. 3. Given initial configuration \(\mathcal{C}\), we set:
\[\mathit{CP}(\mathcal{C})=\{t\in\mathsf{Pls}(\mathcal{C})\mid t\text{ complete}\}\]
where we let \(\mathsf{Pls}(\mathcal{C})\) be the set of plays produced by the LTS starting from \(\mathcal{C}\).
We can show that the traces produced by the game-LTS are plays and define a model for \(\mathrm{PCF_{v}}\) based on sets of complete plays, but that would not be fully abstract. Though presented in operational form, our game-LTS is equivalent to the (base) game-model of \(\mathrm{PCF_{v}}\) [10]. Consequently, if we model expressions by the sets of complete plays they produce, we miss even simple equivalences like \(\lambda f.f()\equiv\lambda f.f(f)\) -- plays are too intensional and do not take into account the limitations of functional contexts. To address this, one can use a semantic quotient (cf. [10]) or, alternatively, group the plays of an expression into sets of plays so as to profile functional contexts the expression may interact with (cf. [7]). Thus, an expression is modelled by a _set of sets of plays_, one for each possible context. We follow the latter approach, and also combine it with the fact that applicative tests suffice (cf. Proposition 7).
**Definition 22**.: Given a \(P\)-starting play \(t\), we call a move \(m\) of \(t\)_top-level_ if:
* either \(m\) is the initial \(P\)-return of \(t\),
* or \(m\) is an \(O\)-call justified by a top-level \(P\)-move,
* or \(m\) is a \(P\)-return to a top-level \(O\)-move.
We say that \(t\) is _top-linear_ if each top-level \(O\)-move in \(t\) is justified by the \(P\)-move that precedes it.
Hence, top-level moves are those that start from or go to a final configuration. If \(t\) is complete and top-linear then:
\[t=p_{0}o_{1}\cdots p_{1}\cdots o_{n}\cdots p_{n}\quad\text{and}\quad\llcorner t \lrcorner=p_{0}o_{1}p_{1}\cdots o_{n}p_{n}\]
where each \(o_{i+1}\) is justified by \(p_{i}\), each \(p_{i}\) returns \(o_{i}\) (\(i>0\)), and the \(o_{i},p_{i}\) above are all the top-level moves in \(t\). This means that, at the top level of a top-linear play, opponent may only choose one of the functions provided by proponent in their last move and examine it (i.e. call it), which precisely corresponds to what an applicative context would be able to do.
We can now present our main results for the game-LTS. Given initial \(P\)-configuration \(\mathcal{C}\), we define:
\[\mathit{OV}(\mathcal{C}) =\{\mathit{OV}(t)\mid t\in\mathit{CP}(\mathcal{C})\}\] \[\mathit{OV}_{tl}(\mathcal{C}) =\{\mathit{OV}(t)\mid t\in\mathit{CP}(\mathcal{C})\text{ and }t \text{ top-linear}\}\]
**Proposition 23** (Correspondence).: _Given \(\vdash\)\(e_{1},e_{2}:T\), \(\mathit{OV}_{tl}(\mathcal{C}_{e_{1}})=\mathit{OV}_{tl}(\mathcal{C}_{e_{2}})\) iff \(\llbracket e_{1}\rrbracket=\llbracket e_{2}\rrbracket\)._
**Proposition 24** (Game-LTS full abstraction).: _Given \(\vdash\)\(e_{1},e_{2}:T\), \(e_{1}\equiv e_{2}\) iff \(\mathit{OV}_{tl}(\mathcal{C}_{e_{1}})=\mathit{OV}_{tl}(\mathcal{C}_{e_{2}})\)._
Theorem 18 follows from the two results above. For the first result we build a translation from the game-LTS to the (plain) LTS that forms a certain bisimulation between the two systems. To prove full abstraction of the game-LTS we use standard and operational game semantics techniques (cf. [12, 19, 8]) along with the characterisation of PCF equivalence by sets of \(O\)-views presented in [7].
Fig. 3: The Game Labelled Transition System (game-LTS).

## VI Prototype Implementation

We implemented the LTS with symbolic higher-order transitions in a prototype bisimulation checking tool for programs written in an ML-like syntax for \(\mathrm{PCF_{v}}\). Our tool implements a bounded symbolic execution, via calls to Z3, of the big-step bisimulation of the LTS; the tool was developed in OCaml2.
Footnote 2: [https://github.com/LafisV1/pcfcq](https://github.com/LafisV1/pcfcq)
The tool performs symbolic execution of base type values through an extension of the LTS to include a _symbolic environment_\(\sigma:\mathsf{Val}\rightarrow\mathsf{Val}\) that accumulates constraints on _symbolic constants_\(\varkappa\in\mathsf{Val}\) that extend the set of values. Symbolic constants are of base type and may only be introduced by opponent moves (arguments and return values) and by reducing expressions that involve symbolic constants; their semantics follows standard symbolic execution. The exploration is performed over _configuration pairs_\(\langle C_{1},C_{2},M,\sigma,k\rangle\) of bisimilar term configurations \(C_{1}\) and \(C_{2}\), shared memory \(M\) and given bound \(k\). This shared memory is the combination of memories in \(C_{1}\) and \(C_{2}\). When configurations \(C_{1}\) and \(C_{2}\) are final, equivalence requires \(M_{C_{1}}=M_{C_{2}}=M\). Being a symbolic execution tool, our prototype implementation is _sound_ (reports only true positives and true negatives) and _bounded-complete_ since it exhaustively and precisely explores all possible paths up to the given bound, which defines the number of consecutive function calls allowed.
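At a high level, the exploration can be pictured as a bounded search over configuration pairs with a cache of visited pairs. The OCaml sketch below is illustrative only; the step function and the types are placeholders for the tool's actual LTS engine, not its real API.

```
(* Bounded exploration of configuration pairs; [step] is assumed to return the
   matched successor pairs, or None when one side has an unmatched observable move. *)
type verdict = Equivalent | Inequivalent | Inconclusive

let rec explore ~step ~memo ~bound (c1, c2) =
  if Hashtbl.mem memo (c1, c2) then Equivalent          (* memoisation hit *)
  else if bound = 0 then Inconclusive                   (* bound exhausted *)
  else begin
    Hashtbl.add memo (c1, c2) ();
    match step (c1, c2) with
    | None -> Inequivalent
    | Some successors ->
        List.fold_left
          (fun acc pair ->
             match acc, explore ~step ~memo ~bound:(bound - 1) pair with
             | Inequivalent, _ | _, Inequivalent -> Inequivalent
             | Inconclusive, _ | _, Inconclusive -> Inconclusive
             | Equivalent, v -> v)
          Equivalent successors
  end
```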
Because of the infinite nature of proving equivalence -- and even of disproving equivalence -- of pure higher-order programs, a bounded exploration often does not suffice for automatic verification. For this reason, we implement simple enhancements that attempt to prune the state-space and/or prove that cycles have been reached to finitise the exploration for several examples in our testsuite. We currently have not implemented more involved up-to bisimulation enhancements, perhaps guided by user annotations, which we leave for future work. In particular we make use of:
* _Memoisation_, which caches configuration pairs. When bounded exploration reaches a memoised configuration pair, the tool does not explore any further outgoing transitions from this pair; these were explored already when the pair was added to the memoisation set.
* _Identity_, which deems two configurations in a pair equivalent when they are syntactically identical; no further exploration is needed in this case.
* _Normalisation_, which renames bound variables and symbolic constants before comparing configuration pairs for membership in the memoisation set. This also normalises the symbolic environments \(\sigma\) in the configuration pairs.
* _Proponent call caching_, which caches proponent calls once the corresponding opponent return is reached. When the same call (same function applied to the same argument) is reached again on the same trace, it is immediately followed by the cached opponent return move. Performing this second call would not have materially changed the configuration, as the behaviour of the call is determined by the traces in the memory \(M\) of the configuration.
* _Opponent call skipping_, which caches opponent calls once the corresponding proponent return is reached. If the same call is possible from later configurations with the same opponent knowledge, the call is skipped as the opponent cannot increase its knowledge by repeating the same call.
* _Stack-based loop detection_, which searches the stack component \(K\) of a configuration for nested identical proponent calls. When this happens, it means that the configuration is on an infinite trace of interactions between opponent and proponent which will keep applying the same function indefinitely. We deem these configurations diverging.
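As a rough illustration of how the memoisation and normalisation enhancements above fit into the bounded exploration, a minimal sketch is given below. It is only a schematic Python rendering under stated assumptions: the representation of configuration pairs and the `successors` and `normalise` functions are hypothetical placeholders, and the actual tool implements these notions in OCaml, discharging symbolic constraints through Z3.

```python
# Schematic bounded exploration with memoisation and normalisation.
# `successors` and `normalise` are hypothetical placeholders for the
# tool's configuration-pair machinery (implemented in OCaml, with Z3
# handling the accumulated symbolic constraints).

def bounded_explore(initial_pair, successors, normalise, bound):
    """Return False if a distinguishing configuration pair is reached within
    `bound` consecutive function calls, True otherwise."""
    memo = set()                        # memoised (normalised) configuration pairs
    stack = [(initial_pair, 0)]
    while stack:
        pair, depth = stack.pop()
        key = normalise(pair)           # rename bound variables / symbolic constants
        if key in memo:
            continue                    # outgoing transitions were explored already
        memo.add(key)
        if depth >= bound:
            continue                    # bounded exploration stops on this branch
        for next_pair, distinguishing in successors(pair):
            if distinguishing:          # only one side of the pair can perform the move
                return False
            stack.append((next_pair, depth + 1))
    return True
```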
Running our tool on the examples in this paper on an Intel Core i7 1.90GHz machine with 32GB RAM, running OCaml 4.10.0 and Z3 4.8.10 on Ubuntu 20.04, we obtain the following three-trial average results: Example 1, deemed equivalent, 8ms; Example 2, inequivalent, 3ms; Example 3, inequivalent, 4ms; Example 4, inequivalent, 3ms. For the entire benchmark of thirty-seven program pairs, we successfully verify six equivalences and nineteen inequivalences, with twelve inconclusive results, in 471ms total time. The complete set of examples is available in our online repository.
## VII Conclusion
We have proposed a technique which combines operational and denotational approaches in order to provide a (quotient-free) characterisation of contextual equivalence in call-by-value PCF. This technique provides the first fully abstract normal form bisimulation for this language. We have justified several of our choices in designing our LTS via examples, and we believe the LTS is succinct in not carrying more information than needed for completeness. Our technique gives rise to a sound and complete technique for checking of PCFv program equivalence, which we implemented into a bounded bisimulation checking tool.
After testing our tool implementation, we have found it useful for deciding instances of the equivalence problem. This is particularly true for inequivalences: the tool was able to verify most of our examples, including some which were difficult to reason about even informally. Further testing and optimisation of the implementation are needed in order to assess its practical relevance, particularly on larger examples. Currently, the main limitation for the tool is the difficulty in establishing equivalences, as these typically entail infinite bisimulations and are hard to capture in a bounded manner. To address this, we aim to develop up-to techniques [34] and (possibly semi-automatic) abstraction methods in order to finitise the examined bisimulation space.
|
2303.01151
|
Real-time Tracking of Medical Devices: An Analysis of Multilateration
and Fingerprinting Approaches
|
Hospital infrastructures are always in evidence in periods of crisis, such as
natural disasters or pandemic events, under stress. The recent COVID-19
pandemic exposed several inefficiencies in hospital systems over a relatively
long period. Among these inefficiencies are human factors, such as how to
manage staff during periods of high demand, and technical factors, including
the management of Portable Medical Devices (PMD), such as mechanical
ventilators, capnography monitors, infusion pumps, or pulse oximeters. These
devices, which are vital for monitoring patients or performing different
procedures, were found to have a high turnover during high-demand, resulting in
inefficiencies and more pressure on medical teams.
Thus, the work PMD-Track evaluates in detail two popular indoor tracking
approaches concerning their accuracy, placement of beacons, and economic
impacts. The key novelty of PMD-Track relies on using smartphones provided to
hospital employees, replacing typical stationary gateways spread across a
hospital, functioning as mobile gateways with a front-end that assists staff in
locating PMDs. As employees approach tagged PMDs, their smartphone
automatically updates the location of spotted PMDs in real-time, providing
room-level localization data with up to 83% accuracy for fingerprinting and 35%
for multilateration. In addition, fingerprinting is 45% cheaper than
multilateration over the course of five years. Practical experiments were
evaluated based on two locations in Zürich, Switzerland.
|
Bruno Rodrigues, Eder J. Scheid, Katharina O. E. Müller, Julius Willems, Burkhard Stiller
|
2023-03-02T10:54:16Z
|
http://arxiv.org/abs/2303.01151v1
|
# Real-time Tracking of Medical Devices: An Analysis of Multilateration and Fingerprinting Approaches
###### Abstract
Hospital infrastructures come under stress, and into the spotlight, in periods of crisis such as natural disasters or pandemic events. The recent COVID-19 pandemic exposed several inefficiencies in hospital systems over a relatively long period. Among these inefficiencies are human factors, such as how to manage staff during periods of high demand, and technical factors, including the management of Portable Medical Devices (PMD), such as mechanical ventilators, capnography monitors, infusion pumps, or pulse oximeters. These devices, which are vital for monitoring patients or performing different procedures, were found to have a high turnover during periods of high demand, resulting in inefficiencies and more pressure on medical teams.
Thus, the work PMD-Track evaluates in detail two popular indoor tracking approaches concerning their accuracy, placement of beacons, and economic impacts. The key novelty of PMD-Track relies on using smartphones provided to hospital employees, replacing typical stationary gateways spread across a hospital, functioning as mobile gateways with a front-end that assists staff in locating PMDs. As employees approach tagged PMDs, their smartphone automatically updates the location of spotted PMDs in real-time, providing room-level localization data with up to 83% accuracy for fingerprinting and 35% for multilateration. In addition, fingerprinting is 45% cheaper than multilateration over the course of five years. Practical experiments were evaluated based on two locations in Zurich, Switzerland.
Bluetooth, Healthcare, Indoor Tracking, Fingerprinting, Multilateration
## 1 Introduction
The recent global pandemic exacerbated the challenge of hospital equipment management. To ensure a sufficient availability of medical equipment and to cope with related inefficiencies, hospitals typically acquire excess capacities, resulting in cost overheads and asset utilization rates below 50%. At the same time, experts consider a utilization rate of 80% feasible [21]. The recent period of scaling demands revealed various flaws in healthcare systems worldwide due to the unexpected rise in patient load, and their inability to respond effectively during times of crisis was pointed out [3, 44]. For example, it was found that mechanical ventilators and supervision equipment, such as ECG (Electrocardiogram), capnography monitors, infusion pumps, or pulse oximeters, were not adequately managed on hospital property and that, in the majority of cases, hospitals were understaffed [3].
Figure 1 presents hospital occupancy from September 2020 to January 2021, in which the occupancy exceeds not only the capacity certified by the Swiss Society of Intensive Care Medicine (SSMI), but also the maximum capacity [46]. The period of great stress on healthcare infrastructures has triggered several considerations to operate more efficiently, such as managing available infrastructure and devices and the overwhelming stress on the medical staff that influences several management aspects [25].

Figure 1: Occupancy of intensive care units in Switzerland in 2020 [46]
During periods of high demand, such as health crises during the COVID pandemic or natural disasters such as earthquakes, Portable Medical Devices (PMD) for monitoring patients or performing different procedures were found to have a high turnover [44]. Staff typically communicates about such devices orally, transferring the equipment's responsibility when their shift ends [47]. The lack of an automated and structured approach for locating portable equipment and the pressure the staff is typically exposed to during these periods makes their management ineffective.
### _Benefits and Drawbacks_
Tracking the location of PMDs automatically increases a hospital's efficiency in reacting to emergencies. The main **advantage** for using tracking solutions is that these show potential for increasing operational efficiency to quickly react to emergencies by knowing about precise locations of PMDs [3, 44]. While hospitals do not always operate close to their capacity (often measured based on the number of occupied beds [36]), in exceptional cases, it is common for this capacity to be reached or even exceeded, and tracking solutions can have a significant positive impact in terms of saving lives. In addition, there are additional benefits, such as reducing the burden on medical staff by decreasing unnecessary communications and the time required to find equipment, improving inventory management, and potentially reducing overall expenses in the long-term once hospitals can better estimate equipment replacement and maintenance.
While there are several benefits, there are also **drawbacks** that prevent such solutions from being deployed on a large scale in hospitals. The literature presents several solutions based on different fingerprinting and multilateration approaches, using technologies such as RFID (Radio-Frequency IDentification), UWB (Ultra Wide-band), WiFi, and Bluetooth Low Energy (BLE) for device tracking [13]. Approaches have also been proposed in the context of hospitals but have not seen a widespread deployment [11, 34, 35, 48]. For instance, [34] uses RFID tags attached to assets, herein termed PMDs, and RFID readers scattered through the hospital's infrastructure to read the position of tags. In a similar direction, [48] proposed a WiFi tracking solution using multilateration to find the position of wheelchairs in real-time. While RFID shows the drawback of requiring several RFID readers, a WiFi-based approach offers a higher range, but suffers from a lack of accuracy.
Since hospitals typically do not operate at full capacity, and the cost of deploying and operating these solutions is relatively high, the need for precise tracking solutions is often not perceived as necessary by hospital managers. Thus, instead of improving how existing PMDs are managed, a typical approach is often to acquire more PMDs. However, the need to improve how PMDs are managed became apparent during the recent pandemic. Several hospital infrastructures were overwhelmed, including staff communications under stressful situations [3, 44], resulting in "missing" PMDs that had only been misplaced. In addition, the effects of increased signal emission still require further study to ensure that their operation does not affect the health of patients in critical states. For example, [12] has run a descriptive study on the impact of RFID asset-tracking in healthcare, but the area still requires practical studies and larger analyses.
### _Overview and Contributions_
PMD-Track's initial design was first presented in [40], and a preliminary evaluation was published in [41]. Nevertheless, this paper presents the full architecture of PMD-Track, and a very detailed real-world evaluation of its two prototyped tracking approaches, _(i)_ fingerprinting and _(ii)_ multilateration. Furthermore, this paper includes an economic analysis considering the number of devices necessary to operate a hospital effectively. Other and previous work of the authors covered different aspects and tracking technologies, such as _(a)_ passive Bluetooth and WiFi tracking [37, 39], _(b)_ the combined use of RFID and cameras tracking [38], and _(c)_ the correlation of several tracking sources using temporal and spatial dimensions to improve precision [42]. Although these approaches use passive tracking in the context of event marketing analysis, instead of the hospital context, the algorithms and techniques developed contributed indirectly to the PMD-Track's design.
The main challenge that the PMD-Track approach faced was the cost/benefit optimization of existing asset-tracking approaches used in hospitals. In this regard, PMD-Track leverages two fundamental pillars: a tracking operation based on smartphones and static BLE tags, which simplifies deployment in contrast to existing indoor tracking solutions, combined with an intuitive visualization and detailed analytics on the usage of PMDs. The location of tagged PMDs can be automatically updated whenever a staff member passes by or comes within the range of a tagged PMD. Hospitals typically equip their staff with smartphones to facilitate internal communications and provide easy access to hospital services. This is the key aspect this proposal considers: smartphones can replace expensive gateways scattered throughout the hospital's infrastructure by acting as mobile gateways. This paper's **contributions** are summarized as follows:
* Design and prototyping of a gateway-less tracking approach, replacing gateways typically used in real-time tracking solutions with mobile devices used by hospital employees.
* Comparison of room-level accuracy, training size, and beacon placement of multilateration and fingerprinting tracking approaches in a real-world evaluation.
* Presenting an extensive analysis of the proposed approach regarding the accuracy, economics, and impacts on security and privacy.
### Organization
The remainder of this paper is organized as follows. Section II overviews fundamentals. While Section III describes the rationale and design, Section IV details the evaluation in essential dimensions. Finally, Section V summarizes the work and outlines future steps.
## II Fundamentals
While Subsection II-A provides insights into major concepts required for this work, especially Bluetooth Low Energy (BLE) and the comparison of tracking approaches _i.e_., multi-lateration and fingerprinting, Subsection II-B surveys related work in indoor tracking and within hospitals, as well as respective visualization and analytics.
### Background
The major and underlying concepts include Bluetooth Low Energy (BLE) used in tags (_cf._ Subsection II-A1) and tracking approaches of multilateration (_cf._ Subsection II-A2) as well as fingerprinting (_cf._ Subsection II-A3).
#### Ii-A1 Bluetooth Low Energy (BLE)
BLE is a widely adopted wireless technology for personal area networking with a range of up to 100 m in line-of-sight situations. It operates on 40 channels in the unlicensed 2.4 GHz ISM frequency band. BLE is designed for low-power data transmission at up to 2 Mbit/s, making it one of the most popular technologies in Internet of Things (IoT) applications [8]. Recent estimates indicate that by 2026, 7 billion BLE-enabled devices will be shipped each year, making BLE a common communication protocol embedded in IoT devices (_e.g._, smart home, industrial IoT) [45, 23].
#### Ii-A2 Multilateration
Is an approach to geometrically estimate an object's position in space through distance measures to at least three points (_i.e_., trilateration is the minimal case). This corresponds to solving the following non-linear system, with \((x_{i},y_{i},z_{i})\) being the position of the \(i\)-th point, \((x,y,z)\) as the position of the object, and \(d_{i}\) as the distance of the object to the \(i\)-th point. In a two-dimensional (planar) space, this leads to solving the following system in two variables:
\[\begin{split}(x-x_{1})^{2}+(y-y_{1})^{2}&=d_{1}^{2 }\\ (x-x_{2})^{2}+(y-y_{2})^{2}&=d_{2}^{2}\\ (x-x_{3})^{2}+(y-y_{3})^{2}&=d_{3}^{2}\end{split} \tag{1}\]
Solving equation 1 determines the \(x\) and \(y\) coordinates of point P. This is observed in Figure 2, in which the position of an item attached to a BLE beacon is predicted within the blue area. The position can be calculated using the location of at least three stationary beacons, and the solution extends analogously to any higher-order multilateration problem, _i.e._, four or more beacons. In practice, distance measures are often imperfect, since signal measurements present slight variances, and computing a solution via non-linear optimization leads to better results [13].
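For illustration, a minimal sketch of solving the planar system in Equation 1 by non-linear least squares is given below; the beacon coordinates and distances are made-up example values, and SciPy is assumed as one possible optimizer.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative planar positions of three fixed beacons and (noisy)
# distance estimates from the object to each of them.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
distances = np.array([2.2, 2.9, 2.4])

def residuals(p):
    # Deviation from each circle equation (x - x_i)^2 + (y - y_i)^2 = d_i^2,
    # expressed as the gap between implied and measured distances.
    return np.linalg.norm(anchors - p, axis=1) - distances

# With imperfect measurements the circles rarely meet in a single point,
# so a non-linear least-squares fit is used instead of an exact solution.
solution = least_squares(residuals, x0=np.array([1.0, 1.0]))
print("estimated position:", solution.x)
```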
#### Ii-A3 Fingerprinting
Is a method based on pattern recognition and consists of (a) an offline or training phase and (b) an online or operational phase. In (a), a site survey is conducted where measurements are collected in areas of interest. These measurements are called fingerprints and are ideal as uniquely as possible for each area. A typical example of fingerprinting is WiFi fingerprinting, where the Received Signal Strength Indicator (RSSI) of different Access Points (AP) is measured at specific points. In (b), a user can record the fingerprint at its current location and query it against the fingerprint database.
The k-nearest neighbors (kNN) classifier is used for the fingerprinting model, and a modified implementation of a multilateration technique for approximating the room based on geometric calculations [6, 16, 27]. In the example shown in Figure 3, the unseen green data point is classified as a red triangle for \(k=3\) or a blue square for \(k=5\). kNN uses distance-based metrics to determine neighboring data points. In the case of the collected RSSI values, the data set is split into a disjoint training and test data set with the dimensions \(cn\times k+1\) and \((1-c)n\times k+1\) for a train-test-split coefficient \(c\) (_i.e_., \(c=0.2\)).
\[\mathit{class}(q)=\mathit{majority}\big(k\_\mathit{min}\,(\parallel q-v_{i}\parallel)\big)\quad\forall v_{i}\in\mathit{train}\]
The training phase consists of storing the k-dimensional training samples and their associated class labels. In the classification phase of an unlabeled vector \(q\), its k nearest neighbors are calculated from the training data set based on the chosen distance metric. Eventually, the class label of \(q\) is determined based on most of its nearest neighbor's class labels.
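A minimal sketch of such a fingerprinting classifier, assuming scikit-learn and a toy set of RSSI fingerprints in place of the real site-survey data, could look as follows (the evaluation later in this work uses k = 7 and an 80/20 train-test split on the full collected data set):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the site survey: one row per fingerprint (RSSI in dBm,
# one column per fixed beacon) and the room label where it was recorded.
X = np.array([[-65, -70, -99], [-66, -72, -95], [-68, -71, -97],
              [-99, -75, -65], [-97, -74, -66], [-95, -73, -68]])
y = np.array(["A", "A", "A", "L", "L", "L"])

# Offline (training) phase: store the labelled fingerprints.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
model.fit(X_train, y_train)

# Online (operational) phase: classify a freshly recorded fingerprint.
query = np.array([[-64, -71, -98]])
print(model.predict(query))           # predicted room label
print(model.score(X_test, y_test))    # hold-out accuracy
```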
Figure 3: kNN example
Figure 2: Multilateration example in the evaluation scenario detecting a device in the blue area.
### _Related Work_
As indoor location services gain popularity due to the absence of Global Positioning System (GPS) signals indoors, BLE and RFID (Radio Frequency Identification) technologies emerge as alternatives, among others such as WiFi, Ultra-WideBand (UWB), and Visible Light Communication (VLC). Among these technologies, BLE provides the most attractive trade-off between cost and accuracy, at the cost of having a relatively low range.
#### Ii-B1 WiFi and Bluetooth-based Approaches
[40] presented previous work on the initial architecture of PMD-Track. Although presenting the idea of a gateway-less architecture based on BLE beacons and BLE-enabled mobile phones, the paper lacked an in-depth analysis that stems from a real-life implementation and experimentation, as presented in this article. In this regard, this article presents a detailed architecture and evaluation comparing fingerprinting and multilateration approaches.
[39] developed a passive approach to track devices emitting Bluetooth packets. Positive points of this work are characterized by the architecture of streaming data (from multiple sensors) to a sink where the location data is processed. However, a passive approach is imprecise because it requires devices to emit packets to capture them in the environment. Thus, such a solution would not be feasible to track assets within a hospital since tags can be paired with nearby sinks, eliminating the need for a passive approach.
[47] developed a system using active BLE beacons attached to medical devices and the hospital's WiFi access points to relay data. Then, the authors relied on signals actively emitted by beacons to calculate their position using a multilateration approach. The authors identified two major issues: poor precision due to interference from other radio waves and relatively high battery consumption considering active beacons. One disadvantage of the multilateration approach is that it requires constant probing of devices. In the authors' case, the devices report their position directly to the APs. Further, the fact that the beacons connect to APs means that they present an onboard network interface, which impacts the power consumption and the price of each tag.
[10] proposes a real-time indoor positioning system based on BLE using frequency diversity, trilateration, and Kalman Filter (KF). The tag position is calculated in the approach based on the trilateration of 3 RSSI sniffer devices and the tag sending RSSI values; KF is used to smooth the position calculations. One shortcoming of the approach is that it relies on four beacons per room, and each receiver (_i.e._, sniffer device) costs around 120 Euros. This makes the approach not cost-effective to be employed in the PMD-Track as its goal is to reduce costs.
[1] proposes a real-time tracking solution based on Arduino to track Bluetooth devices at a maximum range of 10 meters. The Arduino board also contains a GSM antenna that periodically transmits the collected data to a user's smartphone. The authors' approach has two problems that PMD-Track solves: it requires several static sinks (Arduino devices) to collect data from BLE tags, and its limited mobility combined with the short range makes the solution ineffective for tracking objects in the hospital scenario.
[48] presents real-time asset tracking based on WiFi tags and the existing WiFi infrastructure available in hospitals. The authors employed fingerprinting based on RSSI signals from six beacon APs covering an area of 450 square meters, _i.e._, an entire floor of a medium-sized hospital (60 rooms). Thus, positioning data was collected during the training stage, and real-time measurements were compared and approximated with training values to determine their position. However, WiFi-based RSSI is highly unreliable due to the number of running devices and connections, an aspect that the authors pointed out during their evaluation.
#### Ii-B2 RFID-based Approaches
[19] presents a passive RFID tracking system for hospitals. While passive tags present the benefits of being relatively small and not requiring batteries, they also are relatively less reliable and accurate than active tracking solutions. The authors tested the proposed system in the university's biomedical department. Although a financial analysis was included, the publication did not disclose an evaluation in terms of the accuracy and precision of the proposed approach. In this regard, passive RFID tracking systems are known for their poor accuracy, given that tags rely on the active signal of nearby readers and other factors related to the interference of signals [13]. This fact can also be observed in CCount [38], in which a combination of passive RFID tags and cameras were used to track people's movement at indoor events and gauge interest in products and merchandise.
[7] developed a crowdsourced asset-tracking solution for the construction industry based on integrating RFID and BLE technology. Assets (_e.g._, materials, tools) are stored in warehouses and are used outdoors on construction sites. Warehouses are equipped with RFID readers, construction site workers are equipped with smartphones, and assets are tagged with RFID and BLE tags. If an asset leaves the warehouse, its RFID tag is scanned, and if a construction site worker passes by the asset in the field at close range, the BLE tag's RSS values are scanned by a mobile application installed on the worker's smartphone. With the knowledge of the RFID scanner's location and the smartphone's GPS coordinates, the approximate location of a scanned asset can be determined.
#### Ii-B3 UWB-based Approaches
UWB is still not broadly available on mobile devices, with only a few flagship phones supporting it at the time of writing. However, a few promising approaches are listed, such as [14, 17, 26].
In high-precision, Line-of-Sight (LoS) tracking, UWB shows great potential thanks to its centimeter-level accuracy. [14] investigates how UWB performs in complex indoor environments with partial or non-LoS connections available.
Based on ToF ranging measurements, trilateration is used to position the node. In contrast, a fingerprinting-based algorithm provides additional context in cases of insufficient LoS measurements. The key aspect of their work is that fingerprints are not labeled with locations but rather with distances to a set of pre-defined reference points, which are then used in trilateration. Their findings include that UWB represents an effective alternative in-room identification with an accuracy of more than 95%. Further, the system achieved comparable accuracy in cases of 2 instead of 3 LoS connections, making UWB deployments more attractive as the number of anchor points might be reduced.
[26] proposes a framework for using and managing multiple IoT devices in a hospital. The framework is organized in layers. The sensing layer covers technologies such as UWB, BLE, WiFi to perform indoor localization functions, flow analysis, and fall detection, among other functions specific to hospitals (ECG). Data collected from devices is sent for processing to a cloud backend via WiFi that implements the functionality for these different services. One of the drawbacks of the proposed solution is that the use of several technologies in a single device/board, as proposed by the authors, becomes unfeasible for use at scale from an economic viewpoint and the mobility of PMDs.
[17] presents SnapLoc, a UWB-based indoor localization system that allows tracking an unlimited number of tags, in contrast to existing solutions that are limited in the number of supported tags. The authors rely on the Decawave DW1000 UWB chip and multiple anchors to detect UWB signals of tags, which can achieve a decimeter-level positioning accuracy with a 90% error of 33.4 cm. While the accuracy of UWB is far superior to technologies such as Bluetooth, WiFi, and RFID, the technology is still relatively expensive and of limited availability. Using multiple anchors, as in SnapLoc, makes the approach extremely accurate at a high acquisition and operating cost.
#### 3.2.4 Other Approaches and Indoor Tracking Surveys
As with a slightly different approach, [18][43] propose EchoLoc and RoomSense, approaches using _acoustic_ responses to a chirp emitted by a smartphone. The system is based on the location-specific features captured in a chirp's acoustic response or echo and utilizes fingerprinting to learn these response patterns. While achieving high accuracy and low localization errors, these approaches are not feasible in the context of PMD-Track due to the rather small coverage area of the fingerprint (less than 1 m), which would translate into a high site survey effort.
[13] presents a _survey_ on indoor tracking covering a wide range of theories, methods, and technologies. This work has fundamental importance once it provides a mathematical formulation considering the types of methods used for each type of technology, such as which types of path-loss models can be used in geometric-related measurements and estimation of position based on Time-of-Arrival (ToA). In this sense, this work is important to provide a theoretical basis for tag localization based on reference beacons and smartphones.
[28] _surveys_ the state-of-the-art enabling technologies and the different localization methods that each of them can use. The paper highlights the characteristics, advantages, and disadvantages of different technologies ranging from RFID, through WiFi and UWB, to Bluetooth. Although it provides an overview from a technological viewpoint, some of the listed protocols are outdated, and Bluetooth, the main technology of the present paper, is only covered up to version 3.0.
#### 3.2.5 Discussion
Table 1 provides an overview of related work, comparing major characteristics, such as the technology and approach used, and the pros and cons of their applicability to the proposed solution's scope. The precision of UWB-based solutions is higher than that of BLE, WiFi, and RFID solutions. However, UWB solutions are not as mature and pervasive as the others, which impacts hardware availability and costs. Since hospitals are equipped with thousands of portable devices, the relation between cost and benefit is often an important requirement in adopting tracking solutions.
In this regard, the advantage of acoustic-based approaches is that they require minimal additional infrastructure. For example, additional gateways/readers and APs are not required to operate with such solutions. However, they are not widely utilized in practice, and their limited coverage area renders them impractical for large-scale deployments. In addition, it is unclear how environmental changes and noise affect their localization performance.

Table 1: Overview of related work, comparing the technologies and approaches used and their pros and cons
Table 1 also shows that solutions based on multilateration of signals are in the majority, which does not necessarily imply greater precision. In this case, solutions based on multilateration are simpler to calculate since only three overlapping reference points are needed to determine the position of a device. In contrast, fingerprinting requires a longer training period with signal collection at several points to statistically estimate the probability that a detected signal is close to one or more reference points.
In indoor environments, where LoS connections are not guaranteed and triangulation-based approaches fail, fingerprinting has become a popular choice among RSSI-based methods. Despite these benefits, a labor-intensive site survey is always required before large-scale deployment. There have been numerous efforts to streamline the site survey procedure, some of which are entirely passive. Lastly, other approaches have focused on algorithms to improve the recognition and accuracy of fingerprints.
## III PMD-Track Design
Traditional asset-tracking solutions track tagged assets by installing static gateways throughout the building. The contribution of PMD-Track is the absence of such gateways by leveraging employee workforce mobility. In PMD-Track, hospital staff, typically equipped with smartphones, move throughout the building daily, covering a large area of the hospital's premises, and the gateway application is BLE tracking software installed on the employees' smartphones that detects BLE beacons in proximity. Design considerations and requirements on an abstract level are:
* **Accuracy**: the goal of the proposed solution is tracking PMDs within hospitals or healthcare facilities, which are segmented into rooms. Thus, room-level tracking is required.
* **Heterogeneous smartphones**: Deployed as a crowdsourced application, smartphone brands and models may be heterogeneous and certain limitations may be experienced in the reception of BLE signals or the usage of system resources. Hence, device heterogeneity must be taken into consideration.
* **Real-time location data**: Near real-time or fresh data is crucial for applications such as PMD-Track. Outdated information can lead to inefficient processes and wasted resources, leading users to mistrust the system. Thus, it is crucial to manage the user's expectations by communicating the staleness of information.
* **Privacy**: From a user perspective, the privacy of the person carrying the smartphone must not be violated. Thus, it is crucial that the user's location is not tracked and cannot be inferred from other data. Furthermore, hospitals often implement strict IT-security policies. Thus, minimal friction and few touch points with existing IT operations, such as connectivity services through an existing WiFi infrastructure, are needed.
As illustrated in Figure 4, smartphones interact via BLE, as the medical staff performs their daily activities, with BLE tags attached to PMDs and static BLE tags used as reference points. As soon as a smartphone is in range of a BLE tag (static or mobile), it transmits the Received Signal Strength Indicator (RSSI) to a backend service via WiFi, which calculates the position of the PMD and updates its location in the inventory.
The design consists of (1) a mobile application that records Bluetooth encounters between a mobile device and BLE tags placed on objects (represented by geometric shapes) and (2) a backend that receives information sent by smartphones in the form of message streams and uses it to calculate the approximate location (_e.g._, Room 1, Room 2, Room N) of the object in real-time based on three primary reference points: the mobile device, a static tag, and a mobile tag.
After gathering information about static and mobile tags, the smartphone application timestamps their receipt and sends it to the backend service. At this stage, it is critical to minimize excessive processing power and data transfer by sending data in rolling time periods, hence minimizing smartphone battery depletion.
Upon receiving data, the backend service _(1)_ executes the localization algorithm and updates the PMD's location by triangulating BLE RSSI values, using the known position of static (_i.e._, anchor) BLE tags as a reference, and _(2)_ calculates metrics regarding their usage (_e.g._, heatmap of PMD by type, their time spent in rooms, or the number of times a PMD has been moved). The analytics engine provides data for two frontends: a mobile (simple, intuitive localization of PMDs) and a Web-based (PMD-specific detailed analytics) one.
Figure 4: PMD-Track beacon scanning [40]

Figure 5 illustrates PMD-Track's architecture overview, which comprises the following components, summarized below and further described in the following subsections:
* **MySQL DB**: Store static inventory data
* **Kafka-connect and cluster**: Integrate various data sources with the Kafka Cluster instance [2]
* **KSQL-DB**: Database to query and aggregate streams
* **Location engine**: Predict location of PMDs
* **Web server**: Exposes location data through REST API
* **MQTT broker**: Facilitates on-premise to cloud communication
* **PWA frontend**: Display asset data on mobile [15]
* **PMD-Track frontend**: Display asset data on desktop
* **Gateway application**: Beacon scanning application
* **Fixed/mobile tag**: BLE beacons (fixed location / on asset)
### _Ble beacons_
BLE beacons are small, inexpensive, and battery-powered devices emitting a periodic BLE signal to advertise their presence to nearby devices. A beacon is uniquely identifiable through its Medium Access Control (MAC) address and RSSI value, and the distance between it and the receiving device can be approximated using the path-loss model. A common application scenario is proximity detection, where an application or device is scanning for beacons in its perimeter to establish a geospatial context associated with the scanned beacons. In the case of PMD-Track, two types of BLE beacons are distinguished. First, **mobile** beacons are attached to the assets one wants to track and are moving along with them. Their MAC address is associated with a specific piece of inventory. Second, **fixed** beacons are installed in predefined areas of the building and thus do not change their location. Their purpose is to serve as reference points for determining the current location of the scanning device within the building. Locating the scanning device allows the inferring of the location of assets it has detected in its vicinity.
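The RSSI-to-distance approximation mentioned above is commonly based on a log-distance path-loss model; a minimal sketch is given below, where the reference power at 1 m and the path-loss exponent are illustrative assumptions that would need per-beacon calibration in practice.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Approximate distance (in metres) from an RSSI reading using the
    log-distance path-loss model: RSSI = TxPower - 10 * n * log10(d).

    tx_power_dbm is the expected RSSI at 1 m and path_loss_exponent (n)
    depends on the environment; both values here are illustrative and
    would be calibrated per beacon in a real deployment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a reading of -70 dBm corresponds to roughly 3.5 m under
# these assumed parameters.
print(round(rssi_to_distance(-70.0), 1))
```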
### _Gateway application_
Traditional asset-tracking solutions rely on installing static gateways throughout the building to track tagged assets. A key requirement and novelty of PMD-Track is the absence of such a static gateway by leveraging the mobility of the employee workforce. Typically equipped with a smartphone, hospital staff moves through the building, covering a wide area of the hospital's premises daily. The gateway application is a BLE tracking software installed on the employee's smartphones that detects BLE beacons in proximity. The application runs passively and in the smartphone's background without requiring user interaction; thus, employees are not disturbed in their activities. It scans mobile and fixed BLE beacons in range and relays the timestamped information to a cloud service as illustrated in Figure 4. Reliable and uninterrupted operation of the gateway application is therefore of utmost importance to provide up-to-date information.
### _Frontend views_
Information about inventory location is displayed through two User Interfaces (UI) as depicted in Figure 6. Field workers are provided with a mobile application to quickly query and locate tracked assets. It is important to note that the mobile UI application is separate and not integrated into the existing gateway application that also runs on the smartphone. The reason for this separation is two-fold. First, apps that display information to users can efficiently be built by leveraging cross-platform frameworks to reduce development effort. Conversely, the gateway application requires a native implementation, as accessing the smartphone's hardware resources (_i.e._, BLE scanning in the background) requires access to OS-level libraries to acquire necessary permissions and to ensure continuous background operation. Second, due to privacy concerns, a separate UI application allows users to consume asset location information without opting in on the BLE tracking. Apart from the mobile application, a web-based dashboard provides asset location information in a single view, suitable for larger displays, such as desktops.
Figure 5: PMD-Track Architecture [41]. PWA: Progressive Web Application
Figure 6: Web and PWA mobile frontends
### _Communication Between Gateway and Cloud_
Message Queuing Telemetry Transport (MQTT) is a lightweight, publish-subscribe, machine-to-machine communication protocol [33]. Its low resource consumption makes it a popular choice for IoT applications in resource-constrained environments. In PMD-Track, MQTT facilitates communication between the gateway application and the cloud service. The gateway application connects to the MQTT broker and publishes aggregated BLE scan results on a predefined topic in regular intervals (_e.g._, every minute). Downstream, a consumer application can connect to the broker, subscribe to the topic, and process the received messages.
The messages on the MQTT topic form a continuous stream of events, each representing an encounter between the gateway (_i.e._, smartphone) and a BLE beacon it has recently scanned, termed a _BleScanEvent_. Its properties include a _client_id_ identifying the gateway application, a _mac_address_ associated with the BLE beacon, its _rssi_ value, and a _timestamp_ when it was detected. These messages are fed into a stream processing framework by a connector application that subscribes to the MQTT topic and connects to the stream processing framework to forward the messages on a dedicated stream. Client applications can then consume, aggregate, and act on those events to either derive state or emit new events.
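A possible shape of such a _BleScanEvent_ payload is sketched below; the concrete field names, topic name, and values are illustrative assumptions based on the properties listed above.

```python
import json
import time

# One BleScanEvent as relayed by the gateway application; the field
# names follow the properties described above, the values are examples.
event = {
    "client_id": "gateway-042",              # identifies the gateway application
    "mac_address": "C4:64:E3:0A:11:22",      # BLE beacon that was scanned
    "rssi": -71,                              # received signal strength in dBm
    "timestamp": int(time.time() * 1000),     # detection time in milliseconds
}

# The gateway would publish this payload on a predefined MQTT topic
# (topic name assumed here) at regular intervals, e.g. once per minute.
payload = json.dumps(event)
topic = "pmd-track/ble-scan-events"
print(topic, payload)
```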
### _Streaming and Static Data Processing_
Apache Kafka has been chosen as a stream processing framework to facilitate the communication of services and to enable the processing of BLE scan results [2]. For simplicity, the Kafka infrastructure scaling has been kept at a minimum with single cluster and single partition deployment. To integrate the MySQL database and the MQTT broker with the Kafka stream processing framework, Kafka-Connect, a free and open-source component of Confluent's Kafka suite, has been chosen.
The _BleScanEvents_ received in the cloud are small bits of information from various data sources, depending on how many gateway applications are running. There is no guarantee that, eventually, all BLE beacons will be scanned after a certain time, as the gateway only detects what currently is in range. If a beacon is lost or never comes into range of a gateway, this beacon will never be detected; thus, no _BleScanEvent_ will ever be emitted in the system. In the context of asset tracking, the detection of the absence of an asset is equally important as detecting its presence. Hence, the need for storing a predefined, stable list of inventory as a base data set occurs.
Storing a list of inventory can be achieved with a variety of proven technologies. In PMD-Track, a relational database stores a table of BLE beacons along with their type (_i.e._, _mobile_, _fixed_) and their _mac_address_. Similarly, a table stores a list of active gateway applications along with their _client_id_ and human-readable _name_. Finally, a table stores room information about the building. To make the static information available in the stream processing framework, a connector application periodically fetches the information from the relational database and feeds it into a dedicated stream. Serving as the base data set, a new, enriched stream joins _BleScanEvents_ with the static beacon data from the relational database, filtering out non-inventoried beacons.
### _Location Engine_
Room-level positioning can be achieved using different localization algorithms, each with advantages and disadvantages. The location engine receives a stream of enriched _BleScanEvents_ and produces a stream of positioned _mobile_ beacons associated with a location (_i.e._, room). Thus, the asset associated with the _mobile_ beacon has been located in the given room.
#### _Data Preparation_
Preparing a single data set per test location (_cf_. Figure 7) is a fundamental step for the subsequent model training and comparison of the two algorithms. The following steps for data gathering and preparation have been done in both test locations. Each room, labeled with a capital letter (A-Z), is equipped with a _fixed_ BLE beacon, denoted by the MAC address below the letter of the room. The beacons are installed in the center of the ceiling of each room, and the mapping between the room label and MAC address is stored.
After preparing the test location with beacons, the data set can be gathered. In this process, a person visits each room and takes a certain number of RSSI samples of the beacons that are in range within that room. Recording these RSSI samples is done using a smartphone app. The user enters the room label where the sampling occurs, and the app performs repeated BLE scans. Within each scan, which only lasts a few seconds, the app records the MAC address and RSSI value of the _fixed_ beacons it detected. At the end of each scan, the detected beacons and RSSI values are stored in a new data point along with the room label. To make sure RSSI samples are not only taken at a single position within the room, the person moves throughout the room while collecting the samples, as seen in Figure 8.

Figure 7: Floor plan of the office at Europastrasse, Zürich

Figure 8: Collecting fingerprints at different positions within a room
Once 1,000 data points are collected within the same room, the person stops the app and moves to the next room, where the procedure is repeated. Repeating this for \(k\) rooms yields the final data set of \(k*1,000\) rows and \(k+1\) columns, as one column accounts for the room label. As the data collection is terminated after a fixed number of measurements, the final data set is balanced by containing an equal number of samples per class/room. Table 2 shows an example of the final data set.
The RSSI value captures the distance relationship between the beacon and the smartphone. Due to environmental noise, not every beacon might be captured in a data point. This results in sparsity of the final data set (_i.e._, cells being null). As fingerprinting and multilateration-based approaches cannot operate on sparse input data, two imputation strategies are applied to fill in missing observations. In the first case, a certain beacon might never be detected for a given class. Considering the floor plan in Figure 7, the data points for room \(A\) might not have any records of the beacon in room \(L\) since it is too far away and, thus, out of range. In this case, the column of beacon \(L\) for room \(A\) is set to the constant value of -200 to indicate an out-of-range beacon.

In the second case, RSSI values are only partially absent due to shadowing or other signal interference effects. Considering the floor plan in Figure 7, rows might exist for room \(A\) where the RSSI value of the beacon of the adjacent room \(D\) is missing. In this case, the conditional mean of beacon \(D\) given room \(A\) is used to impute the missing observations, \(value=mean(D\,|\,Room=A)\). Applying the described imputation strategy for each room eventually yields a non-sparse data set.
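A minimal sketch of the two imputation strategies, assuming pandas and a toy fingerprint table in place of the collected data, is shown below.

```python
import numpy as np
import pandas as pd

# Toy fingerprint table: RSSI columns per fixed beacon plus the room label.
df = pd.DataFrame({
    "BEACON_D": [-65.0, np.nan, -62.0, -67.0],      # occasionally missed beacon
    "BEACON_L": [np.nan, np.nan, np.nan, np.nan],   # never detected in this room
    "Room": ["A", "A", "A", "A"],
})

OUT_OF_RANGE = -200.0   # sentinel value for beacons that are out of range

for beacon in ["BEACON_D", "BEACON_L"]:
    for room, group in df.groupby("Room"):
        if group[beacon].isna().all():
            # Case 1: beacon never seen in this room -> constant sentinel value.
            df.loc[group.index, beacon] = OUT_OF_RANGE
        else:
            # Case 2: partially missing -> conditional mean given the room.
            df.loc[group.index, beacon] = group[beacon].fillna(group[beacon].mean())

print(df)   # the missing BEACON_D value becomes the room mean (about -64.67)
```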
#### 3.2.2 Model Training
Once missing values have been imputed and the data set is complete, the two models can be defined, fitted, and evaluated. Predicting the room label given a set of RSSI measurements is a typical classification task. A k-nearest neighbors (kNN) classifier is utilized for the fingerprinting model. In contrast, a multilateration algorithm is modified to approximate the room using geometric calculations (as described in Section 2).
Figure 9 presents the stages for training the model, including preparing the dataset described in the previous section. The dataset is first partitioned into a training and testing set with an 80% and 20% split, respectively. Subsequently, and in the case of kNN, the model fits on the training set.
* **Multilateration** localization yields a geospatial position with \(x\) and \(y\) coordinates. In non-LoS conditions, determining the distance based on the RSSI value is error-prone due to environmental signal interference. Further, in the case of PMD-Track, room-level granularity suffices compared to an exact position. Thus, the design of an adapted multilateration algorithm predicting the room label instead of coordinates in combination with floor plan information is key.
* **kNN** uses distance-based metrics to determine neighboring data points. In case of the collected RSSI values (as described in Section 2), the data set is split into a disjoint training and test data set with the dimensions \(cn\times k+1\) and \((1-c)n\times k+1\) for a train-test-split coefficient \(c\) (_i.e._, c=0.2). In the case of PMD-Track, the train-test-split was set to 0.2, \(k\) was set to 7, and \(l_{2}\) norm (Euclidean) was used as the distance metric.
#### 3.2.3 Adapted Multilateration
Distance measurements might not be accurate due to noise in the RSSI signal. Thus, an overlapping intersection area might exist rather than a deterministic intersection point between the ranges of three known points. Overlaying this intersection area with floor plan information yields coverage areas on a room-level basis. The room with the largest coverage area is considered the most probable current location of the asset. The following sections describe different localization scenarios on a case-by-case basis and how they are resolved to a room-label prediction.

| BEACON_A | BEACON_B | ... | BEACON_L | Room |
|---|---|---|---|---|
| -65 | -70 | ... | -99 | A |
| ... | ... | ... | ... | ... |
| -99 | -75 | ... | -65 | L |

Table 2: Data set

| BEACON_J | BEACON_J (imputed) | Room |
|---|---|---|
| _Fixed value imputation_ | | |
| null | -200 | A |
| null | -200 | A |
| null | -200 | A |
| null | -200 | A |
| _Column mean imputation_ | | |
| -65 | -65 | A |
| null | -64.67 | A |
| -62 | -62 | A |
| -67 | -67 | A |

Table 3: Value imputation

Figure 9: Steps taken in the model training
The **first case** handles the situation when the smartphone and the fixed beacon are nearby so that they can be considered in the same room. This is the case for strong RSSI values above -70 dBm, which translates to a range of up to approximately 2 meters. Figure 10 illustrates a situation where the smartphone detects fixed beacons from rooms A, D, G, and E. The radius of the dotted circles is calculated based on the path-loss model and indicates the distance measured from the smartphone to each of them. As seen from the plot, the beacon in room E (hallway) is nearby and, in this case, within the range of 2 meters. Thus, the predicted room label in this situation is E. If no beacon is within the range of a 2-meter radius, the intersection area of the detected fixed beacons is calculated.
Multiple beacons are detected in the **second case**, and multiple intersection areas might exist. Figure 11 shows such a situation where beacons A, D, E1, and E3 are detected with their respective intersection areas. To characterize different intersection areas, the concepts of _intersection set_ and _intersection cardinality_ are introduced. Given an intersection \(i\), a set of origin shapes can be defined as the intersection set \(s_{inter}(i)\) whose elements yield the intersection \(i\). For example, given two circles \(c_{1}\) and \(c_{2}\) that produce intersection \(i\), the intersection set is defined as \(s_{inter}(i)=\{c_{1},c_{2}\}\). Given an intersection set \(s\), the intersection cardinality is defined as \(\parallel s\parallel\). In case of the previous example, \(\parallel s_{inter}(i)\parallel=\parallel\{c_{1},c_{2}\}\parallel=2\).
Returning to the situation shown in Figure 11, one can observe that the intersection of beacons A, D, and E1 has the highest cardinality, _i.e._,

\[\exists\, s_{inter}(i)\colon\ \parallel s_{inter}(i)\parallel\,>\,\parallel s_{inter}(j)\parallel\quad\forall j\in\mathit{intersections},\ j\neq i\]
Analogously to the previous case, once the intersection with the highest cardinality is determined, the intersection area is overlaid on the floor plan, and the room with the largest coverage area is predicted as the room label. In this case, it is room A. As shown in the previous example, multiple intersections with different cardinalities may occur. If multiple intersections exist with the same maximum cardinality, another selection criterion is applied to determine the intersection to be overlaid on the floor plan. To this end, the notion of _radii sum_ is introduced. Given an intersection set \(s_{inter}(i)=\{c_{1},c_{2},...,c_{n}\}\), the radii sum is defined as the sum of the individual radii \(r_{c_{k}}\):
\[R_{i}=\sum_{k=1}^{n}r_{c_{k}}\]
Such a situation of the **third case** is depicted in Figure 12 where there exist three intersections with cardinality 3 (\(\{A,D,E1\}\), \(\{D,E1,H\}\), \(\{H,E1,E3\}\)). Because RSSI noise increases with distance, a strong RSSI signal is more stable and accurate than a weak signal. The radius of the dotted circles inversely correlates to the signal strength (_i.e._, the stronger the RSSI signal, the smaller the radius). Thus, the smaller circles provide a more accurate indication of the distance than larger circles. On this premise, the intersection set \(\{A,D,E1\}\) is considered the most reliable intersection to be used for room label prediction. Overlaying it on the floor plan reveals that room A has the largest coverage area.
Figure 10: Multilateration: close proximity

Figure 11: Multilateration: multiple intersections, maximum intersection cardinality

Figure 12: Multilateration: multiple intersections, minimum sum of radii

Finally, the **fourth case** stems from a situation where no two beacon signals overlap. Given that there is no overlap between static beacons, the smartphone, and the mobile beacon associated with the PMD, this is the worst-case scenario. In this instance, the conventional multilateration method would fail because it does not account for historic data and does not perform approximations without a minimum overlap of three beacons. This case is illustrated in Figure 13, where the beacon A and D ranges do not overlap or touch at any point. In this case, the beacon closest to the smartphone is considered to predict the room label. In this situation, the room predicted is A, as beacon A has a stronger RSSI signal and is thus closer to the smartphone.
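Putting the four cases together, the room-prediction logic can be summarized in the following sketch; the geometric helpers for intersection sets and floor-plan coverage are hypothetical placeholders for the computations described above.

```python
def predict_room(detections, intersection_sets, coverage_by_room, close_rssi=-70):
    """Schematic room prediction following the four cases described above.

    detections: list of (room_label, rssi, radius) tuples for detected fixed beacons.
    intersection_sets(detections): hypothetical helper returning all non-empty
        intersection sets, each as a list of (room_label, rssi, radius) tuples.
    coverage_by_room(circles): hypothetical helper overlaying the intersection of
        the given circles on the floor plan and returning {room: covered_area}.
    """
    # Case 1: a beacon with a strong RSSI (roughly within 2 m) implies that the
    # smartphone is in the same room as that beacon.
    closest = max(detections, key=lambda d: d[1])
    if closest[1] >= close_rssi:
        return closest[0]

    candidates = intersection_sets(detections)
    if not candidates:
        # Case 4: no ranges overlap -> fall back to the closest beacon.
        return closest[0]

    # Case 2: prefer the intersection set with the highest cardinality;
    # Case 3: break ties by the smallest sum of radii (stronger, more
    # reliable signals produce smaller circles).
    best = min(candidates, key=lambda s: (-len(s), sum(r for _, _, r in s)))
    coverage = coverage_by_room(best)
    return max(coverage, key=coverage.get)
```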
## 4 Evaluation
Experiments were independently performed in two test locations: an apartment building, used for initial experiments and configurations, and an office building with a floor plan similar to a hospital floor. Importantly, permission to conduct experiments in a hospital environment was not granted. For both test locations, an initial data set was gathered on which the fingerprinting- and multilateration-based approaches were trained and evaluated. The experiments were based on the i10 Durable Beacon [31] as static beacons, the E7 Plus beacon [30] as mobile beacons, and Android phones hosting the gateway application. The following experiments were conducted to analyze the tracking model accuracy:
* Accuracy vs number of beacons (Subsection 4.1)
* Accuracy vs placement of beacons (Subsection 4.2)
* Accuracy vs training size (Subsection 4.3)
* Economic analysis (Subsection 4.4)
The goal is to analyze and compare the model accuracy under different input feature configurations to minimize the feature input space (_i.e._, number of beacons) while maximizing model accuracy. A column in the data set corresponds to a certain beacon or feature. As the initial data set was gathered with one beacon per room, training and evaluating the model with different beacon configurations can be achieved by simply considering a subset of the columns (_i.e._, beacons or features) at a time. Lastly, this section provides considerations on the limitations and lessons learned (_cf._ Subsection 4.2)
#### 4.1 Model Accuracy vs Number of Beacons
The cost of installing beacons can be a significant barrier to implementing a tracking solution such as PMD-Track. Thus, evaluating the trade-off between model accuracy and the number of beacons is essential to find an optimal number that is economically viable (the economic analysis is presented in Subsection 4.3), straightforward to maintain, and that can be scaled to meet a variety of evolving needs. Also, specific aspects of each beacon, such as coverage area and calibration, require careful consideration as these influence the model's accuracy.
Input feature combinations are called beacon subsets in our model. Thus, the model's accuracy is analyzed under different input feature combinations, and all data classes/rooms in the testing scenarios are evaluated. As the data is not skewed, accuracy was utilized as the evaluation measure. Figure 14 depicts the number of samples per class, demonstrating that both data sets are well-balanced concerning the class members.
To evaluate the accuracy of a specific input feature combination, the data set is first partitioned into a training and testing set with an 80% and 20% split, respectively. Subsequently, and in the case of kNN, the model fits on the training set. Finally, the accuracy is calculated on the testing set by computing the number of correctly classified samples divided by the total number of samples in the testing set. Cross-validation is performed to achieve a more reliable accuracy score by repeatedly training and testing the model on different train-test splits.

Figure 14: Evaluation environments considering the number of samples per class: (Left) Office at Europastrasse and (Right) Apartment at Walfenplatz

Figure 15: Accuracy threshold
As mentioned above, a feature combination is a subset of the total number of features. Thus, there are feature combinations of length \(1,2,...,d\) where \(d\) is the number of available features. Performing this evaluation is computationally expensive due to the high number of combinations.
\[combinations=\sum_{i=1}^{d}\frac{d!}{i!(d-i)!}\]
The office and apartment test locations yield 65,535 and 31 combinations, respectively. The following analysis groups accuracy scores by the average number of features involved.
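The following sketch outlines this per-subset evaluation. It assumes the data set is available as a pandas DataFrame with one RSSI column per beacon and a room-label column, and it uses scikit-learn's kNN classifier with cross-validation; the column names and library choice are illustrative assumptions, not the exact implementation used in the experiments.

```python
# Sketch of the per-subset accuracy evaluation. `df` is assumed to hold one RSSI
# column per beacon and a "room" label column (hypothetical names).
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def evaluate_subsets(df, beacon_cols, label_col="room", k=7, cv=5):
    """Return {subset_size: list of mean CV accuracies over all beacon subsets}."""
    scores_by_size = {}
    for size in range(1, len(beacon_cols) + 1):
        for subset in combinations(beacon_cols, size):
            X, y = df[list(subset)].values, df[label_col].values
            acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=cv).mean()
            scores_by_size.setdefault(size, []).append(acc)
    return scores_by_size

# The number of non-empty subsets grows as 2^d - 1:
print(2**16 - 1, 2**5 - 1)  # 65535 subsets (office, 16 beacons), 31 subsets (apartment, 5 beacons)
```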
In addition, Figure 15 shows accuracy on the y-axis and the number of beacons per room ratio on the x-axis. For example, the green vertical line indicates the accuracy score for a beacon per room ratio of 0.5. _For the office test location, equipping half of all available rooms with beacons yields an accuracy of 35% and 83% on average for the multilateration-based and kNN-based approaches, respectively_. It is worth noting that the office test location is equipped with 16 beacons and has 11 rooms, which explains the beacon-per-room ratio of up to 1.4. Based on this interpretation, one can easily observe that kNN performs considerably better than multilateration for all beacon-per-room ratios and across test locations. Further, it appears that accuracy for the fingerprinting-based approach grows logarithmically, whereas the multilateration-based approach has a more linearly-shaped growth.
The box plot in Figure 16 shows the accuracy distribution within a group of features. A wide distribution indicates that the choice of beacons has a high impact on the accuracy, whereas a narrow distribution suggests that the choice is less relevant concerning achieved accuracy. For example, in Figure 16 for the office test location and the kNN model, one can observe that installing 3 beacons in arbitrary rooms yields a median accuracy of approximately 77%, with the Q1 and Q3 quartiles around 75% and 78%, respectively. It can be observed that the accuracy distributions get narrower as the number of beacons increases. Conversely, for a low number of beacons (1-3), the choice of beacons seems to have a high impact, as seen by the wider distributions.
#### 4.2 Model Accuracy vs. Placement of Beacons
As a next evaluation step, the interesting question is to analyze the significance of individual beacons concerning accuracy to understand what beacons contribute more to accuracy than others, _i.e._, influence of beacon placement on accuracy. Placing beacons too close or far apart can compromise the system's accuracy. In addition, environmental factors such as interference, signal attenuation, and obstructions can impact
Figure 16. Accuracy box plot grouped by number of features (Europastrasse): (Left) kNN fingerprinting and (Right) Multilateration
Figure 17. Beacon frequency - Office at Europastrasse
the system's accuracy. For an effective indoor tracking system, finding the optimal balance between model accuracy and beacon placement is essential. The objective is to maximize precision while minimizing the number of beacons needed to achieve the desired precision.
Combining this with floor plan information might reveal geospatial patterns for where to place beacons on a floor plan to achieve maximum accuracy. To this end, we analyze the frequency with which beacons occur in the top-ranking beacon configuration of each feature-space length. In this sense, for each group of a specific feature length (_e.g._, all combinations of length 3), the combination that achieves the best accuracy score is selected.
Repeating this for all feature lengths and counting the occurrence of each beacon yields the bar plots in Figure 17 and Figure 19. While the beacon frequencies of the kNN- and multilateration-based approaches are almost identical in the apartment setting, the office test location differs. For certain beacons, the frequency count for both models is similar and may be off by a count of 1 to 2. However, there are beacons for which the discrepancy in the frequency count is higher. _E.g._, most beacons in the hallway (E6, E2, E4, and E5) are more important for the fingerprinting-based approach than the multilateration-based one. Figure 18 shows the position of fixed beacons overlaid on the floor plan, where the beacon size corresponds to its frequency count.
It can be seen that beacons D, E6, and F are all aligned horizontally and vertically centered. Further, if beacon A is also considered, two beacon pairs emerge, one on the left (D, A) and one on the right (E6, F). Conversely, Figure 18 (right) shows the same map but for the multilateration-based approach, where another pattern emerges. In this case, the beacons in the center of the floor plan (along the hallway) are the least important. Instead, beacons in the outer rooms seem to have higher importance.
The described observations indicate general patterns for candidate beacon locations for each localization approach. In the case of fingerprinting-based localization, beacons might be installed in pairs along a horizontally or vertically centered line of the floor plan. For multilateration, beacons might be best placed in the outermost border rooms of the floor plan. Despite those observations, it is important to highlight the qualitative nature of these results. In the case of the apartment, the beacon frequency analysis was not conducted due to the low number of beacons (only 5 beacons).
#### 4.3 Model Accuracy vs. Training Size
While multilateration is a geometrical approach that does not require prior knowledge of the environment and can work with a relatively small number of reference points, fingerprinting is a statistical method that uses a pre-existing database of signal strength values to determine the location of a device. In this regard, the disadvantage of the fingerprinting method is the need to prepare a data set in which beacons are properly positioned by observing, among other factors, the characteristics of each floor plan and signal reflection. In practice, it can be demonstrated that the larger (and, per the previous points, better-constructed) the training database, the more accurate the model.
The following charts present the accuracy of the kNN localization model subject to the training size. Also, the presented results are based on the beacon placement depicted in Figure 18. Similar to the previous evaluation, the model was repeatedly trained and tested on different training data sizes, ranging from 20 to 200 samples per room/class. The motivation for this analysis is to reduce the overhead induced by collecting samples for the training database. Figure 20 depicts a heatmap showing the number of beacons on the y-axis and training size on the x-axis.
As expected, increasing the training size _i.e._, the dataset, positively affects model accuracy,
Figure 19: Beacon frequency - Apartment at Waftenplatz
Figure 18: Beacon frequency map (Office at Europastrasse): (Left) kNN and (Right) Multilateration
increasing the accuracy score. The kNN model with 16 beacons and a database of 20 records per beacon reaches an accuracy of 93%. It is important to note that increasing the number of records per beacon contributes less to accuracy than adding more BLE beacons: as the number of records grows to a total of 200 per beacon, the accuracy only reaches 95%. However, the goal is to achieve a minimum acceptable accuracy with the lowest possible number of beacons.
It is further worth noting that the positive effect decreases with an increasing number of beacons. For example, the accuracy of a 3-beacon model can be increased by 5% when using a training set size of 200 instead of 20. In contrast, with a 16-beacon model, the accuracy can only be improved by 2%. Figure 21 illustrates this behavior in a 3D contour plot. The first derivative of the accuracy vs training size relationship confirms that trend and shows the diminishing marginal effect of additional training samples. In other words, for the configuration utilized in the experiments at the Europastrasse Office, increasing the database has a lesser impact on accuracy than increasing the number of sensors. However, beacon range and/or signal interference can also impact accuracy and necessitate a larger overlap between beacons.
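A sketch of this training-size analysis is given below; the per-class subsampling strategy, the data layout, and the use of scikit-learn are illustrative assumptions rather than the exact experimental pipeline.

```python
# Sketch of the accuracy-vs-training-size analysis: the kNN model is repeatedly
# trained on a growing number of samples per room and evaluated on a held-out test set.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def accuracy_vs_training_size(X, y, sizes=range(20, 201, 20), k=7, seed=0):
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    rng = np.random.default_rng(seed)
    results = {}
    for n_per_class in sizes:
        idx = []
        for label in np.unique(y_train):
            label_idx = np.where(y_train == label)[0]
            take = min(n_per_class, len(label_idx))
            idx.extend(rng.choice(label_idx, size=take, replace=False))
        model = KNeighborsClassifier(n_neighbors=k).fit(X_train[idx], y_train[idx])
        results[n_per_class] = accuracy_score(y_test, model.predict(X_test))
    return results  # e.g., plotted as the heatmap/contour of Figures 20-21
```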
#### 4.4 Economic Analysis
With the given analysis on model accuracy, the stage is set for economic considerations regarding the cost of ownership associated with the deployment of each localization algorithm. PMD-Track operates based on mobile phones, which are typically already provided to staff to facilitate coordination and communication, and static and mobile beacons (attached to PMDs). In this sense, the economic analysis only considers the purchase of hardware for the static and mobile tags and activities related to deployment, maintenance, and operation. Table 4 compares BLE tag options available on the market and their characteristics.
BLE static beacons do not have the same functionality as gateways or readers, as is the case with the Impinj xArray device [22] used in the CCount project [38]. These devices typically have higher processing power, which allows the localization to be computed directly on the device, but they are more expensive than BLE static beacons (> US$ 3,000). In this sense, static beacons are used as reference points for the localization algorithm and work similarly to mobile beacons, but with a longer range and higher battery capacity, as observed in Table 4.
Table 5 specifies the set of assumptions on which the following economic evaluation is based. These figures consider the prices charged in Switzerland according to a survey of companies that perform indoor tracking in similar sectors (indoor tracking for marketing analytics, as published in previous work [42]). First, the scenario envisions the deployment in a location with 500 rooms, corresponding to a typical hospital size. The installation and labeling of a fixed beacon on the ceiling of a room is assumed to be carried out by a trained field worker with an hourly rate of US$ 30 and an average installation time of 15 minutes. The training time of 15
\begin{table}
\end{table}
Table 4: Comparison of BLE tag options available on the market [10, 31, 29, 24]: vendor, range, and battery capacity of static and mobile beacons
minutes to collect 200 training samples per room is based on our experience and is also accounted for with an hourly rate of $30. Hardware components such as beacons and required batteries can be purchased at $5 and $2, respectively, at the time of writing. Finally, a beacon-to-room ratio of 0.4 (fingerprinting) and 0.8 (multilateration) was chosen to achieve an estimated accuracy of 80% and 50-75%, respectively. These numbers are obtained by extrapolating the observations of Figure 15.
Table 6 compares the costs associated with each localization model. The setup costs comprise the installation of beacons and the collection of fingerprint data. It is important to note that the cost of collecting the fingerprint training data is roughly equivalent to the cost of the additional beacons required for multilateration. Thus, at first glance, the two approaches appear comparable, with the _multilateration model offering a setup-cost advantage, being 20% cheaper than fingerprinting_. However, the picture changes when annual maintenance expenses are considered. The annual maintenance costs are driven by the recurring cost of batteries and the labor involved in replacing them. In this instance, fingerprinting-based localization has an advantage due to its smaller hardware footprint, incurring half the maintenance costs of multilateration. Considering the total of setup and recurring costs, the fingerprinting-based approach is 7% cheaper than the multilateration-based approach due to the recurring costs. This difference increases further across multiple years, since setup costs are only incurred once.
Figure 22 outlines the accumulated ownership costs over 5 years. Notably, the _fingerprinting-based method recovers the higher setup costs in less than two years and is approximately 45% cheaper_ over five years. Under the current assumptions, it can be concluded that the fingerprinting-based approach is more effective in terms of hardware footprint and cost. This can change once the battery life of beacons is significantly increased, reducing the replacement frequency.
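The cost figures above follow directly from the parameters of Table 5. The sketch below recomputes the setup and yearly recurring costs of Table 6 and the accumulated cost of ownership shown in Figure 22; the function and variable names are our own.

```python
# Sketch of the cost model behind Tables 5-6 and Figure 22, using the stated
# parameters: 500 rooms, $30/h labour, 15 min per installation/fingerprinting task,
# $5 beacons, $2 batteries, beacon-to-room ratios of 0.4 (kNN) and 0.8 (multilateration).
ROOMS, HOURLY_RATE, MINUTES_PER_TASK = 500, 30.0, 15
LABOUR_PER_TASK = HOURLY_RATE * MINUTES_PER_TASK / 60        # $7.50
BEACON_PRICE, BATTERY_PRICE = 5.0, 2.0

def costs(beacon_room_ratio, needs_fingerprinting):
    beacons = int(ROOMS * beacon_room_ratio)
    setup = beacons * (BEACON_PRICE + LABOUR_PER_TASK)        # purchase + installation
    if needs_fingerprinting:
        setup += ROOMS * LABOUR_PER_TASK                      # survey every room once
    recurring = beacons * (BATTERY_PRICE + LABOUR_PER_TASK)   # yearly battery swap
    return setup, recurring

fp_setup, fp_rec = costs(0.4, True)    # -> 6250.0, 1900.0  (fingerprinting)
ml_setup, ml_rec = costs(0.8, False)   # -> 5000.0, 3800.0  (multilateration)
for year in range(1, 6):               # accumulated cost of ownership (Figure 22)
    print(year, fp_setup + year * fp_rec, ml_setup + year * ml_rec)
```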
#### 4.5 Impacts on Security and Privacy
_Security_. Potential attack vectors are the communication link between the BLE beacon and the smartphone, the mobile phone application, and the tracking service itself running at a backend server. Potential impacts are summarized as follows:
* A malicious or malfunctioning BLE beacon could be deployed by an adversary to intrude or overload the system in the form of a Denial-of-Service (DoS) attack.
* Vulnerability or malfunctioning in the mobile application impairing the device use (DoS) or collecting personal information.
* Backend vulnerabilities causing a DoS on the server or emitting incorrect location data.
As mentioned in [9], a DoS caused by any device emitting wireless signals can impair the staff communication via radio or their mobile phones via WiFi. BLE supports a variety of communication schemes between two nodes. Some rely on an established connection to exchange encrypted information, whereas others only operate in a broadcast mode, preventing encrypted payloads. Excessive mobile device use due to a vulnerability or programming flaw is also a severe concern, as teams may be unable to communicate promptly. As a result, regardless of the communication scheme used, it is necessary to address these security concerns.
_Privacy_. Determining where a patient's medical equipment is located raises privacy concerns. For example, tracking the location of a patient's infusion pump also tracks the patient's
\begin{table}
\begin{tabular}{|l|l|l|l|} \multicolumn{4}{c}{Fingerprinting} \\ \hline
**Description** & **Units** & **Price/Unit** & **Total** \\ \hline BLE Beacons & 200 & $5.00 & $1,000.00 \\ \hline Installation of BLE Beacons & 200 & $7.50 & $1,500.00 \\ \hline kNN Fingerprinting & 500 & $7.50 & $3,750.00 \\ \hline
**Setup Costs** & & & **$6,250.00** \\ \hline Coin cell battery & 200 & $2.00 & $400.00 \\ \hline Battery replacement work & 200 & $7.50 & $1,500.00 \\ \hline
**Recurring Costs (Yearly)** & & & **$1,900.00** \\ \hline \multicolumn{4}{c}{Multilateration} \\ \hline BLE Beacons & 400 & $5.00 & $2,000.00 \\ \hline Installation of BLE Beacons & 400 & $7.50 & $3,000.00 \\ \hline kNN Fingerprinting & 0 & $7.50 & $0.00 \\ \hline
**Setup Costs** & & & **$5,000.00** \\ \hline Coin cell battery & 400 & $2.00 & $800.00 \\ \hline Battery replacement work & 400 & $7.50 & $3,000.00 \\ \hline
**Recurring Costs (Yearly)** & & & **$3,800.00** \\ \hline \end{tabular}
\end{table}
Table 6: Economic evaluation: comparison between fingerprinting and multilateration
\begin{table}
\begin{tabular}{|l|l|} \hline
**Parameter** & **Value** \\ \hline Rooms & 500 \\ \hline Installation time per room & 15 min \\ \hline Fingerprinting per room & 15 min \\ \hline Installation hourly rate & $30 \\ \hline Fingerprinting hourly rate & $30 \\ \hline Beacon unit price & $5 \\ \hline Battery unit price & $2 \\ \hline Battery lifetime & 1 year \\ \hline beacon-room-factor-knn & 0.4 \\ \hline beacon-room-factor-multi & 0.8 \\ \hline \end{tabular}
\end{table}
Table 5: Economic evaluation: parameters
Figure 22: Accumulated costs of ownership
location. Such concerns must be analyzed on a legal and ethical basis, and mitigation strategies, such as anonymizing patient data, are being developed. Another privacy consideration involves the smartphone and the personnel carrying it: for the approach to work, the smartphone's location must always be known. This information can be linked to an employee's location, invading his or her privacy. Anonymizing staff identities is a feasible solution for guaranteeing privacy in this situation.
#### 4.6 Limitations and Lessons Learned
A main limitation of this study was not being able to conduct it within hospital premises with real PMDs. The initial intent was to conduct the experimental study in collaboration with the Universitatsspital Zurich (USZ); however, the system must undergo an additional and detailed safety and security analysis before being deployed in an environment with patients, considering that USZ is the major hospital in the Zurich region. As an alternative, a comparison of multilateration and fingerprinting methods for tracking PMDs was made possible by utilizing an office floor with a structure similar to that of multiple hospital rooms.
In addition, PMD-Track's distributed nature presents additional challenges. Regardless of the localization method, the system relies on a critical mass of users to perform distributed PMD tracking. If the number of users is insufficient to cover the entire facility regularly, there may be "blind zones" where no up-to-date information is accessible due to staff absence. In this case, the deployment of stationary gateways in isolated areas may still be necessary.
In general, the PMD-Track approach demonstrated that it is possible to build a tracking system not relying on stationary gateways or readers as opposed to traditional Real-time Locating Systems (RTLS), while maintaining room-level accuracy. By leveraging smartphones provided to medical staff, it is possible to provide a simpler setup process, lower maintenance overhead, and, most importantly, lower ownership costs compared to existing tracking solutions. Major lessons learned include:
* Data pre-processing is the main driver of model accuracy. A straightforward data collection and imputation strategy is necessary to calculate intersections of the tracked PMDs, and fixed and mobile beacons (_i.e._, smartphones).
* Beacon placement has a significant impact on model accuracy. It is important to consider the types of walls in the room, objects in the room, and possible interference to maximize the RSSI range.
* A requirement observed in practice is that tracking approaches making intensive use of access points, _e.g._, sending probing packets to beacons or mobile phones to check RSSI, are unlikely to be adopted. The WiFi infrastructure is vital to communication within the hospital, and its excessive use may not be permitted by IT staff.
* For both test locations, it could be shown that the accuracy of the fingerprinting-based approach surpassed that of the multilateration-based localization algorithm by 15-45% in non-line-of-sight conditions. Further, it has been shown that fingerprinting-based localization achieves roughly the same level of accuracy with half the number of fixed reference beacons.
* Fingerprinting presents 20% higher setup costs due to the required training stage, but 50% lower recurring costs than multilateration, as it uses considerably fewer beacons to achieve the same level of accuracy. Over a single year, fingerprinting is 7% cheaper, accumulating to 45% cheaper than multilateration over the course of five years.
* Multilateration is a comparatively simpler approach to deploy and operate than fingerprinting, but has considerably lower tracking accuracy and higher maintenance costs in the experimentation settings used in this paper.
* Hospital IT infrastructures are a critical aspect of hospital operations, since various equipment and adequate staff communication depend on them. In this sense, indoor tracking approaches should reduce as much as possible the use of access points for RSSI calculation and excessive communication with backend services, to avoid a possible DoS of the communication channel.
* Simplicity is key. PMD-Track's key strength lies in its lightweight hardware requirements, which yield multiple benefits, such as a simpler setup process, less maintenance overhead, and, most importantly, lower ownership costs than existing tracking solutions.
## 5 Considerations and Future Work
PMD-Track provides accurate indoor tracking and inventory management for a hospital's infrastructure management. Replacing expensive stationary gateways with BLE beacons and mobile phones provided to the staff yields high accuracy at a lower cost than the stationary readers or gateways typically deployed in traditional tracking approaches. In this regard, PMD-Track evaluated two popular indoor tracking approaches, fingerprinting and multilateration, concerning their accuracy, the placement of beacons, and their economic impacts. As employees approach tagged PMDs, their smartphones update the location of spotted PMDs in real-time, providing room-level localization data with up to 83% accuracy for fingerprinting and 35% for multilateration. The economic analysis shows that fingerprinting is 7% cheaper in the first year, considering setup and recurring costs, which further increases to 45% cheaper than multilateration over five years.
Still, additional fine-tuning and improvements are possible within the proposed PMD-Track approach. Based on the observed results, the fingerprinting-based method was implemented in PMD-Track. Although the kNN model worked considerably well with a standard configuration of \(k=7\) and the Euclidean distance metric, different hyperparameters could be evaluated, which might improve the model's accuracy even further. It is also observed that using the users' mobile phones impacts the battery consumption of these devices, which needs to be evaluated
to optimize the application. In this sense, such optimization is considered future work in PMD-Track. Furthermore, an experimental study within a hospital could verify, in practice, if the proposed approach yields greater effectiveness in the daily operations and practical activities of medical staff.
|
2306.15922
|
Divide-and-rule policy in the Naming Game
|
The Naming Game is a classic model for studying the emergence and evolution
of language within a population. In this paper, we extend the traditional
Naming Game model to encompass multiple committed opinions and investigate the
system dynamics on the complete graph with an arbitrarily large population and
random networks of finite size. For the fully connected complete graph, the
homogeneous mixing condition enables us to use mean-field theory to analyze the
opinion evolution of the system. However, when the number of opinions
increases, the number of variables describing the system grows exponentially.
To mitigate this, we focus on a special scenario where the largest group of
committed agents competes with a motley of committed groups, each of which is
smaller than the largest one, while initially, most of uncommitted agents hold
one unique opinion. This scenario is chosen for its recurrence in diverse
societies and its potential for complexity reduction by unifying agents from
smaller committed groups into one category. Our investigation reveals that when
the size of the largest committed group reaches the critical threshold, most of
uncommitted agents change their beliefs to this opinion, triggering a phase
transition. Further, we derive the general formula for the multi-opinion
evolution using a recursive approach, enabling investigation into any scenario.
Finally, we employ agent-based simulations to reveal the opinion evolution and
dominance transition in random graphs. Our results provide insights into the
conditions under which the dominant opinion emerges in a population and the
factors that influence these conditions.
|
Cheng Ma, Brendan Cross, Gyorgy Korniss, Boleslaw K. Szymanski
|
2023-06-28T04:58:18Z
|
http://arxiv.org/abs/2306.15922v2
|
# Divide-and-rule policy in the Naming Game
###### Abstract
The Naming Game is a classic model for studying the emergence and evolution of language in a population. In this paper, we consider the Naming Game with multiple committed opinions and investigate the dynamics of the game on a complete graph with an arbitrarily large population. The homogeneous mixing condition enables us to use mean-field theory to analyze the opinion evolution of the system. However, when the number of opinions increases, the number of variables describing the system grows exponentially. We focus on a special scenario where the largest group of committed agents competes with a motley of committed groups, each of which is significantly smaller than the largest one, while the majority of uncommitted agents initially hold one unique opinion. We choose this scenario for two reasons. The first is that it arose many times in different societies, while the second is that its complexity can be reduced by merging all agents of small committed groups into a single committed group. We show that the phase transition occurs when the group of the largest committed fraction dominates the system, and the threshold for the size of the dominant group at which this transition occurs depends on the size of the committed group of the unified category. Further, we derive the general formula for the multi-opinion evolution using a recursive approach. Finally, we use agent-based simulations to reveal the opinion evolution in random graphs. Our results provide insights into the conditions under which the dominant opinion emerges in a population and the factors that influence this process.
Naming Game, divide and rule, mean field, tipping point
## I Introduction
Opinion spreading, language evolution, and collective behavior in social systems have been of great interest to researchers and they were investigated from mathematical and so-called sociophysics perspectives for at least four decades [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. Agent-based models and statistical physics provide powerful tools for studying the opinion dynamics and social influence, often modeled by dyadic agent interactions [16, 17, 18, 19]. When choosing one of the several opinions, some individuals may follow the choices of their peers or acquaintances. However, other individuals in the system may advocate a single opinion and refuse to consider any others, to which we refer as committed agents or zealots. The presence of zealotry strongly biases the evolution of the opinions towards those held by the committed minorities. Even the presence of one group with committed agents of modest size may convert all the uncommitted agents to adopting the opinion of committed agents [20, 21, 22, 23, 24, 25, 26].
Here, we focus on the Naming Game (NG) to study the opinion dynamics in the presence of committed minorities. Introduced as a linguistic evolution model, the NG was initially used as a model for the formation of a vocabulary from different observations, and it demonstrated how a population of agents can collectively converge to a single unique word for labeling different objects or observations in their environment [27, 16, 17, 28]. Recently, it has been used as a mathematical model for the dynamics of social influence, which describes the evolution of competing opinions through the dyadic interactions between agents. A number of theoretical studies have been done to investigate the spread and evolution of opinions on various regular and complex networks in the presence of committed agents [29, 30, 31, 32, 33, 34, 35]. Yet many of them focus on the models with two competing opinions. To gain a general understanding of this model, the scenario with multiple opinions deserves more attention. In such systems, agents can hold a variety of opinions, and the dynamics of opinion evolution can be more complex and diverse than in the two-opinion scenario.
In our study, we consider the Naming Game with an arbitrarily large number of competing opinions and examine the influence of committed members on opinion evolution. Given the presence of mixed states that involve more than a single opinion, monitoring the state of the system with \(m\) distinct single opinions becomes extremely challenging, as there are \(2^{m}-1\) possible combinations of opinions, which sets the number of state variables needed to write the equations for the state evolution of the system. Such exponential growth of the number of state variables makes this problem intractable even for systems with \(m\) larger than \(10\), for both numerical simulation and analytical derivation of the solution.
There are also a limited number of studies discussing the system with multiple competing opinions [36, 37, 38, 39]. For some special scenarios, one may reduce the system complexity by inspecting symmetry and making appropriate approximations [36]. We adapt this approach to investigate the influence of committed agents and phase transition in the quasi-symmetric setup. However, the approximation might fail if no symmetry is preserved. Our strategy is to focus on the key features of the system. Since the system state is determined by the density evolution of each single opinion, it is not necessary to distinguish or record all mixed states. Instead, one can just keep track of the density distribution and spreading probability of each single opinion. By anonymizing mixed states, the number of states to be monitored is reduced, making the analysis of the system more manageable. This approach is general and can be applied to a wide range of
scenarios.
The rest of the paper is organized as follows. Section II provides an overview of the interaction mechanism of the Naming Game and its variants, as well as its evolution from the perspective of mean-field theory. Section III focuses on the original model on complete graphs and uses the mean-field differential equations to investigate opinion evolution. This section considers systems with different numbers of opinions, including two opinions, three opinions, and an arbitrarily large number of opinions. We also discuss phase transitions for different allocations of committed agents among multiple groups and the conditions under which the critical points arise, as well as two simplified systems of symmetrical setups designed to approximate the critical thresholds of the arbitrary initial conditions. Section IV studies the listener-only variant of the Naming Game on complete graphs and presents a recursive approach to reduce the system's complexity. In Section V, we investigate the original Naming Game model on Erdős-Rényi (ER) networks and show that for the given simplified scenario, the system evolution makes the divide and rule policy observable.
## II Model Description and Mean-Field Approximation
In the Naming Game (NG) model [17, 16, 31] using several distinct opinions, each agent holds a subset of opinions that defines its state. This state may change as a result of this agent's interaction with other agents when it acts as a speaker or listener at a given step of the NG.
In the original NG dynamics, at each NG step, a randomly chosen agent becomes a speaker and sends a random opinion from its opinion state to a randomly chosen neighbor to be a listener. If the listener already has the sent opinion in its opinion state, both speaker and listener retain only this opinion; otherwise, the listener adds it to its opinion state. There is a special type of agent whose opinion state contains only one opinion, and it never acts as a listener, so it holds its opinion unchanged during the entire NG. In other words, such agents are immune to any influence but can still spread their opinions to their neighbors when acting as a speaker. We refer to them as committed agents or zealots. In addition to this original model, there are two variants, which limit changes to only one of the two interacting nodes, named the "listener-only" and "speaker-only" versions. For the "listener-only" type, only the opinion state of the listeners can be modified. Here, we focus on the original NG model and its "listener-only" variant.
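For illustration, a minimal agent-based sketch of one interaction step of the original NG (with committed agents excluded from the listener role) could look as follows; the data structures and helper names are assumptions made for clarity, not part of the original studies.

```python
# Sketch of one interaction step of the original NG with committed agents.
# Each agent's state is a set of opinions; committed agents hold a single
# opinion and never act as listeners, so their state never changes.
import random

def ng_step(states, committed):
    """states: list of sets (opinion states); committed: list of bools.
    Assumes at least one uncommitted agent besides the chosen speaker."""
    n = len(states)
    speaker = random.randrange(n)
    listener = random.choice([a for a in range(n) if a != speaker and not committed[a]])
    opinion = random.choice(tuple(states[speaker]))      # speaker utters one opinion
    if opinion in states[listener]:
        # successful communication: both agents collapse to the uttered opinion
        states[listener] = {opinion}
        if not committed[speaker]:
            states[speaker] = {opinion}
    else:
        states[listener] = states[listener] | {opinion}  # listener adds the opinion
```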
For the opinion dynamics on the complete graph, mean-field theory can be applied to systematically study the evolution of opinion states. For the general scenario with \(m\) unique single opinions, an uncommitted agent can hold at most \(M=2^{m}-1\) opinion states in total. For instance, when \(m=3\), the possible opinion states are \(A\), \(B\), \(C\), \(AB\), \(AC\), \(BC\), and \(ABC\). Under the condition of homogeneous mixing, the mean-field differential equations are written as
\[\frac{\mathrm{d}x_{k}}{\mathrm{d}t}=\sum_{i=1}^{M}\sum_{j=1}^{M}U_{ij}^{(k)}x _{i}x_{j}+\sum_{i=1}^{M}\sum_{j=1}^{m}V_{ij}^{(k)}x_{i}P_{j}+\sum_{i=1}^{m} \sum_{j=1}^{M}W_{ij}^{(k)}P_{i}x_{j}. \tag{1}\]
Such equations describe the changes in the density of uncommitted agents holding different opinion states as well as the interactions between the uncommitted agents and committed agents. The density \(x_{i}\) (\(i=1,2,...,m\)) represents the fraction of uncommitted agents holding the single opinion state \(i\), and the density \(x_{i}\) (\(i=m+1,m+2,...,M\)) represents the fraction of uncommitted agents holding the mixed opinion state \(i\). \(P_{i}\) (\(i=1,2,...,m\)) is the density of zealots committed to the single opinion \(i\), which does not change over time. The matrices \(U\), \(V\), and \(W\) contain the coefficients determined by the interaction mechanism, and they differ for the three versions of the interaction rules. Specifically, \(U_{ij}^{(k)}\) is the probability that the interaction between an uncommitted speaker with the opinion state \(i\) and an uncommitted listener with \(j\) gives rise to the opinion state \(k\). \(V_{ij}^{(k)}\) is the probability that the interaction between an uncommitted speaker holding the opinion state \(i\) and a committed listener with \(j\) results in the speaker adopting the opinion state \(k\). Similarly, \(W_{ij}^{(k)}\) is the probability that the interaction between a committed speaker holding the opinion state \(i\) and an uncommitted listener with \(j\) results in the listener adopting the opinion state \(k\). The densities \(x_{i}\) and \(P_{i}\) must sum up to 1, so we have \(\sum_{i=1}^{M}x_{i}+\sum_{i=1}^{m}P_{i}=1\).
For the system with a small number of single opinions, \(m\), the numerical integration of the mean-field differential equation, Eq. (1), can be performed to obtain the density evolution of each opinion state in the NG model. However, as the number of all opinion states, \(M\), which includes both single and mixed opinions, increases exponentially with \(m\), performing direct numerical simulations becomes computationally infeasible and impractical for large values of \(m\).
## III Original version
First, the original NG dynamics are analyzed using mean-field differential equations, with a focus on the density evolution of each opinion state in the presence of committed minorities. This section includes the study of three scenarios of varying complexity: the first with two single opinions, the second with three single opinions, and the third with \(m\) single opinions in general.
### _The two-opinion scenario_
In the scenario of \(m=2\), there are two opinions, A and B, in the system competing against each other. Eq. (1) reduces to two mean-field equations,
\[\begin{split}\frac{\mathrm{d}x_{A}}{\mathrm{d}t}&=-x_{A}x_{B}+x_{AB}^{2}+x_{AB}x_{A}+\frac{3}{2}P_{A}x_{AB}-P_{B}x_{A}\\ \frac{\mathrm{d}x_{B}}{\mathrm{d}t}&=-x_{A}x_{B}+x_{AB}^{2}+x_{AB}x_{B}+\frac{3}{2}P_{B}x_{AB}-P_{A}x_{B}\end{split} \tag{2}\]
By definition, \(x_{A}+x_{B}+x_{AB}+P_{A}+P_{B}=1\). Together with Eq. (2), the two-opinion model can be analytically and numerically solved.
We are interested in the scenario in which one opinion (let us say \(A\)) has a higher fraction of committed agents than the other opinion, \(B\), but the latter is initially supported by all uncommitted agents, making it the majority opinion. Committed agents of opinion \(A\) can assimilate uncommitted agents, thus causing opinion \(A\) to eventually become the majority opinion. Previous studies [22, 23] have shown that there exists a minimal fraction of committed agents, denoted by \(P_{A}^{(c)}\), which is required for a fast phase transition of the dominant opinion from \(B\) to \(A\). Below this threshold, the waiting time for such a transition grows exponentially with the number of agents, making it infeasible to observe in practical cases. To understand the final dominant state of the system, a new variable, \(n_{i}\), is introduced, which represents the total fraction of agents holding opinion \(i\) in equilibrium. This fraction includes both the committed and uncommitted agents for a single opinion, \(n_{i}=x_{i}^{(s)}+P_{i}\), whereas for mixed opinion states, \(n_{i}\) only accounts for the uncommitted agents, \(n_{i}=x_{i}^{(s)}\), because committed agents only advocate their single opinions. Previous studies [22] have shown that in the absence of committed agents advocating opinion \(B\) (\(P_{B}=0\), \(P_{A}>0\)), a minimal fraction of committed agents advocating opinion \(A\) (\(P_{A}^{(c)}\)) of approximately \(0.098\) is required to trigger a fast transition from the majority opinion \(B\) to \(A\). As Fig. 1a shows, when both committed groups, opinions \(A\) and \(B\), are present, there are two types of transitions, the discontinuous transition and the continuous one, that may occur depending on the committed fractions. They are separated by the point \((P^{(c)},P^{(c)})\approx(0.162,0.162)\) [23]. For \(P_{B}>P^{(c)}\), the fraction of agents holding opinion \(A\) increases continuously with \(P_{A}\), and the critical points lie on the line \(P_{A}^{(c)}=P_{B}\).
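The steady states discussed above can be obtained by integrating Eq. (2) numerically. The following sketch uses SciPy for the integration; the initial condition (all uncommitted agents holding opinion \(B\)), the integration horizon, and the swept values of \(P_{A}\) are illustrative choices, and longer integration times may be needed close to the tipping point.

```python
# Sketch: numerical integration of the two-opinion mean-field equations (Eq. 2)
# to estimate the steady state n_A = x_A + P_A for given committed fractions.
from scipy.integrate import solve_ivp

def rhs(t, y, pA, pB):
    xA, xB = y
    xAB = 1.0 - xA - xB - pA - pB                     # closure: densities sum to 1
    dxA = -xA*xB + xAB**2 + xAB*xA + 1.5*pA*xAB - pB*xA
    dxB = -xA*xB + xAB**2 + xAB*xB + 1.5*pB*xAB - pA*xB
    return [dxA, dxB]

def steady_nA(pA, pB=0.0, t_end=2000.0):
    y0 = [0.0, 1.0 - pA - pB]                         # all uncommitted agents start with B
    sol = solve_ivp(rhs, (0.0, t_end), y0, args=(pA, pB), rtol=1e-8)
    return sol.y[0, -1] + pA                          # n_A = x_A + P_A

for pA in (0.05, 0.09, 0.10, 0.15):                   # sweep across the ~0.098 tipping point
    print(pA, round(steady_nA(pA), 3))
```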
### _Three-opinion scenario_
A slightly more complex system arises with three opinions. Let us consider three opinions \(A\), \(B\), and \(C\), where opinions \(A\) and \(C\) are committed by two minor fractions of committed agents, and initially, all uncommitted agents, which form the majority of all agents, hold opinion \(B\). We ask a similar question as in the previous example. For the scenario of \(P_{A}>P_{C}\), to enable opinion \(A\) to dominate the system, what is the minimal fraction of committed agents, \(P_{A}^{(c)}\), and how does this threshold depend on the committed fraction of the opinion \(C\)? According to Eq. (1), the evolution of each state variable can be numerically integrated. For small values of \(P_{C}\), the fraction of agents holding opinion \(A\), \(n_{A}\), exhibits a discontinuous transition with respect to \(P_{A}\) (Fig. 2a), and beyond the critical point \(P_{A}^{(c)}\), opinion \(A\) wins the majority of supporters. The relationship between the critical point \(P_{A}^{(c)}\) and \(P_{C}\) is non-monotonic. In the regime of discontinuous transition, \(P_{A}^{(c)}\) first decreases with \(P_{C}\) and then increases linearly with \(P_{C}\), indicating that increasing the latter is beneficial for the agents committed to opinion \(A\) to dominate the majority of uncommitted agents given that \(P_{C}\) is smaller than a certain value (\(P_{C}\approx 0.077\) at the lowest point in Fig. 2b). However, for \(P_{C}>0.077\), \(P_{A}^{(c)}\) increases linearly with \(P_{C}\) and this regime includes both the discontinuous transition and the continuous one, different from the previous two-opinion scenario. The critical point separating two types of transitions remains the same as the two-opinion scenario.
### _The general scenario - multi-opinion model_
For the general scenario with \(m\) single opinions (\(A\), \(B\), \(C_{1}\), \(C_{2}\), \(C_{3}\),..., \(C_{m-2}\)), it is of interest to understand the impact of committed agents on the majority of uncommitted agents and potential for one single opinion to dominate over other competitors. Consider a scenario where the majority of uncommitted agents support a single opinion, denoted as \(B\), while the remaining agents are committed to \(m-1\) single opinions. Among these \(m-1\) opinions, the one with the largest committed fraction, denoted as \(A\), has the potential to reverse the majority of uncommitted agents from supporting \(B\) to supporting \(A\). The question then arises as to the minimum fraction of committed agents, \(P_{A}^{(c)}\), required for such a transition to occur. To streamline the analysis, the committed agents supporting opinions other than \(A\) are grouped into a single category, referred to as \(\tilde{A}\), with a combined committed fraction of \(P_{\tilde{A}}\). This simplification is justified as none of the single opinions in the group \(\tilde{A}\) can prevail in the competition. However, the number of competing opinions in the group \(\tilde{A}\), \(m-2\), their total committed fraction, \(P_{\tilde{A}}\), and the allocation of these committed agents, \(P_{i}\), may all potentially affect the critical point, \(P_{A}^{(c)}\).
We, therefore, investigate the impact of such factors on the dominance transition of opinion dynamics by constructing three different scenarios for allocating committed agents within the group \(\tilde{A}\).
1. **Scenario \(S_{0}\): randomly distributed.** The committed fraction, \(P_{i}\), of each single opinion in the group \(\tilde{A}\) can be any value between \(0\) and \(P_{\tilde{A}}\), but their total adds up to \(P_{\tilde{A}}\).
2. **Scenario \(S_{1}\): perfectly symmetric.**\(m-2\) opinions in the group \(\tilde{A}\) share the equal fraction of committed agents, \(P_{i}=p_{0}=P_{\tilde{A}}/(m-2)\). The quantity, \(p_{0}\), in the later context also refers to the average committed fractions of agents advocating the single opinion state in the group \(\tilde{A}\).
3. **Scenario \(S_{2}\): extremely polarized.** In contrast to the scenario \(S_{1}\), we maximize the deviation of \(P_{i}\) in the group \(\tilde{A}\) to establish the highly uneven distribution of committed fractions. Provided that the single opinion \(A\) has the largest committed fraction in the system, the largest committed fraction in the group \(\tilde{A}\) should be smaller than \(P_{A}\). To set up the numerical simulation, we chose \(\max\{P_{i}\}=p_{1}=P_{A}-10^{-3}\), and the number of opinions with the committed fraction \(p_{1}\) is also maximized, which is \(n_{1}=\lfloor P_{\tilde{A}}/p_{1}\rfloor\). The rest of committed agents, \(p_{2}=P_{\tilde{A}}-n_{1}p_{1}(<p_{1})\), are assigned to one single opinion. In this scenario, there are \(m-n_{1}-3\) (\(\geq 0\)) single opinions in the group \(\tilde{A}\) without any committed followers. In the group \(\tilde{A}\), \(P_{i}\) can take three values, \(p_{1}\), \(p_{2}\), and \(0\). As there are no uncommitted agents assigned
to the group \(\tilde{A}\), some single opinions may end up with no supporters. One should note that the number of single opinions is still considered to be \(m\) when comparing with the scenarios \(S_{0}\) and \(S_{1}\). A construction sketch of the \(S_{1}\) and \(S_{2}\) allocations is given below.
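The sketch below builds the committed-fraction allocations of group \(\tilde{A}\) for the scenarios \(S_{1}\) and \(S_{2}\) following the construction above; the example values of \(m\), \(P_{A}\), and \(P_{\tilde{A}}\) are illustrative only.

```python
# Sketch: committed-fraction allocations of the m-2 opinions in group A-tilde.
import math

def allocate_S1(m, p_tilde):
    """Perfectly symmetric: every opinion in A-tilde gets p0 = P_tilde/(m-2)."""
    return [p_tilde / (m - 2)] * (m - 2)

def allocate_S2(m, pA, p_tilde, eps=1e-3):
    """Extremely polarized: as many opinions as possible get p1 = P_A - eps,
    one opinion gets the remainder p2, and the rest get no committed agents."""
    p1 = pA - eps
    n1 = math.floor(p_tilde / p1)
    p2 = p_tilde - n1 * p1
    return [p1] * n1 + [p2] + [0.0] * (m - n1 - 3)

print(allocate_S1(6, 0.12))          # four equal fractions of 0.03
print(allocate_S2(6, 0.10, 0.12))    # approximately [0.099, 0.021, 0.0, 0.0]
```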
The mean-field equations (1) can be directly integrated to analyze the opinion dynamics for a system with a limited number of single opinions. However, for a system with many opinions \(m\), this method becomes computationally infeasible because the number of variables, \(M\), increases exponentially with \(m\). To overcome this challenge, simpler scenarios are considered, as described in the scenarios \(S_{1}\) and \(S_{2}\). The simplified structures of scenarios \(S_{1}\) and \(S_{2}\) allow for a more efficient and manageable study of the critical transition in comparison to direct numerical integration for the scenario \(S_{0}\) with arbitrary initial conditions. In the scenario \(S_{1}\), a collection of single opinions (denoted as the group \(\tilde{A}\)) is designed to have the same fraction of committed agents and no uncommitted supporters. Under the homogeneous mixing condition, the number of supporters for these opinions is expected to evolve in the same fashion. In this scenario, the number of state variables to be monitored is reduced from \(2^{m}-1\) to \(4m-5\). For example, consider \(m=5\), where the single opinions are \(A\), \(B\), \(C_{1}\), \(C_{2}\), and \(C_{3}\). Opinions \(C_{1}\), \(C_{2}\), and \(C_{3}\) are assigned the same fraction of committed agents, so the fraction of uncommitted agents they can assimilate to themselves is expected to be the same by symmetry. Further, some mixed opinion states, such as \(C_{1}C_{2}\), \(C_{1}C_{3}\), and \(C_{2}C_{3}\), or \(AC_{1}\), \(AC_{2}\), and \(AC_{3}\) also have the same uncommitted supporters as time
Fig. 1: Phase transition and the tipping point for \(m=2\). (a) The stable density of agents with opinion A \(n_{A}\) as a function of their committed fraction \(P_{A}\) for different values of \(P_{B}\). (b) The critical point \(P_{A}^{(c)}\) changes with \(P_{B}\). The blue dots represent the discontinuous transition of \(n_{A}\) versus \(P_{A}\), while the red ones represent the continuous change.
Fig. 2: Phase transition and tipping point for \(m=3\). (a) The stable density of agents with opinion A \(n_{A}\) as a function of their committed fraction \(P_{A}\) for different values of \(P_{C}\). (b) The critical point \(P_{A}^{(c)}\) changes with \(P_{C}\). The blue dots represent the discontinuous transition of \(n_{A}\) versus \(P_{A}\), while the red ones represent the continuous change.
progresses. This results in a reduction in the number of state variables that need to be monitored. A similar argument also applies to the scenario \(S_{2}\).
Next, we study the fraction of supporters of opinion \(A\), which is assigned the largest committed fraction, and the critical transition in which this opinion assimilates the majority of uncommitted individuals to itself for the three scenarios. In the scenario \(S_{1}\), the total fraction of supporters of opinion \(A\), \(n_{A}\), exhibits a discontinuous transition with \(P_{A}\) for small values of \(P_{\tilde{A}}\), as shown in Fig. 3. Also, as seen in Fig. 4, the critical point \(P_{A}^{(c)}\) displays a non-monotonic behavior as \(P_{\tilde{A}}\) or \(p_{0}\) increases. The presence of a small committed group plays a key role in the formation of a dominant opinion. The initial decrease in the critical value \(P_{A}^{(c)}\) as the committed fraction \(p_{0}\) of the smaller groups increases suggests that as the number of committed individuals in these groups grows, they become more effective in facilitating the dominance of the opinion with the largest committed fraction. The initial decrease in \(P_{A}^{(c)}\) can be attributed to the increased potential for interactions and conversions between the committed individuals in the smaller groups and the uncommitted individuals in the system. Moreover, the non-monotonic behavior of \(P_{A}^{(c)}\) with increasing \(P_{\tilde{A}}\) or \(p_{0}\) also indicates the presence of a threshold effect. Beyond a certain value of \(P_{\tilde{A}}\) or \(p_{0}\), the critical value \(P_{A}^{(c)}\) begins to increase, indicating that the positive influence of the smaller committed groups on the dominant opinion's growth becomes weaker. The linear relationship instead shows the competition between opinion \(A\) and other opinions with a smaller committed fraction.
To explore how the value of the tipping point \(P_{A}^{(c)}\) depends on the distribution of committed agents in the group \(\tilde{A}\), we manipulate the committed fraction \(P_{i}\) while preserving \(P_{\tilde{A}}\) in the scenario \(S_{0}\). Results displayed in Fig. 5(a)-(c) show a non-monotonic behavior of the critical point \(P_{A}^{(c)}\) as a function of the maximum value of \(P_{i}\) in group \(\tilde{A}\). The initial decrease of \(P_{A}^{(c)}\) indicates that the presence of a large fraction of committed agents within group \(\tilde{A}\) is beneficial for opinion \(A\) to dominate the system compared to when the committed agents are uniformly distributed among the \(m-2\) single opinions.
This conclusion can also be confirmed by observing how \(P_{A}^{(c)}\) changes with the standard deviation of \(P_{i}\). However, it is worth noting that a higher \(P_{i}\) does not always result in a favorable outcome in terms of the dominance of opinion \(A\). For opinion \(A\) to become dominant, its committed fraction \(P_{A}\) must be greater than any other committed fraction in the group \(\tilde{A}\), which explains the linear increase of \(P_{A}^{(c)}\) observed in the results. The non-monotonic behavior of the critical value of \(P_{A}^{(c)}\) highlights the importance of considering the effects of different distributions of committed fractions on the overall dynamics of the system, especially the dominance transition.
As seen in Fig. 5, the scenario \(S_{2}\) is expected to have a smaller critical point \(P_{A}^{(c)}\) than \(S_{1}\) given the fraction of committed agents in group \(\tilde{A}\) is small enough, which can be confirmed from Fig. 6. The critical points obtained from two scenarios, \(S_{1}\) and \(S_{2}\), provide the upper bound and the lower bound, respectively, for the scenario \(S_{0}\). Additionally, one can compare the steady states of the three scenarios in Fig. 7. The scenarios \(S_{1}\) and \(S_{2}\) also provide a good approximation for the steady state \(n_{A}\) in the scenario \(S_{0}\). It is observed that the critical point \(P_{A}^{(c_{1})}\) in the scenarios \(S_{1}\) is always greater than \(P_{A}^{(c_{2})}\) in the scenario \(S_{2}\), and the two critical points \(P_{A}^{(c_{1})}\) and \(P_{A}^{(c_{2})}\) divide the parameter space into three parts. For values of \(P_{A}\) less than \(P_{A}^{(c_{2})}\), the scenario \(S_{1}\) yields the lower bound of \(n_{A}\) while \(S_{2}\) provides the upper bound. For \(P_{A}^{(c_{2})}<P_{A}<P_{A}^{(c_{1})}\), both scenarios establish the lower bound. For \(P_{A}>P_{A}^{(c_{1})}\) or the scenario when there are no critical points, the scenario \(S_{1}\) corresponds to the upper limit of \(n_{A}\) while \(S_{2}\) corresponds to the lower limit. By investigating the scenarios \(S_{1}\) and \(S_{2}\), the critical points and the steady states of the single opinion A with the largest committed fraction in the scenario \(S_{0}\) are well estimated.
We now analyze the opinion competition from another perspective. The key question is to determine the dynamics of opinion \(A\) as it competes against opinions \(B\) and \(\tilde{A}\). As shown in Fig. 8a, the critical point, \(P_{A}^{(c)}\), in the scenario \(S_{1}\) has a non-monotonic relationship with the number of single opinions, \(m\). Given a fixed committed fraction, \(P_{\tilde{A}}\), as \(m\) increases, the individual committed fraction, \(p_{0}\) (\(=P_{\tilde{A}}/(m-2)\)), in group \(\tilde{A}\) decreases, weakening the opposition from this group. The initial decrease of \(P_{A}^{(c)}\) reveals the validity of the divide-and-rule policy, whereby the more opinions the committed agents of the group \(\tilde{A}\) are split among, the easier it is for opinion \(A\) to dominate uncommitted agents in the system. Reversing this rule reveals that the major obstacle to the dominance of opinion \(A\) is a small number of opinions in the group \(\tilde{A}\). However, if \(m\) continues to increase, the critical point \(P_{A}^{(c)}\) also increases, suggesting that opinion \(B\) becomes the major threat. In this scenario, the strong opponent \(\tilde{A}\) (large \(p_{0}\)) can be helpful for opinion \(A\) to dominate the system, thus making group \(\tilde{A}\) a friend of opinion \(A\), in line with the Heider balance theory rule [40] that states "The enemy of my enemy is my friend".
The critical point \(P_{A}^{(c)}\) in the scenario \(S_{0}\) can differ depending on the distribution of the committed agents. However, the scenarios \(S_{1}\) and \(S_{2}\) serve as an approximation by providing the upper and lower bounds, respectively, for this critical value. Additionally, the symmetry exhibited in the scenarios \(S_{1}\) and \(S_{2}\) results in identical evolution for opinion states with the same committed fraction in the group \(\tilde{A}\) under the homogeneous mixing condition. This reduction in complexity allows for a more efficient analysis of the system dynamics, as a satisfactory approximation can be obtained by considering the scenarios \(S_{1}\) and \(S_{2}\).
## IV Simplification by recursive relationship
In the previous section, we discussed how one can establish the symmetrical distribution of committed agents to reduce the complexity and approximate the opinion dynamics in more arbitrary scenarios. In this section, we will present a more general approach to reducing the system's complexity.
Since the largest committed opinion defines the system state, it is sufficient to focus on this opinion density evolution. We introduce a quantity \(Q_{i}^{(t)}\), which represents the probability of a single opinion \(i\) being communicated at step \(t\) from the population [36], and we establish an iteration function for the
Figure 4: Scenario \(S_{1}\) for \(m=4,5,6,7,8,9\). The critical point \(p_{A}^{(c)}\) changes with (a) \(p_{0}\) and (b) \(P_{\tilde{A}}\). It only includes discontinuous transitions. The continuous transition follows the relationship \(P_{A}^{(c)}=p_{0}\).
Figure 3: Scenario \(S_{1}\). The fraction \(n_{A}\) holding the opinion \(A\) changes with \(P_{A}\) for different values of \(P_{\tilde{A}}\). (a) \(m=4\), (b) \(m=5\), (c) \(m=6\), (d) \(m=7\), (e) \(m=8\), (f) \(m=9\).
opinion density at step \(t\) based on the state at step \(t-1\). It has been shown that the original NG dynamics and the listener-only version on the complete graph have qualitatively similar results [41]. As it is easier to derive the iterative function by considering only the state change of listeners, we develop our framework for the listener-only version. For an uncommitted node to adopt a single opinion \(i\) at step \(t\), it must have held the opinion \(i\) in its list at step \(t-1\) and received opinion \(i\) at step \(t\). Eq. (3) describes such conditions, where \(x_{i+}\) is the total fraction of all mixed states containing the opinion \(i\). The first term of Eq. (3) represents the scenario when a listener holding the single opinion \(i\) receives the signal \(i\), and the second term corresponds to the scenario when a listener in the mixed state containing the single opinion \(i\) hears the opinion \(i\). After the interaction, the listener in both scenarios either remains in the single state \(i\) or adapts to it. Eq. (4) establishes the recursive relationship of the mixed state containing two opinions, \(i\) and \(j\). Specifically, if a listener initially supports opinion \(i\) (\(j\)) and subsequently receives signal \(j\) (\(i\)), it will switch to the mixed state, \(ij\). This equation accounts for the scenario where a listener holds one opinion but is influenced by the received opinion through interaction with other agents. Similarly, the recursive relationship of the mixed state containing three single opinions is derived in Eqs. (5). One can easily generalize the iteration function of the mixed state containing \(n\) single opinions as Eq. (6), where \(\mathcal{S}_{n}(i_{1},i_{2},...,i_{n})\) represents all
Fig. 5: Scenario \(S_{0}\). (a)–(c) The critical point \(P_{A}^{(c)}\) changes with the maximum of \(P_{i}\) in the group \(\tilde{A}\) with an initial decrease followed by a linear increase. (d)–(f) only include the data of the decrease regime, which shows that \(P_{A}^{(c)}\) changes with the standard deviation (SD) of \(P_{i}\). (a) and (d) \(m=4\), (b) and (e) \(m=5\), (c) and (f) \(m=6\).
Fig. 6: The critical point \(P_{A}^{(c)}\) changes with \(p_{0}\) in three scenarios \(S_{0}\), \(S_{1}\), and \(S_{2}\). For the scenario \(S_{0}\), only the data where \(P_{A}^{(c)}\) is along the decreasing branch with \(\max\{P_{i}\}\) in Fig. 5 is included. (a) \(m=4\), (b) \(m=5\), (c) \(m=6\).
permutations of a set containing \(n\) elements.
\[x_{i}^{(t)}=x_{i}^{(t-1)}Q_{i}^{(t-1)}+x_{i+}^{(t-1)}Q_{i}^{(t-1)} \tag{3}\]
\[x_{ij}^{(t)}=x_{i}^{(t-1)}Q_{j}^{(t-1)}+x_{j}^{(t-1)}Q_{i}^{(t-1)} \tag{4}\]
\[\begin{split} x_{ijk}^{(t)}=& x_{ij}^{(t-1)}Q_{k}^{(t-1)}+x_{ik}^{(t-1)}Q_{j}^{(t-1)}+x_{jk}^{(t-1)}Q_{i}^{(t-1)}\\ =& x_{i}^{(t-2)}Q_{j}^{(t-2)}Q_{k}^{(t-1)}+x_{j}^{(t-2)}Q_{i}^{(t-2)}Q_{k}^{(t-1)}\\ &+x_{i}^{(t-2)}Q_{k}^{(t-2)}Q_{j}^{(t-1)}+x_{k}^{(t-2)}Q_{i}^{(t-2)}Q_{j}^{(t-1)}\\ &+x_{j}^{(t-2)}Q_{k}^{(t-2)}Q_{i}^{(t-1)}+x_{k}^{(t-2)}Q_{j}^{(t-2)}Q_{i}^{(t-1)}\\ =&\sum_{(i^{\prime},j^{\prime},k^{\prime})\in \mathcal{S}_{3}(i,j,k)}x_{i^{\prime}}^{(t-2)}Q_{j^{\prime}}^{(t-2)}Q_{k^{\prime}}^{(t-1)} \end{split} \tag{5}\]
\[\begin{split} x_{i_{1}i_{2}\ldots i_{n}}^{(t)}&= \sum_{(i^{\prime}_{1},i^{\prime}_{2},\ldots,i^{\prime}_{n})\in \mathcal{S}_{n}(i_{1},i_{2},\ldots,i_{n})}x_{i^{\prime}_{1}}^{(t-n+1)}\\ &\times Q_{i^{\prime}_{2}}^{(t-n+1)}Q_{i^{\prime}_{3}}^{(t-n+2)}\cdots Q_{i^{\prime}_{n-1}}^{(t-2)}Q_{i^{\prime}_{n}}^{(t-1)}.\end{split} \tag{6}\]
To simplify the computation and focus on the density distribution of single opinions, \(x_{i}\), there is no need to calculate or record all mixed states. Instead, only \(Q_{i}\) and \(x_{i+}\) need to be tracked. The density evolution of mixed states containing opinion \(i\), such as \(x_{i\bar{i}}\), \(x_{i\bar{i}\bar{i}}\), \(x_{i\bar{i}\bar{i}\bar{i}}\), can be derived using Eq. (6), where \(\bar{i}\) refers to any single opinion other than opinion \(i\). In this way, the number of variables is reduced from \(2^{m}-1\) to \(m^{2}\).
By summing up Eq. (4) over all single opinions \(j\) other than \(i\), one can obtain \(x_{i\bar{i}}^{(t)}\) as Eq. (7), where \(\mathcal{M}\) is the set of \(m\) single opinions, and \(\mathcal{M}\backslash i\) represents the set of all single opinions excluding the opinion \(i\).
\[x_{i\bar{i}}^{(t)}=x_{i}^{(t-1)}\sum_{j\in\mathcal{M}\backslash i}Q_{j}^{(t-1)}+Q_{i}^{(t-1)}\sum_{j\in\mathcal{M}\backslash i}x_{j}^{(t-1)} \tag{7}\]
Similarly, one can derive the general formula for the mixed state of length \(n+1\) containing opinion \(i\) and \(n\) other opinions, \(x_{i\underbrace{\bar{i}\cdots\bar{i}}_{n}}^{(t)}\),
\[x_{i\underbrace{\bar{i}\cdots\bar{i}}_{n}}^{(t)}=\sum_{j\in\mathcal{M}}x_{j}^{(t-n)} \sum_{i\in(j_{1},\ldots,j_{n})\subset\mathcal{M}\backslash j}Q_{j_{1}}^{(t-n)}\cdots Q_{j_{n}}^{(t-1)} \tag{8}\]
In Eq. (8), \(j_{1}\),..., \(j_{n}\) are \(n\) distinct integers, representing \(n\) different single opinions. By definition, opinion \(i\) must be one of \(n\) distinct single opinions \(j_{1},...,j_{n}\).
The ultimate objective is to track the evolution of single opinions over time, as captured by Eq. (3). This requires computing the probability of transmitting opinion \(i\), \(Q_{i}^{(t)}\), and the density of mixed states, \(x_{i+}^{(t)}\), (\(i=1,2,...,m\)) at each interaction step \(t\). According to the interaction rule, only speakers with a single opinion \(i\) in their list can communicate opinion \(i\). Additionally, for the mixed state, each single opinion in the list has an equal probability of being transmitted. Therefore, \(Q_{i}^{(t)}\) and \(x_{i+}^{(t)}\) are expressed as Eqs. (9) and (10), respectively.
\[Q_{i}^{(t)}=x_{i}^{(t)}+P_{i}^{(t)}+\frac{1}{2}x_{i\bar{i}}^{(t)}+\frac{1}{3}x_{i\bar{i}\bar{i}}^{(t)}+\cdots+\frac{1}{m}x_{i\underbrace{\bar{i}\cdots\bar{i}}_{m-1}}^{(t)} \tag{9}\]
\[x_{i+}^{(t)}=x_{i\bar{i}}^{(t)}+x_{i\bar{i}\bar{i}}^{(t)}+x_{i\bar{i}\bar{i}\bar{i}}^{(t)}+\cdots+x_{i\underbrace{\bar{i}\cdots\bar{i}}_{m-1}}^{(t)} \tag{10}\]
By employing recursive functions (3), (8), (9), and (10), one can calculate the density evolution of single opinions for any initial condition. One can further simplify the computation if the system's stable state is of primary interest, which means that the probabilities of communicating opinion \(i\) at different time steps are the same. Therefore, these probabilities \(Q_{i}^{(t)}\), \(Q_{i}^{(t-1)}\),..., \(Q_{i}^{(t-n)}\) can be represented by one quantity \(Q_{i}^{(s)}\).
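As an illustration of this iteration, the following minimal Python sketch advances the synchronous mean-field update implied by Eqs. (3), (4), and (9) by enumerating all \(2^{m}-1\) list states directly, which is feasible only for small \(m\); the recursive formulation above exists precisely to avoid this enumeration. All parameter values and variable names are illustrative rather than taken from the study.

```python
from itertools import combinations

def mean_field_step(x, P, m):
    """One synchronous mean-field update of the listener-only Naming Game.

    x maps each non-empty opinion set (frozenset) to the density of
    uncommitted agents holding that list; P[i] is the committed fraction
    of opinion i.  Uncommitted densities sum to 1 - sum(P).
    """
    # Probability that opinion i is uttered by a random speaker, cf. Eq. (9):
    # committed agents always utter their opinion, mixed lists utter each member equally.
    Q = [P[i] + sum(d / len(s) for s, d in x.items() if i in s) for i in range(m)]
    new_x = {s: 0.0 for s in x}
    for s, density in x.items():
        for i in range(m):
            # A listener holding list s hears opinion i with probability Q[i]:
            # it collapses to {i} if i is already in its list, otherwise adds i.
            target = frozenset({i}) if i in s else s | {i}
            new_x[target] += density * Q[i]
    return new_x

m = 4
P = [0.08, 0.0, 0.03, 0.03]                       # illustrative committed fractions (opinion 1 = B)
states = [frozenset(c) for r in range(1, m + 1) for c in combinations(range(m), r)]
x = {s: 0.0 for s in states}
x[frozenset({1})] = 1.0 - sum(P)                  # all uncommitted agents start with opinion B
for _ in range(200):
    x = mean_field_step(x, P, m)
print(x[frozenset({0})])                          # uncommitted density holding only opinion A
```

For small \(m\), this brute-force iteration can serve as a cross-check of the recursive results, in the same spirit as the comparison with the differential equations below.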
Comparing the system evolution obtained by the recursive approach and differential equations in Fig. 9, we find that the results are nearly identical, validating the recursive approach.
## V The multi-opinion system on random networks
While, in principle, it is possible to develop a heterogeneous (degree-based) mean-field approximation scheme [9, 42], we do not pursue that approach here. Instead, we resort to the original agent-based simulation of the Naming Game (i.e., using node-based local update rules) to study the density evolution of agents supporting different opinions. We consider a problem similar to the one discussed in the previous sections, with the difference that all the single opinions are supported by committed agents. The opinion with the largest committed fraction is denoted as \(A\). For simplicity, the other \(m-1\) opinions share the same committed fraction, \(p_{0}\), and are initially supported by the same number of uncommitted agents. Hence, they are classified into one group, \(\tilde{A}\), with the total committed fraction \(P_{\tilde{A}}=(m-1)p_{0}\). For the finite networked system, either the opinion \(A\) or one of the opinions in \(\tilde{A}\) dominates the system in the steady state. We are interested in the critical point, \(P_{A}^{(c)}\), that enables dominance by the opinion \(A\), and in the influence of the number of single opinions, \(m\), on the critical point.
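A minimal agent-based sketch of this setup, assuming the listener-only update rule and an Erdős–Rényi communication topology generated with networkx, is given below; the initial placement of uncommitted agents and the dominance measure are one plausible reading of the setup above, and all parameter values are illustrative.

```python
import random
import networkx as nx

def run_naming_game(N=1000, k_avg=8, m=5, P_A=0.08, p0=0.01,
                    steps=200_000, seed=0):
    """Listener-only Naming Game with committed minorities on an ER graph.

    Opinion 0 plays the role of A (committed fraction P_A); opinions 1..m-1
    each have committed fraction p0; the remaining, uncommitted agents are
    split evenly among the minor opinions at t = 0.
    """
    rng = random.Random(seed)
    G = nx.gnp_random_graph(N, k_avg / (N - 1), seed=seed)
    agents = list(G.nodes)
    rng.shuffle(agents)
    committed, lists, idx = set(), {}, 0
    for opinion, frac in [(0, P_A)] + [(i, p0) for i in range(1, m)]:
        for _ in range(int(frac * N)):
            committed.add(agents[idx])
            lists[agents[idx]] = {opinion}
            idx += 1
    for j, node in enumerate(agents[idx:]):
        lists[node] = {1 + j % (m - 1)}
    for _ in range(steps):
        speaker = rng.choice(agents)
        nbrs = list(G[speaker])
        if not nbrs:
            continue
        listener = rng.choice(nbrs)
        uttered = rng.choice(tuple(lists[speaker]))
        if listener in committed:
            continue                          # committed agents never change state
        if uttered in lists[listener]:
            lists[listener] = {uttered}       # agreement: collapse to the uttered opinion
        else:
            lists[listener].add(uttered)      # disagreement: add it to the list
    # fraction of agents (committed and uncommitted) holding only opinion A
    return sum(1 for node in agents if lists[node] == {0}) / N

print(run_naming_game())
```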
### _The impact of random communication topology - ER networks_
Networks generated by the Erdos-Renyi (ER) model [43] used with the same parameter may have different connectiv
Fig. 8: **Divide and rule.** The critical point \(P_{A}^{(c)}\) in the scenario \(S_{1}\) is obtained by the recursive approach in (a), and by the integration of the differential equations in (b). The critical point, \(P_{A}^{(c)}\), has a non-monotonic relationship with the number of single opinions, \(m\). Dividing the committed agents into a moderate number of competing minorities can help opinion \(A\) dominate the uncommitted agents in the system. The parameter is set as \(P_{\tilde{A}}=0.1,0.12,0.14,0.16\).
ities. As seen in Fig. 10, in some cases, the evolution of the system and the dominant opinion in the stable state differs from one realization to another. This variability arises due to the differences in the connectivity structure among agents across realizations and the random selection order of agents as speakers and listeners. These factors introduce randomness in finite systems, leading to variations in the system's behavior.
To represent the system state, the average fraction \(\langle n_{i}\rangle\) of agents supporting the opinion \(i\) is defined in Eq. (11), where \(L\) is the number of realizations. Additionally, we introduce the ratio \(R_{i}\) as the fraction of realizations that end up being dominated by the opinion \(i\).
\[\langle n_{i}\rangle=\frac{1}{L}\sum_{j=1}^{L}n_{i}^{(j)} \tag{11}\]
Fig. 11 shows that as the committed fraction \(P_{A}\) increases, there is a critical transition from a low density to the dominant state for the average number of agents holding opinion \(A\), \(\langle n_{A}^{(s)}\rangle\), as well as for the ratio \(R_{A}\). To further investigate the transition on networks, we define the critical point on random networks, denoted by \(P_{A}^{(c)}\), as the smallest committed fraction that enables the transition ratio \(R_{A}\) to exceed \(\frac{1}{2}\) (note that this conventional cutoff value of \(\frac{1}{2}\) does not affect the findings). To analyze the relationship between the average degree \(\langle k\rangle\) and the critical point \(P_{A}^{(c)}\) on random networks, we examined complete graphs and networks with varying \(\langle k\rangle\), as shown in Fig. 12. Our results indicate that as the number of single opinions \(m\) increases, the critical point \(P_{A}^{(c)}\) decreases, in line with the divide-and-rule policy. Additionally, we observed that the critical point decreases as the average network degree decreases, suggesting that sparse random communication topologies may amplify the impact of committed members on the system, so that the opinion \(A\) with the largest committed fraction can dominate the system more easily [42].
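A short sketch of this procedure, reusing the illustrative run_naming_game function from the sketch above and treating a realization as \(A\)-dominant when more than half of the agents end up holding only opinion \(A\) (an illustrative criterion), is:

```python
import numpy as np

def critical_point(P_A_grid, L=50, **kwargs):
    """Smallest P_A for which more than half of the realizations end A-dominant."""
    for P_A in P_A_grid:
        wins = sum(run_naming_game(P_A=P_A, seed=r, **kwargs) > 0.5 for r in range(L))
        if wins / L > 0.5:                     # R_A exceeds 1/2: transition reached
            return P_A
    return None

print(critical_point(np.arange(0.02, 0.20, 0.01), m=5, p0=0.01))
```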
## VI Discussions
In this study, we focus on the competition of the opinion with the largest fraction of committed agents against other opinions with committed agents and the opinion with the majority of uncommitted supporters. We study such competition using the original NG dynamics and its listener-only version. For the complete graph in the infinite-size limit, the mean-field theory can precisely describe the density evolution. For a multi-opinion system, the system complexity increases exponentially with the number of single opinions, making it impractical to directly integrate the differential equations. To address this issue, we constructed two scenarios, \(S_{1}\) and \(S_{2}\), to simplify the computation and develop the recursive approach. The critical point of the opinion with the largest committed fraction, \(P_{A}^{(c)}\), to dominate the system is well approximated by two simplified scenarios.
Comparing critical transitions in the system's three scenarios, we have observed that the distribution of committed agents within the minority committed group plays a significant role in determining the critical point. Specifically, the number and distribution of opinions in the group \(\tilde{A}\) can either aid or resist opinion \(A\)'s ability to gain influence over uncommitted agents. For the scenarios in which opinion \(B\), without any committed followers, is the main competitor to opinion \(A\), the opinions in group \(\tilde{A}\) can aid opinion \(A\) in dominating the system by reducing the number of agents holding opinion \(B\). As a result, increasing the number of committed agents holding opinions other than \(A\) can lead to a lower critical point, \(P_{A}^{(c)}\). However, when committed agents holding opinions other than \(A\) are the main opponents, increasing their number can increase the critical point.
We also employed agent-based simulations to study the opinion dynamics on random networks of finite size. We observe the divide and rule policy in action in our experimental design in which the agents, including both committed and uncommitted ones, not of opinion \(A\) are divided equally among all other minor committed opinions. Our results demonstrate that increasing the number of minor committed groups leads to a decrease in the critical fraction of agents holding opinion \(A\) that is required for it to dominate the system.
Finally, we designed our study to be a highly abstract model of political contests. The large committed group \(A\) may represent government supporters, whose well-being may
Fig. 9: The evolution of the uncommitted fraction for the opinions \(A\), \(B\) and \(C_{1}\) (same as \(C_{2}\), \(C_{3}\), \(C_{4}\), thus denoted as \(C\)) obtained by the recursive approach and the differential equations. The number of opinions \(m=6\), \(P_{A}=0.1\), and \(P_{C_{1}}=P_{C_{2}}=P_{C_{3}}=P_{C_{4}}=0.025\). Initially, all the uncommitted agents support the opinion \(B\), \(x_{B}(t=0)=0.8\).
depend on their keeping government-controlled positions. The multiple groups of holders of minor committed opinions represent divided opponents of the government. The holders of the regular (uncommitted) opinion represent voters. As we show in the paper, government supporters have an incentive to divide opponents into small groups to keep voters accepting the government's opinion. The smaller the fraction of people sharing each committed opinion is, the fewer ruling-party committed members are needed to support the government's opinion. However, when the quality of life deteriorates, the opposition can unite around a single opinion that it is time to change the government, and the same number of committed opponents, once united, has a much better chance to dominate the voters and remove the governing party, as our experiments show. Once the coalition of opponents wins, the pressure to unite disappears, and the differences between small committed groups return. Then again, the holders of the opposing opinion with the highest fraction of loyal members may dominate less popular opinions. Often this largest opposition is the most radical.
An example of such a double-change scenario is Russia's rapid change from the absolutist Tsar to a Provisional Government in early 1917, and then to the strongest opposition, the Bolshevik Party, which overthrew the new Government and started Communist rule in late 1917. A more recent example is Egypt in 2011, when the democratic-leaning protesters overthrew the military Mubarak Government, but it was the strongest opposition, the Muslim Brotherhood Party, that created the Islamist Morsi Government, which was finally replaced by the military El-Sisi Government in 2013.
Fig. 10: **The fraction of agents supporting the opinion \(A\) changes with the interaction time on ER networks with \(N=1000\) agents.** The number of single opinions \(m=5\), and the committed fraction of each opinion in the group \(\tilde{A}\) is \(p_{0}=0.01\). There are \(50\) realizations for each parameter setting. The average degrees are \(\langle k\rangle=6\) in panels (a) – (d), \(\langle k\rangle=8\) in panels (e) – (h), and \(\langle k\rangle=16\) in panels (i) – (l).
Figure 11: **The system stable states change with the committed fraction \(P_{A}\) on ER networks with \(N=1000\) agents.**\(P_{\bar{A}}=0.06\). \(\langle n_{A}^{(s)}\rangle\) is averaged over \(L=50\) realizations, and \(R_{A}\) is the fraction of realizations that end up with \(A\) dominant state.
Figure 12: **The critical point \(P_{A}^{(c)}\) changes with the number of single opinions on ER networks and is compared with complete graphs.** The number of agents is \(N=1000\) in (a), and \(N=10000\) in (b). The total fraction of committed agents in the group \(\bar{A}\) is \(P_{\bar{A}}=0.06\). The critical point is the smallest committed fraction which enables half of the realizations to stabilize with opinion \(A\) as a dominant state. The critical point increases as the average degree increases.
## Acknowledgements
B.K.S. was partially supported by DARPA-INCAS under Agreement No. HR001121C0165 and by the NSF Grant No. BSE-2214216
|
2310.19626
|
Transformation vs Tradition: Artificial General Intelligence (AGI) for
Arts and Humanities
|
Recent advances in artificial general intelligence (AGI), particularly large
language models and creative image generation systems have demonstrated
impressive capabilities on diverse tasks spanning the arts and humanities.
However, the swift evolution of AGI has also raised critical questions about
its responsible deployment in these culturally significant domains
traditionally seen as profoundly human. This paper provides a comprehensive
analysis of the applications and implications of AGI for text, graphics, audio,
and video pertaining to arts and the humanities. We survey cutting-edge systems
and their usage in areas ranging from poetry to history, marketing to film, and
communication to classical art. We outline substantial concerns pertaining to
factuality, toxicity, biases, and public safety in AGI systems, and propose
mitigation strategies. The paper argues for multi-stakeholder collaboration to
ensure AGI promotes creativity, knowledge, and cultural values without
undermining truth or human dignity. Our timely contribution summarizes a
rapidly developing field, highlighting promising directions while advocating
for responsible progress centering on human flourishing. The analysis lays the
groundwork for further research on aligning AGI's technological capacities with
enduring social goods.
|
Zhengliang Liu, Yiwei Li, Qian Cao, Junwen Chen, Tianze Yang, Zihao Wu, John Hale, John Gibbs, Khaled Rasheed, Ninghao Liu, Gengchen Mai, Tianming Liu
|
2023-10-30T15:19:15Z
|
http://arxiv.org/abs/2310.19626v1
|
# Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities
###### Abstract
Recent advances in artificial general intelligence (AGI), particularly large language models and creative image generation systems have demonstrated impressive capabilities on diverse tasks spanning the arts and humanities. However, the swift evolution of AGI has also raised critical questions about its responsible deployment in these culturally significant domains traditionally seen as profoundly human. This paper provides a comprehensive analysis of the applications and implications of AGI for text, graphics, audio, and video pertaining to arts and the humanities. We survey cutting-edge systems and their usage in areas ranging from poetry to history, marketing to film, and communication to classical art. We outline substantial concerns pertaining to factuality, toxicity, biases, and public safety in AGI systems, and propose mitigation strategies. The paper argues for multi-stakeholder collaboration to ensure AGI promotes creativity, knowledge, and cultural values without undermining truth or human dignity. Our timely contribution summarizes a rapidly developing field, highlighting promising directions while advocating for responsible progress centering on human flourishing. The analysis lays the groundwork for further research on aligning AGI's technological capacities with enduring social goods.
## 1 Introduction
Arts and the humanities have long been reflections of human experience, emotions, and philosophical introspection [1]. These domains, deeply rooted in subjectivity, creativity, and a nuanced appreciation of the world, have served as repositories of our history, culture, and identity. Over the past few years, however, the boundary between human creativity and machine computation has started to blur, ushering in an era where Artificial Intelligence (AI) influences artistic creation and reshapes our understanding of humanities.
Historically, AI's foray into domains requiring creativity was met with skepticism [2]. Critics posited that machines, bound by algorithms and devoid of emotions, could never truly comprehend or replicate the intricacies of artistic expression. Creativity was, after all, seen as the antithesis of computation, fueled by irregularities, out-of-box thinking, and a delicate understanding of the human
condition. These very attributes, which are the cornerstones of arts and humanities, seemed out of reach for artificial entities.
More recently, the landscape has begun to shift. Early algorithms such as "Deep Dream" (2015) [4] and various approaches in the theme of "Neural Style Transfer" [5] marked AI's early attempts at artistic endeavors. However, Deep Dream was plagued by the problem of generating repetitive canine facial features within images, and the style transfer process, while artistically intriguing, lacked the ability to create entirely new content or comprehend the underlying semantics of images. Nevertheless, these earlier attempts began a noticeable shift in perceptions, with increasing acceptance of artificial intelligence's contribution to artistic endeavors. A pivotal moment highlighting this change was when "Edmond de Belamy, from La Famille de Belamy", a portrait produced by Generative Adversarial Networks (GANs) [6], was sold by Christie's New York on Oct 19, 2018 for $432,500 [7], which is more than 40 times Christie's initial estimate. Despite facing skepticism and questions regarding its originality from other artists who work with AI, these rudimentary techniques marked the nascent stages of AI-assisted artistry.
The leap forward came in 2021 with the arrival of text-to-image algorithms. Specifically, the introduction of DALL-E [3], supplemented by the unveiling of open-source projects like VQGAN+CLIP [8, 9], catalyzed the proliferation of AI art generators. Furthermore, in 2022, the release of "Stable Diffusion" [10] by Stability AI and "Imagen" [11] by Google AI ushered in a new era of advanced AI-powered creativity. This release further democratized the Artificial Intelligence-Generated Content (AIGC) process. The field of AIGC is still extremely young. Major contributors and platforms have a relatively short operational history, spanning less than a year. However, the trajectory suggests an impending turning point where AI capabilities will become sophisticated enough to revolutionize various art-related domains. For instance, in the realm of video game development, concept and traditional artists are already harnessing AI image generation for inspiration and as tangible assets in their creative works2. Looking ahead, once the complexities of image generation are comprehensively addressed, it is plausible that the intellectual capital steering this innovation will gravitate toward other modalities. This may encompass domains like auditory processing and generation, video synthesis, and literary generation, among other multidisciplinary challenges.
Footnote 2: [https://www.scenario.com/](https://www.scenario.com/)
With the recent advancement of Large Language Models (LLMs), the rise of Artificial General
Figure 1: Some examples of AGI-generated images. **Left**: A heavily deep-dream-style photograph expressing "three men in a pool", which is difficult for humans to understand. **Middle**: An image generated by DALL-E through translation from "an illustration of a baby hedgehog in a christmas sweater walking a dog" [3]. **Right**: Image created by DALL-E 3 with the prompt "vintage 1940s cartoon featuring a robot holding a steaming coffee mug with a lightning bolt symbol on it, text bubble that reads ‘Need my charge’, sitting at a table by bay window in a coffee shop interior". The model can generate the high-quality image, and correctly understand the instruction.
Intelligence (AGI) further challenges traditional perspectives. AGI [12], with its potential to emulate holistic human cognition, promises not just to create art but to understand and appreciate it; indeed, many proponents who regard LLMs as having early AGI capabilities argue that these models already possess a degree of understanding of the physical world and of humans. LLMs' integration into arts and humanities could revolutionize everything from literary synthesis, capturing the depth of human emotion, to creating multi-sensory art experiences and reinterpreting historical narratives.
This paper delves deep into the rapidly evolving nexus of AGI, arts, and the humanities. While celebrating the transformative potential of AGI, it also critically examines the following underlying questions: Can AGI truly be creative? Will it ever appreciate art the way humans do? And most importantly, as AGI blurs the lines between machine capability and human creativity, what does it mean for the future of arts and humanities? Through this discourse, we seek to navigate the promising yet perplexing frontier of AGI-infused artistry.
## 2 Background
### Generative AI: From GAN to ChatGPT
A recent survey paper [13] provides a comprehensive review of the field of AI-generated content (AIGC). AIGC refers to content like text, images, music, and code that is generated by AI systems rather than created directly by humans.
The authors review the history of generative AI models, beginning with early statistical models like Hidden Markov Models and Gaussian Mixture Models. They then discuss the rise of deep learning models like GANs, VAEs, and diffusion models, with the transformer architecture (2017) identified as a key breakthrough that enabled large-scale pre-trained models like GPT-3 [14], ChatGPT [15], and GPT-4 [16].
The paper categorizes generative models as either unimodal, which generate content in a single modality like text or images, or multimodal, which combine multiple modalities. For unimodal models, they provide an in-depth review of state-of-the-art generative language models like GPT-3 [14], BART [17], T5 [18] and vision models such as Stable Diffusion [10] and DALL-E 2 [19].
For the multimodal generation, the survey examines vision-language models like DALL-E and GLIDE [20] as well as text-audio, text-graph, and text-code models. These allow cross-modal generation between modalities. The authors discuss applications like chatbots, art creation, music composition, and code generation.
They also cover techniques that help align model outputs with human preferences, such as reinforcement learning from human feedback as used in ChatGPT. The paper analyzes challenges around efficiency, trustworthiness, and responsible use of large AIGC models. Finally, open problems and future research directions are explored.
### Opportunities and Challenges of General AIGC
The enthusiastic reception of conversational agents like ChatGPT underscores AIGC's vast potential. However, researchers must grapple with critical challenges around data bias, computational efficiency, output quality, and ethical implications as AIGC rapidly gains traction [21].
On the opportunities front, AIGC holds promise for boosting productivity in creative fields by acting
as an intelligent assistant that can synthesize draft content. Cross-modal generation techniques can potentially bridge content formats, enabling applications like generating videos from text descriptions. In industry verticals like e-commerce, AIGC can scale the creation of catalog descriptions and customized landing pages. For news and entertainment, AIGC may enhance automation in production pipelines. The multi-task learning abilities of foundation models could spur innovation if applied judiciously. On the consumer side, AIGC can deliver more personalized, interactive, and immersive experiences.
However, substantial challenges remain. Massive computational resources are needed to develop and deploy the latest AIGC models [22], which may concentrate power in fewer hands. More crucially, the data used to train AIGC models inherits human biases [23] that are reflected in outputs. Curating high-quality datasets is an arduous task. While human-in-the-loop approaches may improve model alignments, transparency and accountability are still lacking. Safeguards against toxic outputs remain inadequate as interactions uncover harmful edge cases. For high-stakes domains like healthcare, the risks of errors loom large. Despite great enthusiasm, researchers should adopt a measured approach while addressing these concerns through technical and ethical diligence.
While AIGC represents an exciting frontier for AI research with immense potential upside, responsible development calls for holistic solutions encompassing data curation and hygiene, efficient systems, user feedback loops, and transparency. With care and consideration for societal impacts, AIGC could usher in an era where generative AI assists and augments human creativity rather than displacing it. This survey provides a timely overview of the state-of-the-art as of late 2023, and a roadmap to guide progress in this rapidly evolving domain.
## 3 Text Analysis and Generation
Text analysis and generation are crucial domains in natural language processing influencing myriad applications. At the core, text analysis delves into comprehending intricate patterns, meanings, and sentiments in textual data, whereas text generation aspires to craft human-like text based on certain criteria or prompts [15]. With the advent of sophisticated model architectures, the boundaries of what machines can comprehend and produce have been ceaselessly expanded. This section delineates the technical advancements underpinning these capabilities, including seminal models like Transformers, and extends into their pragmatic applications across diverse sectors such as poetry, music, law, advertising, and governance.
### Technical Advances
The Transformer [24] architecture has undoubtedly carved a pivotal role in the progression of natural language processing models. Introduced by Vaswani et al. [24], the architecture abandoned recurrent layers, traditionally used for sequence data, in favor of attention mechanisms. The core concepts emanating from this architecture, including encoders, decoders, BERT, and autoregressive language models, have since dominated state-of-the-art results in various NLP tasks.
#### 3.1.1 Transformer Architectures
At the heart of the Transformer model lies the pivotal _self-attention mechanism_, which computes a weighted sum of all words in a sequence relative to each other. This empowers the model to capture the intricate relationships between words, regardless of their positions in the sequence. Unlike recurrent models such as RNNs [25] or LSTMs [26] which process sequences iteratively, Transformers
handle the entire sequence in parallel. This approach, coupled with additional design elements like positional encodings [27] and residual connections [28], empowers Transformers to deliver both efficiency and effectiveness, even when confronted with lengthy sequences.
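As a concrete illustration, the following minimal NumPy sketch implements single-head scaled dot-product self-attention without masking or positional encodings; all dimensions and names are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X.

    X: (seq_len, d_model) token embeddings; Wq, Wk, Wv: (d_model, d_k) projections.
    Every output position is a weighted sum of all value vectors, so relationships
    between distant words are captured in one parallel step.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # pairwise word-to-word affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the sequence
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))                             # 6 tokens, 16-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)               # (6, 8)
```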
#### 3.1.2 BERT (Bidirectional Encoder Representations from Transformers)
Emerging from the Transformer paradigm, BERT [29] represents a monumental shift in pre-training methods. Introduced by Google in 2018, this model captures bidirectional contexts by considering both preceding and following words in all its layers. BERT's pre-training phase involves a _masked language model_ objective wherein it attempts to predict randomly masked words in a sentence. Once pre-trained on vast corpora such as Wikipedia, BERT can be adeptly fine-tuned on specific tasks using small labeled datasets, by just adding appropriate task-specific layers [30]. This approach has made BERT highly versatile, allowing it to be applied to a wide range of NLP tasks.
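For illustration, the masked-language-model objective can be queried directly through the open-source Hugging Face transformers library, assuming the library and the publicly released bert-base-uncased weights are available:

```python
from transformers import pipeline

# BERT predicts the masked token from both the left and right context,
# which is the pre-training objective described above.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The Mona Lisa was painted by Leonardo da [MASK]."):
    print(f'{candidate["token_str"]:>10}  {candidate["score"]:.3f}')
```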
#### 3.1.3 Autoregressive Language Models
_Autoregressive modeling_ [14] uses a step-by-step approach, where predicting the next item in a sequence depends on what came before it. In the context of language modeling, when given part of a sentence, these models try to guess what words come next. Once properly trained, the models become good at guessing which word follows the previous ones in the sequence. When autoregressive models generate text, they can use different methods, like beam search [31], greedy decoding [32], or probabilistic sampling [33]. A well-known example of an autoregressive language model is OpenAI's GPT (Generative Pre-trained Transformer) [34], which, in contrast to BERT's bidirectionality, is unidirectional and is primed mainly for text generation.
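A minimal sketch of this unidirectional, step-by-step generation, assuming the Hugging Face transformers library and the publicly released GPT-2 weights, contrasts greedy decoding with probabilistic sampling:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The old lighthouse keeper looked out at", return_tensors="pt")

# Greedy decoding: always pick the most probable next token.
greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)
# Probabilistic sampling: draw the next token from the predicted distribution.
sampled = model.generate(**inputs, max_new_tokens=20, do_sample=True,
                         top_p=0.9, temperature=0.8)
print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```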
### Real-world Applications
This section elucidates the diverse arts and humanities landscape of AGI by categorizing its applications into three distinct but interrelated subsections: 1) Literature Search and Analytics; 2) Linguistics and Communication; 3) Creative Endeavors. Each of these subsections represents a unique facet of AI's ever-expanding repertoire, showcasing its adaptability to perform tasks ranging from the systematic retrieval of knowledge to nuanced linguistic interactions, from meticulous analytics to imaginative, creative endeavors.
#### 3.2.1 Literature Search and Analytics
Large language models, once equipped with common sense knowledge, can provide valuable assistance in various liberal arts domains such as history, classics, and philosophy that heavily rely on literature search and analysis. Based on [35], LLMs can be helpful in the following aspects.
* **Automated Literature Review:** LLMs can quickly scan and summarize large volumes of text. They can identify key concepts, themes, and relevant passages, saving researchers significant time and effort.
* **Cross-Referencing:** LLMs can cross-reference texts, identifying connections and references between historical events, philosophical works, and classic literature, helping researchers explore intertextual relationships.
* **Summarization:** LLMs can generate concise summaries of lengthy texts, making complex philosophical or historical writings more accessible to a broader audience.
* **Question Answering:** LLMs excel in answering specific questions related to historical events, philosophical theories, or classic literature. They can provide concise and accurate responses by drawing on their vast knowledge base.
* **Content Generation:** LLMs can assist in generating preliminary content by retrieving relevant information in the literature [36]. They can provide background information, context, and even propose arguments based on the input provided.
* **Teaching and Learning:** LLMs can be used as educational tools to provide explanations, generate practice questions, and engage students in discussions related to historical, classic, or philosophical topics.
Some specific examples of the ongoing and potential applications are provided as below.
**Anthropology:** LLMs can process large amounts of anthropological data. Through text mining, LLMs deeply research documents, interviews, and historical texts to find information on specific cultures, social groups, or topics, and promote a deeper understanding of human social evolution and cultural differences [15]. Second, LLMs can help analyze social surveys and public opinion polls, to gain an in-depth understanding of attitudes, beliefs, and behaviors in human society, helping researchers understand social trends and changes in public opinion [37]. Finally, through cross-cultural research, LLMs support the comparison of similarities and differences between different cultures, provide translation services, analyze cross-cultural communication, and conduct in-depth research on global issues such as globalization and cultural exchanges.
**Classics:** LLMs contribute to art historical research, providing in-depth insights into a specific period or style by analyzing textual descriptions of classical artworks, historical documents, and the lives of relevant artists. In terms of art education and popularization, LLMs can assist in the
Figure 2: An example of using GPT-3.5 for learning history. The right part shows a follow-up question regarding the answer of the first question in the left part.
creation of art education materials, explain works of art so that more people can understand and appreciate classical art, and generate explanatory texts on art history for use in education, museum exhibitions, and cultural dissemination [38].
**Philosophy:** LLMs can be used for literature reviews to help researchers understand the current state of research on specific philosophical issues or thinkers [39]. They can also analyze philosophical texts and understand the author's ideas, argument structure, and logic. In addition, LLMs can also be used to analyze the structure and effectiveness of philosophical arguments, helping researchers better understand and evaluate philosophical papers.
**Psychology:** LLMs can conduct literature reviews to help researchers understand the latest research and theories on specific psychological topics [40]. LLMs can also generate questionnaire questions to ensure they are clear and effective. In addition, LLMs can analyze comments on social media to understand people's emotional and mental health issues, and provide support and resources.
**History:** LLMs can analyze historical texts to help understand events, generate summaries, extract key information, and improve information processing efficiency [41]. LLMs can also calibrate time in historical text, track events, and help establish a detailed historical timeline. They can also help extract character relationships, help build a relationship map, and conduct in-depth research on the influence of historical figures.
#### 3.2.2 Linguistics and Communication
LLMs can be highly beneficial in applications related to linguistics and communications due to their natural language processing capabilities and extensive knowledge base. An example that shows some of these abilities is in Figure 3.
* **Language Understanding:** LLMs can be used to analyze the structure, grammar, and semantics of languages, aiding linguists in their research on syntax, morphology, and linguistic phenomena.
* **Translation Assistance:** LLMs can assist linguists and translators in translating text between different languages, helping bridge linguistic and cultural gaps.
Figure 3: An example of using GPT-4 to analyze the background and design philosophy of the lyrics of the UEFA (The Union of European Football Associations) Champions League Anthem. The AI model can easily handle the multilingual content, and even point out the "spirit of unity" and "diversity" behind the design.
* **Sentiment Analysis:** LLMs can perform sentiment analysis on text, enabling businesses and organizations to gauge public sentiment towards their products, services, or policies.
* **Speech Recognition:** LLMs can enhance speech recognition systems, improving the accuracy of voice-to-text transcriptions.
Based on the above capabilities, some specific examples of the ongoing and potential applications of LLMs are provided as below.
**Linguistics:** LLMs can generate new language texts and expand understanding of grammatical structures and vocabulary usage [42, 43]. Scientists can conduct semantic analysis through LLMs and conduct in-depth studies of lexical meanings, contextual relevance [44], and semantic relationships of language expressions [39]. These models can also be used to develop language learning tools to help students learn vocabulary, grammar rules, and other language knowledge. In studying language disorders, LLMs can reveal the manifestations and effects of language disorders in different contexts. However, some scholars have raised objections, believing that LLMs lack human cognition [45].
**Language Studies:** LLMs can analyze the grammatical, semantic, and pragmatic features of different languages. Based on this, LLMs can generate teaching materials and exercises with explanations to provide students with strong support in learning grammar, vocabulary, and expressions. In addition, LLMs excel in translation, supporting cross-language communication and translation [29]. At the same time, LLMs play an important role in writing and creation and can create articles, compose essays, and generate various literary works. In addition, LLMs support speech recognition technology, which allows speech input to be easily converted into text, facilitating speech interaction and speech recognition applications [38]. By processing large amounts of historical texts, LLMs can also assist researchers in tracing the evolution, change, and development of the language.
**Communication Studies:** LLMs can analyze large amounts of news and advertising to reveal patterns, trends, and factors that influence the spread of information. LLMs can also analyze emotions and interactions on social media [46], studying how information spreads in social networks and its impact on public opinion [47]. Researchers use LLMs to analyze the language and framing of news reports and study the way news media report events and their impact on audience perceptions. In terms of multilingual content, LLMs have the advantages of translation and understanding and are helpful in studying language differences and cultural factors in cross-cultural communication.
#### 3.2.3 Creative Endeavors
This subsection starts with examples of several applications where AI models might generate "creative" content, followed by a further discussion of whether AI can genuinely attain a level of creativity comparable to that of humans.
**Song Lyrics:** LLMs such as the GPT family can write song lyrics that "tell coherent stories with rhyming words" +. Based on that, the AI models could be used to create new melodies to accompany the lyrics. GPT-4 is significantly better than GPT-3.5 at this due to better reasoning, complex instruction understanding, and creativity.
Footnote †: [https://towardsdatascience.com/writing-songs-with-gpt-4-part-1-lyrics-3728da678482](https://towardsdatascience.com/writing-songs-with-gpt-4-part-1-lyrics-3728da678482)
**Poetry:** There have been some research work on intelligent poetry writing and intelligent couplets [48]. However, the continuous development of LLMs has greatly facilitated research in this area. Figure 4 shows an example of using GPT-4 to write poems. Besides, a website + shows the procedure to write
a poem using LLMs in only four steps.
**Advertising:** The creation of effective and creative advertisements is a collaborative process that engages professionals with diverse skills and roles. Nonetheless, it is conceivable that certain roles may be assumed by Large Language Models (LLMs) in the future. LLMs can help advertisers and marketers in creating content faster and potentially with quality akin to that of human content creators (see Figure 4). Moreover, given the abundance of successful advertising case studies available for reference in the field, LLMs with strong transfer capabilities such as GPT-4 can further improve the accuracy of advertising word generation through multi-shots to achieve the results desired by users [49]. LLMs can also analyze the promotional trends across a broad spectrum of advertisements, which enables conducting more efficient research, gaining deeper understanding of customer preferences, and addressing the complexities tied to information summarization [50].
With the rapid development of AI models, a question arises: Will AI eventually replace human creativity, or will humans continue to be the paramount source of innovation and originality? A brief creativity comparison between humans and AI is as follows.
* **Human creativity** is influenced by personal experiences, emotions, and imagination, while it has limitations in terms of time, resources, knowledge, and experience, in addition to external factors like societal and economic influences.
* **AI creativity** is primarily grounded in algorithms and data, so a dominant view is that AI can only work with previous data and patterns, and cannot come up with entirely novel ideas on its own. Moreover, AI's deficiency in emotion and empathy poses another restriction. It is unable to replicate human emotions or grasp the emotional depth that art or music carries, potentially resulting in AI-generated content lacking the profound emotional impact typically attributed to human creativity.
## 4 Graphics Analysis and Generation
Graphics encompass various formats, including 2D images, 3D point clouds, 3D meshes, and design schematics. These can be categorized based on their nature as either static or dynamic. The input
Figure 4: An example of using GPT-4 to write poems (left) and personalized advertisement (right).
types for graphic generation and analysis can also vary, ranging from images, text, and even other multidimensional data sources.
### Technical Advances
There are numerous technical advancements that have propelled the fields of graphics analysis and generation to new heights.
#### 4.1.1 Generative Adversarial Networks (GANs)
In the mid-2010s, GANs [51] ushered in a new era in the field of image generation. At the heart of a GAN framework are two intertwined neural networks: the generator and the discriminator. The generator creates images either from random noise in the case of unconditional GANs [6] or guided by text/categories for conditional GANs [52]. Concurrently, the discriminator evaluates these generated images against real images. Through iterative refinement and adversarial training within a minimax game framework, the generator refines its outputs, aiming to create images indistinguishable from real ones, while the discriminator learns to be an increasingly better judge of real versus AI-created images. This adversarial process has led to the generation of exceptionally high-quality and realistic images, significantly surpassing previous methods such as autoregressive models, Variational Autoencoders [53], and normalizing flows [54]. Moreover, the versatility of the GAN framework has extended beyond the traditional imagery modality to other graphics formats such as 2D/3D point clouds [55, 56, 57], graphs [58], 3D object shapes [59], and so on.
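The adversarial minimax training described above can be sketched in a few lines of PyTorch; the toy generator and discriminator below operate on two-dimensional samples rather than images, and all architectural choices are illustrative.

```python
import torch
from torch import nn

# Toy GAN: the generator maps noise to 2-D samples, the discriminator scores
# whether a sample looks real; the two are trained adversarially.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):                       # stand-in for a dataset of real samples
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # Discriminator update: real samples labelled 1, generated samples labelled 0.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator update: try to make the discriminator call the fakes real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```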
#### 4.1.2 Style Transfer Techniques
Neural style transfer [60] has emerged as a captivating application of deep learning in graphics. By leveraging the intricate structures within neural networks, style transfer algorithms can take the artistic style from one image and apply it to another, enabling the creation of unique, artistically rendered outputs.
#### 4.1.3 Generative Models for 2D Images
GANs excel in producing high-quality 2D images, often to the point of being indistinguishable from real photographs. Additionally, Variational Autoencoders (VAEs) offer a probabilistic framework to generate 2D images [61] while capturing the underlying data distribution. Both models can utilize inputs such as noise vectors, existing images, or textual descriptions to guide the generation process. A seminal work in this domain is alignDRAW [62], which generates images from captions using a VAE with an attention mechanism.
#### 4.1.4 Generative Models for 3D Images and Point Clouds
GANs and VAEs have been extended to generate 3D voxel grids or point cloud representations [55, 56, 57]. Moreover, models like PointGAN [55] focus specifically on generating high-quality point cloud data, capturing intricate 3D structures. Inputs for these models can range from 2D projections, textual descriptions, or even other 3D structures for tasks like super-resolution in 3D space.
#### 4.1.5 Generative Models for Designs
Design generation, especially for aspects like logos, user interfaces, or architectural layouts, has seen innovation through models like CreativeGAN [63]. These models can take inputs in the form
of design constraints, user preferences, or textual descriptions to generate design mockups. The produced designs can be static (like a logo) or dynamic (like an interactive UI prototype).
#### 4.1.6 Static vs. Dynamic Generation
While many generative models focus on producing static outputs, there's a growing interest in dynamic content generation, especially in domains like video synthesis or interactive designs. Recurrent neural networks (RNNs), especially the Long Short-Term Memory (LSTM) networks, combined with GANs (like VideoGAN [64]), as well as the recent video transformer [65, 66] have made strides in generating video sequences. This aligns with the broader trend of moving from static images to dynamic, time-evolving sequences in synthetic media. We will discuss this in detail in Section 5.1.
#### 4.1.7 Diverse Input Types
A hallmark of modern generative models is their ability to handle a variety of input types. While noise vectors remain a staple, there is a growing trend of models using textual descriptions to guide synthesis, allowing for more controlled and descriptive generation. This has been evident in models like AttnGAN [67] and Df-GAN [68], where textual descriptions can guide the fine details of image synthesis, ensuring alignment between described content and the generated image.
#### 4.1.8 Diffusion Models
Diffusion Models (DMs) [69, 70, 71, 72] are innovative techniques conceptually inspired by non-equilibrium thermodynamics [69]. These models progressively introduce Gaussian noise during the forward (diffusion) process and subsequently learn to reverse the diffusion process to reconstruct the image from noise by predicting the previously added noise and then denoising. This unique approach has made them one of the best at synthesizing images and more. One great feature of these models is that they can be directed or controlled in how they generate images without the need for extensive retraining.
**Denoising Diffusion Probabilistic Models:** However, Denoising Diffusion Probabilistic Models (DDPMs) [70], as one of the pioneering works in diffusion models, have a drawback: both the forward (diffusion) process and the denoising reverse process involve long Markov chains consisting of thousands of steps, and DDPMs generally work directly with the individual pixels of an image, so they usually require a tremendous amount of computational power and time for both model training and image sampling. In fact, optimizing these models to their best performance can take hundreds of days using powerful graphics processing units (GPUs), and using them can also be costly in terms of resources.
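The forward (noising) process admits a closed-form shortcut, which makes the standard noise-prediction training objective easy to sketch; the snippet below follows the usual DDPM formulation with a linear noise schedule, and the denoiser is a placeholder for an actual network.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative product \bar{alpha}_t

def noisy_sample(x0, t, eps):
    """Closed-form forward diffusion: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps."""
    a = alpha_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps

def training_loss(denoiser, x0):
    """DDPM objective: predict the injected Gaussian noise at a random timestep."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    x_t = noisy_sample(x0, t, eps)
    return torch.nn.functional.mse_loss(denoiser(x_t, t), eps)

# e.g. with a trivial placeholder denoiser on a batch of 4 RGB 32x32 "images":
loss = training_loss(lambda x, t: torch.zeros_like(x), torch.randn(4, 3, 32, 32))
print(loss.item())
```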
**Denoising Diffusion Implicit Models:** Consequently, to tackle the low sampling speed, Denoising Diffusion Implicit Models (DDIMs) [73] were proposed as fast-sampling diffusion models closely related to DDPMs. DDIMs maintain the same marginal noise distributions as DDPMs but diverge with a non-Markovian diffusion process and deterministically map noise to images. As a result, DDIMs can generate high-quality images while significantly reducing the number of generation steps from 1000 in DDPMs to just 50.
**Conditional Diffusion Models:** In addition to the aforementioned unconditional diffusion models, researchers have developed DMs that are conditioned on additional inputs such as class labels, reference images, or text sequences [74, 75, 76, 10] to better guide the generation process.
**Latent Diffusion Models:** To make DMs more efficient without sacrificing their performance, researchers have also started training DMs by using the underlying structures or "latent spaces" of already trained models, known as autoencoders [77]. This approach reduces the computational burden of the process while still retaining the important details that make the images look realistic.
**Stable Diffusion:** Latent Diffusion Models (LDMs) [10] have been instrumental in advancing the domain of image synthesis. These models incorporate the robust synthesis capabilities inherent to traditional DMs but with an added advantage: the flexibility of operating in latent space. This transition to latent space does not just add flexibility; it also introduces a remarkable equilibrium. The LDMs are designed to minimize model complexity without compromising the richness of image details. As a result, there is a noteworthy improvement in visual fidelity in these models versus pixel-based DMs, making output images sharper and more true-to-life. One of the standout features introduced to LDMs is the integration of cross-attention layers. This inclusion is not merely a technical enhancement but a transformation in adaptability. With these layers, LDMs are equipped to handle a diverse range of conditioning inputs. Whether it is textual data or bounding boxes, the model processes them with equal proficiency. This versatility is pivotal, especially when high-resolution image synthesis is the goal. LDMs have shown the capability to generate these detailed images using a convolutional approach, offering a blend of clarity and detail that was until recently very challenging to achieve. Another advantage of LDMs is their low computational overhead. One of the pressing challenges in image modeling has always been the computational demands, especially with pixel-based DMs that tend to be resource-intensive. LDMs present a solution to this long-standing problem. Despite their advanced features and superior performance, they operate with a significantly reduced computational overhead. This efficiency ensures that high-quality image synthesis is not just the domain of those with vast computational resources but is accessible to a broader spectrum of researchers and practitioners using small GPU clusters or even desktop or mobile devices.
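In practice, these models are accessible through open-source tooling; a minimal text-to-image example using the diffusers library with publicly released Stable Diffusion weights (the model identifier and parameters are illustrative, and a GPU is assumed) looks as follows.

```python
import torch
from diffusers import StableDiffusionPipeline

# Latent diffusion in practice: the pipeline encodes the text prompt, runs the
# denoising loop in the autoencoder's latent space, then decodes to pixels.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "a watercolor painting of an old lighthouse at dawn",
    num_inference_steps=30, guidance_scale=7.5,
).images[0]
image.save("lighthouse.png")
```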
### Real-world Applications
Recent advancements in generative AI, particularly in image models, have gained significant popularity not only in research but also in real-world applications. An increasing presence of AI-generated content (AIGC) can be observed in websites, advertisements, posters, and magazines. These models have the capability to generate diverse yet coherent graphics from cartoon illustrations to realistic photographs, eliciting interest across various industries. Figure 5 illustrates the trending popularity of renowned generative AI tools over the past year, indicating a promising future for their real-world applications.
#### 4.2.1 Cartography and Mapping
As the field of studying, designing, and using maps, cartography is considered a discipline that encompasses both art and science by many cartographers [78]. Cartography includes various important scientific questions such as map projection [79, 80, 81], map generalization [82, 83, 84], building pattern recognition [85, 86, 87], drainage pattern classification [88], and so on. Because of the nature of cartography, most of these tasks require an AI model to manipulate or generate geospatial vector data (e.g., points, polylines, and polygons) [89, 90, 91]. Although there are multiple existing foundation models, most of them are unable to handle this kind of vector data which makes these foundation models inapplicable for various cartography tasks [92, 93]. However, there are also various important cartography tasks that current foundation models are able to handle such as historical map data extraction. For example, various multimodal foundation models such as KOSMOS-2[94] and GPT-4V [16] can be used for extracting and linking text from a historical map
Figure 5: Google trends (top) and subreddit subscriber growth (bottom) for the past 12 months of the top 3 AI art generation tools: Midjourney, Stable Diffusion, and DALL-E. Data source: Google Trends and Subredditstats.
of Georgia [95, 96]. Figure 6 shows one illustrative example and the response from GPT-4V. It is evident that even without task-specific fine-tuning, GPT-4V can identify various place names from maps. Additionally, although the accuracy may not always be very high, GPT-4V can generate map coordinates for these places. Moreover, foundation models can also be used for map reading and map-based question answering for topographic maps, thematic maps, or even narrative maps [97]. Despite these success stories, applying foundation models and AIGC to cartographic applications can also lead to ethical issues such as inaccuracies, unanticipated features, and reproducibility [98]. Hence, the pros and cons of foundation models in cartographic applications need to be investigated further.
#### 4.2.2 Environmental Design
AIGC, especially text-to-image generation, provides valuable tools for designers. These technologies can offer inspiration and improve workflow efficiency in the field of environmental design, including landscape architecture[99, 100], urban design[101, 102], architecture[103, 104], and interior design[105, 106]. In the initial design phase, AI sparks inspiration by generating diverse intentional images in various styles. It is particularly imaginative in the generation of special-shaped buildings[107]. Providing diverse reference styles also helps to confirm the tone and style of the work. In the design
Figure 6: An illustration of using GPT-4V to do place name extraction and localization from a historical map of Georgia, USA as Kim et al. [96] did. The input to GPT-4V is the historical map and the prompt shown in the blue box. The answer from GPT-4V is shown in the orange box which provides a list of extracted place names as well as their map coordinates. Based on these map coordinates, we plot the corresponding numbers on the historical maps.
review stage, AI can perform rapid partial replacement, helping designers to clarify the replacement effect and improve the speed of modification. Take architecture as an example[108]. Designers can compare the effects of different surface materials, body proportions, and facade details with AI. In design analysis, AIGC's ability to generate images with multiple perspectives and scales supports designers in producing analysis diagrams such as streamlining analysis and functional partitioning. Finally, after the design plan is finalized, AI can accelerate rendering and offer dimension choices, like spatial scale, weather, and night scenes. Since environmental design is a graph-oriented industry, the application of newly emerging multi-modal foundation models (FMs) in this field is in a more auxiliary position compared to text-to-image generative AI. Multi-modal FMs can assist designers in understanding statistic diagrams and then enhance scientific support for designs. They can also identify and illustrate images, including remote sensing images, architecture, and interior photos, which can be used for case studies and style reference. They can even evaluate design works and give suggestions for improvement. Figure 7 shows an example of LLaVA's recommendations for architecture design work. In this example, LLaVA extracts several building features like windows, balcony, garden, and roof from a photo of an architectural model, as well as information from the prompt to offer advice. This example proves LLaVA's capacity to analyze architecture functionally, although it has not shown insights into aesthetic and social meanings, which remain the exclusive domain of architects.
Figure 7: An example of AI’s suggestion for architecture design (Generated by LLaVA)
#### 4.2.3 Photography and Editing
Foundation models and AIGC have transformative potential to improve image quality and image-editing efficiency, and even to extend the domain of photography to artificially generated "photographs." In terms of image quality, AIGC can be used to refine and upscale historical and/or low-resolution photos through so-called image restoration [109, 110] and image super-resolution [111, 112]. Furthermore, AIGC's capacity to eliminate reflections can salvage many photos ruined by reflections from glass. When it comes to enhancing photo-editing efficiency, foundation models shine in several aspects. First, FMs such as SAM [113] excel at object segmentation without any model fine-tuning. Second, with the so-called image-inpainting [10, 109, 114] ability, FMs can remove recognized objects and automatically replace the target area with a coherent background. Figure 8 shows one example with Adobe Firefly in which the background of the University of Georgia's Arch is changed from a summer to an autumn style. FMs can also be used to generate the missing part of an object when that object is occluded or outside the image frame. In addition, FMs can change the characteristics of these objects, including color, texture, and even style, which used to be a time-consuming task. FMs can also generate entirely new objects from text prompts or scribbles [115] that fit the lighting conditions and viewing angle of the photo. Finally, Diffusion Models (DMs) excel at creating photorealistic images from text prompts or other images [116]. This synthetic image generation is a step change in creative photography and even calls into question the traditional definition of photography, which has long been associated with recording photons onto analogue or digital media (e.g., chemicals on a plastic sheet or CMOS image sensors). Figure 9 illustrates three synthetic photographs generated by three widely used generative diffusion models.
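To make the promptable segmentation step concrete, the following is a minimal sketch using the publicly released segment_anything package; the image file, checkpoint path, and click coordinates are illustrative assumptions rather than details from the survey.

```python
# Minimal sketch of promptable object segmentation with SAM (segment_anything).
# The image file, checkpoint path, and click location are illustrative assumptions.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

image = cv2.cvtColor(cv2.imread("arch_photo.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical photo

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # assumed checkpoint file
predictor = SamPredictor(sam)
predictor.set_image(image)

# A single foreground click (x, y) on the object to be segmented.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[450, 300]]),
    point_labels=np.array([1]),        # 1 marks a foreground point
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]   # boolean mask with the same height/width as the image
# The mask can then be handed to an inpainting model to remove or replace the object.
print(best_mask.shape, float(scores.max()))
```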
#### 4.2.4 Illustration
While AIGC currently cannot entirely replace professional illustrators, it significantly aids the initial conceptualization stage [117], much like its role in environmental design. AIGC's rapid iteration from inspiration to finished drawing allows illustrators to quickly settle the composition, elements, and style of a painting. Since AI has significantly reduced the difficulty of illustration, in situations where highly precise drawings are not needed, the images AIGC generates can even be applied directly to illustrated books, comics, and print advertisements. Illustrators use LLMs to generate instructions for storyboards, character design, and painting style, then employ multi-modal FMs to produce complete illustrations [118] that are consistent in scenes, characters, and style. With the help of image-editing tools, some of which are also powered by AIGC, typesetting work can also be completed.
Figure 8: An example of Photo editing. **Left**: Before editing. **Right**: After editing by Adobe Firefly with the prompt "Turn the background into autumn".
Using the tools mentioned above, AIGC supports the entire illustration process and effectively reshapes the traditional workflow.
#### 4.2.5 Graphic Design
AIGC has a wealth of applications in graphic design, including logo design, print advertising, product packaging design, and mock-ups [119]. AI-generated logos can be suitable for printing, or, after fine-tuning by the designer, can be used in richer scenarios such as storefront signs and building facades. The production process of print advertisements with AI is close to that of illustrations; the speed, low cost, and consistent style of these tools have made them popular among print advertisers. AIGC also has a role in product packaging design and mock-ups, where it can generate packaging for a series of products under the same theme and provide a variety of usage scenarios for products. These processes replace traditional photography or rendering, greatly reducing time and cost.
#### 4.2.6 Font Design
AI provides a rapid and easy way to conduct font design. Given letter references, whether in vector form or as rough hand-drawn sketches, AIGC models can comprehend their unique style and adapt it while maintaining harmonious counters and bodies. These refined letterforms can then be seamlessly integrated into existing texts [120]. In addition to learning the variables of type design, AIGC also treats references as graphics, considering elements such as color, texture, shading, reflection, glow, or other effects [121], so these graphic features can be transferred to letters. Moreover, natural language alone provides enough information to design new fonts. Figure 10 is an example of artistic font design produced with Adobe Firefly. Both texture and shape are successfully generated according to the prompt, although there are some imperfections around the edges.
#### 4.2.7 3D Design
3D design plays a pivotal role in the animation, video game, and film industries. The application of AIGC and FMs in this domain falls into two categories. One use case is text-to-3D, an extension of image generation [122].
Figure 9: Synthetic photography generated by three current generative diffusion models: DALL-E 3 (left), Midjourney (center), and Stable Diffusion XL (right). All images are generated by the following identical prompt for image generation: “hip mother age 30 looking from the baby’s perspective, lens: 35mm, focus: mother’s face, style: modern realistic, fashion: chic, fall colors, no patterns.”
Just as with text-to-image generative models, FMs can be used to generate 3D models based on text prompts. This can be applied to the prototyping of scenes and characters, enriching the creative process. The other use case is 3D model manipulation. Given a 3D model, AI can adjust its posture automatically according to reference pictures or user instructions instead of requiring joint positions to be adjusted manually [123]. This feature caters not only to professional designers but also fosters accessibility for novice users in 3D model creation. Moreover, the surface of 3D models can also be generated from text prompts. Combined with image-generation models, AI-enhanced 3D models improve the efficiency of 3D character generation, scene rendering, and even product design.
#### 4.2.8 Fine Art
Perhaps most divisively, AI-based image generators can be used to create works traditionally associated with fine art [124], or art with no purpose but to amaze and please its audience. Fine art painting and photography are considered the epitome of human skill and creativity, yet AIGC, specifically in the form of diffusion models (DMs), has created work that many consider on the level of highly skilled photographs and paintings. Needless to say, many practitioners and critics state that DMs cannot now, or ever, replace human creativity and skill. At present there is no clear answer to the question of whether AI, or AI in combination with a human, will be able to create work on the level of the highest human artistic achievements, but this is an area to watch in the coming months and years.
#### 4.2.9 Evolutionary Creativity
Evolutionary art and evolutionary music are innovative fields of generative AI [125]. They belong to the broader field of evolutionary creativity and leverage evolutionary computation to generate aesthetically pleasing visual art or music. Evolutionary computation [126] is a collection of methods based on the principles of Darwinian evolution: a population of candidate solutions evolves over time through operations of selection, mutation, and recombination, so that better solutions are found. The field of evolutionary creativity encompasses multiple approaches. In human-in-the-loop approaches, the evolutionary algorithm generates art or music and a human either assigns a score (the fitness) or compares different pieces and picks the best; other approaches rely on an objective measure of merit based, for example, on rules of thumb in music composition.
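As a concrete illustration of the human-in-the-loop variant described above, the following is a minimal sketch of an interactive genetic algorithm in Python; the melody encoding, population size, and mutation rate are illustrative assumptions, not taken from the cited literature.

```python
# A minimal sketch of human-in-the-loop evolutionary creativity (an interactive
# genetic algorithm). The "artwork" here is a short melody encoded as MIDI pitches;
# the fitness is supplied interactively by a person. All names are illustrative.
import random

POP_SIZE, MELODY_LEN, PITCH_RANGE = 6, 8, (60, 72)  # C4..C5

def random_melody():
    return [random.randint(*PITCH_RANGE) for _ in range(MELODY_LEN)]

def mutate(melody, rate=0.2):
    return [random.randint(*PITCH_RANGE) if random.random() < rate else p for p in melody]

def crossover(a, b):
    cut = random.randrange(1, MELODY_LEN)
    return a[:cut] + b[cut:]

def human_fitness(melody):
    # In a real system the melody would be played back; here the user types a score.
    print("melody:", melody)
    return float(input("rate 0-10: "))

population = [random_melody() for _ in range(POP_SIZE)]
for generation in range(3):                       # a few interactive generations
    scored = sorted(population, key=human_fitness, reverse=True)
    parents = scored[: POP_SIZE // 2]             # selection: keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
print("favourite melody so far:", population[0])
```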
## 5 Video and Audio Analysis and Generation
### 5.1 Technical Advances
Video content (including audio) is a predominant form of information consumption and communication in the digital age. With the exponential growth in video data, there arises an acute need for effective video analysis and generation tools powered by artificial intelligence (AI).
Figure 10: An example of font design generated by Adobe Firefly with the prompt "pink hawaiian hibiscus flowers and leaves realistic, and the shapes of flowers and leaves can be out of letters" and the text "Arts and Humanities".
The following subsections delve into the technical advances in the domain of video analysis and generation.
#### 5.1.1 Early Approaches
Generative adversarial networks (GANs) were first applied to generate simple synthetic videos. Models like TiVGAN [127] and MoCoGAN [128] pioneered GAN-based video generation. However, these early GAN models were limited to generating short, low-resolution videos focused on specific domains like human actions. The quality and diversity were lacking.
#### 5.1.2 Autoregressive Models
Compared to GANs, autoregressive models can model the data density explicitly and train stably, so they are widely used in visual synthesis. Autoregressive models [129, 130, 131, 132] attempted to generate higher-resolution videos by modeling pixel distributions sequentially, but they were slow and hard to scale up.
#### 5.1.3 Diffusion Models
As with still images, diffusion models have also become very popular for high-quality video generation. Video Diffusion Models (VDM) [133] extended image diffusion models to the video domain by training on both images and videos. Imagen Video [116] built a cascade of VDMs to generate longer, high-resolution videos, but it requires large-scale training and latent optimization. Tune-A-Video [134] optimizes the latent space of a diffusion model on a single reference video to adapt it for video generation; this reduces training but still requires per-video optimization. A recent study, Text2Video-Zero [135], proposes a zero-shot text-to-video approach without any training on video data. It leverages a pre-trained text-to-image diffusion model and modifies it with motion dynamics in latent space for background consistency and with cross-frame attention to preserve foreground details. This allows high-quality video generation from text without costly training and enables applications such as controlled/specialized video generation and text-driven video editing. Ablations show the contributions of these modifications to temporal consistency. The zero-shot ability and lack of training are advantages over prior techniques.
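As background for the diffusion-based video generators discussed above, the following is a generic sketch of the forward noising process that these models share; it illustrates the standard formulation rather than code from any of the cited systems, and the noise schedule and tensor shapes are arbitrary choices.

```python
# A generic illustration (not the implementation of any cited model) of the forward
# noising process used by diffusion models: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (a common choice)
alphas_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 64, 64, 3))    # a stand-in "video": 8 frames of 64x64 RGB

def noisy_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) for a given timestep t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

x_t, eps = noisy_sample(x0, t=500)
# A video diffusion model is trained to predict eps from (x_t, t, text prompt);
# generation then runs this process in reverse, starting from pure noise.
print(x_t.shape, float(np.std(x_t)))
```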
Figure 11: Some examples of Video Generation. **Left**: Editing a movie with the prompt. **Right**: Video created by DALL-E 3 with the prompt.
### 5.2 Real-world Applications
#### 5.2.1 Film Industry
New AI technologies such as LLMs or multi-modal FMs have the potential to revolutionize the film industry at different stages of the movie-making process [136, 137]. First, LLMs can analyze draft scripts and generate unique storylines [138], which helps filmmakers write and revise scripts more efficiently [139]. In addition to scriptwriting, LLMs can also simplify the movie pre-production process [140, 141, 142]. Specifically, they can make shooting schedules, find exterior filming locations and props, speed up the search for cast members, and estimate the success and potential revenue of the film. Second, LLMs can generate instructions for technical staff during filming [143, 144], including lighting, shot prediction, audio recording, etc. LLMs are capable of identifying the director's personal filming style and can thus generate filming instructions specific to that style. Third, multi-modal LLMs serve as a good editing tool in post-production. LLMs can synthesize multiple clips and even create special effects based on the scripts [145]. They can also generate trailers and synopses for promotion purposes [143, 138]. LLM-based music-composition tools can also be used to find or create an Original Sound Track (OST) that adapts to the movie plot.
#### 5.2.2 Social Media
Increasingly, social media is shifting towards video content over text-based posts. From podcasts to short-form videos (e.g., TikTok) to longer-form user-generated content (e.g., YouTube), users both create and consume video content at ever-growing rates. AI and AIGC are in the early stages of disrupting this industry, but in the near future this disruption is likely to grow rapidly. One of the most interesting nascent applications is high-quality AI-based language translation. Several startups have recently emerged that ingest video in a given language (e.g., English) and reproduce it in any number of output languages (e.g., Spanish or Mandarin). The output video can match the original creator's voice characteristics and even make the lips move as if the creator natively speaks the output language.
#### 5.2.3 Journalism and Communications
As illustrated in Sec 2.2, LLMs can analyze large amounts of textual data, including news, social media, and advertisements. Researchers can use LLMs to study how information spreads and impacts the public [146]. Video is also an important modality in communication. Nowadays, short user-generated videos are gaining popularity, alongside traditional media like TV and newspapers [147]. The multi-modal FMs have the advantage of analyzing vast amounts of news data in different modalities, including a large quantity of information uploaded by the public. This helps researchers understand how information spreads in the network and track the personal behavior of each user.
#### 5.2.4 Music Analysis
Multi-modal LLMs have been proposed to empower frozen LLMs with the capability of understanding both visual and auditory content in videos [148]. Multi-modal FMs can perceive the gestures and movements of the music performers in a video [149, 150, 151, 152, 153, 154], for example, fingering analysis on piano. Based on the visual perception, the FMs can further understand the content, emotion, and intention of the performance [155, 156, 157] and reveal the cultural characteristics. The visual understanding provided by FMs helps musicians improve their performance and composition
skills [158]. In addition to audio analysis, these models can help with the generation of music. Diffusion Models (DMs) have been repurposed from images to audio recently, allowing for original musical creations based on text input. In a similar fashion to how a user can interact with an image-based DM to request a given image, a user can also type in a textual description of a requested audio composition and get a sound file based on this description.
## 6 Responsible AGI
**Is AI Threatening Humanity?** The popularity of AI-generated content, spanning writing, photography, art, and music, has surged dramatically. However, this meteoric rise has also sparked significant backlash, with some people rejecting AI-generated art and even asserting that its widespread adoption signals potential dangers for humanity. The question of whether AI threatens humanity is complex and debated. For example, AI is a tool created and controlled by humans that can automate tedious tasks but can also cause job displacement; AI can generate artworks efficiently, but these may not serve as a deeper communicative medium of human experience [159]; and AI can improve healthcare but can also pose threats to public safety. Some essential components of responsible AGI are discussed below.
### Factuality
Large language models are susceptible to hallucinations [160], wherein they may produce content that includes non-factual information or deviates from established world knowledge [161]. This poses challenges in numerous applications, such as legal research and historical studies, where factual accuracy is crucial. In addition to natural language processing, factuality-related concerns also extend to the field of computer vision. A typical challenge is that generative models such as Stable Diffusion struggle to generate realistic human hands with the correct number of fingers [162], as well as remote sensing images with a correct geographic layout [93]. Non-realistic AI-generated images or videos may pose challenges in engaging viewers emotionally or intellectually compared to traditional ones.
Common strategies to tackle the above issues include factuality evaluation and generation regularization. For factuality evaluation in generated content, several typical methods stand out. ROUGE [163] offers a metric that evaluates the quality of computer-generated summaries by measuring their overlap with human-created reference summaries in terms of n-grams, word sequences, and word pairs. Similarly, BLEU [164] provides an automatic machine translation evaluation technique renowned for its high correlation with human evaluations, positioning it as a swift and efficient alternative to more labor-intensive human assessments. In a more recent development, a model-based metric [165] has been introduced, specifically designed to assess the factual accuracy of the generated text, further enhancing and complementing the capabilities of traditional methods like ROUGE [163] and BLEU [164]. For generation regularization, "Truthful AI" [166] is proposed to focus on enhancing the integrity and accuracy of AI-generated outputs. By setting rigorous standards, the initiative seeks to prevent "negligent falsehoods", achieved through selected datasets and close human-AI interaction, aligning with societal norms and legal constraints.
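The sketch below shows how such overlap-based scores can be computed with commonly used packages (rouge-score and NLTK); the reference and generated sentences are invented for illustration.

```python
# A small illustrative sketch of evaluating generated text against a human reference
# with ROUGE and BLEU. Requires the `rouge-score` and `nltk` packages; the example
# sentences are made up.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "The treaty was signed in Paris in 1898 after months of negotiation."
generated = "The treaty was signed in Paris in 1898 following lengthy negotiations."

# ROUGE: n-gram and longest-common-subsequence overlap with the reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, generated)
print({name: round(score.fmeasure, 3) for name, score in rouge.items()})

# BLEU: modified n-gram precision (smoothed because the sentences are short).
bleu = sentence_bleu([reference.split()], generated.split(),
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU = {bleu:.3f}")
```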
### Public Safety
Despite the rapid advancement of generative AI technology, such as ChatGPT and Midjourney, which can generate human-like texts, images, and videos, it also raises critical concerns related to
public safety, encompassing issues of privacy, cybersecurity, national security, individual harassment, and the potential for machine misuse [167, 168].
* **Misinformation.** AIGC such as texts, images, and videos can be used to create and spread false or misleading information, leading to public confusion, panic, or harm [169, 170]. For example, Midjourney can accept prompts like "a hyper-realistic photograph of a man putting election ballots into a box in Phoenix, Arizona", and produce high-quality images that could be used to support the news [171]. The issue is particularly concerning in areas like public health, elections, and emergencies. In addition, AI-generated deep fake images [172, 173] and videos [174] can impersonate individuals, including public figures, and spread false or defamatory content. Meanwhile, they can be used to invade individuals' privacy by creating content without their consent, leading to serious ethical and legal implications. Repeated exposure to deceptive AI-generated content can damage reputations, incite social unrest, and erode public trust in authorities. Moreover, the National Geospatial-Intelligence Agency (NGA) also alarmed us with the risk of deep fake satellite images from generative AI being used as a terrifying AI-powered weapon [175, 176].
* **Phishing.** Phishing is a type of cyber-attack where attackers attempt to deceive individuals into revealing sensitive or personal information such as login credentials, credit card numbers, or personal information. AI can be used in various ways to enhance phishing campaigns. (i) **Spear Phishing** is a targeted cyber-attack approach that uses _personalized_ details to trick individuals into revealing confidential information [177]. Modern LLMs have the ability to produce convincing human-like texts, which can be used to create personalized spam phishing messages on a large scale and at a low cost. For instance, using advanced models like Anthropic's Claude, a hacker can easily generate 1,000 spear phishing emails for just $10 in less than two hours [178]. (ii) **AI voice cloning** is another noteworthy technology, as nowadays only a short voice sample is needed to create a realistic imitation. For instance, Google's AI system can mimic someone's voice with just a five-second sample [179]. This technology can be misused in cases where fake audio is used to impersonate authoritative figures in media settings. (iii) **AI-created phishing websites** benefit from the capabilities of multimodal foundation models. These AI-generated websites not only display a remarkable proficiency in emulating the appearance and functionality of established brands, but they also possess the ability to integrate advanced methodologies that can bypass conventional anti-phishing protocols [180].
* **Bias.** AI-generated content can also exhibit biases, for example so-called geopolitical favouritism, which is defined as the over-amplification of the representation of certain countries (e.g., countries with higher GDP, geopolitical stability, or military strength) in the generated content [184].
Mitigating safety issues caused by AIGC is still an ongoing challenge that requires a collaborative effort from AI developers, regulators, educators, and the broader society. Inspired by "magic must defeat magic", given the large volume of web content, researchers have been actively working on developing AI-based classifiers to detect online content produced by AI models [185]. As highlighted by the work of Ippolito et al. [186], they rely on the supervised learning approach. Their study specifically fine-tuned the BERT model [29] using a mix of texts from human authors and those generated by LLMs. This method magnifies the subtle differences between human and AI-produced writings, thus enhancing the model's capability to pinpoint AI-generated content. In the field of misinformation detection, AI also plays a crucial role. Zhou et al. [169] investigated the distinct features of AI-generated misinformation and introduced a theory-guided technique to accumulate such content. This facilitates a systematic comparison between human-authored misinformation and its AI-generated counterpart, aiding in the identification of their inherent differences. On another front, AI models are equipped to detect biases within AIGC. Fang et al. [181] selected articles from reputable, impartial news outlets, such as The New York Times and Reuters. By using headlines from these sources as prompts, they assessed the racial and gender biases in LLM-generated content, comparing it with the original articles to highlight discrepancies.
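The following is a minimal sketch of the fine-tuning idea behind such detectors, using a BERT-style sequence classifier from the Hugging Face transformers library; the toy texts, label convention, and hyperparameters are illustrative assumptions and do not reproduce the exact setup of Ippolito et al. [186].

```python
# A minimal sketch (not the cited authors' exact setup): fine-tuning a BERT-style
# classifier to separate human-written from AI-generated text.
# Label convention assumed here: 0 = human-written, 1 = AI-generated.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["The council met on Tuesday to discuss the budget.",          # human (assumed)
         "As an AI language model, I can provide a summary below."]    # AI (assumed)
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)   # cross-entropy loss is computed internally
outputs.loss.backward()
optimizer.step()

# Inference: probability that a new text is AI-generated.
model.eval()
with torch.no_grad():
    probe = tokenizer(["Certainly! Here is a rewritten version of your paragraph."],
                      return_tensors="pt")
    p_ai = torch.softmax(model(**probe).logits, dim=-1)[0, 1].item()
print(f"P(AI-generated) = {p_ai:.3f}")
```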
Another line of research focuses on enhancing AI models to reduce the likelihood of misbehavior. For instance, a recent study found that AIGC produced by ChatGPT exhibits a lower level of bias, in part due to its reinforcement learning from human feedback (RLHF) feature [181].
### Toxicity
To ensure the dependable deployment of AI, it is imperative to prevent AI models from generating toxic or harmful content, which encompasses hate speech, biases, cyberbullying, and other objectionable material. Toxic content can harm individuals and communities, perpetuate discrimination, and create a hostile online environment. Although detecting hate speech and offensive language has long been a subject of research [187, 188], the study of toxic AI-generated content is a more recent direction. For example, recent findings indicate that ChatGPT can consistently generate toxic content on a broad spectrum of topics when it is assigned a persona [189]. Pre-trained language models can produce toxic text even when prompted with seemingly innocuous inputs [190]. Thus, many organizations were actively working on research and technology to improve AI content generation while reducing harmful outputs. These recent efforts can be divided into two categories, including training-time and inference-time detoxification.
**Training-time Strategies.** There are two primary methods for refining large foundation models: _pre-training_ and _fine-tuning_. To improve model pre-training, one approach involves the identification and filtering of undesirable documents from the training data [191]. Additionally, we could augment the training data with information pertaining to its toxicity, towards guiding the LM to detect toxic content and hence generate non-toxic text [192]. During fine-tuning, it is possible to align language models with human preferences by employing human feedback as a reward signal [193, 194, 195]. A well-known example is InstructGPT [194] developed by OpenAI, which could generate less toxic outputs than those from GPT-3 by using properly designed prompts.
**Inference-time Strategies.** There are two major methods for reducing the toxicity of AI-generated content during inference time, including prompt learning and decoding-time steering. Prompt learning offers a versatile method to assess and tailor the output of large language models, such as toxicity classification, toxic text span detection, and detoxification [196]. First, given a sentence, an initial step involves mapping its label to either "Yes" or "No" and subsequently refining the prompt
to enhance its guidance for the language model. Second, toxic text span detection identifies the specific segments (i.e., the word offsets) that make the text toxic. Third, detoxification rephrases the toxic text into a non-toxic version while preserving its semantic meaning. On the other hand, decoding-time steering [197, 198, 190] manipulates the output distribution to avoid generating mindless or offensive content.
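As a small illustration of inference-time screening (a simpler companion to the prompt-learning and decoding-time approaches above), the sketch below scores candidate generations with the open-source Detoxify classifier and filters out toxic ones; the threshold and example sentences are assumptions.

```python
# An illustrative sketch (not a method from the survey) of inference-time toxicity
# screening: candidate generations are scored with the Detoxify package and filtered
# before being returned to the user. The threshold value is an assumption.
from detoxify import Detoxify

detector = Detoxify("original")   # pretrained toxicity classifier

def screen(candidates, threshold=0.5):
    """Keep only candidate responses whose predicted toxicity is below the threshold."""
    kept = []
    for text in candidates:
        score = detector.predict(text)["toxicity"]
        if score < threshold:
            kept.append(text)
    return kept

print(screen(["Thanks, that was a helpful explanation.",
              "You are an idiot and your question is worthless."]))
```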
## 7 Conclusion
The swift evolution of artificial general intelligence (AGI) is transforming the landscape of art and humanities in profound ways. As demonstrated in this paper, AGI systems like large language models and creative image generators have already exhibited impressive capabilities across diverse artistic domains including literature, visual arts, music, and more. However, as boundaries between human creativity and machine capabilities blur, difficult questions emerge around truth, toxicity, biases, accountability, and social impacts.
While celebrating the immense potential of AGI to augment human expression, we must thoughtfully navigate its responsible development. Multi-stakeholder collaboration and public discourse are vital to steer these systems in directions that uphold cultural values, pluralism, dignity, and truth. Technical solutions such as robust factuality evaluations, toxicity filters, and bias detectors can help instill reliability and trustworthiness in AGI systems. Ultimately, however, cultural shifts toward responsible innovation, centered on human flourishing over profits or progress for its own sake, are crucial.
By harnessing AGI as a partner for human creativity, while proactively addressing its pitfalls, we can usher in an era where machine intelligence promotes knowledge, empowers imagination, and expands access to the arts. The onus lies on researchers, developers, policymakers, and society at large to align AGI's technological promise with enduring human values. Through principled efforts, we can ensure these rapidly evolving systems enrich rather than undermine our shared cultural heritage.
Acknowledgement. We would like to thank Prof. John Hale from the Linguistics Department, University of Georgia, for his thoughtful comments on the opportunities of AIGC and foundation models' applications on various art and humanities tasks.
|
2307.02811
|
Machine Learning Classification of Repeating FRBs from FRB121102
|
Fast Radio Bursts (FRBs) are mysterious bursts in the millisecond timescale
at radio wavelengths. Currently, there is little understanding about the
classification of repeating FRBs, based on difference in physics, which is of
great importance in understanding their origin. Recent works from the
literature focus on using specific parameters to classify FRBs to draw
inferences on the possible physical mechanisms or properties of these FRB
subtypes. In this study, we use publicly available 1652 repeating FRBs from
FRB121102 detected with the Five-hundred-meter Aperture Spherical Telescope
(FAST), and studied them with an unsupervised machine learning model. By
fine-tuning the hyperparameters of the model, we found that there is an
indication for four clusters from the bursts of FRB121102 instead of the two
clusters ("Classical" and "Atypical") suggested in the literature. Wherein, the
"Atypical" cluster can be further classified into three sub-clusters with
distinct characteristics. Our findings show that the clustering result we
obtained is more comprehensive not only because our study produced results
which are consistent with those in the literature but also because our work
uses more physical parameters to create these clusters. Overall, our methods
and analyses produced a more holistic approach in clustering the repeating FRBs
of FRB121102.
|
Bjorn Jasper R. Raquel, Tetsuya Hashimoto, Tomotsugu Goto, Bo Han Chen, Yuri Uno, Tiger Yu-Yang Hsiao, Seong Jin Kim, Simon C. -C. Ho
|
2023-07-06T07:02:32Z
|
http://arxiv.org/abs/2307.02811v2
|
# Machine Learning Classification of Repeating FRBs from FRB121102
###### Abstract
Fast Radio Bursts (FRBs) are mysterious bursts in the millisecond timescale at radio wavelengths. Currently, there is little understanding about the classification of repeating FRBs, based on difference in physics, which is of great importance in understanding their origin. Recent works from the literature focus on using specific parameters to classify FRBs to draw inferences on the possible physical mechanisms or properties of these FRB subtypes. In this study, we use publicly available 1652 repeating FRBs from FRB121102 detected with the Five-hundred-meter Aperture Spherical Telescope (FAST), and studied them with an unsupervised machine learning model. By fine-tuning the hyperparameters of the model, we found that there is an indication for four clusters from the bursts of FRB121102 instead of the two clusters ("Classical" and "Atypical") suggested in the literature. Wherein, the "Atypical" cluster can be further classified into three sub-clusters with distinct characteristics. Our findings show that the clustering result we obtained is more comprehensive not only because our study produced results which are consistent with those in the literature but also because our work uses more physical parameters to create these clusters. Overall, our methods and analyses produced a more holistic approach in clustering the repeating FRBs of FRB121102.
keywords: (transients:) fast radio bursts - stars: magnetars - stars: neutron - methods: data analysis
## 1 Introduction
Fast Radio Bursts (FRBs) are bright millisecond-duration radio flashes of extragalactic origin (Lorimer et al., 2007; Thornton et al., 2013; Petroff et al., 2016). They are characterized by their anomalously high dispersion measure (DM) and millisecond duration, indicating a high brightness temperature and isotropic energy release (Ravi et al., 2015; Tendulkar et al., 2017; Zhang, 2018; Bannister et al., 2019; Ravi et al., 2019; Li et al., 2021b; Bochenek et al., 2020). FRBs are usually classified as either 'repeating' or 'non-repeating'. Repeating FRBs have multiple bursts, while non-repeating FRBs have one-off bursts (Cordes & Chatterjee, 2019). More than 600 FRBs have been reported as of April 2022 (Petroff et al., 2016; Li et al., 2021b; CHIME/FRB Collaboration et al., 2021).
FRB121102, first discovered in 2014 (Spitler et al., 2014) and identified as a repeater in 2016 (Spitler et al., 2016), is the most extensively studied FRB across a broad range of radio frequencies from 600 MHz up to 8 GHz (Josephy et al., 2019; Gajjar et al., 2018). The repetition allowed for localization with a high precision of 100 mas, leading to the first unambiguous identification of an FRB host galaxy at \(\sim\)1 Gpc (\(z=0.193\)) and its association with a persistent radio source (Chatterjee et al., 2017; Bassa et al., 2017; Marcote et al., 2017; Tendulkar et al., 2017; Kokubo et al., 2017). Many theoretical models have been developed to explain the physical nature of FRB121102 (see Platts et al. 2019 for a review). In particular, it has been suggested that FRB121102 might have originated from a young magnetar (Kashiyama & Murase, 2017; Metzger et al., 2017; Beloborodov, 2017; Margalit et al., 2018). Performing follow-up observations using the Arecibo Telescope, Spitler et al. 2016 found ten additional bursts from FRB121102. Shortly after, Scholz et al. 2016 found six bursts with two different telescopes: five with the Green Bank Telescope (GBT) at 2 GHz, and one with the Arecibo Telescope at 1.4 GHz. Michilli et al. 2018 detected 16 bursts from FRB121102 using the William E. Gordon Telescope at the Arecibo Observatory at 4.1-4.9 GHz. Most recently, Li et al. 2021a found 1652 bursts using the Five-hundred-meter Aperture Spherical radio Telescope (FAST) at 1.05-1.45 GHz. In addition to these, Rajwade et al. 2020 discovered a tentative period of 157 d with a duty cycle of 56 percent, and Hessels et al. 2019 showed that FRB121102 exhibits a complex time-frequency structure.
Machine Learning (ML) has been proven helpful in astronomy and its related fields. In the field of FRB research, ML has found its
applications in the works of Zhang et al. 2018, wherein they used a combination of neural network detection with dedispersion verification to work on pulse detection and periodicity of FRB121102; Wagstaff et al. 2016 in the development of automated methods in identifying events of interest; Connor and van Leeuwen 2018 in applying deep learning to single-pulse classification and developing a hierarchical framework for ranking events by their probability of being astrophysical transients; and most recently, Chen et al. 2022 where an unsupervised machine learning algorithm, namely Uniform Manifold Approximation and Projection or UMAP (McInnes et al. 2018), was used to understand, classify, and identify possible FRB repeaters from a sample of 501 non-repeating and 93 repeating FRBs.
Despite these developments, there is still little understanding of the nature of repeating FRBs (e.g., Kim et al. (2022); Hashimoto et al. (2022)). Thus, the main purpose of this research is to shed light on the underlying physical mechanisms of repeating FRBs by studying FRB121102. Specifically, this study focuses on determining and characterizing burst subtypes of FRB121102 in order to unveil latent features or properties of repeating FRBs. We also limit the focus of this paper to classifying FRBs, leaving the discussion of possible mechanisms to future theoretical studies.
This paper is structured as follows: Section 2 (Data Preprocessing) discusses the selection of the samples from the archival data shown in the Supplementary Table 1 of the Li et al. (2021a, dataset) paper. Section 3 (Unsupervised Machine Learning) is divided into two subsections: Section 3.1 (Uniform Manifold Approximation and Projection (UMAP)) focuses on finding the low-dimensional representation of the data using UMAP, and Section 3.2 (Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)) discusses how HDBSCAN was used to cluster the data. In Section 4 (Results) we show the parameter coloring of the UMAP embedding results to identify trends and investigate the properties of each cluster. In Section 5 (Discussion) we discuss the implications of the results and the change in cluster membership (Section 5.1, Cluster membership change), and compare them to the results found in the literature (Section 5.2, Comparison with other results). Lastly, Section 6 (Conclusions) summarizes the findings and conclusions of this study. Appendix A has also been included to show other results that are used in the analysis but are not central to the goal of the paper.
## 2 Data Preprocessing
In this paper, we used the archival data from FAST as presented in the Supplementary Table 1 of Li et al. (2021a, dataset). In that campaign, they reported 1652 independent bursts over a total of 59.5 hours throughout the continuous monitoring of FRB121102 from August 29, 2019, to October 29, 2019, using FAST. The archival data from the Supplementary Table 1 of Li et al. (2021a, dataset) have the following parameters:
* Burst Arrival Time (MJD)
* Dispersion Measure (pc \(\cdot\) cm\({}^{-3}\))
* Time Width (ms)
* Bandwidth (GHz)
* Peak Flux (mJy)
* Fluence (Jy \(\cdot\) ms)
* Energy (erg)
We want to include as many parameters as possible to ensure the robustness of the results. Thus, we included the waiting time, which is defined as the arrival-time difference between two subsequent bursts.
For the parameters used in the unsupervised machine learning, we excluded the Burst Arrival Time (MJD) because the observational periods of the monitoring campaign are not uniform, based on Figure 1.a of Li et al. (2021a, dataset). The Dispersion Measure (pc \(\cdot\) cm\({}^{-3}\)) is also excluded because it is not intrinsic to the FRB source and is mainly related to the distance of the source. Some of these parameters are known to be correlated with each other, but we still included them in the analysis because their inclusion does not introduce bias, as Lindner et al. (2020) found in their analysis of the treatment of collinearity in quantitative empirical research; moreover, it does not hurt to include as many parameters as possible. Thus, the parameters used for the unsupervised machine learning are Time Width (ms), Bandwidth (GHz), Peak Flux (mJy), Fluence (Jy \(\cdot\) ms), Energy (erg), and Waiting Time (s).
It is also important to realize that, since the observation period is not uniform, we need to exclude data points with excessively long waiting times, as these are just artifacts of the monitoring campaign and have no use for our analysis in this paper. As we can see from Figure 1, the red dotted line represents a waiting-time value of one day and the blue dash-dotted line represents a waiting time of half a day. We exclude the data points beyond the blue dash-dotted line because of the observational cadence of FAST.
From the 1652 independent bursts reported by Li et al. (2021a, dataset), after following the data selection method explained above, the number of independent burst samples we use for the unsupervised machine learning is 1613. It is known that FRB121102 has a bimodal waiting-time distribution (e.g., Li et al. 2021a) and, as Figure 2 shows, the exclusion of 39 data points did not affect this property.
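The sketch below illustrates this selection step with pandas; the CSV export and its column names are assumptions, since the Supplementary Table 1 of Li et al. (2021a, dataset) is distributed in its own format.

```python
# A minimal sketch (assumed file and column names) of the preprocessing described above:
# compute waiting times from burst arrival times and drop bursts whose waiting time is
# half a day or longer, which reflects the FAST observing cadence rather than the source.
import numpy as np
import pandas as pd

df = pd.read_csv("li2021a_supplementary_table1.csv")   # hypothetical export of the table
df = df.sort_values("mjd").reset_index(drop=True)      # "mjd": burst arrival time (assumed name)

# Waiting time = arrival-time difference between two subsequent bursts, in seconds.
df["waiting_time_s"] = df["mjd"].diff() * 86400.0
df = df.dropna(subset=["waiting_time_s"])              # the first burst has no waiting time

df = df[df["waiting_time_s"] < 0.5 * 86400.0]          # exclude gaps of half a day or more

feature_cols = ["width_ms", "bandwidth_ghz", "peak_flux_mjy",
                "fluence_jyms", "energy_erg", "waiting_time_s"]  # assumed column names
X = df[feature_cols].to_numpy()                        # any rescaling choices are omitted here
print(X.shape)   # should be close to (1613, 6) for this dataset
```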
## 3 Unsupervised Machine Learning
### Uniform Manifold Approximation and Projection (UMAP)
Our data have 1613 rows and 7 columns after the preprocessing. Now, we employ a dimension-reduction algorithm to visualize our data and conduct our unsupervised learning. This can be done by using Uniform Manifold Approximation and Projection (UMAP) (McInnes et al., 2018).
Figure 1: Waiting time values of each burst throughout the monitoring campaign done by Li et al. (2021a, dataset).
Based on ideas from topological data analysis and manifold learning techniques, UMAP finds a low-dimensional representation of given data by using basic Riemannian geometry to bring the data closer to the underlying assumptions of the topological data analysis algorithm.
UMAP has four basic hyperparameters which significantly affect the resulting embedding of the data: min_dist, metric, n_components, and n_neighbors. It is important to realize that in this work we would like to uncover whether there are underlying physical mechanisms or properties that make an FRB a repeater. Thus, we tune these parameters so that structure in the embedding becomes apparent.
min_dist restricts the clumping of the points in the resulting embedding, providing the minimum distance apart that points are allowed to be in the low-dimensional representation. The closer the value of min_dist is to zero, the more tightly clumped the embedding of the locally connected points. Since we would like to see clustering in the embedding as clearly as possible, we set min_dist = 0.
metric defines the way the distance between two points is measured. For our purpose of extracting intuitive realizations, we set metric = euclidean.
n_components is just the dimension of the resulting embedding. This hyperparameter helps us to visualize the data in the reduced dimension space of our own choosing and since we want to visualize our result in the two-dimensional (2D) plane, we set n_components = 2.
n_neighbors constrains the size of the local neighborhood UMAP considers when estimating the manifold structure of the data. This hyperparameter focuses much more on the local structure when it has low values and on the global structure when it has a higher value. In our analysis, we considered a range of values for n_neighbors. Namely, n_neighbors = 5,6,7,8, and 9 which is a reasonable range of values as these provide us with distinct clusters. However, for our interests, we will be only focusing on the clustering result of n_neighbors = 9.
As shown in Figure 3, the UMAP embedding has a higher density of data points in the lower left of the plot compared to the upper right. It is also evident that, as the value of n_neighbors increases, more of the overall structure of the data is highlighted (see A1). This is why we only considered these values for n_neighbors: if we include higher values, we obtain embeddings that do not have a clear division or separation between data points, which is not useful for investigating the underlying mechanisms of the FRB.
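The sketch below shows the corresponding call to the umap-learn package with the hyperparameter values stated above; X is the feature matrix from the preprocessing sketch, and the random_state value is an added assumption for reproducibility.

```python
# A minimal sketch of the dimensionality-reduction step with the hyperparameters stated above.
import umap

reducer = umap.UMAP(
    n_neighbors=9,        # local-neighbourhood size (values 5-9 were explored)
    min_dist=0.0,         # allow points to clump tightly
    n_components=2,       # embed into the 2D plane for visualisation
    metric="euclidean",
    random_state=42,      # added assumption, for reproducibility
)
embedding = reducer.fit_transform(X)   # shape: (n_bursts, 2)
print(embedding.shape)
```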
### Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN)
In this paper, we use the clustering algorithm developed by Campello, Moulavi, and Sander (Campello et al., 2013), namely Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), to cluster the UMAP results obtained in Section 3.1. In HDBSCAN there are only four major parameters that can be tuned: min_cluster_size, min_samples, cluster_selection_epsilon, and alpha. Each of these has a significant effect on the clustering result of the data points.
min_cluster_size affects the size of grouping that can be considered a cluster. The bigger the value of this parameter, the smaller the number of resulting clusters. For our purposes, we set min_cluster_size = 200.
min_samples controls the number of points that will be declared as noise, with points that are far from dense areas being considered noise. The larger the value of this parameter, the larger the number of points that will be considered noise. In this study, we set min_samples = 10.
cluster_selection_epsilon, when tuned, allows micro-clusters in high-concentration regions to be merged, preventing clusters from being split up further than the given threshold. Since we obtained our desired clustering, we left this parameter at its default value, cluster_selection_epsilon = 0.
Lastly, alpha is a parameter that is usually not modified. However, if the clustering result obtained after adjusting min_samples and cluster_selection_epsilon is unwieldy, one can adjust alpha to make the clustering more conservative, meaning that more points will be considered noise. Similar to cluster_selection_epsilon, we kept this parameter at its default value, alpha = 1.
After using UMAP to find a low-dimensional representation of our data, we now use HDBSCAN to cluster this embedding. Looking at Fig. 4 and the figures of A2, it is evident that the HDBSCAN clustering results for n_neighbors = 7, n_neighbors = 8, and n_neighbors = 9 have three clusters with noise, while n_neighbors = 5 has three clusters without noise and n_neighbors = 6 has two clusters without noise.
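The sketch below shows the corresponding call to the hdbscan package with the hyperparameter values stated above, applied to the 2D UMAP embedding from the previous sketch.

```python
# A minimal sketch of the clustering step with the hyperparameter values stated above.
import hdbscan

clusterer = hdbscan.HDBSCAN(
    min_cluster_size=200,           # smallest grouping accepted as a cluster
    min_samples=10,                 # controls how many points are labelled as noise
    cluster_selection_epsilon=0.0,  # default: do not merge micro-clusters
    alpha=1.0,                      # default conservativeness
)
labels = clusterer.fit_predict(embedding)   # one cluster index per burst; -1 means noise
print("clusters found:", sorted(set(labels)))
```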
Figure 3: 2-dimensional UMAP embedding for n_neighbors = 9
Figure 2: Bimodal waiting time distribution of FRB121102.
Since clusters with the same number obtained for different n_neighbors values are not identical to each other, we introduce the following nomenclature: nn[n_neighbors value].c[cluster number]. As an example, nn5.c1 refers to cluster 1 of n_neighbors = 5. For the Noise clusters we use "N" in place of the cluster number, e.g., nn5.cN.
## 4 Results
Since this paper is focused on determining what physical mechanisms or properties underlie repeating FRBs, it is important to see how the parameters of our data behave or manifest in our clustering. This can be achieved by colouring the clustering result with the parameters. Doing so allows us to recognize trends, patterns, and properties among the clusters that can be helpful in investigating the repeating FRBs. Since we have six parameters, we have six plots, each coloured by a single parameter. In Figure 5, we have the Bandwidth colouring (GHz) 5a, the Energy colouring (erg) 5b, the Fluence colouring (Jy ms) 5c, the Peak Flux colouring (mJy) 5d, the Time Width colouring (ms) 5e, and the Waiting Time colouring (s) 5f. The other parameter colouring results that we considered in our analyses can be found in Figures A3 - A8.
From these, we can see that parameter colouring enables us to easily characterize each cluster. From Figure 5a (Bandwidth), nn9.c1, nn9.c3, and nn9.cN are narrowband and nn9.c2 is broadband. Figures 5b - 5d (Energy, Fluence, and Peak Flux) all exhibit a similar trend for their clusters: nn9.c1 and nn9.cN both exhibit low energy, low fluence, and low peak flux; the majority of nn9.c3 has high energy, high fluence, and high peak flux; and nn9.c2 has diverse energy, fluence, and peak flux. Figure 5e (Time Width) exhibits an interesting colouring among the clustering results: points with longer duration tend to be located near the centre of the clustering, and points with shorter duration tend to be located away from the centre of the embedding. Lastly, Figure 5f (Waiting Time) shows that points with long waiting times tend to group into small clusters scattered among the bigger clusters of short-waiting-time points, showing that, regardless of cluster, FRB121102 can be described as having very short waiting times. In addition, all of these clusters have a bimodal distribution for the waiting time. It is also important to realize that the significance of these properties is also supported by the histograms included in Appendix A4. Inspecting the histograms, especially the Bandwidth A9, Energy A10, Fluence A11, and Peak Flux A12 histograms, we see that the clusters are distinct from each other, showing different distributions for each cluster. One of the most notable is the set of bandwidth distributions, where all clusters show different distributions. This supports the idea that the resulting clusters are significantly different.
We can then summarize the characterization of each cluster, regardless of n_neighbors value, as shown in Table 1. It is important to keep in mind that the qualitative description of the clusters is relative to the range of values of each parameter within a given cluster. As shown in Table 1, each cluster contains a unique set of properties that remains constant throughout the change of n_neighbors value, which highlights the difference in physics among clusters. We may refer to these properties as "invariant" cluster properties. Identifying these invariant cluster properties is essential for describing the underlying physical mechanisms that we might discover based on the number of clusters we found, and it supports the idea that the resulting clusters are significant.
## 5 Discussion
The model used in this study is similar to what was employed in the work of Chen et al. 2022: UMAP was used to find the low-dimensional representation of the data and then HDBSCAN was used to cluster the data points. Several key differences can be pointed out between our work and Chen et al. 2022. First, Chen et al. 2022 used both non-repeater and repeater sources in their study, while we used a single repeating FRB, namely FRB121102. Second, one of the main goals of their work is to evaluate the assumption that non-repeating FRBs are contaminated by repeating FRBs, while this work focuses on characterizing or identifying the underlying properties of FRB121102 to further understand repeating FRBs. Lastly, the work of Chen et al. 2022 provided a new way to classify repeating and non-repeating FRBs, while this study aims to provide a classification of the repeating bursts from FRB121102. Nevertheless, this study also introduced additional analyses which helped us to qualitatively characterize FRB121102, such as parameter colouring, identification of invariant cluster properties, and the cluster membership change of the data points/FRBs that will be discussed in the following subsection.
### Cluster membership change
In Section 4, we found the invariant cluster properties (see Table 1) regardless of the n_neighbors value. However, the number of clusters did not remain the same as the n_neighbors value changed.
| Invariant Cluster Properties | Cluster 1 | Cluster 2 | Cluster 3 | Noise |
| --- | --- | --- | --- | --- |
| Bandwidth | Narrowband | Broadband | Narrowband | Narrowband |
| Peak Flux | Low | Diverse | Diverse | Diverse |
| Fluence | Diverse | Diverse | Diverse | Low |
| Energy | Low | Diverse | High | Low |
| Time width | Short | Short | Short | Short |
| Waiting time | Very short | Very short | Very short | Very short |

Table 1: Cluster properties that remain constant regardless of n_neighbors value. The qualitative description of the clusters is relative to the range of values of each parameter within a given cluster.
Figure 4: HDBSCAN Clustering result for n_neighbors = 9
Figure 5: Parameter colouring of the clustering result for n_neighbors = 9. The data which are not surrounded by lines correspond to Noise clusters.
Four clusters (including the Noise cluster) were found for n_neighbors = 7, 8, and 9; three clusters were found for n_neighbors = 5; and two clusters were found for n_neighbors = 6. This suggests that the FRB clusters of n_neighbors = 5, 6 might be shared among more than one cluster of n_neighbors = 7, 8, 9. Therefore, we investigate the change in the cluster membership of the FRBs as the n_neighbors value varies to see whether this is true. This can be done using an alluvial diagram, as shown below in Figure 6.
Looking at Figure 6, the axes (oriented vertically) represent the n_neighbors values 5, 6, 7, 8, and 9, respectively. The strata contained in each axis represent the clusters we found using HDBSCAN, namely Cluster 1, Cluster 2, Cluster 3, and Noise. The flow between axes connecting two strata represents the change in cluster membership of those data points across an n_neighbors value change. An alluvium in our diagram consists of four flows connecting strata (clusters) on different axes (clusterings). From the diagram we make the following observations: (i) the majority of the data points tend to retain their cluster membership as the n_neighbors value changes; (ii) the number of clusters increases as the n_neighbors value increases; and (iii) if we only consider "thick flows" (i.e., flows that contain the majority of the data points) to be significant, we can divide the alluvial diagram into two alluvia, one of which is the alluvium connecting the clusters nn5.c2, nn6.c1, nn7.c2, nn8.c2, and nn9.c2.
Observation (i) shows that the clustering we found is due to the data itself and not an effect of the clustering algorithm, which lends itself to a more physical interpretation and characterization of the clusters. Observation (ii), compared to (i), can also be attributed to how the clustering algorithm works; in particular, the hyperparameters n_neighbors from UMAP and min_cluster_size and min_samples from HDBSCAN affect the resulting embedding and clustering of the data points. To this effect, looking at the alluvium from nn5.c2 to nn9.c2, we can see that nn5.c2 evidently bears greater and greater significance as we increase the n_neighbors value, showing that data points once grouped into nn5.c2 are now considered a major part of nn9.c2. This observation also holds for the other alluvia with significant flows. Lastly, observation (iii) implies that the two significant clusters (nn6.c2 and nn6.c1) are really made up of four clusters (nn9.c1, nn9.c2, nn9.c3, nn9.cN), and that one of these two major clusters, nn6.c2, can be split up further into three clusters, namely nn9.c1, nn9.c3, and nn9.cN.
Since the data contain noise, it is to be expected that our unsupervised learning might produce non-physical clusters. Regarding this matter, we can use the alluvial diagram, which keeps track of the cluster-membership change of each FRB, as an additional cross-check. It enables us to look past the cluster membership assigned to each FRB by considering thick flows to be significant and to see how each cluster evolves as n_neighbors changes. This eliminates complete dependency of our final results on a particular set of hyperparameter values, showing that certain groups of FRBs/data points remain grouped together throughout the n_neighbors change. It also supports the idea that certain cluster properties are carried over from one cluster at one n_neighbors value into a cluster at another n_neighbors value. This entails that the clustering result is based on differences in physics, which is also supported by the one-dimensional histograms (see Section A4).
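A simple, non-graphical way to compute the flows behind such a diagram is a contingency table between two label assignments, as in the sketch below; labels_nn5 and labels_nn9 are assumed to be the HDBSCAN label arrays obtained for n_neighbors = 5 and 9.

```python
# An illustrative sketch (not the authors' plotting code) of tracking cluster-membership
# changes between two n_neighbors settings with a contingency table; a full alluvial
# diagram can be drawn from the same table.
import pandas as pd

membership = pd.DataFrame({"nn5": labels_nn5, "nn9": labels_nn9})
flows = pd.crosstab(membership["nn5"], membership["nn9"])  # rows: nn5 clusters, cols: nn9 clusters
print(flows)   # large entries correspond to the "thick flows" discussed above
```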
### Comparison with other results
In relation to the number of clusters we obtained from the HDBSCAN clustering, there are similar results in the literature that found the same number of clusters as our n_neighbors = 6 clustering. Using the same dataset from the Supplementary Table 1 of Li et al. 2021a, Xiao & Dai 2022 found two clusters by assigning a critical brightness temperature (\(T_{B,crit}\)) of \(10^{33}\) K. In their work, they used the brightness temperature of the FRBs as a criterion to cluster the bursts of FRB121102 because it directly relates to the radiation mechanism of FRBs. These clusters contain bursts depending on whether the bursts have a \(T_{B}\) value greater than or equal to, or less than, \(T_{B,crit}\): "Classical" bursts have \(T_{B}\geq T_{B,crit}\), while "Atypical" bursts have \(T_{B}<T_{B,crit}\). Xiao & Dai 2022 also found that the 76 "Classical" bursts have a tight width - fluence (T - \(\mathcal{F}_{\nu}\)) relation described by \(\log(\mathrm{T})=0.306\cdot\log(\mathcal{F}_{\nu})+0.399\) with a correlation coefficient of \(r=0.936\). Given that this relation does not hold for the "Atypical" bursts or for the total bursts of FRB121102, Xiao & Dai 2022 suggested that these "Atypical" bursts may be further grouped into several subtypes consisting of different radio transient types.
Marking the "Classical" and "Atypical" bursts of Xiao & Dai 2022 in our UMAP results and checking their cluster membership based on the clusters we identified, we obtain Figure 7. In the figure, the clustering based on HDBSCAN is represented by a convex-hull boundary and the number of "Classical" bursts within a cluster is indicated within the parentheses. Since there are 76 "Classical" bursts and not all of them are members of clusters 1, 2, and 3, the remaining "Classical" bursts are members of the Noise cluster. Tracking the change of "Classical" burst membership using Figure 6, we found that the majority (\(\geq 75\%=57\)) of "Classical" bursts follow the alluvium connecting nn5.c2 and nn9.c2, suggesting that nn6.c1 corresponds to the "Classical" bursts while nn6.c2 corresponds to the "Atypical" bursts.
However, compared to the work of Xiao & Dai 2022, which only used a single parameter to group the FRBs of FRB121102, this study used seven of the parameters from the Supplementary Table 1 of Li et al. 2021a to cluster the FRBs, giving a more robust clustering result. Nevertheless, the agreement between the clustering results implies that there is an existing structure in the bursts of FRB121102 regardless of the clustering method that is used.
The work of Chaikova et al. 2022 also agrees with the findings of Xiao & Dai 2022 and of this study in terms of the number of clusters. Using a version of the CHIME/FRB catalog data (Amiri et al. 2018) containing 536 events (repeaters and non-repeaters), Chaikova et al. 2022 also found two clusters with significant differences in their morphology using the frbmclust software (Chaikova & Kostunin 2022). The first cluster is described as having broad widths, low flux, several peaks per event (13.4% of events have >1 peak), mean boxcar width = 24.79 ms, median flux = 0.56 Jy, and 28 repeaters. The second cluster is described as having narrow widths, high flux, single peaks per event (6.3% of events have >1 peak), mean boxcar width = 4.12 ms, median flux = 1.08 Jy, and 33 repeaters. From these descriptions, Chaikova et al. 2022 concluded that their second cluster corresponds to the "Classical" bursts of Xiao & Dai 2022, while their first cluster resembles the findings of Xiao & Dai 2022 for the broader population of sources. This result of Chaikova et al. 2022 suggests that the two-way clustering of FRB121102 by Xiao & Dai 2022 also applies to the 536 events (repeaters and non-repeaters) they studied.
In addition, Li et al. 2021a suggested that the bimodality of the energy distribution points to more than one emission mechanism, emission site, or beam shape. Xiao & Dai 2022 pointed out that the bimodal burst energy distribution found by Li et al. 2021a already hints (if not indicates) that there are two subtypes of FRBs, and the subsequent work of Chaikova et al. 2022 supports this result, as does ours. Thus, the number of significant clusters we found for n_neighbors\({}=6\) corresponds to the clusters found by Xiao & Dai 2022, Chaikova et al. 2022, and Li et al. 2021a.
Since we have established that the clusters for n_neighbors\({}=6\) are consistent with the results of Xiao & Dai 2022, Chaikova et al. 2022, and Li et al. 2021a, and that nn6.c2, based on Figure 6, is really composed of three clusters, it follows that what we identify as nn9.c1, nn9.c3, and nn9.cN must correspond to the "Atypical" bursts described by Xiao & Dai 2022. This shows that the "Atypical" bursts can be further split into three clusters with distinct properties (see Table 1). We can therefore describe the properties of these "Atypical" bursts through the properties of nn9.c1, nn9.c3, and nn9.cN. However, since the primary focus of our work is the classification of repeating FRBs, discussion of the physical mechanisms of each cluster is left for future theoretical works.
Figure 6: Cluster membership change of each FRB as n_neighbors is varied.
Figure 7: “Classical” and “Atypical” bursts in UMAP results for (7a) n_neighbors = 5, (7b) n_neighbors = 6, (7c) n_neighbors = 7, (7d) n_neighbors = 8, and (7e) n_neighbors = 9.
### Clustering performance
To evaluate the agreement or similarity of the clustering results presented in this paper, we employ a clustering performance metric, namely the Rand index (Hubert and Arabie, 1985) and its corrected-for-chance version, the Adjusted Rand index (Steinley, 2004). This metric and its adjusted form give a Rand score and an Adjusted Rand score for each pair of clustering results we compare, where a high score indicates that the two clustering results are in very good agreement. In addition, we also consider the Rand and Adjusted Rand scores for the case where Cluster 3 and the Noise of each clustering result are merged. The reason for this is rooted in the results presented in Figures A9 - A14, where the existence of Cluster 3 is not a "firm detection" but an "indicator" of another cluster, one that is intermediate between Cluster 1 and Cluster 2.
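Both scores can be computed directly from two label assignments, for example with the scikit-learn implementations; the labels in the sketch below are hypothetical placeholders, and the merged case is obtained by relabelling Cluster 3 as Noise before scoring.

```python
import numpy as np
from sklearn.metrics import rand_score, adjusted_rand_score

# Hypothetical labels for the same bursts from two clustering runs
# (-1 denotes the Noise cluster); placeholder values only.
labels_nn8 = np.array([1, 1, 2, 2, 3, -1, 1, 2, -1, 3])
labels_nn9 = np.array([1, 1, 2, 2, 3, -1, 1, 2, 3, -1])

print(rand_score(labels_nn8, labels_nn9),
      adjusted_rand_score(labels_nn8, labels_nn9))

# Variant with Cluster 3 merged into Noise before scoring.
merged_nn8 = np.where(labels_nn8 == 3, -1, labels_nn8)
merged_nn9 = np.where(labels_nn9 == 3, -1, labels_nn9)
print(rand_score(merged_nn8, merged_nn9),
      adjusted_rand_score(merged_nn8, merged_nn9))
```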
In Figure 8, we present the Rand and Adjusted Rand scores as a scatter plot of their values. Each point is annotated with the pair of n_neighbors values for which the scores are calculated; thus, all data points are pairs of clustering results with their corresponding Rand and Adjusted Rand scores as coordinates. Figure 8a shows the calculated scores for each pair of clustering results where Cluster 3 and Noise are not merged, while Figure 8b shows the scores for each pair where Cluster 3 and Noise are merged. In both scatter plots, the clustering results for n_neighbors = 5, 7, 8, and 9 exhibit Rand scores greater than 0.80 and Adjusted Rand scores of at least 0.60, indicating that these clustering results are in very good agreement with one another, with the exception of n_neighbors = 6. Comparing the scores for the two cases, one where Noise is not merged with Cluster 3 (Figure 8a) and one where it is merged (Figure 8b), we see no significant difference or improvement in the scores of the clustering-result pairs after merging Noise and Cluster 3. This implies that there is no significant merit in merging Cluster 3 and Noise. Lastly, both panels show that either of the clustering results for n_neighbors = 8 and 9 is a good representative of the clustering of our dataset. In this paper, we adopt the case with Noise separate from Cluster 3.
## 6 Conclusions
With the above underpinnings, this paper concludes the following:
* Using parameter colouring, we have identified the invariant properties of each cluster regardless of the n_neighbors value. Describing the FRB subtypes without any dependence on the chosen value of n_neighbors permits comparison with other works that aim at classifying FRBs. The invariant cluster properties also aid in identifying possible physical mechanisms corresponding to the characterization of each cluster, which can be discussed further in future theoretical works.
* Investigating and plotting the change in cluster membership of the FRBs proved useful in pointing out connections between clusters at different n_neighbors values. This analysis improved our understanding of the underlying structure of the FRBs of FRB121102 by showing that certain clusters may have a complex composition and consist of smaller distinct clusters. In particular, we found that the "Atypical" cluster of FRB121102 can be further split into three smaller clusters.
* Compared to existing results in the literature, our clustering does not rely on a single parameter to group the FRBs. Using the pertinent physical parameters, if not all of them, the model we used produces a more robust classification of the repeating FRBs from FRB121102. Nevertheless, the degree of agreement with other results (e.g., recovering the FRB classification used by Xiao and Dai, 2022) demonstrates both consistency and a grounding in the physical parameters of the clusters.
Figure 8: Adjusted Rand score and Rand score scatter plot of 8a) Clustering results with Cluster 3 separated from Noise and 8b) Clustering results with Cluster 3 merged with Noise. Each point is annotated with the pair of n_neighbors values that the scores are calculated. Thus, all the data points are pairs of clustering results with their corresponding Rand and Adjusted Rand scores as their coordinates.
## Acknowledgments
We thank the anonymous referee for many insightful comments, which improved the paper significantly. TG acknowledges the support of the National Science and Technology Council of Taiwan through grants 108-2628-M-007-004-MY3, 111-2112-M-007-021, and 111-2123-M-001-008-. TH acknowledges the support of the National Science and Technology Council of Taiwan through grants 110-2112-M-005-013-MY3, 110-2112-M-007-034-, and 111-2123-M-001-008-. The authors would also like to extend their utmost gratitude to Professor Wang Pei and his colleagues for sharing and making the data openly available in Science Data Bank. The authors would also like to thank Dr. Shotaro Yamasaki for his valuable insights and suggestions.
## Data Availability
The data underlying this article are available in the work of Li et al. (2021a, dataset). The dataset was derived from Science Data Bank, at [http://doi.org/10.11922/sciencedb.01092](http://doi.org/10.11922/sciencedb.01092) (DOI: 10.11922/sciencedb.01092).
|
2303.01637
|
Multimass modelling of Milky Way globular clusters -- I. Implications on
their stellar initial mass function above 1 M$_{\odot}$
|
The distribution of stars and stellar remnants (white dwarfs, neutron stars,
black holes) within globular clusters holds clues about their formation and
long-term evolution, with important implications for their initial mass
function (IMF) and the formation of black hole mergers. In this work, we
present best-fitting multimass models for 37 Milky Way globular clusters, which
were inferred from various datasets, including proper motions from Gaia EDR3
and HST, line-of-sight velocities from ground-based spectroscopy and deep
stellar mass functions from HST. We use metallicity dependent stellar evolution
recipes to obtain present-day mass functions of stars and remnants from the
IMF. By dynamically probing the present-day mass function of all objects in a
cluster, including the mass distribution of remnants, these models allow us to
explore in detail the stellar (initial) mass functions of a large sample of
Milky Way GCs. We show that, while the low-mass mass function slopes are
strongly dependent on the dynamical age of the clusters, the high-mass slope
($\alpha_3; m > 1 M_\odot$) is not, indicating that the mass function in this
regime has generally been less affected by dynamical mass loss. Examination of
this high-mass mass function slope suggests an IMF in this mass regime
consistent with a Salpeter IMF is required to reproduce the observations. This
high-mass IMF is incompatible with a top-heavy IMF, as has been proposed
recently. Finally, based on multimass model fits to our sample of Milky Way
GCs, no significant correlation is found between the high-mass IMF slope and
cluster metallicity.
|
Nolan Dickson, Vincent Hénault-Brunet, Holger Baumgardt, Mark Gieles, Peter Smith
|
2023-03-03T00:03:35Z
|
http://arxiv.org/abs/2303.01637v2
|
Multimass modelling of Milky Way globular clusters - I. Implications on their stellar initial mass function above 1 M\({}_{\odot}\)
###### Abstract
The distribution of stars and stellar remnants (white dwarfs, neutron stars, black holes) within globular clusters holds clues about their formation and long-term evolution, with important implications for their initial mass function (IMF) and the formation of black hole mergers. In this work, we present best-fitting multimass models for 37 Milky Way globular clusters, which were inferred from various datasets, including proper motions from Gaia EDR3 and HST, line-of-sight velocities from ground-based spectroscopy and deep stellar mass functions from HST. We use metallicity dependent stellar evolution recipes to obtain present-day mass functions of stars and remnants from the IMF. By dynamically probing the present-day mass function of all objects in a cluster, including the mass distribution of remnants, these models allow us to explore in detail the stellar (initial) mass functions of a large sample of Milky Way GCs. We show that, while the low-mass mass function slopes are strongly dependent on the dynamical age of the clusters, the high-mass slope (\(\alpha_{3};m>1\,\mathrm{M}_{\odot}\)) is not, indicating that the mass function in this regime has generally been less affected by dynamical mass loss. Examination of this high-mass mass function slope suggests an IMF in this mass regime consistent with a Salpeter IMF is required to reproduce the observations. This high-mass IMF is incompatible with a top-heavy IMF, as has been proposed recently. Finally, based on multimass model fits to our sample of Milky Way GCs, no significant correlation is found between the high-mass IMF slope and cluster metallicity.
keywords: galaxies: star clusters - globular clusters: general - stars: kinematics and dynamics - stars: luminosity function, mass function - stars: black holes
## 1 Introduction
The stellar initial mass function (IMF) plays a key role in the evolutionary history and properties of populations of stars, and understanding it is vital to understanding and interpreting both observations and simulations of star clusters and galaxies.
Globular clusters (GCs) consist of very large numbers of stars of similar iron abundance and age, providing us with one of the best avenues for investigating the shape of the stellar IMF, and how it may vary with environment. The IMF plays a particularly important role in the evolution of GCs, where it controls the populations of stellar remnants, the degree and timescale of mass segregation, the lifetime of the clusters before dissolution, and the contribution of GCs to observed gravitational waves (e.g. Haghi et al.2020; Weatherford et al.2021; Wang et al.2021).
The universality of the stellar IMF is a debated topic. While the typically assumed (canonical) formulations of the IMF, determined empirically through observations of solar-neighbourhood and Milky Way cluster stars (e.g. Salpeter1955; Kroupa2001; Chabrier2003), seem to demonstrate that it is universal among star-forming systems, the exact shape and universality of the IMF is still under investigation (see review by Bastian et al.2010). For example, observations of the cores of early-type galaxies, (both spectroscopic; van Dokkum and Conroy2010 and kinematic; Cappellari et al.2012) have pointed towards a "bottom-heavy" IMF, enriched with low-mass stars, in those environments (although see also Smith2014, 2020). Meanwhile, recent theoretical studies of star and cluster formation have indicated that certain environment-dependent processes, such as radiative feedback or cooling from dust-grains, could imply a varying IMF (Krumholz et al.2011; Chon et al.2021).
In particular for GCs, some recent observational works have also showcased trends which could be explained by a varying IMF. These observations, however, have also been shown to be explainable without the need to invoke such a non-canonical IMF. Strader
et al. (2011) demonstrated that dynamical mass measurements of 200 globular clusters in M 31 showed a decreasing trend in the dynamical mass-to-light ratio with increasing cluster metallicity. This result is opposite to what standard stellar population models would predict while assuming a canonical IMF. Haghi et al. (2017) showed that these results could be explained by introducing a non-canonical, metallicity-dependent IMF, with an increasing level of top-heaviness for low metallicity clusters (Marks et al., 2012). However Baumgardt et al. (2020), in a study of Milky Way GCs, also noted that such a discrepancy in the mass-to-light ratios compared to population synthesis models could be accounted for once the low-mass depleted present-day mass function (PDMF) of the metal-rich clusters was taken into consideration. Metallicity-dependent stellar evolution models were also able to account for the difference in the metal-poor clusters. Shanahan & Gieles (2015) also demonstrated that not accounting for mass segregation in integrated-light studies of M 31 clusters introduces a bias in the inferred dynamical mass, dependent on metallicity (see also Sippel et al., 2012), and argued that there is no need for variations in the IMF to explain the Strader et al. results. Because the majority of stars form in GCs at low metallicities (Larsen et al., 2012), a substantially flatter IMF at high masses in GCs would have important consequences for the amount of ionization radiation at high redshift (Schaerer & Charbonnel, 2011; Boylan-Kolchin, 2018); the chemical evolution of galaxies; the amount of stellar-mass black holes (BHs) formed and subsequent binary BH mergers (Schneider et al., 2018). A flat IMF also predicts that there should be more white dwarfs (WDs) at the present age. WDs contribute \(\sim 30\%\) to the total mass at \(\sim 10\,\)Gyr for a canonical IMF, before accounting for the preferential loss of low-mass objects in GCs. The fractional contribution to the central density is higher because of mass segregation, so it is feasible to look for an excess of WDs in the kinematics of GCs.
In this work, the hypothesis of metallicity-dependent, variable and non-canonical stellar IMFs in globular clusters is investigated, in particular in the high-mass regime, where stars in old globular clusters have, by the present day, evolved into stellar remnants. To do so, we fit multimass dynamical models to various observables for a large sample of Milky Way clusters, over a range of metallicities. We infer their global stellar mass functions and simultaneously constrain their distributions of stellar remnants. The multimass limepy models and mass function evolution algorithm used are explained in more detail in Section 2. Section 3 describes the methods and sources used to obtain all observational data used to fit the models, as well as how the cluster sample was chosen. The model fitting procedure, including descriptions of all probability distributions and Bayesian sampling techniques, as well as the software library and fitting pipeline which was created to facilitate this fitting, is presented in Section 4. The results of the fitting of all clusters in our sample based on these methods are given in Section 5. The (initial) mass function results for all clusters are presented and explored in more detail in Section 6. Finally, we conclude in Section 7.
The inferred present-day populations of stellar-mass BHs in our sample of globular clusters based on our best-fitting models will be examined in detail in a separate paper (Dickson et al., in prep.; hereafter Paper II).
## 2 Models
To model the mass distribution of the globular clusters analyzed in this work, we use the limepy multimass distribution-function (DF) based models (Gieles & Zocchi, 2015)1. DF based models are equilibrium models built around a distribution function \(f\) which describes the particle density of stars and satisfies the collisionless-Boltzmann equation. This DF is used to self-consistently solve for the system's potential (\(\phi(r)\)) using Poisson's equation.
Footnote 1: Available at [https://github.com/mgieles/limepy](https://github.com/mgieles/limepy)
A variety of quantities can be derived from the DF which can be used to describe a globular cluster, including the projected velocity dispersion (the second velocity moment), the projected surface density, the total mass, the potential energy and the system entropy (e.g. Spitzer, 1987; Gieles & Zocchi, 2015). Observational data can be used to compare and constrain the models based on these quantities.
Multimass models allow for a more accurate description of real globular clusters, which are made up of a spectrum of stellar masses. Multiple mass components are necessary in order to describe the distributions of different stellar and remnant populations within the system and, in turn, examine both the process and effects of mass segregation (e.g. Da Costa & Freeman, 1976).
The DF of the multimass version of the limepy models is given by the sum of component DFs for every mass bin \(j\), each as a function of the specific energy \(E\) and angular momentum \(J\) in the form of:
\[f_{j}(E,J^{2})=\begin{cases}A_{j}\,\exp\left(-\dfrac{J^{2}}{2r_{\mathrm{a},j}^{2}s_{j}^{2}}\right)\,E_{g}\left(-\dfrac{E-\phi(r_{\mathrm{t}})}{s_{j}^{2}}\right)&E<\phi(r_{\mathrm{t}}),\\ 0&E\geq\phi(r_{\mathrm{t}}),\end{cases} \tag{1}\]
where \(A_{j}\) and \(s_{j}\) are the mass-dependent normalization and velocity scales, \(r_{\mathrm{a},j}\) and \(r_{\mathrm{t}}\) are the anisotropy and truncation radii, and the function \(E_{g}\) is defined using the regularized lower incomplete gamma function and the truncation parameter \(g\):
\[E_{g}(x)=\begin{cases}\exp(x)&g=0,\\ \exp(x)\dfrac{\gamma(g,x)}{\Gamma(g)}&g>0,\end{cases} \tag{2}\]
These parameters and how they are used are explained in more detail in Section 2.1 below.
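As a minimal numerical sketch (not part of the limepy code itself), the truncation function of eq. (2) can be evaluated with the regularized lower incomplete gamma function available in scipy:

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(g, x)

def E_g(x, g):
    """Truncated exponential of eq. (2): exp(x) for g = 0,
    exp(x) * gamma(g, x) / Gamma(g) for g > 0 (valid for x >= 0)."""
    x = np.asarray(x, dtype=float)
    if g == 0:
        return np.exp(x)
    return np.exp(x) * gammainc(g, x)

# Smaller g gives a sharper cut-off of the DF near the truncation energy (x = 0).
x = np.linspace(0.0, 5.0, 6)
for g in (0.0, 1.0, 2.5):
    print(g, np.round(E_g(x, g), 4))
```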
### Model parameters
Our models are defined by 10 free parameters (listed in Table 1) which dictate the mass function and physical solution of the limepy DF.
The overall structure of these models is controlled by the (dimensionless) central potential parameter \(\dot{\phi}_{0}\), which is used as a boundary condition for solving Poisson's equation and defines how centrally concentrated the model is. The cluster model is spherical out to the truncation radius of the system, where its energy is reduced, mimicking the effects of the host galaxy's tides, which reduce the escape velocity of stars, making it easier for them to escape. The sharpness of this energy truncation is defined by the truncation parameter \(g\). Lower \(g\) values result in a more abrupt energy truncation, increasing up to models with the maximum possible finite extent at \(g=3.5\), while finite models with realistic values of \(\dot{\phi}_{0}\) are typically limited to \(g\lesssim 2.5\)(Gieles & Zocchi, 2015).
The mass and size scales of the model can be expressed in any desired physical units by adopting corresponding values for the normalization constant \(A\) and the global velocity scale \(s\). We opt
to scale the models to match observations using the parameters for total cluster mass \(M\) and 3D half-mass radius \(r_{\rm h}\) as mass and size scales, which are used internally to compute the \(A\) and \(s\) scales.
limepy models allow for velocity anisotropy through an angular momentum term in the DF. With this term, the system is isotropic in the core, gains a degree of radial velocity anisotropy near the anisotropy radius \(r_{\rm a}\), and then becomes isotropic once more near the truncation radius. This parametrization mimics how GCs naturally develop radially-biased velocity anisotropy throughout their evolution as a result of two-body relaxation and tides (Zocchi et al., 2016; Tiongco et al., 2016). The two-body relaxation process drives the core of clusters to isotropy; however, scattering (on preferentially radial orbits) of stars outside the core acts to increase the radial component of the velocity dispersion. Finally, a combination of the tidal torque from the host galaxy, which induces a transfer of angular momentum near the Jacobi radius to stellar orbits in the tangential direction (Oh & Lin, 1992), and the preferential loss of stars on radial orbits (Tiongco et al., 2016), act to increase the tangentiality of the outer stars, damping the amount of radial anisotropy and leading to a return to isotropy near the immediate edge of the system. The anisotropy radius \(r_{\rm a}\) dictates the amount of radial velocity anisotropy present in the models. The smaller the value of \(r_{\rm a}\), the more anisotropic the system. In the limit \(r_{\rm a}\to\infty\), the models become entirely isotropic. In practice, models with \(r_{\rm a}\) greater than the cluster truncation radius can be considered isotropic.
The exact meaning of both the \(\dot{\phi}_{0}\) and \(\dot{r}_{\rm a}\) parameters depends on the definition of the mean mass (Peuten et al., 2017). In this work we adopt the global mean mass, that is, the mean mass of all stars in the entire cluster.
The multimass version of the limepy DF is defined by the sum of similar component DFs for each mass bin \(j\), with mass-dependent velocity (\(s_{j}\)) and anisotropy radius (\(r_{\rm a,j}\)) scales. The mass-dependent velocity scaling captures the trend towards kinetic energy equipartition among stars of different masses and models the effects of mass segregation (Gieles & Zocchi, 2015; Peuten et al., 2017; Henault-Brunet et al., 2019). The velocity scale is defined based on the parameter \(\delta\), such that \(s_{j}\propto sm_{j}^{-\delta}\), where \(s\) is defined as above. The mass-dependent anisotropy radius is defined in a similar fashion, using a parameter \(\eta\) (\(r_{\rm a,j}\propto r_{\rm a}m_{j}^{\eta}\)). For the analysis presented in this paper we have chosen to fix \(\eta\) to 0, defining the anisotropy to be identical among all mass bins, the default assumption in multimass DF-based models. Our observations do not contain the information that would allow us to constrain the mass-dependence of the velocity anisotropy (e.g. Peuten et al., 2017), and thus the \(\eta\) parameter.
Finally, the constituent discrete mass components which approximate the mass spectrum of a GC are represented in the multimass limepy models by the total (\(M_{j}\)) and mean (\(m_{j}\)) masses of each mass bin. These must be defined _a priori_ by external methods, based on the mass function (\(\alpha_{1},\alpha_{2},\alpha_{3}\)) and BH retention percentage (\(\rm BH_{\rm ret}\)) parameters. The algorithm, which takes into account stellar evolution to predict the mean and total mass in stellar remnant bins, is described in detail in Section 2.2 below.
External to the limepy models themselves, we also employ a few extra parameters to aid in the fitting of the models to observations. These parameters are explained in more detail in Section 4.1.
### Mass function evolution
DF-based models, such as limepy, compute the distribution of mass and velocity in a system in equilibrium. They are instantaneous "snapshot" models, and do not directly simulate any temporal astrophysical processes during their computation, including stellar evolution. As such, in order to determine the realistic mass populations for which the model will determine the phase-space distribution, we must incorporate a separate prescription for stellar evolution from an initial mass function, over the age of the cluster, to the present-day stellar and remnant mass functions.
In keeping with the formulation of canonical IMFs (e.g. Kroupa
| Parameter | Description | Prior |
| --- | --- | --- |
| \(\dot{\phi}_{0}\) | Dimensionless central potential | Uniform (\(L=2.0,\ U=15.0\)) |
| \(M\) | Total system mass \([10^{6}\,\mathrm{M}_{\odot}]\) | Uniform (\(L=0.001,\ U=2.0\)) |
| \(g\) | Truncation parameter | Uniform (\(L=0.0,\ U=3.5\))\({}^{*}\) |
| \(r_{\rm h}\) | Half-mass radius [pc] | Uniform (\(L=0.5,\ U=15.0\)) |
| \(\log(\dot{r}_{\rm a})\) | Dimensionless anisotropy radius | Uniform (\(L=0.0,\ U=8.0\)) |
| \(\delta\) | Velocity-scale mass dependence | Uniform (\(L=0.3,\ U=0.5\))\({}^{*}\) |
| \(\alpha_{1}\) | MF exponent (\(0.1\,\mathrm{M}_{\odot}<m\leq 0.5\,\mathrm{M}_{\odot}\)) | Uniform (\(L=-1.0,\ U=2.35\))\({}^{*}\) |
| \(\alpha_{2}\) | MF exponent (\(0.5\,\mathrm{M}_{\odot}<m\leq 1\,\mathrm{M}_{\odot}\)) | Uniform (\(L=-1.0,\ U=\min(2.35,\ \alpha_{1})\))\({}^{*}\) |
| \(\alpha_{3}\) | MF exponent (\(1\,\mathrm{M}_{\odot}<m\leq 100\,\mathrm{M}_{\odot}\)) | Uniform (\(L=1.6,\ U=\min(4.0,\ \alpha_{2})\))\({}^{*}\) |
| \(\mathrm{BH}_{\rm ret}\) | Black hole retention fraction [%] | Uniform (\(L=0.0,\ U=30.0\)) |
| \(F\) | Mass function nuisance parameter | Uniform (\(L=1.0,\ U=7.5\)) |
| \(s^{2}\) | Number density nuisance parameter \([\mathrm{arcmin}^{-4}]\) | Uniform (\(L=0.0,\ U=15.0\)) |
| \(d\) | Heliocentric distance [kpc] | Gaussian (\(\mu=d_{\rm lit},\ \sigma=\delta d_{\rm lit}\)) |

Table 1: List of all free parameters, their descriptions and the prior probability distributions used to bound their values. The first six are structural limepy parameters (Section 2.1), while the next four define the mass function (Section 2.2). The final three parameters aid in comparing models to observations (Section 4.1). The prior distributions shown here, when not motivated by physical or model constraints (marked here by an asterisk; see Section 4.1.2), are chosen to bound a large enough area of parameter space containing all valid parameter values. The bounds here represent the largest extents used. In reality, the bounds may be reduced slightly during the sampling of certain clusters in order to improve the performance of the sampler, while still including a large area surrounding the most likely parameter values. The literature values and uncertainties used in the prior on the distance are taken from Baumgardt & Vasiliev (2021).
2001), we use a 3-component broken power law:
\[\xi(m)\propto\begin{cases}m^{-\alpha_{1}}&0.1\ \mathrm{M}_{\odot}<m\leq 0.5\ \mathrm{M}_{\odot},\\ m^{-\alpha_{2}}&0.5\ \mathrm{M}_{\odot}<m\leq 1\ \mathrm{M}_{\odot},\\ m^{-\alpha_{3}}&1\ \mathrm{M}_{\odot}<m\leq 100\ \mathrm{M}_{\odot},\end{cases} \tag{3}\]
where the \(\alpha_{i}\) parameters define the power-law 'slope' of each component, and are allowed to vary freely during model fitting, and \(\xi(m)\,\mathrm{d}m\) is the number of stars with masses within the interval \([m,\ m+\mathrm{d}m]\). It should be noted here that our exact choices of break masses (0.1, 0.5, 1, 100 \(\mathrm{M}_{\odot}\)) are different from those of Kroupa (2001), to allow for a more specific study of the high-mass (\(m>1\ \mathrm{M}_{\odot}\)) regime.
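A minimal sketch of eq. (3) is given below; the continuity constants at the break masses and the example exponent values (Kroupa-like low-mass slopes with a Salpeter high-mass slope) are illustrative choices, not the fitted values.

```python
import numpy as np

def imf(m, a1, a2, a3, breaks=(0.1, 0.5, 1.0, 100.0)):
    """Un-normalized three-component broken power law of eq. (3),
    with the segments joined continuously at the break masses."""
    m = np.asarray(m, dtype=float)
    b0, b1, b2, b3 = breaks
    c1 = 1.0
    c2 = c1 * b1 ** (a2 - a1)    # continuity at 0.5 Msun
    c3 = c2 * b2 ** (a3 - a2)    # continuity at 1 Msun
    out = np.where(m <= b1, c1 * m ** (-a1),
          np.where(m <= b2, c2 * m ** (-a2), c3 * m ** (-a3)))
    return np.where((m < b0) | (m > b3), 0.0, out)

# Example: Kroupa-like low/intermediate slopes with a Salpeter high-mass slope.
print(imf(np.array([0.2, 0.6, 2.0, 20.0]), a1=1.3, a2=2.3, a3=2.35))
```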
To evolve the population of stars to the present day we follow the algorithm first described by Balbinot & Gieles (2018) and expanded upon in the ssptools2 library. This method is summarized below.
Footnote 2: Available at [https://github.com/SMU-clusters/ssptools](https://github.com/SMU-clusters/ssptools)
To begin, the rate of change of the number of main-sequence stars over time is given by the equation:
\[\dot{N}(m_{\mathrm{to}})=-\left.\frac{\mathrm{d}N}{\mathrm{d}m}\right|_{m_{ \mathrm{to}}}\left|\frac{\mathrm{d}m_{\mathrm{to}}}{\mathrm{d}t}\right|, \tag{4}\]
where the amount of initial stars per unit mass (\(\mathrm{d}N/\mathrm{d}m\)) at the turn-off mass (\(m_{\mathrm{to}}\)) is given by the IMF, and the rate of change of the turn-off mass can be derived by approximating the lifetime of main-sequence stars as a function of initial mass:
\[t_{\mathrm{ms}}=a_{0}\exp(a_{1}m^{a_{2}}), \tag{5}\]
where the \(a_{i}\) coefficients are interpolated from the Dartmouth Stellar Evolution Program models (Dotter et al., 2007, 2008). This equation can then be inverted and differentiated to find the rate of change:
\[\frac{\mathrm{d}m_{\mathrm{to}}}{\mathrm{d}t}=\frac{1}{a_{1}a_{2}}\frac{1}{t }\left(\frac{\log(t/a_{0})}{a_{1}}\right)^{1/a_{2}-1}. \tag{6}\]
This set of equations dictates the amount of stars which evolve off the main sequence by the present-day cluster age \(t\).
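A minimal sketch of eqs. (5)-(6) is shown below; the \(a_{i}\) coefficients used here are placeholder values chosen only to give a roughly solar-mass turn-off at an old age, not the metallicity-dependent DSEP values used in the models.

```python
import numpy as np

def m_turnoff(t, a0, a1, a2):
    """Main-sequence turn-off mass obtained by inverting eq. (5)."""
    return (np.log(t / a0) / a1) ** (1.0 / a2)

def dm_turnoff_dt(t, a0, a1, a2):
    """Rate of change of the turn-off mass, eq. (6)."""
    return (1.0 / (a1 * a2 * t)) * (np.log(t / a0) / a1) ** (1.0 / a2 - 1.0)

# Placeholder coefficients (t in Myr, m in Msun).
a0, a1, a2 = 10.0, 7.0, -0.5
for t in (1e3, 5e3, 12e3):
    print(t, m_turnoff(t, a0, a1, a2), dm_turnoff_dt(t, a0, a1, a2))
```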
In this work we use 30 discrete stellar mass bins, each logarithmically-spaced within the bounds of the three components defined by the IMF (5 bins in the low and intermediate-mass regime and 20 bins in the high-mass regime) and 30 identically spaced remnant mass bins, which are filled by the remnants resulting from stars evolving off the main-sequence. As these stars evolve, the stellar remnants they will form (both in type and in mass), and thus the final remnant mass bins they will populate, depends on their initial mass and metallicity, and a functional initial-final mass relation (IFMR).
The WD IFMR is computed as a 10th order polynomial:
\[m_{\mathrm{WD}}=\sum_{j=0}^{10}b_{j}m_{i}^{j} \tag{7}\]
where \(m_{i}\) is the initial mass of the star, \(m_{\mathrm{WD}}\) is final mass of the formed remnant, and the coefficients \(b_{j}\) and the maximum initial mass which will form a WD are interpolated, based on metallicity, from the MIST 2018 isochrones (Dotter, 2016; Choi et al., 2016).
The BH IFMR, as well as the minimum initial mass required to form a BH, is interpolated directly from a grid of stellar evolution library (SSE) models (Banerjee et al., 2020), using the rapid supernova scheme (Fryer et al., 2012), and is also dependent on metallicity. These relations are shown in Figure 1. All stars with initial masses between the WD and BH precursor masses are assumed to form neutron stars (NS). For simplicity, their final mass is always assumed to be \(1.4\mathrm{M}_{\odot}\), regardless of the initial mass.
The amount and final mass of these remnants (as dictated by Equation 4) must then be scaled downwards by an "initial retention fraction" \(f_{\mathrm{ret}}\), in order to mimic the loss of newly formed remnants due to natal kicks. For WDs we assume this is always equal to 100%. In this analysis, we assume a NS retention fraction of 10%, as is common (e.g. Pfahl et al., 2002), however, as shown in Henault-Brunet et al. (2020), our results are insensitive to this exact value.
The mass function evolution algorithm includes two more specific prescriptions for the loss of BHs, accounting for dynamical ejections in addition to natal kicks.
Firstly the ejection of, primarily low-mass, BHs through natal kicks is simulated. We begin by assuming that the kick velocity is drawn from a Maxwellian distribution with a dispersion of \(265\ \mathrm{km}\,\mathrm{s}^{-1}\), as has been found for neutron stars (Hobbs et al., 2005). This velocity is then scaled down linearly by the "fallback fraction" \(f_{b}\), the fraction of the precursor stellar envelope which falls back onto the BH after the initial supernova explosion. This fraction is interpolated from the same grid of SSE models used for the BH IFMR. The fraction of BHs retained in each mass bin is then found by integrating the Maxwellian kick velocity distribution from 0 to the system escape velocity. The initial system escape velocity of each cluster was estimated by assuming that about half of the initial cluster mass was lost through stellar evolution, while adiabatically expanding the cluster to a present-day half-mass radius a factor of two larger than the initial value, resulting in an initial escape velocity twice as large as the present-day value. A set of preliminary models were computed for all clusters, and the initial escape velocity was computed based on the best-fitting central density as \(v_{\mathrm{esc}}=2\sqrt{-2\phi_{0}}\), where \(\phi_{0}\) is the central potential. It should be noted that clusters with an escape velocity \(\gtrsim 100\ \mathrm{km}\,\mathrm{s}^{-1}\) will retain nearly all BHs (Antonini, Gieles & Gualandris, 2019).
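A minimal sketch of this retention calculation is given below; it takes 265 km/s as the Maxwellian scale parameter and assumes a \((1-f_{b})\) damping of the kick velocity scale with fallback, which is our reading of the fallback prescription rather than the exact ssptools implementation.

```python
import numpy as np
from scipy.stats import maxwell

def bh_retention(v_esc, fb, sigma=265.0):
    """Fraction of newly formed BHs retained after natal kicks: the Maxwellian
    kick distribution, damped by the fallback fraction fb (assumed (1 - fb)
    scaling of the velocity scale), integrated from 0 to the escape velocity
    (all velocities in km/s)."""
    scale = sigma * (1.0 - fb)
    if scale <= 0.0:           # complete fallback: no kick, everything retained
        return 1.0
    return maxwell.cdf(v_esc, scale=scale)

for v_esc in (20.0, 50.0, 100.0):
    print(v_esc, [round(bh_retention(v_esc, fb), 3) for fb in (0.0, 0.5, 0.9)])
```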
Black holes are also ejected over time from the core of GCs due to dynamical interactions with one another (e.g. Breen & Heggie, 2013, 2013). This process is simulated through the removal of BHs, beginning with the heaviest mass bins (with larger gravitational
Figure 1: Adopted metallicity-dependent initial-final mass relations for white dwarf (top panel) and black hole (bottom panel) formation. Lower metallicities generally result in higher final remnant masses.
interaction cross-sections) through to the lightest (Morscher et al., 2015; Antonini and Gieles, 2020), until the combination of mass in BHs lost through both the natal kicks and these dynamical ejections leads to a retained mass in BHs corresponding to the percentage of the initial mass in BHs specified by the BH mass retention fraction parameter (BH\({}_{\rm ret}\)).
The final avenue for cluster mass loss is through the escape of stars and remnants driven by two-body relaxation and lost to the host galaxy. Such losses, in a mass segregated cluster, are dominated by the escape of low-mass objects from the outer regions of the cluster. Determining the overall losses through this process is a complicated task, dependent on the dynamical history and orbital evolution of the cluster, which we do not attempt to model here. We thus opt to ignore this preferential loss of low-mass stars and do not further model the escape of any stars, apart from through the processes described above. This means that the low-mass \(\alpha\) exponents determined here may, in most cases, describe most accurately the PDMF rather than the low-mass IMF of our clusters. This is discussed in more detail in Section 6.
## 3 Cluster data
In this work, we determine best-fitting model parameters for 37 Milky Way globular clusters through the comparison of the phase-space distribution of stars in the limppy models to analogous observations of GC structure and kinematics.
### Cluster selection
The clusters analyzed in this work were selected from the population of Milky Way GCs in order to best study the possible relationship of the mass function with metallicity. To do so, we chose clusters over a range of metallicities (taken from Harris, 1996), with most clusters in our sample being metal-poor (\(\left[{\rm Fe/H}\right]\lesssim-1.0\)). The main discriminating factor used in cluster selection was the quantity and quality of the available data. We searched the catalogue of observational datasets presented by Baumgardt (2017); Baumgardt and Hilker (2018) and Baumgardt et al. (2023)3 for clusters with a combination of adequate mass function depth and radial coverage from HST photometry, and sufficient kinematic data to constrain the models. These selection criteria led to the final selection of 37 clusters.
Footnote 3: Available at [https://people.smp.uq.edu.au/HolgerBaumgardt/globalvar/](https://people.smp.uq.edu.au/HolgerBaumgardt/globalvar/)
### Datasets
Models are fit to all chosen GCs through comparison with a variety of observational datasets, which help directly constrain both the distribution of visible cluster stars through direct stellar number counts, and the overall total mass of the cluster through accurate kinematic profiles. This, in turn, provides indirect constraints on the amount and distribution of dark mass (in both faint low-mass stars and dark remnants) making up the difference between the visible and total mass, as, together with mass segregation, the possible distribution of cluster mass among different components has limited flexibility. Key model parameters, in particular \(\alpha_{3}\), which sets the amount of high-mass stars and remnants in the models, can thus be constrained by this combination of datasets. We utilize a large number of observables from various sources, while aiming to provide as much homogeneity between clusters as possible. All literature sources used for each cluster are listed in Appendix A.
#### 3.2.1 Proper motions
Radial profiles of the dispersion of proper motions (PMs) of cluster stars are used to constrain the cluster velocity dispersion profiles, and in turn the total cluster mass and its distribution. By incorporating the kinematics in both the radial and tangential directions in the plane of the sky, we are also able to constrain the amount of velocity anisotropy in the system. We define these components, on the sky, such that the radial component is positive outwards from the cluster centre, and the tangential component is positive in the counterclockwise rotational direction on the sky. Given the proper motions of a star in a cluster-centred orthographic projection (e.g. equation 2 in Gaia Collaboration et al., 2018), the radial (\(\mu_{R}\)) and tangential (\(\mu_{T}\)) components are defined as:
\[\mu_{R}\equiv\frac{x\mu_{x}+y\mu_{y}}{R},\qquad\mu_{T}\equiv\frac{y\mu_{x}-x\mu_{y}}{R}, \tag{8}\]
where \(x\), \(y\), \(\mu_{x}\) and \(\mu_{y}\) are the orthographic positions and proper motions and \(R=\sqrt{x^{2}+y^{2}}\) is the projected distance from the cluster centre, which is taken from Baumgardt (2017).
We extract our own PM dispersion profiles in both components from Gaia Early Data Release 3 (EDR3; Gaia Collaboration et al., 2021) proper motions for all clusters.4. The catalogue of cluster stars, along with their membership probabilities, is taken from Vasiliev and Baumgardt (2021). Following the conclusions of Vasiliev and Baumgardt (2021), in order to account for underestimations in the statistical uncertainty of proper motions of Gaia sources in dense regions, we scale the PM uncertainties of each star by a density-dependent factor \(\eta\):
Footnote 4: Extracted Gaia EDR3 PM dispersion profiles for all clusters are available for download from [https://github.com/mdlickson/GCfit-results](https://github.com/mdlickson/GCfit-results)
\[\eta=\left(1+\frac{\Sigma}{\Sigma_{0}}\right)^{\zeta}, \tag{9}\]
where \(\Sigma\) is the nearby stellar density, \(\Sigma_{0}=10\,\mathrm{stars/arcmin^{2}}\) and \(\zeta=0.04\)(from Table 1 in Vasiliev and Baumgardt, 2021). We then follow a similar methodology to Vasiliev (2019) and Vasiliev and Baumgardt (2021) to construct radially binned dispersion profiles in both directional components by fitting a multivariate Gaussian distribution to the proper motions of all the stars in each bin which pass the quality flags described in section 2 of Vasiliev and Baumgardt (2021).
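A minimal sketch of the two ingredients is given below: the density-dependent error inflation of eq. (9), and a simplified one-dimensional maximum-likelihood estimate of the intrinsic dispersion in a radial bin (the actual profiles are built from a multivariate Gaussian fit to both PM components); all numbers are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def inflate_pm_errors(err, local_density, sigma0=10.0, zeta=0.04):
    """Scale Gaia PM uncertainties by the density-dependent factor of eq. (9)."""
    return err * (1.0 + local_density / sigma0) ** zeta

def ml_dispersion(v, err):
    """Maximum-likelihood intrinsic dispersion of a 1-D sample with
    individual (Gaussian) measurement errors, mean removed."""
    v = v - np.mean(v)

    def nll(sigma):
        var = sigma ** 2 + err ** 2
        return 0.5 * np.sum(np.log(var) + v ** 2 / var)

    return minimize_scalar(nll, bounds=(1e-3, 50.0), method="bounded").x

rng = np.random.default_rng(1)
true_sigma, meas_err = 0.3, 0.1                    # mas/yr
pm = rng.normal(0.0, np.hypot(true_sigma, meas_err), size=500)
print(inflate_pm_errors(meas_err, local_density=200.0))   # inflated uncertainty
print(ml_dispersion(pm, np.full(500, meas_err)))          # recovers ~0.3
```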
We supplement the Gaia proper motion datasets of specific clusters where further PM studies are available from the Hubble Space Telescope (HST). Libralato et al. (2022) presented profiles of proper motion dispersions in the central regions of 57 globular clusters, based on archival HST photometry. This catalogue overlaps with our sample for 35 clusters, in which case we utilize both the radial \(\sigma_{R}\) and tangential \(\sigma_{T}\) components. For two of the clusters in our sample not covered by Libralato et al. (2022) (NGC 5139 and NGC 6266), we instead utilize the total dispersion \(\sigma=\sqrt{(\sigma_{T}^{2}+\sigma_{R}^{2})/2}\) and anisotropy ratio \(\sigma_{T}/\sigma_{R}\) profiles presented by Watkins et al. (2015), based on the HST catalogues of Bellini et al. (2014). The coverage of the core of NGC 6723 is also extended by
the dispersion profiles of Taheri et al. (2022) (using Gemini South GeMS).
#### 3.2.2 Line-of-sight velocities
The kinematic data is also supplemented by line-of-sight (LOS) velocity dispersion profiles, providing a 3-dimensional view of the cluster dynamics.
The majority of the LOS dispersion profiles used come from compilations of different surveys and programs. Baumgardt (2017) gathered, from the literature, 95 publications with large enough LOS velocity datasets for 45 GCs, and Baumgardt & Hilker (2018) expanded on this catalogue by including additional ESO/Keck archival data of the LOS velocities of stars in 90 GCs. In both cases, the different datasets were combined by shifting them to the cluster's mean radial velocity. Baumgardt et al. (2019) derived the velocity dispersion profiles of 127 GCs using the Gaia DR2 radial velocity data. This catalogue of stars was matched to that of Baumgardt & Hilker (2018), and scaled to a common mean velocity. Finally, this catalogue was enhanced again by Baumgardt et al. (2023) with the inclusion of data from various more recent large scale radial-velocity surveys. Dalgleish et al. (2020) supplemented this work with radial velocity measurements in 59 GCs from the WAGGS survey, using the WiFeS integral field spectrograph. These datasets were further complemented in the cores of 22 clusters by the LOS dispersion profiles presented by Kamann et al. (2018), who gathered data within the half-light radius of 22 GCs using the MUSE integral-field-unit spectrograph on the VLT. Further coverage of the central region of NGC 6266 is provided by the profiles presented by Lutzgendorf et al. (2013), based on observations by the VLT/FLAMES integral-field-unit spectrograph, and of NGC 6362 by Dalessandro et al. (2021), based on VLT/MUSE observations.
#### 3.2.3 Number density profiles
Radial profiles of the projected number density of stars in our GCs are vital in constraining the spatial structure and concentration of the clusters.
The projected number density profiles of all clusters are taken from de Boer et al. (2019), who utilized counts of member stars from Gaia DR2, binned radially, for 81 Milky Way clusters. Membership was determined, for stars up to a faint magnitude limit of \(G=20\), based on the Gaia proper motions. To aid with the coverage of the cluster centres, where Gaia is incomplete and struggles with crowding in all but the least dense GCs, the authors stitched the Gaia profiles together with profiles from HST photometry (Miocchi et al., 2013) and a collection of ground-based surface brightness profiles (Trager et al., 1995). These profiles from the literature were scaled to match the Gaia profiles in the regions where they overlap, with the final profile being constructed of Gaia counts in all regions with a density lower than \(10^{5}\) stars/\(\deg^{2}\) and literature profiles otherwise. de Boer et al. (2019) also computed a constant background contamination level for each cluster, computed as the average stellar density between 1.5 and 2 Jacobi radii, which we subtract from the entire profile before fitting.
#### 3.2.4 Mass functions
To provide constraints on the global present-day mass function of the clusters, the degree of mass segregation and the total mass in visible stars, we compare our models against measurements of the stellar mass function in radial annuli and mass bins obtained from deep HST photometry.
The mass function data for each cluster was derived from archival HST photometry by Baumgardt et al. (2023) and includes data from large-scale archival surveys (e.g. Sarajedini et al., 2007; Simioni et al., 2018). Stellar photometry and completeness correction of the data was done using DOLPHOT(Dolphin, 2000, 2016). Stellar number counts were then derived as a function of stellar magnitude and distance from the cluster centre and were then converted into stellar mass functions through fits to DSEP isochrones (Dotter et al., 2008). See Baumgardt et al. (2023) for more details on the extraction and conversion of these mass functions 5. The compilation of images is made up of several HST fields for each cluster, at varying distances from the cluster centres. The observations typically cover stars within a mass range of \(\sim 0.16-0.8\,\mathrm{M}_{\odot}\). The large radial and mass ranges covered allow us to constrain the varying local stellar mass function as a function of distance from the cluster centre, and therefore the degree of mass segregation in the cluster.
Footnote 5: To fit the isochrones, Baumgardt et al. (2023) begins with the cluster heliocentric distances from Baumgardt & Vasiliev (2021), and allows the distance to vary slightly. This is similar to our methodology (see Section 4.1.2), but may result in slightly different final distances to ours. This may introduce a slight inconsistency but, given the distances are all in agreement within uncertainties, will have a negligible impact on our results.
## 4 Model fitting
The models described in Section 2 are constrained by the data described in Section 3 in order to provide distributions of the best-fitting model parameters that describe each cluster, which are determined through Bayesian parameter estimation techniques.
### Probability distributions
Given a model \(M\), the probability associated with a given set of model parameters \(\Theta\), subject to some observed data \(\mathcal{D}\) is given by the Bayesian posterior:
\[P(\Theta\mid\mathcal{D},M)=\frac{P(\mathcal{D}\mid\Theta,M)P(\Theta\mid M)}{P( \mathcal{D}\mid M)}=\frac{\mathcal{L}(\Theta)\pi(\Theta)}{\mathcal{Z}}, \tag{10}\]
where \(\mathcal{L}\) is the likelihood, \(\pi\) is the prior and \(\mathcal{Z}\) is the evidence.
#### 4.1.1 Likelihood
In this work, the total log-likelihood function \(\ln(\mathcal{L})\), for all data \(\mathcal{D}\) considered for a certain cluster, is given simply by the summation of all log-likelihood functions for each individual dataset \(\mathcal{D}_{l}\):
\[\ln(\mathcal{L})=\sum_{l}^{\mathrm{datasets}}\ln(P(\mathcal{D}_{l}\mid\Theta))=\sum_{l}\ln(\mathcal{L}_{l}(\Theta)), \tag{11}\]
and each observational dataset, as described in Section 3.2, has its own component likelihood function \(\ln(\mathcal{L}_{l})\), detailed below.
In order to compare all observed quantities with model predictions, certain quantities which involve angular units (radial distances, proper motions, cluster radii, etc.) must be converted to the
projected, linear model lengths. To do so, we introduce the heliocentric distance to the GC as a new free parameter \(d\), and use the velocity and position conversions:
\[v_{T}=4.74\,\mathrm{km/s}\ \frac{d}{\mathrm{kpc}}\ \frac{\mu}{\mathrm{mas/yr}}, \tag{12}\]
\[r=2\ d\tan\left(\frac{\theta}{2}\right), \tag{13}\]
where \(v_{T}\) is the plane-of-the-sky velocity, \(\mu\) is the observed proper motion, \(r\) is the distance to the cluster centre in projection and \(\theta\) is the observed angular separation.
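The two conversions amount to a few lines of code; a minimal sketch (units as in eqs. 12-13) is:

```python
import numpy as np

def pm_to_kms(mu_mas_yr, d_kpc):
    """Plane-of-sky velocity from a proper motion and a distance, eq. (12)."""
    return 4.74 * d_kpc * mu_mas_yr

def angle_to_pc(theta_arcmin, d_kpc):
    """Projected linear separation from an angular separation, eq. (13)."""
    theta_rad = np.radians(theta_arcmin / 60.0)
    return 2.0 * (d_kpc * 1e3) * np.tan(theta_rad / 2.0)

print(pm_to_kms(0.5, 5.0))    # 0.5 mas/yr at 5 kpc  -> ~11.9 km/s
print(angle_to_pc(3.0, 5.0))  # 3 arcmin at 5 kpc    -> ~4.4 pc
```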
In all likelihood functions below, the modelled quantities, unless otherwise stated, are taken from the mass bin most closely corresponding to the masses of stars observed in each dataset.
_Velocity dispersion profiles._ The likelihood function used for all velocity dispersions (LOS and PM) is a Gaussian, over a number of dispersion measurements at different projected radial distances:
\[\ln(\mathcal{L}_{i})=-\frac{1}{2}\sum_{j}\left[\frac{(\sigma_{j,\mathrm{obs}}-\sigma_{j,\mathrm{model}})^{2}}{\delta\sigma_{j,\mathrm{obs}}^{2}}+\ln\left(\delta\sigma_{j,\mathrm{obs}}^{2}\right)\right], \tag{14}\]
where \(\sigma_{j}\equiv\sigma(r_{j})\) corresponds to the dispersion at a distance \(r_{j}\) from the cluster centre, with corresponding uncertainties \(\delta\sigma_{j}\equiv\delta\sigma(r_{j})\). Dispersions with subscript _obs_ correspond to the observed dispersions and uncertainties, while subscript _model_ corresponds to the predicted model dispersions.
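A minimal sketch of this likelihood term (dropping additive constants; the profile values below are placeholders) is:

```python
import numpy as np

def ln_like_dispersion(sigma_obs, dsigma_obs, sigma_model):
    """Gaussian log-likelihood of a binned dispersion profile, eq. (14)."""
    return -0.5 * np.sum((sigma_obs - sigma_model) ** 2 / dsigma_obs ** 2
                         + np.log(dsigma_obs ** 2))

sigma_obs = np.array([7.9, 7.2, 6.1, 4.8])     # observed dispersions [km/s]
dsigma_obs = np.array([0.3, 0.3, 0.4, 0.5])    # their uncertainties [km/s]
sigma_model = np.array([8.0, 7.0, 6.0, 5.0])   # model prediction at the same radii
print(ln_like_dispersion(sigma_obs, dsigma_obs, sigma_model))
```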
_Number density profiles._ The likelihood function used for the number density profile datasets is a modified Gaussian likelihood.
The translation between the surface brightness measurements and discrete star counts (both considered for the number density profiles, as discussed in Section 3.2.3), is difficult to quantify exactly. To compare star counts above a magnitude limit to the integrated light of a surface-brightness profile would require precise knowledge of the mass-to-light ratio for each mass bin, which is an uncertain quantity, especially for evolved stars. To account for this in the fitting procedure, the model is actually fit on the _shape_ of the number density profile, rather than on the absolute star counts. To accomplish this the number density profile of the model is scaled to have the same mean value as the observed profiles. As in Henault-Brunet et al. (2020), the constant scaling factor \(K\) is chosen to minimize the chi-squared:
\[K=\frac{\sum_{j}\Sigma_{j,\mathrm{obs}}\Sigma_{j,\mathrm{model}}/\delta\Sigma_{ j}^{2}}{\sum_{j}\Sigma_{j,\mathrm{model}}^{2}/\delta\Sigma_{j}^{2}}, \tag{15}\]
where \(\Sigma_{j}\equiv\Sigma(r_{j})\) are the modelled and observed number density, with respective subscripts, at a distance \(r_{j}\) from the cluster centre.
We also introduce an extra "nuisance" parameter (\(s^{2}\)) to the fitting. This parameter is added in quadrature, as a constant error over the entire profile, to the observational uncertainties to give the overall error \(\delta\Sigma\):
\[\delta\Sigma_{j}^{2}=\delta\Sigma_{j,\mathrm{obs}}^{2}+s^{2}. \tag{16}\]
This parameter adds a constant uncertainty component over the entire radial extent of the number density profile, effectively allowing for small deviations in the observed profiles near the outskirts of the cluster. This enables us to account for certain processes not captured by our models, such as the effects of potential escapers (Claydon et al., 2017, 2019).
The likelihood is then given in similar fashion to the dispersion profiles:
\[\ln(\mathcal{L}_{i})=-\frac{1}{2}\sum_{j}\left[\frac{(\Sigma_{j,\mathrm{obs}}-K\Sigma_{j,\mathrm{model}})^{2}}{\delta\Sigma_{j}^{2}}+\ln\left(\delta\Sigma_{j}^{2}\right)\right]. \tag{17}\]
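The scaling factor and nuisance variance combine into a short likelihood routine; a minimal sketch of eqs. (15)-(17) with placeholder profiles is:

```python
import numpy as np

def ln_like_density(Sigma_obs, dSigma_obs, Sigma_model, s2):
    """Number-density likelihood: the model profile is rescaled by K (eq. 15)
    to match the shape of the observed profile, with a constant nuisance
    variance s2 added to the observational errors (eq. 16)."""
    var = dSigma_obs ** 2 + s2                               # eq. (16)
    K = np.sum(Sigma_obs * Sigma_model / var) / np.sum(Sigma_model ** 2 / var)
    return -0.5 * np.sum((Sigma_obs - K * Sigma_model) ** 2 / var
                         + np.log(var))                      # eq. (17)

Sigma_obs = np.array([120.0, 60.0, 20.0, 5.0, 1.2])          # observed counts
dSigma_obs = np.array([8.0, 5.0, 2.0, 0.8, 0.4])
Sigma_model = np.array([60.0, 31.0, 10.0, 2.4, 0.5])         # unscaled model
print(ln_like_density(Sigma_obs, dSigma_obs, Sigma_model, s2=0.2))
```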
_Mass functions._ To compare the models against the mass function datasets, the local stellar mass functions are extracted from the models within specific areas in order to match the observed MF data at different projected radial distances from the cluster centre within their respective HST fields.
To compute the stellar mass functions, the model surface density in a given mass bin \(\Sigma_{k}(r)\) is integrated, using a Monte Carlo method, over the area \(A_{j}\), which covers a radial slice of the corresponding HST field from the projected distances \(r_{j}\) to \(r_{j+1}\). This gives the count \(N_{\mathrm{model},k,j}\) of stars within this footprint \(j\) in the mass bin \(k\):
\[N_{\mathrm{model},k,j}=\int_{A_{j}}\Sigma_{k}(r)dA_{j}. \tag{18}\]
This star count can then be used to compute the Gaussian likelihood:
\[\ln(\mathcal{L}_{i})=-\frac{1}{2}\sum_{j}^{\mathrm{radial\ bins}}\sum_{k}^{\mathrm{mass\ bins}}\left[\frac{(N_{\mathrm{obs},k,j}-N_{\mathrm{model},k,j})^{2}}{\delta N_{k,j}^{2}}+\ln\left(\delta N_{k,j}^{2}\right)\right], \tag{19}\]
which is computed separately for each HST program considered.
The error term \(\delta N_{k,j}\) must also account for unknown and unaccounted for sources of error in the mass function counts, as well as the fact that our assumed parametrization of the global mass function (eq. (3)) may not be a perfect representation of the data. Therefore we include another nuisance parameter (\(F\)) which scales up the uncertainties:
\[\delta N_{k,j}=F\cdot\delta N_{\mathrm{obs},k,j}. \tag{20}\]
This scaling, rather than adding in quadrature as with the \(s^{2}\) nuisance parameter, boosts the errors by a constant factor. This allows it to capture additional unaccounted-for uncertainties (e.g. in the completeness correction or limitations due to the simple parametrization of the mass function) across the full range of values of star counts, while simply adding the same error in quadrature to all values of star counts would lead to negligible error inflation in regions with higher counts.
#### 4.1.2 Priors
The prior probability distribution \(\pi\) for our set of model parameters \(\Theta\) is given by the product of individual, independent priors for each parameter in \(\Theta\):
\[\pi(\Theta)=\prod_{i}^{N_{\mathrm{params}}}\pi_{i}(\theta_{i}). \tag{21}\]
The priors for individual parameters can take a few possible forms.
Uniform, or flat, priors are used to provide an uninformative prior to most parameters. The uniform distribution is defined as
constant between two bounds \((L,U)\), with a total probability normalized to unity:
\[\pi_{i}(\theta_{i})=\begin{cases}(U-L)^{-1}&\text{for $L\leq\theta_{i}\leq U$},\\ 0&\text{otherwise}.\end{cases} \tag{22}\]
The upper and lower bounds are chosen, for most parameters, to simply bound a large enough area of parameter space containing all valid parameter values, whereas for certain parameters the bounds are specifically set to disallow values outside a certain range. All parameters except the heliocentric distance (described below) use uniform priors.
The truncation parameter \(g\) is limited to values between 0 and 3.5 for all clusters, corresponding to the absolute limit of models of finite extent (Gieles & Zocchi, 2015).
The mass-dependent velocity scale \(\delta\) is given an upper limit of 0.5, corresponding to the typical value reached by a fully mass-segregated cluster, and a lower limit of 0.3. Comparisons between limepy models and \(N\)-body simulations of GCs have shown that even less evolved clusters, still containing a large number of BHs, are best fit by \(\delta\sim 0.35\) (Peuten et al., 2017). In this work, not even \(\omega\) Cen reaches the lower limit of our prior range.
Finally, the mass function exponents \(\alpha_{i}\) are limited to reasonable regimes. The low and intermediate mass components \(\alpha_{1}\) and \(\alpha_{2}\) are given bounds between -1 and 2.35, confining the MF to remain shallower than the canonical high-mass IMF, and allowing for an increasing mass function with increasing masses, which may best describe the most evolved clusters. The high-mass exponent \(\alpha_{3}\) is restricted to values between 1.6 and 4.0. The lower bound of 1.6 is chosen as it has been shown that clusters this "top-heavy" are expected to have dissolved by the present day (Weatherford et al., 2021; Haghi et al., 2020). The upper limit of 4 is chosen as, above this value, lower-mass globular clusters will contain very few heavy remnants and no neutron stars or black holes, in contradiction with observations of stellar remnants within clusters. All exponents are also required to decrease from the lower to the higher mass regimes, such that \(\alpha_{1}\leq\alpha_{2}\leq\alpha_{3}\), following currently observed constraints, although we note that tests with this final rule relaxed resulted in no significant differences.
Gaussian priors are used for the parameters which are informed by previous and independent analyses, and take the form of a Gaussian distribution centred on the reported value \(\mu\) with a width corresponding to the reported uncertainty \(\sigma\):
\[\pi_{i}(\theta_{i})=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{ \theta_{i}-\mu}{\sigma}\right)^{2}}. \tag{23}\]
In particular for this analysis, we adopt a Gaussian prior for the distance parameter \(d\), with a mean and standard deviation taken from Baumgardt & Vasiliev (2021). This allows the distance to vary in order to accommodate other observational constraints used in this work, while still being strongly influenced by the robust value obtained through the averaging of a variety of distance determinations from different methods by Baumgardt & Vasiliev (2021).
The priors used for all parameters are listed in Table 1.
### Sampling
The posterior probability distribution \(P(\Theta\mid\mathcal{D},M)\) of the parameter set \(\Theta\) cannot be solved analytically, but must be estimated through numerical sampling techniques, which aim to generate a set of samples that can be used to approximate the posterior distribution.
Nested sampling (Skilling, 2004; Skilling, 2006) is a Monte Carlo integration method, first proposed for estimating the Bayesian evidence integral \(\mathcal{Z}\), which works by iteratively integrating the posterior over the shells of prior volume contained within nested, increasing iso-likelihood contours.
Samples are proposed randomly at each step, subject to a minimum likelihood constraint corresponding to the current likelihood contour. This sampling proceeds from the outer (low-likelihood) parameter space inwards, until the estimated remaining proportion of the evidence integral, which arises naturally from the sampling, reaches a desirably small percentage. This well-defined stopping criterion is a great advantage of nested sampling, as in most other sampling methods convergence can be difficult to ascertain.
Nested sampling has the benefit of flexibility, as the independently generated samples are able to probe complex posterior shapes, with little danger of falling into local minima, or of missing distant modes. It also does not depend, like many other sampling methods, on a choice of initial sampler positions, and will always cover the entire prior volume. In cases of well-defined priors and smoothly transitioning posteriors, as is the case in this work, the sampling efficiency can exceed that of the typical Markov chain Monte Carlo (MCMC) samplers.
Dynamic nested sampling is an extension of the typical nested algorithm designed to re-tune the sampling to more efficiently estimate the posterior (Higson et al., 2019). This algorithm effectively functions by spending less time probing the 'outer' sections of the prior volume which have little impact on the posterior. In this work, we have chosen to utilize dynamic nested sampling for its speed and efficiency, and to ensure that no separate, distant modes in the posterior are missed.
All methodology in this work, from data collection to model fitting, is handled by the software library and fitting pipeline GCfit6, which was created to facilitate the fitting of limepy models to a number of observables through a parallelized sampling procedure. All nested sampling is handled by the dynesty software package (Speagle, 2020). The sampler is run, for all clusters, using the default (multi-ellipsoid bounded, random-walk) dynamic sampling (see Speagle, 2020 for more details). The sampling is continued until it reaches an effective sample size (ESS; Kish, 1965) of at least 5000:
Footnote 6: Available at [https://github.com/mndickson/GCfit](https://github.com/mndickson/GCfit)
\[\text{ESS}=\frac{\left(\sum_{i=1}^{n}w_{i}\right)^{2}}{\sum_{i=1}^{n}w_{i}^ {2}}, \tag{24}\]
where \(w_{i}\) is the importance weight of the sample \(i\) in the set of generated samples.
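As an illustration of this stopping condition, the ESS of Equation (24) can be computed directly from the importance weights returned by a nested-sampling run. The snippet below is a sketch rather than the actual GCfit pipeline code; the dynesty attribute names follow its documented results interface.

```python
import numpy as np

def effective_sample_size(weights):
    """Kish (1965) effective sample size of Eq. (24) from importance weights."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

# Sketch of use with a dynesty dynamic sampler:
# results = sampler.results
# weights = np.exp(results.logwt - results.logz[-1])   # normalized importance weights
# converged = effective_sample_size(weights) >= 5000   # stopping condition used in this work
```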
## 5 Results
We present in this section the results of the fits based on the methodology of Section 4. First we introduce the resulting posterior probability distributions of all model parameters, and the corresponding fits they give to the relevant data. We then briefly discuss the distribution between clusters of some structural parameters of interest. The stellar mass functions of the clusters are explored in more detail in Section 6.
### Fitting Results
#### 5.1.1 Parameter distributions
The set of weighted samples retrieved from the nested sampler, after sampling until the stopping condition described in Section 4.2, are used to construct posterior probability distributions for all model parameters.
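As a sketch of this step, the weighted samples can be turned into marginalized medians and credible intervals with a simple weighted-quantile routine; the function below is illustrative and not taken from GCfit itself.

```python
import numpy as np

def weighted_quantiles(samples, weights, q=(0.16, 0.50, 0.84)):
    """Quantiles of a 1D marginalized posterior built from weighted samples."""
    samples, weights = np.asarray(samples), np.asarray(weights)
    order = np.argsort(samples)
    cdf = np.cumsum(weights[order]) / np.sum(weights)
    return np.interp(q, cdf, samples[order])

# e.g. median and 1-sigma credible interval of the high-mass slope for one cluster:
# a3_lo, a3_med, a3_hi = weighted_quantiles(samples_alpha3, weights)
```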
Figure 2 shows an example of the resulting posterior distributions for the cluster NGC 104. The best-fitting parameter values for all clusters can be found in Table 2.
Footnote 7: All fit results, figures and tables from this paper are also available for all clusters by download from [https://github.com/mdmickson/GCfit-results](https://github.com/mdmickson/GCfit-results). Figures showing the fits for all clusters in the sample are also available as supplementary material in the electronic version
The vast majority of marginalized posterior distributions for the cluster parameters follow a unimodal and approximately Gaussian distribution. The marginalized posterior probability distributions of some parameters are skewed towards or hitting the boundaries of the prior ranges; however, as indicated in Section 4.1.2, this is only allowed to occur for parameters with physically motivated prior boundaries.
The posterior parameter distributions of one cluster (NGC 6723) are not single Gaussians, but instead show two separate peaks, both containing comparable posterior probability. This cluster will be discussed in more detail in Paper II, as these models differ most significantly in their BH populations. In all figures in this paper this cluster may appear as a single point with very large errorbars in one direction, due to the fact that one of these peaks is larger than the other, and the median of the entire distribution falls entirely within this peak.
Two parameters (\(\log(\hat{r}_{\mathrm{a}})\) and \(\mathrm{BH}_{\mathrm{ret}}\)) often have a broader posterior probability distribution. The anisotropy radius may be unconstrained above a certain minimum value, illustrating the fact that all values of the anisotropy radius greater than the truncation radius effectively lead to an entirely isotropic cluster. The BH retention fraction may be completely unconstrained in models with a very small number of BHs initially formed (e.g. due to a "top-light" mass function), in which case the fraction of BHs retained has a negligible effect on the models. These parameters are examined below and in Paper II, respectively.
A fraction of our clusters are core-collapsed, and they are expected to not retain any significant populations of BHs (Giersz & Heggie 2009; Breen & Heggie 2013a). However, our best-fitting models of four such clusters (NGC 6266, NGC 6624, NGC 6752, NGC 7078) do possess BHs, and may not be physical. Core-collapsed clusters have a cusp in the inner surface brightness profile which is difficult to reproduce with the limepy models, which are cored. We therefore do not trust the result for BH retention of these clusters as we suspect that \(\mathrm{BH}_{\mathrm{ret}}\) was used as an additional degree of freedom in an attempt to describe the inner profiles. In these cases we recompute the models, this time with the amount of retained BHs at the present day fixed to 0 (by fixing the \(\mathrm{BH}_{\mathrm{ret}}\) parameter to 0%). These models are used, for these four clusters, in all analyses presented in this paper. This phenomenon, these models and the limitations of our limepy models in representing core-collapsed clusters will all be examined and discussed in more detail in Paper II. However, both sets of models demonstrate good fits to the data, and there was no significant change in the best-fit mass function slopes or in any of the correlations presented in this paper, when considering either set of models.
#### 5.1.2 Best-fitting models
Figures 3 and 4 show an example (also for NGC 104) of the observables predicted by the best-fitting models, overlaid with the observational datasets used to constrain them.
The best-fitting models for the majority of clusters match the given data extraordinarily well. There are, however, a small number of clusters, from our original sample of 37 clusters, where the fits do not reproduce certain datasets adequately. This tends to occur in systems with small amounts of PM and LOS velocity data. Having few kinematic datapoints, as compared to the mass function and number density datasets, means that these models are less able to constrain the non-visible mass and are prone to overfitting the mass functions, at the expense of the kinematics. As fitting both the visible and dark components well is vital to our analysis of the high-mass mass function and the remnant populations, we choose to remove these clusters from our sample going forward. Three such clusters (NGC 4590, NGC 6656, NGC 6981) were discarded due to their unsatisfactory fits. The remaining 34 clusters have best-fitting models that are well matched to all datasets and will make up the set of clusters used in all further analysis.
Figure 2: Marginalized and 2D projections of the posterior probability distributions of all model parameters for the fit to NGC 104. Contours indicate \(1\sigma\), \(2\sigma\) and \(3\sigma\) levels on the 2D posterior probability distributions.
Figure 3: Model radial profiles (blue contours) of surface number density (\(\Sigma\)), line-of-sight velocity dispersions (\(\sigma_{\rm LOS}\)), total (\(\sigma_{\rm PM,tot}\)), radial (\(\sigma_{\rm PM,R}\)) and tangential (\(\sigma_{\rm PM,T}\)) proper motion velocity dispersions and proper motion anisotropy ratio (\(\sigma_{\rm PM,T}/\sigma_{\rm PM,R}\)), for the fit of NGC 104. The dark and light shaded regions represent the \(1\sigma\) and \(2\sigma\) credible intervals of the model fits, respectively. The observational datasets used to constrain the models are shown alongside their \(1\sigma\) uncertainties by the orange and green points and errorbars. The models are fit only on the radial and tangential components of the proper motion individually, while the total and anisotropy ratio are included here solely to demonstrate the fit. The background value subtracted from the number density profile is shown by the dashed line.
Figure 4: Model local stellar mass fractions fit to the observations of NGC 104. Each panel (centre and right columns) shows the number of stars per unit mass as a function of stellar mass, for different distance ranges from the cluster centre. The dark and light shaded regions represent the \(1\sigma\) and \(2\sigma\) credible intervals of the model fits, respectively. The measurements used to constrain the models are shown alongside their \(1\sigma\) uncertainties by the points and errorbars. Each individual HST observing program is denoted by a separate colour, and the corresponding fields for each are shown on the left panel.
Table 2: Best-fitting model, mass function and nuisance parameters for all clusters.
### Cluster Parameters
Given this set of best-fitting models, we next examine the distributions of various model parameters and compare with other results from the literature. The best-fitting model, mass function and nuisance parameters for all clusters are shown in Table 2.
evolution and initial conditions of the cluster. Therefore, in our models, the IMF can most directly be inferred only in the high-mass (\(\alpha_{3},m>1\,\mathrm{M}_{\odot}\)) regime, while the lower-mass exponents (\(\alpha_{1},\alpha_{2}\)) are more representative of the present-day mass function, which may have evolved away from the IMF significantly.
To quantify this assertion, we must examine the dynamical evolution of our clusters, as the dynamical loss of stars is not necessarily limited entirely to the lower-mass regime. In very dynamically evolved clusters, which have lost a substantial amount of their total initial mass to escaping stars, the characteristic mass of preferentially escaping stars will increase, potentially depleting even the population of higher-mass stars and WDs, which had initial masses above \(1\,\mathrm{M}_{\odot}\), and in such cases the inferred mass function exponent \(\alpha_{3}\) may also be shallower and less directly representative of the IMF. To account for this effect, we must determine which clusters have lost a large amount of their initial mass by the present day. We estimate this _remaining mass fraction_ by the equation:
\[\frac{M_{\mathrm{today}}}{M_{\mathrm{initial}}}=0.55\times\left(1-\frac{\mathrm{ Age}}{\tau_{\mathrm{diss}}}\right) \tag{25}\]
where the factor \(0.55\) reflects the typically assumed mass loss from stellar evolution of \(\sim 55\%\) of the initial cluster mass in the first Gyr of a cluster's evolution and the dissolution time \(\tau_{\mathrm{diss}}\) represents the estimated total lifetime of the cluster. The estimated lifetimes of our clusters were computed according to the approach described in Section 3.2 of Baumgardt et al. (2019), using the updated models of Baumgardt et al. (2023). This method is based on integrating the orbit of the clusters backwards in the Milky Way galactic potential (Irrgang et al., 2013), and estimating the resulting mass loss. A related quantity is the "dynamical age", which we define as the ratio of the cluster's age over its half-mass relaxation time (\(\tau_{\mathrm{rel}}\)).
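A minimal numerical illustration of Equation (25) and of the dynamical age defined above is given below; the cluster values used are hypothetical.

```python
def remaining_mass_fraction(age_gyr, t_diss_gyr):
    """Eq. (25): estimated fraction of the initial cluster mass still bound today."""
    return 0.55 * (1.0 - age_gyr / t_diss_gyr)

def dynamical_age(age_gyr, t_rh_gyr):
    """Cluster age in units of its half-mass relaxation time."""
    return age_gyr / t_rh_gyr

# Hypothetical cluster: 12 Gyr old, 40 Gyr estimated dissolution time, 3 Gyr relaxation time
print(remaining_mass_fraction(12.0, 40.0))   # ~0.39 of the initial mass remains
print(dynamical_age(12.0, 3.0))              # 4 relaxation times old
```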
We have taken both the relaxation and dissolution times from the best-fitting models of Baumgardt et al. (2023), a companion study which determined the mass functions of 120 MW, LMC and SMC globular clusters by comparing the same HST mass function datasets as used in this work with a grid of direct \(N\)-body simulations. While we could technically extract the relaxation times self-consistently from our own set of models, we utilize the values obtained by Baumgardt et al. (2023) in order to most easily compare our results. Given the good agreement (on average) between the total masses and half-mass radii of our models and those of their \(N\)-body models, as shown in Figure 5, the differences should be negligible.
Figure 5: Comparison of the heliocentric distance, total system mass, and half-mass radius of all cluster fits against the distances computed by Vasiliev & Baumgardt (2021) and the properties inferred from the \(N\)-body model fits of Baumgardt et al. (2023). The top row compares the median and \(1\sigma\) values of both sources, with the grey diagonal representing perfect agreement. The bottom row shows the distribution (represented by a Gaussian KDE) of the fractional differences among all clusters, divided by their combined uncertainties. The dashed black line shows a Gaussian, centred on 0 with a width of \(\sigma=1\), which represents perfect agreement. The ratio of the FWHM of the fractional difference distributions to that of the Gaussian is noted in the top right corners. NGC 5139 is excluded from these figures due to its very large mass but shows similar agreement, and will be discussed in more detail in Paper II.

These quantities, and their relationships with all mass function exponents, are shown for all clusters in Figure 8. The clusters to the left of these plots are thought to have lost a large amount of their initial mass and be more dynamically evolved. In these cases the lower-mass \(\alpha_{1}\) and \(\alpha_{2}\) slopes are shallower (even becoming negative in the most dynamically evolved clusters), and the \(\alpha_{3}\) slope may also have been modified by dynamical evolution. As such, caution is advised when interpreting the slopes in these clusters as representative of the IMF. These quantities cannot be used to define an exact division of where the global mass function parameters reflect the IMF, but they do provide useful context for our subsequent analysis of the IMF.
We can clearly see that both lower-mass MF exponents (\(\alpha_{1},\alpha_{2}\)) have distinct correlations with these two quantities, with the increasingly evolved clusters (short lifetimes / relaxation times compared to their ages) substantially more depleted in low-mass stars than their less evolved counterparts. This trend, and the IMF in the low and intermediate-mass regime are explored in more detail in Baumgardt et al. (2023). No such correlation exists with \(\alpha_{3}\), which supports our assertion that the high-mass regime is less affected by the cluster's dynamical evolution, and thus, overall, most representative of the IMF. However, as stated before, caution should still be applied when interpreting the \(\alpha_{3}\) of the clusters to the left side of this figure. We will examine this parameter in more detail in Section 6.1.1 below.
The evolution of the remnant mass fraction \(f_{\rm remnant}\), which includes all types of stellar remnants, is also shown at the bottom of Figure 8, where a strong relationship with the dynamical age of the clusters is evident, as might be expected. As a cluster evolves and loses mass, as mentioned before, the mass lost is preferentially in the form of lower-mass stars, rather than the heavy remnants, and as such the fraction of mass in remnants should increase as the cluster's low-mass MF is depleted. Interestingly, some of the most dynamically evolved clusters have nearly 75% of their mass in dark remnants at the present day, which could have important implications for the mass-to-light ratios and inferred masses of unresolved GCs in distant galaxies.
#### 6.1.1 High-mass IMF
Figure 9 shows the posterior probability distributions of \(\alpha_{3}\) for all clusters. From this figure we can see that the distributions are, in the vast majority of cases, compatible within uncertainties with the typically assumed canonical high-mass (\(m>1\,{\rm M}_{\odot}\)) IMF formulations (e.g. Salpeter, 1955; Kroupa, 2001), however with a large spread of \(\alpha_{3}\) values between \(\sim 2\)-\(3\). The median and \(1\sigma\) values over all clusters are \(\alpha_{3}=2.37^{+0.48}_{-0.25}\). This matches remarkably well with the canonical IMFs, a striking result given the large freedom in the mass function of our models. This result is also in agreement with the high-mass slopes determined by Baumgardt et al. (2023) through the examination of similar HST mass function datasets in younger clusters in the Large (LMC) and Small Magellanic Clouds (SMC), where more massive stars, yet to evolve off the main sequence, can still be observed directly. Similar results were also obtained by Weisz et al. (2015) for young clusters in M 31. It is even clearer that our fits _do not_ favour any more extreme IMFs, neither exceedingly top-heavy nor top-light, especially when ignoring the most dynamically evolved clusters (shown in Figure 9 by the more yellow colours).
This result is counter to some recent suggestions in the literature of top-heavy IMFs in GCs. It has been shown that clusters with top-heavy IMFs are expected to have lost a very large fraction of their mass early in their lifetimes due to stellar mass loss and supernova explosions (Haghi et al., 2020), produce a large number of BHs and could contribute significantly to the observed rate of binary BH mergers (Weatherford et al., 2021; Antonini et al., 2022). Given that our results seem to preclude any clusters as top-heavy as \(\alpha_{3}\sim 1.6\), there is thus no obvious need to consider top-heavy IMFs in estimates of BBH merger rates in globular clusters. Due to the smaller dissolution times of top-heavy GCs of typical masses (\(\sim 10^{5}\) M\({}_{\odot}\)), there remains the possibility that some GCs had formed with a more top-heavy IMF, and have simply dissolved to such an extent by the present day that they are undetectable. These clusters could still contribute significantly to the rate of BBH mergers and gravitational waves. However, given the large range of GC parameter space covered by our models, it is unclear what would cause these top-heavy GCs to form alongside clusters with a more canonical IMF as we see here. As shown in Haghi et al. (2020), the dissolution times of GCs scale smoothly with the IMF, resulting in, for example, lifetimes \(\sim 3\) times shorter for typical-mass clusters with \(\alpha_{3}\approx 1.8\), compared to clusters with a canonical IMF. Given that the spread in initial masses of the GC population is likely on the order of \(\sim 100\) (e.g. Balbinot & Gieles, 2018), we would expect to still find some surviving clusters with an \(\alpha_{3}<1.8\), if they had formed alongside our sample.

Figure 6: Violin plot of the posterior distribution of the (log) anisotropy radius, normalized by the cluster half-mass radius, for all clusters. The median, \(1\sigma\) and \(2\sigma\) values are denoted by the horizontal ticks on each distribution.
It should be noted again that, as mentioned in Section 5.2, the uncertainties on these parameters represent only the statistical uncertainties on the fits, and the actual errors could be larger. This extra model uncertainty is difficult to quantify exactly; however, for \(\alpha_{3}\) in particular, a reduced chi-squared test shows that if all of the scatter in these results (away from the mean) were attributed to model uncertainties beyond the statistical uncertainties shown, it would indicate a typical error of approximately 0.4 on \(\alpha_{3}\).
### Relationship with metallicity
We next examine possible correlations between the high-mass stellar IMF of GCs and metallicity. Variations of the initial mass function with metallicity have been suggested in the past based on theoretical studies of star and cluster formation, which indicate that increasing metallicity leads to more efficient cooling and helps limit stellar accretion, and thus should reduce the characteristic mass of formed stars and produce an increasingly bottom-heavy IMF in more metal-rich clusters (Larson, 1998; De Marchi et al., 2017; Chon et al., 2021). Marks et al. (2012) proposed a linear relationship between the high-mass IMF slope and metallicity, which begins with more top-heavy values of \(\alpha_{3}\) at lower metallicities (\(\alpha_{3}=1\) at \(\rm[Fe/H]=-2.5\)), and reaches the canonical Kroupa value of 2.3 only at metallicities \(\rm[Fe/H]>-0.5\). Given the large amount of freedom available in our mass function slopes, and the excellent constraints we are able to place on the dark remnant populations in this mass regime, our model fits, which span nearly the full range of Milky Way GC metallicities, present an excellent opportunity to examine this potential relationship.
Figure 10 shows the relationship between \(\alpha_{3}\) and cluster metallicity \(\rm[Fe/H]\). As can be seen in the left panel, while there does seem to be an absence of more top-light clusters at lower metallicities, no clear overall trend or relationship seems to emerge. Most clusters are, as stated before, scattered around the canonical \(\alpha_{3}\) value, with a large spread but no apparent dependence on metallicity.

Figure 7: Relations between all three mass function exponent parameters. Gray shaded areas represent the parameter space which is disallowed by the priors on the mass function slopes.
To probe for any potential trend further, we attempt to directly fit a linear relation (\(\alpha_{3}=m\times\left[\mathrm{Fe/H}\right]+b\)) to this plot, using a simple MCMC sampler with a Gaussian likelihood (using **emcee**; Foreman-Mackey, 2016). To account for any biases and underestimated uncertainties in our inferred \(\alpha_{3}\) values (as discussed in Sections 5.2 and 6.1.1), we also include a nuisance parameter, added in quadrature to the statistical errors. The right panel of Figure 10 shows the results of this fit. The linear fit shows a very slightly positive median slope, but within uncertainties is entirely consistent with no correlation at all. The best-fit nuisance parameter value is also remarkably similar to the estimated model uncertainties computed in Section 6.1.1.
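A minimal sketch of such a fit is given below, assuming a Gaussian likelihood in which a free intrinsic-scatter (nuisance) term is added in quadrature to the statistical errors; the data arrays, prior bounds and starting point are placeholders, not the values used in this work.

```python
import numpy as np
import emcee

def ln_posterior(p, x, y, yerr):
    """Linear relation alpha3 = m*[Fe/H] + b, with an extra scatter term f
    added in quadrature to the statistical uncertainties."""
    m, b, ln_f = p
    if not (-5.0 < m < 5.0 and 0.0 < b < 5.0 and -10.0 < ln_f < 1.0):
        return -np.inf                        # flat priors on the fit parameters
    var = yerr ** 2 + np.exp(2.0 * ln_f)      # statistical + nuisance scatter in quadrature
    model = m * x + b
    return -0.5 * np.sum((y - model) ** 2 / var + np.log(2.0 * np.pi * var))

# Placeholder data: [Fe/H], alpha3 medians and their 1-sigma errors
x = np.array([-2.0, -1.5, -1.0, -0.5])
y = np.array([2.3, 2.5, 2.2, 2.4])
yerr = np.array([0.3, 0.4, 0.3, 0.35])

nwalkers, ndim = 32, 3
p0 = np.array([0.0, 2.3, -1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_posterior, args=(x, y, yerr))
sampler.run_mcmc(p0, 2000)
```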
Figure 8: Relations between all three mass function exponent parameters and the fraction of cluster mass in all stellar remnants (white dwarfs, neutron stars, black holes) versus the dynamical age and remaining mass fraction of all clusters. Clusters with higher remaining mass fractions and relaxation times greater than their ages provide more reliable probes of the IMF.
This analysis is somewhat limited by the smaller number of clusters with a larger remaining mass fraction at both extremes of the metallicity range of Milky Way GCs, which largely drive the results of this fit. Further extension of this work to more metal-poor and metal-rich clusters would aid in definitively supporting or excluding the existence of a correlation between the metallicity and the stellar IMF of GCs.
It is clear though, even with these caveats, that the very top-heavy IMF-metallicity relationship proposed by Marks et al. (2012) is not compatible with our results. As discussed in Section 6.1.1, none of our clusters favour a top-heavy IMF, and even our most metal-poor clusters have a value of \(\alpha_{3}\) much closer to the canonical \(\sim 2.3\) than suggested by the fundamental plane of Marks et al. (2012).
Figure 10: Relation between the high-mass IMF exponent \(\alpha_{3}\) and the cluster metallicity for all clusters. Colours represent the remaining mass fraction. The corresponding value of the Kroupa (2001) canonical high-mass (\(m>1\,\mathrm{M}_{\odot}\)) IMF formulation is shown by the red dashed line. On the right panel, over-plotted in grey is the best-fit linear relation representing the given clusters, shown by 500 random draws of the converged MCMC chain. The median and \(1\sigma\) uncertainties of the parameters of these fits are given in the upper-left corner of each panel.
Figure 9: Violin plot of the \(\alpha_{3}\) parameter posterior distributions for all clusters. The median, \(1\sigma\) and \(2\sigma\) values are denoted by the horizontal ticks on each distribution. Colours represent the remaining mass fraction. The corresponding values of some canonical high-mass (\(m>1\,\mathrm{M}_{\odot}\)) IMF formulations (Salpeter, 1955; Kroupa, 2001) are shown by dashed lines.
## 7 Conclusions
In this work, we have inferred, through dynamic nested sampling, the best-fitting model parameter distributions of multimass limepy models for a large sample of Milky Way globular clusters, subject to a number of observed proper motion, line-of-sight velocity, number density and stellar mass function datasets. This process has resulted in well-fit models for 34 Milky Way GCs, with full, well-constrained posterior distributions for the structural, mass function, and heliocentric distance parameters of each cluster. These results show excellent matches with the properties of the \(N\)-body models computed by Baumgardt et al. (2019).
These models further allow us to explore in detail the stellar (initial) mass functions of a large sample of Milky Way GCs, and yield a number of important conclusions:
1. Deviations of the low and intermediate-mass stellar mass function slopes from the \(\alpha_{1}=\alpha_{2}\) line demonstrate that a two-component power law is necessary in order to describe the (initial) mass function in this mass regime.
2. We show that, while the low and intermediate-mass MF slopes are strongly dependent on the dynamical age of the clusters, the high-mass slopes (\(\alpha_{3};m>1\,\mathrm{M}_{\odot}\)) are not, indicating that the MF in this regime has generally been less affected by dynamical losses, and is most representative of the IMF.
3. Examination of the high-mass MF slopes suggests an IMF in this regime (\(\alpha_{3}=2.37^{+0.48}_{-0.25}\)) which is in excellent agreement with canonical values (e.g. Salpeter, 1955; Kroupa, 2001). This result precludes the need for any more extreme high-mass IMF formulation for globular clusters, such as a top-heavy IMF.
4. No statistically significant correlation is found between the high-mass stellar IMF slope in GCs and cluster metallicity.
In a separate paper (Paper II), we will use the best-fitting models presented in this work to analyze and discuss the populations of stellar-mass BHs in our sample of GCs.
## Acknowledgements
ND is grateful for the support of the Durland Scholarship in Graduate Research. VHB acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2020-05990. MG acknowledges support from the Ministry of Science and Innovation (EUR2020-112157, PID2021-125485NB-C22, CX2019-000918-M funded by MCIN/AEI/10.13039/501100011033) and from AGAUR (SGR-2021-01069).
This research was enabled in part by support provided by ACENET (www.ace-net.ca) and the Digital Research Alliance of Canada ([https://alliancecan.ca](https://alliancecan.ca)).
This work has also benefited from a variety of Python packages including astropy(Astropy Collaboration et al., 2013, 2018), corner(Foreman-Mackey, 2016), dynesty(Speagle, 2020), emcee(Foreman-Mackey, 2016), h5py(Collette et al., 2022), matplotlib(Hunter, 2007), numpy(Harris et al., 2020), scipy(Virtanen et al., 2020) and shapely(Gillies et al., 2022).
## Data Availability
The data underlying this article are available at [https://github.com/mudickson/GCfit-results](https://github.com/mudickson/GCfit-results). Extracted Gaia EDR3 PM dispersion profiles are also available in Zenodo, at [https://dx.doi.org/10.5281/zenodo.7344596](https://dx.doi.org/10.5281/zenodo.7344596).
|
2308.03352
|
The cycling mechanism of manganese-oxide cathodes in zinc batteries: A
theory-based approach
|
Zinc-based batteries offer good volumetric energy densities and are
compatible with environmentally friendly aqueous electrolytes. Zinc-ion
batteries (ZIBs) rely on a lithium-ion-like Zn$^{2+}$-shuttle, which enables
higher roundtrip efficiencies and better cycle life than zinc-air batteries.
Manganese-oxide cathodes in near-neutral zinc sulfate electrolytes are the most
prominent candidates for ZIBs. Zn$^{2+}$-insertion, H$^+$-insertion, and
Mn$^{2+}$-dissolution are proposed to contribute to the charge-storage
mechanism. During discharge and charge, two distinct phases are observed.
Notably, the pH-driven precipitation of zinc-sulfate-hydroxide is detected
during the second discharge phase. However, a complete and consistent
understanding of the two-phase mechanism of these ZIBs is still missing. This
paper presents a continuum full cell model supported by DFT calculations to
investigate the implications of these observations. We integrate the
complex-formation reactions of near-neutral aqueous electrolytes into the
battery model and, in combination with the DFT calculations, draw a consistent
picture of the cycling mechanism. We investigate the interplay between
electrolyte pH and reaction mechanisms at the manganese-oxide cathodes and
identify the dominant charge-storage mechanism. Our model is validated with
electrochemical cycling data, cyclic voltammograms, and in-situ pH measurements.
This allows us to analyse the influence of cell design and electrolyte
composition on cycling and optimize the battery performance.
|
Niklas J. Herrmann, Holger Euchner, Axel Groß, Birger Horstmann
|
2023-08-07T07:09:37Z
|
http://arxiv.org/abs/2308.03352v1
|
# The Cycling Mechanism of Manganese-Oxide Cathodes in Zinc Batteries: A Theory-Based Approach
###### Abstract
Zinc-based batteries offer good volumetric energy densities and are compatible with environmentally friendly aqueous electrolytes. Zinc-ion batteries (ZIBs) rely on a lithium-ion-like Zn\({}^{2+}\)-shuttle, which enables higher roundtrip efficiencies and better cycle life than zinc-air batteries. Manganese-oxide cathodes in near-neutral zinc sulfate electrolytes are the most prominent candidates for ZIBs. Zn\({}^{2+}\)-insertion, H\({}^{+}\)-insertion, and Mn\({}^{2+}\)-dissolution are proposed to contribute to the charge-storage mechanism. During discharge and charge, two distinct phases are observed. Notably, the pH-driven precipitation of zinc-sulfate-hydroxide is detected during the second discharge phase. However, a complete and consistent understanding of the two-phase mechanism of these ZIBs is still missing. This paper presents a continuum full cell model supported by DFT calculations to investigate the implications of these observations. We integrate the complex-formation reactions of near-neutral aqueous electrolytes into the battery model and, in combination with the DFT calculations, draw a consistent picture of the cycling mechanism. We investigate the interplay between electrolyte pH and reaction mechanisms at the manganese-oxide cathodes and identify the dominant charge-storage mechanism. Our model is validated with electrochemical cycling data, cyclic voltammograms, and in-situ pH measurements. This allows us to analyse the influence of cell design and electrolyte composition on cycling and optimize the battery performance.
## I Introduction
Zinc-metal anodes feature competitive energy densities and are sufficiently stable in aqueous electrolytes, which are environmentally friendly, cheap, and have excellent ionic conductivity. Several primary zinc batteries based on alkaline electrolytes, such as zinc-carbon, zinc-air, or alkaline MnO\({}_{2}\) batteries, have been in commercial use for a long time [1]. However, these alkaline zinc batteries were never successfully commercialized as secondary batteries because they exhibit very limited rechargeability [2; 3].
Modern rechargeable zinc-ion batteries use similar materials and electrodes as Zn-MnO\({}_{2}\) batteries but with non-alkaline electrolytes [4]. In 1986, Yamamoto and coworkers presented a battery with a metallic zinc anode and MnO\({}_{2}\) cathode [5; 6]. Instead of the KOH electrolyte, they used a near-neutral aqueous solution of ZnSO\({}_{4}\) as electrolyte. This early experiment showed a rechargeability significantly better than that of the alkaline predecessors, but still limited to around 30 cycles [5]. Different inorganic zinc salts were tested as aqueous electrolytes [6], of which ZnSO\({}_{4}\), which is still the most popular [7], showed the highest achievable capacity. At the beginning of the 21\({}^{\text{st}}\) century, improvements in cycling stability re-sparked interest in ZIBs, leading to a rapidly growing amount of research in the last decade. Several other cathode materials were tested [1]: vanadates achieving high stabilities [8; 9], Prussian blue analogs with extraordinary cycling stability [10], and organic cathode materials with promising capacities [11; 12]. Nevertheless, manganese-based cathodes are still the most promising, combining a well-established production chain with competitive overall cell performance. There are several approaches to increase the cell voltage, which typically require extending the electrolyte stability window by using non-aqueous electrolytes [13; 14]. However, aqueous electrolytes achieve higher energy densities and offer a price advantage and excellent eco-friendliness.
In the last decade, research achieved a significant increase in cycling stability and investigated the details of the cycling characteristics. It was observed that both discharging as well as charging voltages show two distinct phases [15; 16; 17; 18]. While the discharge and charge phases at high state of charge (SOC) exhibit rather fast kinetics, the second phase at the end of discharge and the beginning of charge only shows slow kinetics [15; 19; 20]. A dip in cell potential is present between the two discharge phases. It is most pronounced in microstructured MnO\({}_{2}\) cathodes. This voltage dip during discharge correlates with the onset of Zn\({}_{4}\)(OH)\({}_{6}\)SO\({}_{4}\) (ZHS) precipitation. Precipitation of ZHS occurs at the MnO\({}_{2}\) cathode during the second phase of the discharge, and the ZHS is dissolved again at the beginning of charge, as demonstrated by in-situ spectroscopy [18; 21]. Additionally, different polymorphs of MnO\({}_{2}\) are considered and studied as electrodes, but \(\delta\)-MnO\({}_{2}\) with its layered structure is often regarded as the most promising [22; 23; 24]. Furthermore, the cell is optimized by varying electrolyte concentration and composition [7; 25]. In particular, pre-adding a Mn\({}^{2+}\) salt to optimize cycling performance has been evaluated [26; 27; 28].
The precipitation of ZHS indicates a reaction process that changes the electrolyte pH. ZHS precipitates at a pH \(\approx 5.5\), which is more alkaline than that of the pristine ZnSO\({}_{4}\) electrolyte [29]. The co-insertion of H\({}^{+}\) is often invoked to explain the observed pH shift. Lately, research papers have focused on the dissolution process of Mn\({}^{2+}\) ions leaching from the cathode [26; 30; 31]. Both the insertion of H\({}^{+}\) and the dissolution of Mn\({}^{2+}\) result in an increase in electrolyte pH [29; 32]. Experiments with analytical measurements during cycling have shown that reversible variations of the Mn\({}^{2+}\) concentration in the electrolyte occur [15; 17]. The importance of cathodic dissolution for understanding the cycling mechanism of MnO\({}_{2}\)-based ZIBs is further highlighted by the recently published works of Chen et al. [18], Godeffroy et al. [16], and Yang et al. [17].
In this paper, we present a theory-based approach and identify the cycling mechanism of ZIBs (see Figure 1). We focus on the behavior of ZIBs with a MnO\({}_{2}\) cathode in an aqueous ZnSO\({}_{4}\) solution. With the help of density functional theory (DFT) calculations, we investigate the properties of \(\delta\)-MnO\({}_{2}\) and evaluate the dissolution and insertion potentials of the experimentally proposed processes [4]. Additionally, we use thermodynamic calculations of the equilibrium speciation of the ZnSO\({}_{4}\) electrolyte to identify electrolyte stability with respect to precipitation and to quantify the pH buffering properties. Based on this result, we develop our ZIB model describing the dynamic cell behavior. We implement a pseudo-two-dimensional (P2D) cell model which uses the quasi-particle transport theory derived by Clark et al. [33; 34; 35]. We investigate both the zinc- and proton-insertion mechanisms as well as Mn\({}^{2+}\) dissolution (Figure 1) and compare them with evidence from electrochemical cycling measurements. With this model, we elucidate the cycling mechanism of ZIB cells and use it for cell optimizations.
## II Theory
### Density Functional Theory (DFT)
Density functional theory (DFT) is the standard tool for material simulations [36; 37]. Based on the MnO\({}_{2}\) structure, we calculate the open circuit voltage (OCV) and compare different proposed reaction processes. For this purpose, we simulate the electronic structure of H\({}_{x}\)Zn\({}_{y}\)MnO\({}_{2}\cdot\) H\({}_{2}\)O with H content \(x\in[0,1]\) as well as Zn content \(y\in[0,0.5]\) and calculate the total energy \(E_{\rm tot}\) of the relevant MnO\({}_{2}\) structures for H\({}^{+}\) and Zn\({}^{2+}\) insertion. We approximate the overall difference in the Gibbs free energy \(\Delta G\) as
\[\Delta G\approx\Delta E-T\Delta S^{\rm conf}\, \tag{1}\]
where \(S^{\rm conf}\) is the configurational entropy of the structure and \(\Delta E\) the difference in the total energies calculated by DFT. Extending the computational hydrogen electrode to the computational zinc electrode [36; 37], we derive convenient expressions for the electrochemical potentials \(\tilde{\mu}_{i}=\mu_{i}+z_{i}eU\) for Zn\({}^{2+}\) and H\({}^{+}\) thus avoiding explicit calculations of solvation energies. This approach uses the circumstance, that the equilibria at standard conditions can be used to express the electrochemical potentials of solvated ions through molecular or atomic chemical potentials [36]. In detail, the definition for the standard hydrogen potential \(U_{\rm SHE}\) uses the equilibrium of dissolved protons and hydrogen in the gas phase,
\[\Delta\tilde{\mu}_{\rm H^{+}} =\tilde{\mu}_{\rm H^{+}(aq)}+\tilde{\mu}_{e^{-}}-\frac{1}{2}E_{ \rm H_{2}}\] \[=-eU_{\rm SHE}-k_{\rm B}T\ln(10){\rm pH}. \tag{2}\]
Analogously, the electrochemical potential for Zn\({}^{2+}\) in solution is calculated as
\[\Delta\tilde{\mu}_{\rm Zn^{2+}} =\tilde{\mu}_{\rm Zn^{2+}(aq)}+2\tilde{\mu}_{e^{-}}-E_{\rm Zn}\] \[=-2e\left(U_{\rm SHE}-U_{0}\right)-k_{\rm B}T\ln(a_{\rm Zn^{2+}})\, \tag{3}\]
where \(U_{0}\) is the standard potential of zinc vs. SHE.
Finally, we derive the insertion potential as \(U_{\rm ins}=-\Delta G/\left(e\sum_{i}z_{i}\Delta N_{i}\right)\), where
\[\Delta G=E_{\rm tot}^{\rm H_{x}Zn_{y}MnO_{2}}-E_{\rm tot}^{\rm MnO_{2}}-T\cdot\left(S_{\rm conf}^{\rm H_{x}Zn_{y}MnO_{2}}-S_{\rm conf}^{\rm MnO_{2}}\right)-x\cdot\left(\frac{1}{2}E_{\rm tot}^{\rm H_{2}(gas)}+\Delta\tilde{\mu}_{\rm H^{+}}\right)-y\cdot\left(E_{\rm tot}^{\rm Zn(bulk)}+\Delta\tilde{\mu}_{\rm Zn^{2+}}\right). \tag{4}\]
The quantitative contribution of the configurational entropy can be found in the Supporting Information (Figure S2). At room-temperature, \(T=300\,\)K, the relative influence is in the order of \(1k_{\rm B}T\approx 25\,\)meV.
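A minimal sketch of how Equations (1)-(4) could be evaluated numerically is given below, with energies in eV and potentials in V (so that the elementary charge is unity). The function names and default values (e.g. the standard Zn\({}^{2+}\)/Zn potential of \(-0.76\) V vs. SHE) are illustrative assumptions, not part of the original workflow.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant [eV/K]

def delta_mu_H(U_she, pH, T=300.0):
    """Eq. (2): shift of the electrochemical potential of solvated H+ (computational H electrode)."""
    return -U_she - KB * T * np.log(10.0) * pH

def delta_mu_Zn(U_she, a_zn, U0_zn=-0.76, T=300.0):
    """Eq. (3): shift of the electrochemical potential of solvated Zn2+ (computational Zn electrode)."""
    return -2.0 * (U_she - U0_zn) - KB * T * np.log(a_zn)

def insertion_potential(E_host, E_inserted, dS_conf, x_H, y_Zn,
                        E_H2_gas, E_Zn_bulk, dmu_H, dmu_Zn, T=300.0):
    """Eqs. (1) and (4): open-circuit insertion potential from DFT total energies [eV].
    x_H + 2*y_Zn electrons are transferred per formula unit."""
    dG = (E_inserted - E_host - T * dS_conf
          - x_H * (0.5 * E_H2_gas + dmu_H)
          - y_Zn * (E_Zn_bulk + dmu_Zn))
    return -dG / (x_H + 2.0 * y_Zn)

# Evaluating the chemical-potential shifts at U_SHE = 0 references the result to the SHE scale:
# dmu_H = delta_mu_H(0.0, pH=4.8); dmu_Zn = delta_mu_Zn(0.0, a_zn=2.0)
```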
Analogous reasoning leads to the dissolution potential \(U_{\rm diss}\) of the respective H\({}_{x}\)Zn\({}_{y}\)MnO\({}_{2}\cdot\)H\({}_{2}\)O species. Here, we extend the calculations of the chemical potential to include Mn\({}^{2+}\). The dissolution potential can then be calculated as \(U_{\rm diss}=-\Delta G/\left(e\sum_{i}z_{i}\Delta N_{i}\right)\). For the dissolution, \(\Delta G\) only includes the Gibbs free energy of the dissolved structure. All reaction products are dissolved ions, and their energy contributions are therefore included through their electrochemical potentials. The full details can be found in the Supporting Information.
Figure 1: Schematic overview on proposed charge-storage mechanisms in ZIB. The redox reaction of the zinc metal anode is shown on the left (I). At the cathode, the electrochemical reactions are the Zn\({}^{2+}\)-Insertion (II.a), the Mn\({}^{2+}\) dissolution (II.b) and the insertion of H\({}^{+}\) (II.c). The precipitation of ZHS (III), which is experimentally observed at the cathode, is shown in the lower right of the figure.
### Continuum Cell Model
#### Equilibrium Speciation & Quasi-Particle Transport model
When simulating the transport of near-neutral aqueous electrolytes, we must follow the dynamics of multiple species formed by zinc and its ligands. Not only is this computationally costly by increasing the problem dimensionality, but it also decreases numerical stability due to the different timescales and the non-linearity of complex-forming reaction kinetics. Our approach builds upon the quasi-particle transport model developed and applied in previous works on near-neutral zinc-air batteries [33, 34, 35].
The presented quasi-particle framework utilizes an abstraction level to resolve the dynamic behavior in aqueous electrolytes. We define quasi-particles so that their concentrations are invariant under the complex-formation reactions. This allows us to decouple slow electrolyte transport and slow heterogeneous reactions from fast complex formation reactions. We calculate the transport of Zn\({}^{2+}\)-quasi-particles instead of each Zn-ligand complex individually. On the side, we solve for electrolyte equilibrium speciation with algebraic equations defining the respective quasi-particles. In this work, we use the quasi-particles Zn\({}^{2+}_{\rm T}\), H\({}^{+}_{\rm T}\), Mn\({}^{2+}_{\rm T}\) and SO\({}^{2-}_{\rm T}\). The index \({}_{\rm T}\) denotes total concentration. For example, Zn\({}^{2+}_{\rm T}\) is the total concentration of Zn atoms, defined as
\[[\text{Zn}^{2+}_{\rm T}]= [\text{Zn}^{2+}]+\sum_{n=1}^{4}[\text{Zn}(\text{SO}_{4})_{n}{}^{2 (1-n)}]\] \[+ \sum_{n=1}^{4}[\text{Zn}(\text{OH})_{n}{}^{2-n}]\] \[+ 2\cdot\left([\text{Zn}_{2}\text{OH}^{3+}]+[\text{Zn}_{2}(\text{ OH})_{6}{}^{2-}]\right)\] \[+ 4\cdot[\text{Zn}_{4}(\text{OH})_{4}{}^{4+}]. \tag{5}\]
Here, \(n\) is the stoichiometry of the zinc-sulfate complex, and square brackets indicate a concentration (\([\text{X}]=c_{\text{X}}\)). Consequently, the electrolyte pH is given by the H\({}^{+}\) concentration as pH \(=-\log_{10}c_{\text{H}^{+}}/c_{0}\). We equate concentrations and activities, as we verified that using measured activity coefficients does not significantly alter our results. All details of the quasi-particle Ansatz can be found in Ref. [34]. Our definitions of quasi-particles are given in the Supporting Information (Subsection 1.1).
Homogeneous reactions govern the formation of complexes in the electrolyte. In equilibrium, the law of mass action determines the ratio of reaction products and reactants. For example for Zn(SO\({}_{4}\))\({}_{2}{}^{2-}\), the law of mass action reads
\[\frac{c_{\text{Zn}(\text{SO}_{4})_{2}{}^{2-}}}{c_{\text{Zn}^{2+}}\ c_{\text{ SO}^{2-}}^{2}}=\beta\, \tag{6}\]
with \(\beta=10^{-3.28}\) from Ref. [38]. We use laws of mass action to express the concentrations on the right side of Equation (5) with the concentrations of the elementary ions Zn\({}^{2+}\), H\({}^{+}\), Mn\({}^{2+}\) and SO\({}_{4}{}^{2-}\). By combining the resulting set of algebraic equations with the charge-neutrality equation for quasi-particles,
\[0=2\cdot[\text{Zn}^{2+}_{\rm T}]+[\text{H}^{+}_{\rm T}]+2\cdot[\text{Mn}^{2+ }_{\rm T}]-2\cdot[\text{SO}_{4}{}^{2-}_{\rm T}]\, \tag{7}\]
we calculate the concentrations of all complexes, i.e., the equilibrium electrolyte speciation. The homogeneous electrolyte reactions for this work and the used stability constants [38, 39, 40, 41] are given in the Supporting Information (Table S1).
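To illustrate how the totals of Equation (5), the laws of mass action (Equation 6), and the charge balance of Equation (7) together fix the speciation, the sketch below solves a deliberately reduced system containing only the Zn\({}^{2+}\)/SO\({}_{4}{}^{2-}\) ion pair. The association constant is an illustrative placeholder, and the full model adds H\({}^{+}\), Mn\({}^{2+}\), and all hydroxide and polynuclear complexes in exactly the same way.

```python
import numpy as np
from scipy.optimize import fsolve

BETA = 10 ** 2.3   # placeholder association constant for Zn2+ + SO4^2- <-> ZnSO4(aq)

def residuals(c_free, zn_tot, so4_tot):
    """Mass-action relation (cf. Eq. 6) and quasi-particle totals (cf. Eq. 5) for the reduced system."""
    c_zn, c_so4 = c_free
    c_pair = BETA * c_zn * c_so4              # law of mass action for the ion pair
    return [c_zn + c_pair - zn_tot,           # total Zn quasi-particle concentration
            c_so4 + c_pair - so4_tot]         # total SO4 quasi-particle concentration

# 2 mol/L ZnSO4 electrolyte: solve for the free-ion concentrations
c_zn, c_so4 = fsolve(residuals, x0=[0.1, 0.1], args=(2.0, 2.0))
print(c_zn, c_so4, BETA * c_zn * c_so4)       # free ions and ion-pair concentration
```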
We simulate transport for 4 quasi-particles instead of 24 complexes. Derived from consistent transport theory [34, 42], quasi-particle dynamics is calculated with the continuity equation
\[\frac{\partial\epsilon_{\text{e}}c_{q}}{\partial t}=-\vec{\nabla}\cdot\left( \sum_{i}\tau_{i,q}\vec{N}^{\text{DM}}_{\rm i}\right)+\dot{s}_{q}. \tag{8}\]
Here, \(\tau_{i,q}\) represents the stoichiometry of the solute \(i\) in the quasi-particle \(q\) and \(\epsilon_{\text{e}}\) the electrolytes volume fraction. The important feature of Equation (8) is that the diffusion-migration flux of the quasi-particle is given by the weighted sum of the individual species \(\vec{N}^{\text{DM}}_{\rm q}=\sum_{i}\tau_{i,q}\vec{N}^{\text{DM}}_{\rm i}\). The diffusion-migration flux of all individual species \(\vec{N}^{\text{DM}}_{\rm i}\) is calculated as
\[\vec{N}^{\text{DM}}_{\rm i}=-\epsilon_{\text{e}}^{\beta}D_{i}\vec{\nabla}c_{i}+\frac{t_{i}}{z_{i}F}\vec{J}\, \tag{9}\]
where \(D_{i}\) is the diffusion coefficient, \(z_{i}\) the charge number, \(t_{i}\) the transference number of the species, and \(\vec{J}=-\kappa\vec{\nabla}\phi_{\rm elyt}\) is the current density, given by the gradient of the electrolyte potential \(\phi_{\rm elyt}\). We neglect the convection velocity [42, 43], as the electrolyte volume in ZIBs remains approximately constant. Electro-neutrality is enforced by the charge-conservation equation
\[0=-\vec{\nabla}\cdot\vec{J}+\sum_{i}z_{i}\dot{s}_{i}^{\rm e}\, \tag{10}\]
where \(\dot{s}_{i}^{\rm e}\) is the source term due to the electrochemical reactions at the electrodes and is identical to the formulation in the regular Doyle-Fuller-Newman (DFN) models.
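As a minimal sketch of how Equations (8) and (9) are advanced in such a discretized cell model, the snippet below performs an explicit finite-volume update of a single quasi-particle concentration profile with the diffusive part of the flux only; the migration term \(t_{i}\vec{J}/(z_{i}F)\), the reaction source terms, and the coupling to Equation (10) are omitted, and all numerical values are placeholders.

```python
import numpy as np

def step_concentration(c, dx, dt, D, eps=0.7, beta=1.5):
    """One explicit Euler step of Eq. (8) for a quasi-particle profile c [mol/m^3],
    using only the diffusive part of the flux of Eq. (9)."""
    D_eff = eps ** beta * D                       # Bruggeman-corrected diffusivity
    flux = -D_eff * np.diff(c) / dx               # diffusive flux at interior cell faces
    flux = np.concatenate(([0.0], flux, [0.0]))   # no-flux boundary conditions
    dc_dt = -np.diff(flux) / dx / eps             # divergence of the flux, Eq. (8)
    return c + dt * dc_dt

c = np.full(50, 2000.0)   # 2 mol/L of Zn2+_T, uniform initial profile
c[:5] = 1500.0            # local depletion, e.g. near the anode during deposition
for _ in range(1000):
    c = step_concentration(c, dx=1e-5, dt=1e-3, D=7e-10)
```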
#### Electrochemical and Precipitation Reactions
Our continuum cell model contains the rates of the electrochemical half-cell reactions and the relevant precipitation reaction [4]. These are the electrochemical dissolution and deposition of the metallic zinc anode, the electrochemical insertion reaction of both Zn\({}^{2+}\) and H\({}^{+}\), the electrochemical dissolution of Zn\({}_{0.5}\)MnO\({}_{2}\), and the precipitation of ZHS.
The zinc metal anode dissolves and reforms as redox reaction [44],
\[\text{Zn}\rightleftharpoons\text{Zn}^{2+}+2\,\text{e}^{-}. \tag{11}\]
This is the bare redox reaction. Upon solvation, the Zn\({}^{2+}\) ions form complexes in the electrolyte, e.g., ZnSO\({}_{4}\). Our quasi-particle formalism accounts for this formation of zinc-ligand complexes as discussed in Subsection II.2. We calculate the
reaction rate of the Zn\({}^{2+}\) redox reaction using a symmetric Butler-Volmer rate,
\[k_{\text{ano}}=k_{\text{ano}}^{0}\cdot\sqrt{\frac{c_{Zn^{2+}}}{c_{0}}}\sinh\left( \frac{zF}{2RT}\cdot\eta_{\text{ano}}\right), \tag{12}\]
where \(\eta_{\text{ano}}\) is the overpotential at the anode surface, determined by the difference between electrode and electrolyte potential, i.e., \(\eta_{\text{ano}}=\phi_{\text{elde}}-\phi_{\text{elyt}}-(U_{0,\text{Zn}}+ \nicefrac{{RT}}{{2F}}\,\text{log}\,c_{Zn^{2+}}/c_{0})\).
The MnO\({}_{2}\) cathode structure allows for the insertion of mono- and multivalent ions like H\({}^{+}\) and Zn\({}^{2+}\) [45]. For the insertion of H\({}^{+}\), the reaction equation reads
\[2\,\text{H}^{+}+2\,\text{MnO}_{2}+2\,\text{e}^{-}\rightleftharpoons 2\,\text{HMnO}_{2}\;, \tag{13}\]
and the insertion reaction of Zn\({}^{2+}\) is
\[\text{Zn}^{2+}+\text{Mn}_{2}\text{O}_{4}+2\,\text{e}^{-}\rightleftharpoons\text{ZnMn}_{2}\text{O}_{4}\;. \tag{14}\]
The corresponding Butler-Volmer rates are
\[k_{\text{ins}}=k_{\text{ins}}^{0}\cdot\sqrt{\text{SOC}\cdot(1-\text{SOC}) \cdot\frac{c_{i}}{c_{0}}}\sinh\left(\frac{z_{i}F}{2RT}\cdot\eta_{\text{ins}} \right). \tag{15}\]
Here, \(c_{i}\) is the electrolyte concentration of the insertion species and \(\eta_{\text{ins}}\) the corresponding overpotential. The exchange current density as prefactor depends on the state of charge (SOC). We define it as \(\text{SOC}=c_{i,\text{solid}}/c_{\text{max}}\), where \(c_{i,\text{solid}}\) is the concentration of Zn or H in the cathode and \(c_{\text{max}}\) is their maximal concentration in the material, i.e., \(\text{HMnO}_{2}\), \(\text{Zn}_{0.5}\text{MnO}_{2}\). These Butler-Volmer equations are adapted from thermodynamical derivations for the insertion reactions in Li-ion batteries [46].
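A small sketch of the symmetric Butler-Volmer rates of Equations (12) and (15) is given below; the rate constants, concentrations, and overpotential are placeholder values.

```python
import numpy as np

F = 96485.0   # Faraday constant [C/mol]
R = 8.314     # gas constant [J/(mol K)]

def bv_anode_rate(k0, c_zn, eta, z=2, c0=1000.0, T=298.15):
    """Symmetric Butler-Volmer rate of Eq. (12) for zinc dissolution/deposition."""
    return k0 * np.sqrt(c_zn / c0) * np.sinh(z * F * eta / (2.0 * R * T))

def bv_insertion_rate(k0, soc, c_i, eta, z, c0=1000.0, T=298.15):
    """Symmetric Butler-Volmer rate of Eq. (15) for Zn2+ (z=2) or H+ (z=1) insertion."""
    return k0 * np.sqrt(soc * (1.0 - soc) * c_i / c0) * np.sinh(z * F * eta / (2.0 * R * T))

# e.g. a half-charged cathode, 1 mol/L Zn2+ in the electrolyte, 30 mV insertion overpotential
rate = bv_insertion_rate(k0=1e-6, soc=0.5, c_i=1000.0, eta=-0.03, z=2)
```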
Additionally, the electrochemical dissolution and deposition of Mn\({}^{2+}\) occur at the cathode. Based on DFT calculations presented in Subsection III.1, we will demonstrate that the dissolution of Zn\({}_{0.5}\)MnO\({}_{2}\) is the most relevant,
\[2\,\text{Zn}_{0.5}\text{MnO}_{2}+2\,\text{e}^{-}+8\,\text{H}^{+}\rightleftharpoons\text{Zn}^{2+}+2\,\text{Mn}^{2+}+4\,\text{H}_{2}\text{O}\;. \tag{16}\]
The open circuit voltage of this process \(U_{\text{diss}}\) is given as
\[U_{\text{diss}}=U_{\text{ref}}+\frac{RT}{zF}\left[\log\left(\frac{c_{\text{Zn}^{2+}}}{c_{0}}\cdot\left(\frac{c_{\text{Mn}^{2+}}}{c_{0}}\right)^{2}\right)-8\log\left(\frac{c_{\text{H}^{+}}}{c_{0}}\right)\right]\;. \tag{17}\]
We model the dissolution and deposition rates for this reaction in analogy to the insertion reactions above,
\[k_{\text{diss}}=k_{\text{diss}}^{0}\cdot\sqrt{\frac{c_{\text{Zn}_{0.5}\text{MnO}_{2}}}{c_{\text{Zn}_{0.5}\text{MnO}_{2},\text{max}}}}\cdot\sinh\left(\frac{zF}{2RT}\cdot\left(\phi_{\text{cat}}-\phi_{\text{elyt}}-U_{\text{diss}}\right)\right)\;. \tag{18}\]
The equilibrium and kinetics of precipitation reactions depend on the electrolyte pH. We will show in Figure 3 that ZHS is the only relevant precipitate in the cell studied here. Thus, we include the precipitation of ZHS, a zinc hydroxide sulfate salt, in the cell model. The charge-neutral precipitation reaction of ZHS is given by
\[4\,\text{Zn}^{2+}+\text{SO}_{4}{}^{2-}+6\,\text{OH}^{-}\rightleftharpoons\text{Zn}_{4}(\text{OH})_{6}\text{SO}_{4}\;, \tag{19}\]
which depends on pH through OH\({}^{-}\) concentration. Based on this, we calculate the saturation concentration for uncomplexed Zn\({}^{2+}\) with respect to ZHS precipitation as a function of pH and \(\text{SO}_{4}{}^{2-}\) as
\[c_{\text{sat}}=\left(\frac{K_{\text{sp}}}{c_{\text{OH}^{-}}^{6}\,c_{\text{SO}_{4}{}^{2-}}}\right)^{\frac{1}{4}}\;, \tag{20}\]
with the solubility product \(K_{\text{sp}}\). We describe the precipitation (and re-dissolution) of ZHS as a diffusion-limited process,
\[k_{\text{prec}}=A_{\text{spec}}D_{Zn^{2+}}\cdot\epsilon^{\beta} \cdot\frac{c_{Zn^{2+}}-c_{\text{sat}}}{\delta_{0}}\;, \tag{21}\]
with the diffusion layer thickness \(\delta_{0}\). We model the nucleation process for ZHS with the oversaturation approach adopted from earlier works [47] with the critical supersaturation ratio \(s_{\text{critical}}=105\,\%\) as for ZnO in Ref. [33].
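A minimal sketch of Equations (20) and (21), together with the supersaturation criterion for nucleation, is given below; the solubility product must be supplied from the literature, and all other numerical values are placeholders.

```python
import numpy as np

def zn_saturation(pH, c_so4, K_sp, Kw=1e-14):
    """Saturation concentration of free Zn2+ with respect to ZHS, cf. Eqs. (19)-(20).
    K_sp is the solubility product of Zn4(OH)6SO4 (literature value required)."""
    c_oh = Kw / 10.0 ** (-pH)
    return (K_sp / (c_oh ** 6 * c_so4)) ** 0.25

def zhs_rate(c_zn, c_sat, A_spec, eps, D_zn=7e-10, beta=1.5, delta0=1e-6):
    """Diffusion-limited precipitation/dissolution rate of Eq. (21)."""
    return A_spec * D_zn * eps ** beta * (c_zn - c_sat) / delta0

def nucleated(c_zn, c_sat, s_critical=1.05):
    """Nucleation criterion: precipitation starts only above the critical supersaturation (105 %)."""
    return c_zn > s_critical * c_sat
```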
## III Simulation results
In this section, we discuss the results of our calculations for the open circuit voltages of the electrodes, the equilibrium speciation and pH in the electrolyte, and the voltages during cycling. First, we present the calculated energies of the \(\delta\)-MnO\({}_{2}\cdot\)H\({}_{2}\)O electrode structures and interpret their implications for the insertion and dissolution reactions (see Subsection III.1). Next, we use equilibrium thermodynamics to calculate the evolution of the electrolyte pH and discuss the occurring precipitation reactions (see Subsection III.2). We discuss the relevance of H\({}^{+}\)-insertion into MnO\({}_{2}\) during the first discharge phase. In Subsection III.3, we simulate the cell dynamics in both discharge phases and discuss the transition to the second phase and the effects of ZHS precipitation at the cathode.
### Electrode Potentials (DFT)
The combination of the chemical potentials of the electrolyte species and the structural energies of the \(\delta\)-MnO\({}_{2}\) crystal structure allows estimating the likelihood of the relevant electrochemical structures and thereby reactions, namely Zn\({}^{2+}\)- or H\({}^{+}\)-insertion and dissolution of the cathode structure. Therefore, we performed DFT calculations and thermodynamic calculations (see Subsection III.2) in order to calculate the corresponding open-circuit voltages (see Subsection II.1). Simulations of the structures for the proposed insertion states [48; 49], \(\text{HMnO}_{2}\cdot\text{H}_{2}\text{O}\) and \(\text{Zn}_{0.5}\text{Mn}_{2}\text{O}_{4}\cdot\text{H}_{2}\text{O}\), as well as a mixture of both, H\({}_{x}\)Zn\({}_{y}\)MnO\({}_{2}\cdot\text{H}_{2}\text{O}\), were performed and analyzed. The calculated structures can be found in the Supporting Information. By using Equation (4), we calculate the theoretical insertion potentials for the distinct phases in a given environment. We do this by using the chemical potentials of the dissolved species, calculated according to Subsection III.2. In Figure 2, the relative insertion potentials for the stepwise reactions at all stoichiometrically valid compositions are shown. For pure H\({}^{+}\)-insertion, we investigated the structures of H\({}_{0.25}\)MnO\({}_{2}\), H\({}_{0.5}\)MnO\({}_{2}\) and HMnO\({}_{2}\). Within a 2 m ZnSO\({}_{4}\) + 0.5 m MnSO\({}_{4}\)
electrolyte, the insertion potential decreases from 2.91 V to 2.32 V at the end of the insertion process. The investigated structures with solely Zn\({}^{2+}\)-insertion show insertion potentials between 1.81 V and 1.33 V. The insertion potentials for the H\({}^{+}\)-insertion are greater at any point in the investigated phase space.
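To make the bookkeeping behind such stepwise potentials concrete, the sketch below implements the generic relation \(U=-\Delta G/(zF)\) between neighboring insertion states. It is only a schematic stand-in for Equation (4): all energies and the chemical potential are invented placeholder numbers, not the DFT results reported in this work.

```python
# Schematic stepwise insertion potentials for H_x MnO2 * H2O (placeholder numbers only).
E_f = {0.00: -60.00, 0.25: -61.10, 0.50: -62.15, 1.00: -64.05}  # eV per formula unit
mu_H = -3.50   # eV, placeholder chemical potential of (H+ + e-) in the electrolyte
z = 1          # electrons transferred per inserted proton

xs = sorted(E_f)
for x1, x2 in zip(xs, xs[1:]):
    dG = E_f[x2] - E_f[x1] - (x2 - x1) * mu_H     # reaction energy of the step (eV)
    U = -dG / (z * (x2 - x1))                     # stepwise insertion potential (V)
    print(f"x: {x1:.2f} -> {x2:.2f}   U = {U:.2f} V")
```

The same bookkeeping applies to Zn\({}^{2+}\) insertion with two electrons transferred per inserted ion.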
An electrochemical dissolution reaction of the MnO\({}_{2}\)-cathode is clearly observed in the literature [50; 51; 31] and on some occasions attributed as the key mechanism [16; 17; 18] for the two-phase behavior. We use the energies of formation acquired from DFT to evaluate the equilibrium potential for the dissolution reaction. The dissolution potential is given by the calculated \(E_{\mathrm{f}}\) of the cathode, the chemical potential of the individual species in the electrolyte and the \(E_{\mathrm{f}}\) for the bulk phase of Mn, Zn and H\({}_{2}\)(gas). The results are listed in Table 1. The energetically most favorable dissolution reaction is \(2\,\mathrm{Zn}_{0.5}\mathrm{MnO}_{2}+8\,\mathrm{H}^{+}+2\,\mathrm{e}^{-}\rightleftharpoons 2\,\mathrm{Mn}^{2+}+\mathrm{Zn}^{2+}+4\,\mathrm{H}_{2}\mathrm{O}\). We show the corresponding simulation for Mn\({}^{2+}\) dissolution in the Supporting Information (Figure S4).
We find that the pH around equilibrium is highly sensitive to the H\({}^{+}\) insertion reaction. The saturation limit is reached after 2 \(\mu\)mol L\({}^{-1}\) of H\({}^{+}\) are inserted into the cathode, i.e., removed from the electrolyte, which corresponds to a discharged capacity of 52 \(\mu\)A h mL\({}^{-1}\). In typical laboratory coin cells, reported electrolyte to active mass ratios are on the order of 30 mL g\({}^{-1}\) [25]. When we use our calculations to estimate the ZHS onset for these experimental electrolyte to active mass ratios, the onset is expected after approximately 0.15 mA h g\({}^{-1}\). However, the precipitation of ZHS is experimentally observed only after a discharged capacity greater than 100 mA h g\({}^{-1}\).
Thus, the first discharge phase cannot be dominated by H\({}^{+}\)-insertion (or MnO\({}_{2}\) dissolution) as experiments find no ZHS precipitation in this phase. In the non-equilibrium case for discharge at realistic rates, diffusion limitations further accelerate local pH change and ZHS precipitation as we discuss in detail in the Supporting Information in Subsection 3.3. In combination with the calculated electrode potentials (see Subsection III.1), we conclude that the H\({}^{+}\)-insertion reaction, even if it is energetically favorable, must be strongly kinetically suppressed and is not relevant for the cycling mechanism found in MnO\({}_{2}\)-based ZIBs.
### Discharge Phases (Cell Model)
As discussed above, the insertion of H\({}^{+}\) into the MnO\({}_{2}\) cathode cannot dominate the first discharge phase. Thus, we model the discharge with the combination of Zn\({}^{2+}\)-insertion and Mn\({}^{2+}\) dissolution at the cathode. We simulate the galvanostatic discharge of a laboratory coin cell in the presence of ZHS precipitation and plot the discharge voltage in Figure 5a. The cell voltage shows two discharge phases with a voltage dip in between, as generally reported in the literature [1]. The filled regions below the discharge curve represent the relative contributions of the Zn\({}^{2+}\)-insertion and Mn\({}^{2+}\)-dissolution reactions. The first discharge phase is dominated by the insertion of Zn\({}^{2+}\). The Mn\({}^{2+}\) dissolution sets in shortly before the voltage dip and becomes relevant in the second discharge region. This is in excellent agreement with the experimental findings of Wu et al. [15] that the Mn\({}^{2+}\) content in the electrolyte significantly increases in the second phase.
In the inset of Figure 5a, we neglect ZHS precipitation for comparison. In this case, only the first discharge phase is present and the contribution of Mn\({}^{2+}\) dissolution is negligible. Thus, ZHS precipitation is required to reproduce the two distinct discharge phases.
We therefore analyze electrolyte pH and the average ZHS volume fraction \(\epsilon_{\text{ZHS}}\) during discharge in Figure 5b. ZHS precipitation is limited to the second discharge phase. Its onset is correlated with the voltage dip in Figure 5a. The pH value increases during the first phase of discharge. At the end of the first phase, the pH rises sharply until it is reduced again at the onset of ZHS precipitation. During the second phase, the electrolyte pH slowly increases at first before rising strongly near the end of discharge.
We can rationalize this behavior based on the chemical reactions for Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution and Zn\({}^{2+}\) insertion. During discharge, Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution releases OH\({}^{-}\) into the electrolyte (see Equation (16)) so that the electrolyte becomes more alkaline. While the rate of Zn\({}^{2+}\) insertion is independent of pH, the equilibrium voltage of Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution (see Equation (17)) strongly decreases with electrolyte
Figure 4: Dependence of electrolyte pH and Zn\({}^{2+}\) saturation with respect to ZHS precipitation as a function of proton concentration for H\({}^{+}\) reactions in a 2 m ZnSO\({}_{4}\), 0.5 m MnSO\({}_{4}\) electrolyte. The pH is shown on the left, and zinc concentration and zinc saturation concentration are shown on the right; both are shown as a function of the amount of H\({}^{+}\) added to the electrolyte. We argue that the insertion of H\({}^{+}\) into MnO\({}_{2}\) during discharge would result in an identical decrease of [H\({}^{+}\)] in the electrolyte.
Figure 3: Phase diagram of electrolyte speciation and precipitation reaction for the ZnSO\({}_{4}\) electrolyte with 0.5 m MnSO\({}_{4}\) additive. The background colors depict the dominant aqueous zinc complexes. The solid-colored lines correspond to the solubility of the respective precipitates. The solid gray lines show paths of constant [SO\({}_{4}^{2-}\)], which are invariant with respect to electrochemical reactions. The white circle indicates the initial state of a benign solution and the dark gray line its corresponding isoline.
pH, \(U_{\mathrm{diss}}\approx U_{\mathrm{diss}}^{0}-238\,\mathrm{mV}\cdot\mathrm{pH}\). Consequently, the dissolution potential for this reaction drops. In turn, the pH increase limits the dissolution reaction as long as the cell voltage is stabilized by Zn\({}^{2+}\) insertion. When Zn\({}^{2+}\) insertion becomes more difficult due to transport limitations in the MnO\({}_{2}\) material, the cell voltage drops and Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution accelerates. As a consequence, the pH value increases quickly and ZHS starts to precipitate (see Figure 4). The pH-driven precipitation (see Figure 3) removes OH\({}^{-}\) from the electrolyte and stabilizes the pH near its saturation limit. ZHS precipitation thus enables significant Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution by removing its self-limiting mechanism.
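As a consistency check (not part of the original derivation), the magnitude of this slope follows directly from the Nernst equation if, as in the dissolution reaction discussed in Subsection III.1, eight protons are consumed per two electrons:

\[\frac{\partial U_{\mathrm{diss}}}{\partial\,\mathrm{pH}}=-\frac{n_{\mathrm{H}^{+}}}{n_{\mathrm{e}^{-}}}\,\frac{RT\ln 10}{F}\approx-\frac{8}{2}\times 59.2\,\mathrm{mV}\approx-237\,\mathrm{mV}\;,\]

in line with the value of roughly \(-238\,\mathrm{mV}\) per pH unit used above; the small difference depends only on the temperature assumed for \(RT\ln 10/F\).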
This interplay between ZHS precipitation and Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution, first described by our theory, is key to our consistent model of the cycling mechanism of MnO\({}_{2}\)-based ZIBs. The Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution, while energetically more favorable, is a self-limiting reaction in the first discharge phase without precipitation. The onset of ZHS precipitation, observed as a nucleation dip in the cell voltage, stabilizes the electrolyte pH and resolves the self-limitation of pure Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution. In the second discharge phase, the dissolution of the Zn\({}^{2+}\)-inserted manganese oxide Zn\({}_{0.5}\)MnO\({}_{2}\) contributes significantly to the overall capacity and drives the precipitation of ZHS. During charging, the ZHS is dissolved again and the cathode is redeposited. Laboratory ZIBs are often optimized with respect to capacity and are thus designed towards significant Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution. However, conversion electrodes are prone to shape change [44] and the deposition process of MnO\({}_{2}\)-structures can change their crystal structure [53]. Thus, we expect that this common optimization strategy limits cycle life and induces accelerated aging. We propose to reduce Zn\({}_{0.5}\)MnO\({}_{2}\) dissolution in order to reduce aging and capacity fade. In Subsection IV.2, we optimize the discharging strategy towards this rationale.
To gain further insight, we investigate the cycling behavior for different current densities. We simulate several cycles with
Figure 5: a) Galvanostatic discharge behavior at 200 mA g\({}^{-1}\) (equal to 0.4 mA cm\({}^{-2}\) at the simulated mass loading of 2 mg cm\({}^{-2}\)). Shown are the discharge voltages based on Zn\({}^{2+}\)-insertion and Mn\({}^{2+}\) dissolution. The main axis shows simulations including the ZHS precipitation reaction, while the simulation shown in the inset neglects this reaction. Only the full model reproduces the second discharge phase. The colored areas below the discharge potential represent the fractional contribution of the Zn\({}^{2+}\)-insertion and Mn\({}^{2+}\)-dissolution to the cell current. Here, Mn\({}^{2+}\)-dissolution becomes significant only in the second discharge phase. b) Dynamics of electrolyte pH and ZHS precipitation for the full cell model. Shown are the electrolyte pH at both anode and cathode as well as the average volume-fraction of ZHS in the cell. While there is a pH increase in both discharge phases, ZHS growth happens only in the second discharge phase. The pH at the end of the first discharge phase sharply increases but is lowered again once ZHS growth starts.
Figure 6: Cycling voltage for current rates of 100 mA g\({}^{-1}\), 200 mA g\({}^{-1}\), and 300 mA g\({}^{-1}\). Shown are the cell potentials during galvanostatic discharge and charge during the second cycle. At low current densities, the second discharge phase is clearly defined and the phase distinction is also visible during charging. At the highest rate, the voltage dip and the additional capacity of the second phase are not present.
a galvanostatic charge and discharge, both at the same current density. Figure 6 shows the charge and discharge potentials of the second cycle. During charging, we find two clearly separated phases without a separating voltage dip. During discharge, the voltage dip between the phases is present at low currents but disappears at higher rates. The contribution of the second phase decreases with increasing current and is fully suppressed at high currents. This shows how sensitively the cell voltage responds to variations in the cycling current. In turn, small differences in material preparation and cell design can also strongly affect cell behavior.
## IV Discussion
In the following section, we compare the behavior of our theory-based model with experimental observations from the literature to validate our approach. To this end, we compare the (dis)charge voltages, investigate the spatio-temporal profiles of pH evolution as well as precipitation within our cell model, and present the results of cyclovoltammetry simulations in Subsection IV.1. Subsequently, we discuss strategies to increase cycling stability and reduce MnO\({}_{2}\) dissolution and ZHS precipitation by adding MnSO\({}_{4}\) to the electrolyte, by increasing the electrolyte volume, and by adjusting the cycling protocol (see Subsection IV.2).
### Validation
We use literature data of measured cell potentials during cycling to validate our proposed cycling mechanism. Experimental results show two phases during discharge, separated by a voltage dip, which our model reproduces. A comparison of experimental discharge voltages of \(\delta\)-MnO\({}_{2}\) coin cells, as found in References [17, 18, 23, 52], with our simulation results is plotted in the Supporting Information in Figure S3. The experiments show the same discharge and charge behavior as our simulations, with two phases that are separated by a voltage dip during discharge (see Figure 6). Observed rate dependencies of the cycling behavior for \(\delta\)-MnO\({}_{2}\), as investigated, for example, by Guo et al. [54] and Ren et al. [52], show that the second-phase capacity is reduced significantly with increasing current densities. At high rates, it is also observed that the second phase might not occur at all. This behavior is similarly observed for other MnO\({}_{2}\) polymorphs, e.g., for \(\alpha\)-MnO\({}_{2}\)[15], \(\varepsilon\)-MnO\({}_{2}\)[55], and amorphous MnO\({}_{2}\)[56]. In our computational study, we find that the second phase disappears at higher rates due to the slow kinetics of the precipitation reaction (see Figure 6). Quantitative differences between our simulation and the different lab cell measurements are a result of different synthesis approaches, cell designs and applied currents. We summarize that our model reproduces the key experimental features, i.e., the two discharge phases, the voltage dip, as well as the rate dependence of the two phases.
The evolution of ZHS is measured by Putro et al. [21] and Chen et al. [18]. Their in-situ spectroscopy data show a reversible growth and dissolution of ZHS during cycling, which occurs in the second phase of discharge and the first phase of charge [18, 21]. The right subfigure of Figure 7 presents our simulation results for the ZHS volume fraction for two consecutive cycles of the cell model in a spatially resolved way. Our simulations nicely reproduce these experimental findings for ZHS growth. We also find that ZHS precipitation occurs in the cathode only and does not extend into the anode.
In 2016, Lee et al. [29] investigated the pH evolution in a ZIB with an \(\alpha\)-MnO\({}_{2}\) cathode during the first cycle. We compare our simulations with recent investigations of Biro and coworkers [57, 32]. They study in detail the pH evolution over several cycles and find that the pH evolution is reversible.
Figure 7: Electrolyte pH and volume fraction of the precipitate \(\epsilon_{\text{ZHS}}\) over two cycles. During discharge, electrolyte pH increases gradually from an initial value of \(\approx 4.3\). Around the dip in voltage, the pH reaches values that are higher than the saturation limits but drops with the precipitation of ZHS. In the second half of the discharge, the volume fraction \(\epsilon_{\text{ZHS}}\) grows while the pH remains mostly constant throughout the cell. At the end of the discharge, the sharper voltage decline is associated with a more rapid increase of the pH.
Electrolyte pH is measured separately in the anode and cathode. They highlight a sharp decrease in pH at the end of the charge. The left subfigure of Figure 7 shows our simulation results for the pH evolution within the active region of the cell. Our model reproduces the reversible behavior of electrolyte pH and the sharp increase at the end of discharge found by Biro and coworkers [32; 57]. Our simulations predict no significant pH gradient between the cathode and anode because our coin-cell geometry is significantly smaller than the laboratory setup of Biro and coworkers, which provides space for the pH measurement device. In combination with the excellent conductivity of aqueous electrolytes, the rather uniform pH distribution is in line with our expectations.
Cyclovoltammograms (CVs) are used in experiments to identify individual processes by their characteristic redox peaks. We perform cell simulations and elucidate the direct correlation of the characteristics of the CVs for MnO\({}_{2}\)-cathodes with the underlying electrochemical reactions. Figure 8 shows the simulated cyclovoltammograms of our cell model. We observe that the experimentally described separation into two redox peaks [50; 52; 54] is predicted by our cell model. The filled areas in Figure 8 visualize how the rates of the individual electrochemical reactions at the cathode contribute to the overall cell current. Here, the first peak in the discharge direction can be associated with the Zn\({}^{2+}\) insertion reaction, while the second discharge peak is a result of the onset of the Mn\({}^{2+}\) dissolution. In the charging direction, the dissolved Mn\({}^{2+}\) is first redeposited as Zn\({}_{0.5}\)MnO\({}_{2}\) and then the remaining Zn\({}^{2+}\) is de-inserted.
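To illustrate why two separate electrochemical reactions show up as two CV peaks, the toy sketch below superimposes two Nernstian (equilibrium) insertion processes with different standard potentials. It is not our cell model, and all parameter values are placeholders.

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15
f = F / (R * T)

E0 = [1.35, 1.60]   # V, standard potentials of the two toy processes (placeholders)
Q = [1.0, 1.0]      # C, charge stored in each process (placeholders)
nu = 1e-4           # V/s, sweep rate

E = np.linspace(1.0, 1.9, 2000)
dE = E[1] - E[0]
for E0i, Qi in zip(E0, Q):
    theta = 1.0 / (1.0 + np.exp(f * (E - E0i)))   # equilibrium (Nernstian) occupancy
    i_k = Qi * np.gradient(theta, dE) * (-nu)     # cathodic current of this process
    print(f"process with E0 = {E0i:.2f} V peaks near {E[np.argmax(i_k)]:.3f} V")
# The measured CV corresponds to the sum of such contributions, one peak per reaction.
```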
### Optimization
Based on our cell model we investigate strategies to reduce MnO\({}_{2}\) dissolution and ZHS precipitation. In this section, we discuss the effect of MnSO\({}_{4}\) as electrolyte additive and electrolyte volume variations. Finally, we present a modified discharge protocol that allows for improving the insertion/dissolution ratio.
The volume of the electrolyte influences pH stability and changes the precipitation dynamics of ZHS. Our calculations in Subsection III.2 showcase the sensitivity of a pH-driven precipitation reaction to excess electrolyte. In Figure 9, we present a study of the cycling behavior for different electrolyte volumes based on our Zn\({}^{2+}\)-insertion/Mn\({}^{2+}\)-dissolution model. We implement a reservoir with excess electrolyte and increase the electrolyte amount, starting from 9.2 \(\upmu\)L cm\({}^{-2}\), which is the amount needed to fill the pore volume in the anode, separator and cathode. While the capacity of the first discharge phase, which is dominated by the Zn\({}^{2+}\) insertion process, is hardly influenced by the amount of excess electrolyte in Figure 9, the Mn\({}^{2+}\)-dissolution phase is significantly extended in the presence of more electrolyte. We conclude that the ZHS precipitation/Mn\({}^{2+}\) dissolution mechanism is sensitive to ion depletion in small electrolyte volumes.
MnSO\({}_{4}\) is often used as an electrolyte additive in order to inhibit MnO\({}_{2}\) dissolution [22; 12; 26]. The amount of pre-added MnSO\({}_{4}\) is mostly empirically motivated. While early work of Kim et al. [22] showed optimum cycling stability for 0.1 m MnSO\({}_{4}\), the recent work of Chen et al. [18] uses 0.5 m MnSO\({}_{4}\). Figure 10 presents a comparison of the MnSO\({}_{4}\)-influence on cycling performance. In the inset of Figure 10, the cell voltage during cycling is shown. While the achievable capacity is only slightly dependent on the MnSO\({}_{4}\) amount, larger amounts of
Figure 8: Simulated voltammetry measurements. The current density is shown as a function of applied potential for sweep rates from 0.08 mV s\({}^{-1}\) to 0.12 mV s\({}^{-1}\) as black lines. The colored regions below the current curve show the current contributions of the Zn\({}^{2+}\)-insertion and Mn\({}^{2+}\)-dissolution reactions. The first discharge peak is dominated by the Zn\({}^{2+}\)-insertion reaction; Mn\({}^{2+}\)-dissolution is only relevant in the second peak.
Figure 9: Discharge behavior with different electrolyte volumes. The amount of electrolyte is increased relative to the minimal volume used to wet electrodes and separator by up to 30 %. The end of the first discharge phase is hardly influenced by excess electrolyte (compare inset), while the second discharge phase becomes longer, the more electrolyte is added to the cell.
MnSO\({}_{4}\) result in more pronounced voltage dips associated with the nucleation of ZHS. The main part of Figure 10 evaluates the capacity at which ZHS precipitation is first observed. We find that the onset of the second phase with MnO\({}_{2}\) dissolution occurs later if larger amounts of MnSO\({}_{4}\) are pre-added. In summary, the MnSO\({}_{4}\)-additive effectively allows for a significantly larger discharge capacity in the first phase. Evaluation of the ratio of the capacity from Zn\({}^{2+}\)-insertion and the capacity from Mn\({}^{2+}\)-dissolution gives a Zn\({}^{2+}\) contribution of \(\approx 62\,\%\) for a discharge at \(2\,\mathrm{A}\,\mathrm{m}^{-2}\), which is in agreement with the experimental findings of Yang et al. [17]. However, the change of this ratio is less than \(1\,\%\) for cycling in pure ZnSO\({}_{4}\) as compared to the electrolyte with \(0.5\,\mathrm{m}\) MnSO\({}_{4}\). We therefore find that MnSO\({}_{4}\) helps to prolong the first phase, but does not significantly change the total discharge capacity or the relative contribution of the Mn\({}^{2+}\)-dissolution process.
Recently published works on high-performance ZIBs all exploit the additional capacity achievable in the second discharge phase, which is associated with cathodic dissolution [58]. However, experimental studies also report crystallographic changes in redeposited MnO\({}_{2}\) during charging [59, 60, 61]. Additionally, dissolution and redeposition of the MnO\({}_{2}\)-structure has been claimed to be a reason for reduced cycle life [62]. Therefore, limiting the cathode dissolution might help achieve higher cycling stability. Figure 11 shows the influence of a constant current-constant voltage (CC-CV) discharge profile on the achievable energy. We conducted discharge simulations with a constant current at the start; once a certain voltage is reached, the discharge is switched to potentiostatic mode. We varied the switching voltage between \(1.1\,\mathrm{V}\) and \(1.55\,\mathrm{V}\). Here, we find that switching the discharge from galvanostatic to potentiostatic mode has a significant effect on the cathodic dissolution. In the switching region around \(1.3\,\mathrm{V}\) to \(1.4\,\mathrm{V}\), the cathode dissolution can be suppressed without sacrificing any of the capacity of the Zn\({}^{2+}\)-insertion process.
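The switching logic of such a CC-CV discharge can be summarized in a few lines. The sketch below uses a crude open-circuit-voltage-plus-ohmic-resistance model purely to illustrate the protocol; it is not the cell model behind Figure 11, and all numbers are placeholders.

```python
# Toy CC-CV discharge: constant current until V reaches V_switch, then hold V_switch.
q_max, R_ohm = 1.0, 0.5                   # normalized capacity, ohmic resistance
I_cc, V_switch, I_cut, dt = 0.5, 1.35, 0.01, 1e-3
ocv = lambda q: 1.8 - 0.5 * q / q_max     # placeholder open-circuit voltage curve

q, I, mode = 0.0, I_cc, "CC"
while q < q_max and I > I_cut:
    if mode == "CC":
        V = ocv(q) - I_cc * R_ohm
        if V <= V_switch:                 # switching voltage reached
            mode = "CV"
            continue
        I = I_cc
    else:                                 # CV: hold V_switch, current decays with the OCV
        I = max((ocv(q) - V_switch) / R_ohm, 0.0)
    q += I * dt

print(f"discharged capacity: {q:.3f}, final mode: {mode}")
```

Raising the switching voltage above the onset of the second discharge phase corresponds to the dissolution-suppressing protocols discussed above.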
## V Conclusion
This article discusses the relevance of the proposed reaction mechanisms in the MnO\({}_{2}\) cathode in ZnSO\({}_{4}\) electrolyte, i.e., H\({}^{+}\) insertion, Zn\({}^{2+}\) insertion, and MnO\({}_{2}\) dissolution. The electrode potentials calculated by DFT indicate that the H\({}^{+}\) insertion reaction is energetically more favorable. Based on calculations for the electrolyte thermodynamics, however, we conclude that a H\({}^{+}\)-consuming reaction cannot be dominant in the first half of discharge. Contrary to the expectations from MnO\({}_{2}\)-cathodes in alkaline electrolytes, the first discharge phase is thus dominated by the insertion of Zn\({}^{2+}\) ions.
The continuum cell model for ZIB cells with MnO\({}_{2}\) cathodes developed in this work reproduces the two-phase cycling behavior. It is used to investigate the critical role of ZHS precipitation for the second discharge phase. This work proposes a feedback between the cathode's electrochemical dissolution and the stabilizing effect of ZHS precipitation on electrolyte pH. With the nucleation of ZHS, the electrolyte pH is stabilized at the saturation limit, which allows for continuous MnO\({}_{2}\) dissolution. Validated by different in-situ experiments, our simulation results show that the developed theory with its pH-based feedback process can reproduce the two-phase cycling characteristics of MnO\({}_{2}\)-based ZIBs and the double-peak structure in cyclovoltammetry measurements. The unique voltage dip during discharging is identified as a result of the nucleation of ZHS at the cathode.
With this consistent understanding of the cycling mechanism, theory-based optimization strategies become possible. The combination of conversion reactions, i.e., MnO\({}_{2}\) dissolution
Figure 11: Optimized Discharge Performance with a CC-CV-type discharge. The contribution of capacity from the Mn\({}^{2+}\)-process and the achievable overall energy are presented in relation to their values at a standard CC discharge. The values are shown as a function of the switching voltage between CC and CV discharge. If the switching voltage is higher than the cell voltage at the start of the second phase, the dissolution process is significantly suppressed.
Figure 10: Discharge behavior with different amounts of MnSO\({}_{4}\) pre-added to the electrolyte. Shown is the quantitative analysis of the MnSO\({}_{4}\)-additive. The main axis displays the first-phase capacity as a function of pre-added MnSO\({}_{4}\). The inset axis shows the charge and discharge behavior in the second cycle. The higher the amount of MnSO\({}_{4}\)-additive, the sharper the transition between the first and second discharge phases. While the onset of the second phase is significantly delayed with MnSO\({}_{4}\)-additive, the capacity is hardly influenced.
and ZHS precipitation, increases the discharge capacity, but leads to shape change and capacity fade during continued cycling. We present an optimized CC-CV discharging protocol, which can mitigate cathode dissolution even at low current densities. Another optimization approach would be electrolyte design based on our theoretical expectations, such as suppression of the ZHS stabilizing mechanism.
## VI Computational Section
Periodic density functional theory (DFT) calculations were performed to investigate the proton and zinc insertion in \(\delta\)-MnO\({}_{2}\). For this purpose, the Vienna ab initio simulation package (VASP) was applied, using the Projector Augmented Wave (PAW) method to describe the electron-core interaction [63, 64, 65, 66]. While exchange and correlation were accounted for by the generalized gradient approximation in the formulation of Perdew, Burke and Ernzerhof (PBE), an additional Hubbard-like correction - with a U parameter of 3.9 - was included to describe the localized character of the Mn d-electrons [67, 68]. All calculations were based on supercells of a 9-atom \(\delta\)-MnO\({}_{2}\) cell that contained one water molecule, i.e., Mn\({}_{2}\)O\({}_{4}\cdot\) H\({}_{2}\)O, using an energy cutoff of 600 eV and a 7\(\times\)14\(\times\)5 k-point mesh for the unit cell, which was adapted accordingly for larger supercells. To investigate possible intercalation compounds, different numbers of Zn and H atoms were inserted in the respective supercells, corresponding to H\({}_{x}\)Zn\({}_{y}\)MnO\({}_{2}\cdot\) H\({}_{2}\)O stoichiometries (with x and y equal to 0, 0.25, 0.5 and 1). The structures were relaxed with respect to lattice vectors and atomic positions, applying convergence criteria of 1\(\times\)10\({}^{-5}\) eV for the electronic self-consistency loop and of 1\(\times\)10\({}^{-3}\) eV Å\({}^{-1}\) for the residual forces, respectively.
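As an illustration of this setup, the following sketch shows how a relaxation with the stated cutoff, k-point mesh, and Hubbard correction could be scripted through ASE's VASP interface. The structure file name, the smearing defaults, and the relaxation bookkeeping flags are assumptions made for the example and are not our actual input files.

```python
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("MnO2_H2O.cif")              # placeholder structure file

calc = Vasp(
    xc="pbe",
    encut=600,                            # eV, plane-wave cutoff
    kpts=(7, 14, 5),                      # k-point mesh for the unit cell
    ediff=1e-5,                           # eV, electronic convergence criterion
    ediffg=-1e-3,                         # eV/Angstrom, force criterion (negative: forces)
    ldau=True, ldautype=2,                # Hubbard-like correction on the Mn d-states
    ldau_luj={"Mn": {"L": 2, "U": 3.9, "J": 0.0},
              "O": {"L": -1, "U": 0.0, "J": 0.0},
              "H": {"L": -1, "U": 0.0, "J": 0.0}},
    ibrion=2, isif=3, nsw=200,            # relax ions and lattice vectors
)
atoms.calc = calc
print(atoms.get_potential_energy())       # triggers the VASP run
```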
A thermodynamic model based on the law of mass action was applied to calculate ion speciation and solubility. This modeling approach is based on existing works [69, 33, 34, 70]. The cell-level simulations are conducted with a continuum model based on the quasi-particle method derived in our previous works [33, 34]. The equilibrium calculations from the thermodynamic model are integrated into the cell-level simulations, assuming that complex formation reactions are much faster than the typical time scales of charge and discharge. The model consisted of a system of 12 equations: 4 electrolyte-conservation equations describing the electrolyte speciation, 3 solid-volume-conservation equations, 3 solute mass continuity equations, 1 electrolyte-charge continuity expression and 1 expression representing either the galvanostatic or potentiostatic condition. A P2D finite-volume model, with spatial resolution in electrolyte transport and cathodic diffusion, was implemented in Python. The differential-algebraic equations were solved with MATLAB's fully implicit ode15s solver.
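To indicate what a fully implicit treatment of such a differential-algebraic system looks like in practice, the toy sketch below advances a two-variable DAE with a backward-Euler step whose nonlinear equations are solved by a root finder. It is a schematic stand-in only and is unrelated to the 12-equation cell model itself.

```python
import numpy as np
from scipy.optimize import root

# Toy DAE:  dy1/dt = -k*y1  (differential),  0 = y2 - y1**2  (algebraic constraint).
k, dt, n_steps = 1.0, 0.01, 100

def residual(y_new, y_old):
    """Backward-Euler residual F(y_new) = 0 for one fully implicit time step."""
    y1, y2 = y_new
    return [y1 - y_old[0] + dt * k * y1,   # implicit ODE update
            y2 - y1**2]                    # algebraic constraint enforced at t_new

y = np.array([1.0, 1.0])                   # consistent initial condition
for _ in range(n_steps):
    y = root(residual, y, args=(y,)).x     # solve the coupled step equations

print(y, [np.exp(-k * n_steps * dt), np.exp(-2 * k * n_steps * dt)])  # vs. exact solution
```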
The cell model was parametrized based on recent designs for \(\delta\)-MnO\({}_{2}\) 2032-like coin cells as presented in the literature [17, 18, 25, 52]. Parameters are mostly taken from the coin cells manufactured in the recent study of Chen and coworkers [18], which are similar to most other designs. The cathode composition is a mixture of MnO\({}_{2}\), acetylene black and a PVDF binder in a 70:20:10 wt% ratio with a mass loading of 2 mg cm\({}^{-2}\). Relative volume fractions are calculated based on the theoretical densities of the materials. Pore volume measurements were reported in the studies of Shen et al. [71] and Corpuz et al. [14] in the range of 0.44 cm\({}^{3}\) g\({}^{-1}\) to 0.78 cm\({}^{3}\) g\({}^{-1}\). Here, we use a pore volume of 0.5 cm\({}^{3}\) g\({}^{-1}\) to calculate the porosity of the cathode and, combined with the mass loading, the resulting cathode thickness of 66 \(\upmu\)m. The separator thickness is set to 150 \(\upmu\)m [72]. If not stated otherwise, the electrolyte used is an aqueous solution of 2 m ZnSO\({}_{4}\), 0.5 m MnSO\({}_{4}\) and cycling of the cell is simulated under galvanostatic conditions at 200 mA g\({}^{-1}\). The full details of the calculation and choice of parameters can be found in the Supporting Information.
**Acknowledgments**
The authors acknowledge support from the Helmholtz Association, the state of Baden-Wuerttemberg through bwHPC, and the German Research Foundation (DFG) through Grant No. INST 40/467-1 FUGG (JUSTUS cluster). Part of this work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Wuerttemberg and by the Federal Ministry of Education and Research. The research leading to these results has received funding from the Federal Ministry of Education and Research (BMBF) in the framework of the project 'ZIB' (FKZ 03XP0204A). Further support by the German Research Foundation (DFG) under Germany's Excellence Strategy - EXC 2154 - Project number 390874152 is gratefully acknowledged.
|
2304.04930
|
A singular integral identity for surface measure
|
We prove that the integral of a certain Riesz-type kernel over
$(n-1)$-rectifiable sets in $\mathbb{R}^n$ is constant, from which a formula
for surface measure immediately follows. Geometric interpretations are given,
and the solution to a geometric variational problem characterizing convex
domains follows as a corollary, strengthening a recent inequality of
Steinerberger.
|
Ryan E. G. Bushling
|
2023-04-11T02:04:43Z
|
http://arxiv.org/abs/2304.04930v2
|
# A singular integral identity for surface measure
###### Abstract.
We prove that the integral of a certain Riesz-type kernel over \((n-1)\)-rectifiable sets in \(\mathbb{R}^{n}\) is constant, from which a formula for surface measure immediately follows. Geometric interpretations are given, and the solution to a geometric variational problem characterizing convex domains follows as a corollary, strengthening a recent inequality of Steinerberger.
Key words and phrases:Rectifiable sets, Sets of finite perimeter, Geometric variational problems. 2020 Mathematics Subject Classification: Primary 28A75, 53A07; Secondary 51M16, 52A38
## 1. Introduction and main results
In [4], Steinerberger proves an inequality inspired by the following simple observation: if \(\Omega\) is a smoothly bounded convex domain and \(x,y\in\partial\Omega\) are close with respect to the Riemannian metric on \(\partial\Omega\), then the normal vectors at \(x\) and \(y\) are nearly orthogonal to \(x-y\), and the measure of "closeness" hinges on the curvature of \(\partial\Omega\). Leveraging this from a probabilistic standpoint, he concludes the following. Let \(\mathcal{H}^{n-1}\) denote \((n-1)\)-dimensional Hausdorff measure.
**Proposition.**_There exists a constant \(c_{n}>0\) such that, for every bounded, \(C^{1}\)-bounded domain \(\Omega\subseteq\mathbb{R}^{n}\) with outward-pointing unit normal vector field \(\nu\),_
\[\int_{\partial\Omega}\int_{\partial\Omega}\frac{|\langle x-y,\nu(y)\rangle \langle x-y,\nu(x)\rangle|}{\|x-y\|^{n+1}}\,d\mathcal{H}^{n-1}(y)\,d\mathcal{H }^{n-1}(x)\geq c_{n}\mathcal{H}^{n-1}(\partial\Omega).\]
_Moreover, equality holds if and only if \(\Omega\) is convex._
Figure 1. For a \(C^{1}\)-bounded convex region \(\Omega\), the line segment between any two points \(x,y\in\partial\Omega\) is such that \(\langle x-y,\nu(y)\rangle\langle x-y,\nu(x)\rangle\leq 0\), and this quantity vanishes quickly as \(\|x-y\|\to 0\). If \(\Omega\) is not convex, then \(\langle x-y,\nu(y)\rangle\langle x-y,\nu(x)\rangle\) can be positive, and there may be nearby points \(x,y\in\partial\Omega\) such that \(\langle x-y,\nu(y)\rangle\langle x-y,\nu(x)\rangle\) is not small relative to \(\|x-y\|\).
What prevents the inequality from being an equality in general is the absolute value: for \(\Omega\) open and \(C^{1}\)-bounded, the sign of \(\langle x-y,\nu(y)\rangle\langle x-y,\nu(x)\rangle\) is constant precisely when \(\Omega\) is convex, and dropping the absolute value results in a "systematic cancellation" that turns the inequality into a formula for surface measure (cf. Figure 1). This remedy raises the question of whether boundaries of domains are the natural class of hypersurface with which to work in this context, as the setup only requires a normal vector field that is distributed "consistently" across the surface, as in Figure 2. In §2.2, we specify such a class of set/vector field pairs and say that its members satisfy the _orientation cancellation condition_. The class includes all boundaries of \(C^{1}\)-bounded domains and all closed, oriented, immersed smooth \((n-1)\)-manifolds (both with their outward unit normal vector fields), as well as a host of lower-regularity sets with vector fields that do not arise as the result of an "orientation."
In this setting, we can prove the following theorem. For the remainder of this section and subsequently in §3, \(\Sigma\subset\mathbb{R}^{n}\) denotes an \((n-1)\)-rectifiable set and \(\nu\) a measurable unit normal vector field on \(\Sigma\) (cf. §2.1). We also let
\[\alpha_{k}:=\mathcal{L}^{k}(B(0,1))\]
be the measure of the unit ball in \(\mathbb{R}^{k}\).
**Theorem.** _For every \((\Sigma,\nu)\) satisfying the orientation cancellation condition (cf. §2.2), the identity_
\[\int_{\Sigma}\frac{\langle x-y,\nu(y)\rangle\langle y-x,\nu(x) \rangle}{\|x-y\|^{n+1}}d\mathcal{H}^{n-1}(y)=\alpha_{n-1}. \tag{1.1}\]
_holds for \(\mathcal{H}^{n-1}\)-a.e. \(x\in\Sigma\). Consequently,_
\[\frac{1}{\alpha_{n-1}}\int_{\Sigma}\int_{\Sigma}\frac{\langle x-y,\nu(y)\rangle\langle y-x,\nu(x)\rangle}{\|x-y\|^{n+1}}\,d\mathcal{H}^{n-1}(y )\,d\mathcal{H}^{n-1}(x)=\mathcal{H}^{n-1}(\Sigma). \tag{1.2}\]
Figure 2. The normal vectors to this immersed submanifold are oriented in such a way that the signs of the angles formed with any line sum to \(0\). This motivates the definition of the “orientation cancellation condition.”
In plain language, the Theorem states the following: if \(\Sigma\) were semitransparent, then the amount of \(\Sigma\) that one would see while standing on the surface--counting pieces of \(\Sigma\) positively or negatively according to their orientation relative to the viewer--would not depend on the point at which one stood. Moreover, this quantity does not even depend on the surface \(\Sigma\): it is a universal constant depending only on the dimension \(n\), and taking \(\Sigma=\mathbb{S}^{n-1}\) gives the constant explicitly. It follows immediately that the surface area of \(\Sigma\) is proportional to the integral over all \(x\in\Sigma\) of the signed surface area one sees from the vantage point \(x\). While this interpretation is not apparent from the theorem statement, the heuristic is salient in the proof. See also Figure 3.
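This heuristic is also easy to test numerically. The sketch below discretizes the unit circle in \(\mathbb{R}^{2}\) (so \(n=2\) and \(\alpha_{n-1}=\alpha_{1}=2\)) and evaluates the inner integral of Equation (1.1) at one boundary point; the discretization is an illustrative choice and not part of the paper. Since the kernel is bounded on the circle, omitting the base point itself suffices.

```python
import numpy as np

N = 20000
theta = 2 * np.pi * np.arange(N) / N
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)    # points of Sigma = S^1
nu = pts.copy()                                           # outward unit normals
ds = 2 * np.pi / N                                        # arc-length element

x, nu_x = pts[0], nu[0]                                   # base point x
d = x - pts[1:]                                           # x - y for all other y
first = np.einsum("ij,ij->i", d, nu[1:])                  # <x - y, nu(y)>
second = np.einsum("ij,j->i", -d, nu_x)                   # <y - x, nu(x)>
kernel = first * second / np.linalg.norm(d, axis=1) ** 3  # exponent n + 1 = 3

print(np.sum(kernel) * ds)                                # approximately 2.0 = alpha_1
```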
A more concrete consequence of the proof is that Equation (1.1) holds for every \(x\in\Sigma\) at which the orientation cancellation condition is satisfied. In particular, if \(\Sigma=\partial E\) for some smoothly bounded open set \(E\) and \(\nu\) is outward-pointing, then the equation holds for all \(x\in\partial E\). However, even if \(E\) is merely a set of finite perimeter with Gauss-Green measure \(\mu_{E}=\nu_{E}\,\mathcal{H}^{n-1}\llcorner\partial^{*}E\), the orientation cancellation condition is still satisfied with \((\Sigma,\nu)=(\partial^{*}E,\nu_{E})\) at \(\mathcal{H}^{n-1}\)-a.e. \(x\in\partial^{*}E\). (See §2.1.)
In view of this discussion (formalized in Lemma 2 below), the Theorem implies Steinerberger's proposition under a milder regularity hypothesis.
**Corollary.** _For every bounded set \(E\subset\mathbb{R}^{n}\) of finite perimeter,_
\[\frac{1}{\alpha_{n-1}}\int_{\partial^{*}E}\int_{\partial^{*}E}\frac{|\langle x -y,\nu_{E}(y)\rangle\langle y-x,\nu_{E}(x)\rangle|}{\|x-y\|^{n+1}}\,d\mathcal{ H}^{n-1}(y)\,d\mathcal{H}^{n-1}(x)\geq\mathcal{H}^{n-1}(\partial^{*}E). \tag{1.3}\]
_Furthermore, there is equality if and only if \(E\) is \(\mathcal{L}^{n}\)-equivalent to a convex set._
Notice that the inner integral
\[\int_{\partial^{*}E}\frac{|\langle x-y,\nu(y)\rangle\langle y-x,\nu(x)\rangle| }{\|x-y\|^{n+1}}\,d\mathcal{H}^{n-1}(y)\]
is unstable under \(L^{\infty}\) perturbations of \(\partial^{*}E\), although it _is_ stable under \(C^{1}\) perturbations. As such, the magnitude of this "energy" relative to the measure of the boundary provides an interesting metric for how "close" a set is to being convex. Steinerberger [5] substantiates this idea with an application to a geometric variational problem. Put slightly differently, the Corollary may be interpreted as stating (to paraphrase [4]) that the solution set to a certain nonlocal isoperimetric problem, considered over a subfamily of the family of finite-perimeter sets, is the family of all convex domains of a given perimeter.
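The following sketch makes this quantitative for two closed curves: for the unit circle the double integral in (1.3) (divided by \(\alpha_{1}=2\)) matches the perimeter, whereas for a non-convex polar curve it is strictly larger. The curves and the discretization are illustrative choices, not taken from [4] or [5].

```python
import numpy as np

def convexity_energy(r, r_prime, N=1200):
    """Return ((1/alpha_1) * double integral in (1.3), perimeter) for a polar curve r(theta)."""
    t = 2 * np.pi * np.arange(N) / N
    x = np.stack([r(t) * np.cos(t), r(t) * np.sin(t)], axis=1)
    dx = np.stack([r_prime(t) * np.cos(t) - r(t) * np.sin(t),
                   r_prime(t) * np.sin(t) + r(t) * np.cos(t)], axis=1)   # d(position)/dtheta
    speed = np.linalg.norm(dx, axis=1)
    nu = np.stack([dx[:, 1], -dx[:, 0]], axis=1) / speed[:, None]        # outward normal (CCW)
    ds = speed * (2 * np.pi / N)                                         # arc-length weights

    diff = x[:, None, :] - x[None, :, :]                                 # x_i - y_j
    dist = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(dist, np.inf)                                       # omit the diagonal
    kern = np.abs(np.einsum("ijd,jd->ij", diff, nu) *
                  np.einsum("ijd,id->ij", -diff, nu)) / dist**3
    return 0.5 * np.sum(kern * ds[:, None] * ds[None, :]), np.sum(ds)

curves = {"circle": (lambda t: 1 + 0 * t, lambda t: 0 * t),
          "non-convex": (lambda t: 1 + 0.4 * np.cos(3 * t), lambda t: -1.2 * np.sin(3 * t))}
for name, (r, rp) in curves.items():
    energy, perimeter = convexity_energy(r, rp)
    print(f"{name:>10}:  energy = {energy:.4f},  perimeter = {perimeter:.4f}")
```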
## 2. Definitions
This short section describes the objects of study, and a full account of this background can be found in [2] and [3]. The generality is not so great as to defy classical methods, yet is sufficient to include the boundaries of all convex domains and the variety of hypersurfaces suggested by Figure 2.
### Rectifiable sets
A set \(\Sigma\subseteq\mathbb{R}^{n}\) is _\(\boldsymbol{k}\)-rectifiable_ if \(\mathcal{H}^{k}(\Sigma)<\infty\) and there exist a countable family \(\{F_{i}\}_{i\in I}\) of Lipschitz maps \(F_{i}\colon A_{i}\subseteq\mathbb{R}^{k}\to\mathbb{R}^{n}\) and an \(\mathcal{H}^{k}\)-null set \(\Sigma_{0}\subseteq\Sigma\) such that
\[\Sigma=\Sigma_{0}\cup\bigcup_{i\in I}F_{i}(A_{i}).\]
If \(\Sigma\) is \(k\)-rectifiable, then, at \(\mathcal{H}^{k}\)-a.e. \(y\in\Sigma\), there exists a unique _approximate tangent space_\(T_{y}\Sigma\subseteq\mathbb{R}^{n}\), which coincides with the classical tangent space when \(\Sigma\) is a smooth hypersurface. We call \(\nu\colon\Sigma\to\mathbb{S}^{n-1}\) a _measurable unit normal vector field_ on \(\Sigma\) if it is \(\mathcal{H}^{k}\)-measurable and \(\nu(y)\) is orthogonal to \(T_{y}\Sigma\) for \(\mathcal{H}^{k}\)-a.e. \(y\in\Sigma\) at which the approximate tangent space is uniquely defined. In this case, the real-valued function \(y\mapsto\langle T(y),\nu(y)\rangle\) is a measurable function on \(\Sigma\) whenever \(T\colon\Sigma\to\mathbb{R}^{n}\) is measurable, and the integral
\[\int_{\Sigma}\langle T(y),\nu(y)\rangle\,d\mathcal{H}^{k}(y)\]
is well-defined when the function is also integrable. (In particular, the proof of the Theorem implies that the integrand in the theorem statement is integrable for a.e. \(x\in\Sigma\).)
The main features of rectifiable sets that we use are the notion of a normal vector field and the applicability of the coarea formula (cf. [2]).
The language of the theory of sets of finite perimeter comes to bear in the Corollary. If \(E\subseteq\mathbb{R}^{n}\) is such a set, then there is an \((n-1)\)-rectifiable set \(\partial^{*}E\subseteq\partial E\), the _reduced boundary_ of \(E\), on which the _measure-theoretic outward unit normal vector field_ \(\nu_{E}\colon\partial^{*}E\to\mathbb{R}^{n}\) is defined. The set \(E\) comes with a natural vector-valued measure \(\mu_{E}\), the _Gauss-Green measure_ of \(E\), that takes the form \(\mu_{E}=\nu_{E}\,|\mu_{E}|=\nu_{E}\,\mathcal{H}^{n-1}\llcorner\partial^{*}E\). All bounded convex sets are sets of finite perimeter. While [3] is our primary reference, [1] contains some more nuanced results that we shall need as well.
### The orientation cancellation condition
For each \(x\in\mathbb{R}^{n}\) and \(\omega\in\mathbb{S}^{n-1}\), let \(L_{x,\omega}:=x+\operatorname{span}\omega\) denote the line through \(x\) with direction vector \(\pm\omega\). Given an \((n-1)\)-rectifiable set \(\Sigma\subset\mathbb{R}^{n}\) and a measurable unit normal vector field \(\nu\colon\Sigma\to\mathbb{S}^{n-1}\), we say the _orientation cancellation condition_ (or _OCC_) _is satisfied_ at a point \(x\in\Sigma\) if the equation
\[\sum_{y\,\in\,\Sigma\cap L_{x,\omega}}\operatorname{sgn}\left\langle\omega, \nu(y)\right\rangle=0 \tag{2.1}\]
holds for \(\mathcal{H}^{n-1}\)-a.e. \(\omega\in\mathbb{S}^{n-1}\), where \(\operatorname{sgn}(\,\cdot\,)\) is the signum function with \(\operatorname{sgn}0:=0\). If the OCC is satisfied at \(\mathcal{H}^{n-1}\)-a.e. \(x\in\Sigma\), we say that \((\Sigma,\nu)\)_satisfies the orientation cancellation condition_.
One can take this definition as an adaptation to lower-regularity sets of a concept from algebraic topology. A smooth, immersed hypersurface \(\Sigma\subset\mathbb{R}^{n}\) admitting a continuous normal vector field \(\nu\) such that \((\Sigma,\nu)\) satisfies the OCC is said to have _first Stiefel-Whitney class_\(0\), and it is a theorem that this is equivalent to orientability. Naturally, the most salient example of such a surface is the boundary of a smoothly bounded open set, in which case \(\operatorname{sgn}\left\langle\omega,\nu(y)\right\rangle\) alternates sign along successive values of \(y\in\Sigma\cap L_{x,\omega}\). However, there are more general cell complexes with first Stiefel-Whitney class \(0\) that are not oriented manifolds
(cf. [6]). For example, there are many choices of orientation for the \(1\)-cells in Figure 2 that give it first Stiefel-Whitney class \(0\), although the resulting space does not admit a topology making it into an immersed, oriented submanifold of Euclidean space.
## 3. Proofs of results
We single out one computation before delving into the proof of the main theorem.
**Lemma 1**.: _Let \(\Sigma\subset\mathbb{R}^{n}\) be an \((n-1)\)-rectifiable set, \(\nu\colon\Sigma\to\mathbb{S}^{n-1}\) a measurable unit normal vector field, \(x\in\Sigma\) a point, and \(\pi_{x}\colon\mathbb{R}^{n}\setminus\{x\}\to\mathbb{S}^{n-1}\cong\partial B(x,1)\) the radial projection onto the unit sphere centered at \(x\):_
\[\pi_{x}(y):=\frac{y-x}{\|y-x\|}.\]
_Then the a.e.-defined Jacobian determinant \(|J\pi_{x}|\colon\Sigma\setminus\{x\}\to\mathbb{R}\) is given by_
\[|J\pi_{x}(y)|=\frac{|\langle x-y,\nu(y)\rangle|}{\|x-y\|^{n}}.\]
Proof.: We employ the tensor notation of [3]. Let \(y\in\Sigma\) be a point at which \(\nu\) is defined and let \((\mathbf{u}_{i})_{i=1}^{n}\) be an orthonormal basis for \(\mathbb{R}^{n}\) such that \(\operatorname{span}\left(\mathbf{u}_{i}\right)_{i=1}^{n-1}=T_{y}\Sigma\) and \(\mathbf{u}_{n}=\nu(y)\). A routine computation gives the representation of the derivative \(D_{y}\pi_{x}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) in these coordinates:
\[D_{y}\pi_{x}=\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\delta_{ij}\|y-x\|^{2}-(y_{i}- x_{i})(y_{j}-x_{j})}{\|y-x\|^{3}}\mathbf{u}_{i}\otimes\mathbf{u}_{j},\]
where we write \(z=\sum_{i=1}^{n}z_{i}\mathbf{u}_{i}\). The derivative at \(y\) of the immersion \(\iota\colon\Sigma\hookrightarrow\mathbb{R}^{n}\) is
\[D_{y}\iota=\sum_{j=1}^{n-1}\mathbf{u}_{j}\otimes\mathbf{u}_{j},\]
and composing with \(D_{y}\pi_{x}\) gives the restriction of \(D_{y}\pi_{x}\) to \(T_{y}\Sigma\):
\[D_{y}\pi_{x}|_{T_{y}\Sigma} =D_{y}(\pi_{x}\circ\iota)=D_{y}\pi_{x}\circ D_{y}\iota\] \[=\left(\sum_{i=1}^{n}\sum_{k=1}^{n}\frac{\delta_{ik}\|y-x\|^{2}-( y_{i}-x_{i})(y_{k}-x_{k})}{\|y-x\|^{3}}\mathbf{u}_{i}\otimes\mathbf{u}_{k} \right)\left(\sum_{j=1}^{n-1}\mathbf{u}_{j}\otimes\mathbf{u}_{j}\right)\] \[=\sum_{i=1}^{n}\sum_{j=1}^{n-1}\frac{\delta_{ij}\|y-x\|^{2}-(y_{i }-x_{i})(y_{j}-x_{j})}{\|y-x\|^{3}}\mathbf{u}_{i}\otimes\mathbf{u}_{j}.\]
The adjoint is obtained simply by commuting \(\mathbf{u}_{i}\) and \(\mathbf{u}_{j}\), and the composition of the adjoint with \(D_{y}\pi_{x}|_{T_{y}\Sigma}\) is therefore
\[\big{(}D_{y}\pi_{x}|_{T_{y}\Sigma}\big{)}^{*}\big{(}D_{y}\pi_{x}|_{T_{y}\Sigma}\big{)} =\left(\sum_{i=1}^{n-1}\sum_{k=1}^{n}\frac{\delta_{ik}\|y-x\|^{2}-(y_{i}-x_{i})(y_{k}-x_{k})}{\|y-x\|^{3}}\mathbf{u}_{i}\otimes\mathbf{u}_{k}\right)\] \[\qquad\circ\left(\sum_{k=1}^{n}\sum_{j=1}^{n-1}\frac{\delta_{kj}\|y-x\|^{2}-(y_{k}-x_{k})(y_{j}-x_{j})}{\|y-x\|^{3}}\mathbf{u}_{k}\otimes\mathbf{u}_{j}\right)\]
\[=\frac{1}{\|y-x\|^{6}}\sum_{i=1}^{n-1}\sum_{k=1}^{n}\sum_{j=1}^{n-1} \big{(}\delta_{ij}\|y-x\|^{4}-\delta_{ik}(y_{k}-x_{k})(y_{j}-x_{j})\|y-x\|^{2}\] \[\qquad\qquad\qquad-\delta_{kj}(y_{i}-x_{i})(y_{k}-x_{k})\|y-x\|^{ 2}+\delta_{ij}(y_{i}-x_{i})(y_{k}-x_{k})^{2}(y_{j}-x_{j})\big{)}\mathbf{u}_{i} \otimes\mathbf{u}_{j}.\]
Reordering the basis if necessary so that \(y_{1}-x_{1}\neq 0\), we find that the eigenvectors of this operator are
\[(y_{1}-x_{1})\mathbf{u}_{i}-(y_{i}-x_{i})\mathbf{u}_{1},\quad i= 2,...,n-1,\quad\text{with eigenvalue}\quad\frac{1}{\|y-x\|^{2}}\quad\text{and}\] \[\sum_{i=1}^{n-1}(y_{i}-x_{i})\mathbf{u}_{i}\quad\text{with eigenvalue}\quad\frac{\|y^{\prime}-x^{\prime}\|^{2}\|y-x\|^{2}-2\|y^{\prime}-x^{ \prime}\|^{2}\|y-x\|^{2}+\|y-x\|^{4}}{\|y-x\|^{6}},\]
where \(z=(z^{\prime},z_{n})\). Simplifying this last eigenvalue gives
\[\frac{\|x^{\prime}-y^{\prime}\|^{2}-2\|x^{\prime}-y^{\prime}\|^{ 2}+\|x-y\|^{2}}{\|x-y\|^{4}}=\frac{\|x-y\|^{2}-\|x^{\prime}-y^{\prime}\|^{2}}{ \|x-y\|^{4}}\] \[\qquad\qquad\qquad=\frac{(x_{n}-y_{n})^{2}}{\|x-y\|^{4}}=\frac{ \langle x-y,\mathbf{u}_{n}\rangle^{2}}{\|x-y\|^{4}}=\frac{\langle x-y,\nu(y) \rangle^{2}}{\|x-y\|^{4}},\]
and taking the square root of the product of the eigenvalues yields the Jacobian:
\[|J\pi_{x}(y)|=\left(\prod_{i=1}^{n-2}\frac{1}{\|x-y\|^{2}}\right)^{1/2}\left( \frac{\langle x-y,\nu(y)\rangle^{2}}{\|x-y\|^{4}}\right)^{1/2}=\frac{|\langle x -y,\nu(y)\rangle|}{\|x-y\|^{n}}.\qed\]
Proof of the Theorem.: Let \(x\in\Sigma\) be a point at which the OCC is satisfied. We employ the coarea formula by radially projecting \(\Sigma\) onto \(\partial B(x,1)\cong\mathbb{S}^{n-1}\) and applying Lemma 1:
\[\int_{\Sigma}\frac{\langle x-y,\nu(y)\rangle\langle y-x,\nu(x) \rangle}{\|x-y\|^{n+1}}d\mathcal{H}^{n-1}(y)\] \[\qquad\qquad=\int_{\mathbb{S}^{n-1}}\int_{\Sigma\cap\pi_{x}^{-1}( \omega)}\big{(}\operatorname{sgn}\left\langle-\omega,\nu(y)\right\rangle\big{)} \left\langle\frac{y-x}{\|y-x\|},\nu(x)\right\rangle d\mathcal{H}^{0}(y)\,d \mathcal{H}^{n-1}(\omega)\] \[\qquad\qquad=\int_{\mathbb{S}^{n-1}}\sum_{\begin{subarray}{c} \lambda>0\\ x+\lambda\omega\in\Sigma\end{subarray}}\big{(}\operatorname{sgn}\left\langle- \omega,\nu(x+\lambda\omega)\right\rangle\big{)}\langle\omega,\nu(x)\rangle d \mathcal{H}^{n-1}(\omega)\] \[\qquad\qquad=\frac{1}{2}\int_{\mathbb{S}^{n-1}}\sum_{y\in\Sigma \cap(L_{x,\omega}\setminus\{x\})}\big{(}\operatorname{sgn}\left\langle\omega, \nu(y)\right\rangle\big{)}\langle-\omega,\nu(x)\rangle d\mathcal{H}^{n-1}( \omega),\]
where \(L_{x,\omega}\) is the line through \(x\) with direction vector \(\pm\omega\). By the orientation cancellation condition (Equation (2.1)),
\[\sum_{y\in\Sigma\cap(L_{x,\omega}\setminus\{x\})}\operatorname{sgn}\left\langle \omega,\nu(y)\right\rangle=\left(\sum_{y\in\Sigma\cap L_{x,\omega}} \operatorname{sgn}\left\langle\omega,\nu(y)\right\rangle\right)-\operatorname{ sgn}\left\langle\omega,\nu(x)\right\rangle=\operatorname{sgn}\left\langle- \omega,\nu(x)\right\rangle,\]
so we conclude that
\[\begin{split}&\int_{\Sigma}\frac{\langle x-y,\nu(y)\rangle\langle y-x,\nu(x)\rangle}{\|x-y\|^{n+1}}d\mathcal{H}^{n-1}(y)\\ &\quad=\frac{1}{2}\int_{\mathbb{S}^{n-1}}\big{(}\operatorname{ sgn}\langle-\omega,\nu(x)\rangle\big{)}\langle-\omega,\nu(x)\rangle\,d\mathcal{H}^{n-1}( \omega)\\ &\quad=\frac{1}{2}\int_{\mathbb{S}^{n-1}}|\langle\omega,\nu(x) \rangle|\,d\mathcal{H}^{n-1}(\omega).\end{split} \tag{3.1}\]
This last integral is invariant under rotations and, hence, may be computed in graph coordinates on \(\mathbb{S}^{n-1}\) with \(\nu(x)=(0,...,0,1)\), giving
\[\frac{1}{2}\int_{\mathbb{S}^{n-1}}|\langle\omega,\nu(x)\rangle|\,d\mathcal{H}^ {n-1}(\omega)=\frac{1}{2}\int_{\mathbb{S}^{n-1}}|\omega_{n}|\,d\mathcal{H}^{n -1}(\omega)=\frac{1}{2}\int_{B(0,1)}2\,d\mathcal{L}^{n-1}(x)=\alpha_{n-1}.\]
This combines with Equation (3.1) to yield Equation (1.1), and since \(\Sigma\) satisfies the OCC, this conclusion holds for \(\mathcal{H}^{n-1}\)-a.e. \(x\in\Sigma\). Equation (1.2) then follows by integrating over \(\Sigma\) with respect to \(d\mathcal{H}^{n-1}(x)\).
Figure 3. The proof of the Theorem formalizes the following idea: if \(\Sigma\) (depicted here as the boundary of a \(C^{1}\)-bounded region) is partitioned into double cones with vertex at \(x\), then each piece of \(\Sigma\) that slices the double cone contributes approximately the same mass to the integral in Equation (1.1), up to a sign. (The weight factor \(\langle x-y,\nu(y)\rangle/\|x-y\|^{n}\) is chosen precisely to guarantee this.) The “piece” of \(\Sigma\) at \(x\) contributes nothing, so the OCC implies that the contribution to the integral from this double cone is approximately the area of a slice of the cone that is unit distance from \(x\) and orthogonal to the axis of the cone.
Again, we precede the proof of the Corollary with a technical lemma to the effect that, when entering or leaving a set of finite perimeter, one typically must cross the reduced boundary. Denote by \(E^{(t)}\) the set of points in \(\mathbb{R}^{n}\) whose Lebesgue density with respect to \(E\) is \(t\) (\(0\leq t\leq 1\)).
**Lemma 2**.: _If \(E\) is a set of finite perimeter, then \((\partial^{*}E,\nu_{E})\) satisfies the OCC._
Proof.: Modifying \(E\) on an \(\mathcal{L}^{n}\)-null set--an operation that does not affect \(\partial^{*}E\)--we assume without loss of generality \(E^{(1)}\subseteq E\) and \(E^{(0)}\subseteq\mathbb{R}^{n}\setminus E\). Let \(\chi_{E}^{*}\) be the precise representative of \(\chi_{E}\), and, for \(x\in\mathbb{R}^{n}\) and \(\omega\in\mathbb{S}^{n-1}\), define \((\chi_{E}^{*})_{x}^{\omega}\colon\mathbb{R}\to\mathbb{R}\) by
\[(\chi_{E}^{*})_{x}^{\omega}(t):=\chi_{E}^{*}(x+t\omega).\]
Our claim will follow from a general result, adapted here from [1] Theorem 3.108:
**Theorem**.: _For \(\mathcal{H}^{n-1}\)-a.e. \(x\in\mathbb{R}^{n}\), the following statements hold for \(\mathcal{H}^{n-1}\)-a.e. \(\omega\in\mathbb{S}^{n-1}\):_
1. \(L_{x,\omega}\subseteq E^{(0)}\cup\partial^{*}E\cup E^{(1)}\)_._
2. \((\chi_{E}^{*})_{x}^{\omega}\) _has bounded variation,_ \((\chi_{E}^{*})_{x}^{\omega}(t)=\chi_{E}(x+t\omega)\) _for_ \(\mathcal{L}^{1}\)_-a.e._ \(t\in\mathbb{R}\)_, and the set of discontinuities of_ \((\chi_{E}^{*})_{x}^{\omega}\) _is given by_ \[(J_{E})_{x}^{\omega}:=\{t\in\mathbb{R}\colon x+t\omega\in\partial^{*}E\}.\]
3. \(\langle\omega,\nu_{E}(y)\rangle\neq 0\) _for all_ \(y\in\partial^{*}E\cap L_{x,\omega}\)_._
4. _If_ \(x+t\omega\in\partial^{*}E\)_, then we have_ \[\lim_{s\uparrow t}\,(\chi_{E}^{*})_{x}^{\omega}(s)=\left\{\begin{array} []{ll}1&\mbox{if }\langle\omega,\nu_{E}(x+t\omega)\rangle>0\\ 0&\mbox{if }\langle\omega,\nu_{E}(x+t\omega)\rangle<0\end{array}\right.\quad \mbox{and}\] \[\lim_{s\downarrow t}\,(\chi_{E}^{*})_{x}^{\omega}(s)=\left\{ \begin{array}{ll}0&\mbox{if }\langle\omega,\nu_{E}(x+t\omega)\rangle>0\\ 1&\mbox{if }\langle\omega,\nu_{E}(x+t\omega)\rangle<0.\end{array}\right.\]
Claims 1 and 3 are included to make sense of Claims 2 and 4, respectively. By the structure of sets of finite perimeter on the real line, Claim 2 also implies that \((\chi_{E}^{*})_{x}^{\omega}\) is equal to the indicator function of a finite collection of positively separated bounded intervals, except at the endpoints of these intervals (the points of \((J_{E})_{x}^{\omega}\)), where \((\chi_{E}^{*})_{x}^{\omega}\) takes the value \(\frac{1}{2}\). (See [3] Proposition 12.13 and [1] Theorem 3.28.)
Let \(x\) and \(\omega\) be as above, and enumerate the points of \((J_{E})_{x}^{\omega}\) by \(y_{i}=x+t_{i}\omega\), \(t_{1}<\cdots<t_{k}\). By Claim 2, \(k\) is even, so Equation (2.1) defining the OCC will be satisfied if \(\langle\omega,\nu_{E}(y_{i})\rangle\) and \(\langle\omega,\nu_{E}(y_{i+1})\rangle\) have opposite signs for \(i=1,...,k-1\). If \(\langle\omega,\nu_{E}(y_{i})\rangle>0\), then, by Claim 4, we have
\[\lim_{s\downarrow t_{i}}\,(\chi_{E}^{*})_{x}^{\omega}(s)=0.\]
By Claim 2 and the preceding remark on the structure of \((\chi_{E}^{*})_{x}^{\omega}\), we must have \(\chi_{E}(x+t\omega)=0\) for all \(t>t_{i}\) sufficiently close to \(t_{i}\). Since \((\chi_{E}^{*})_{x}^{\omega}\) is continuous on \((t_{i},t_{i+1})\) and takes a discrete set of values, it follows that \(\chi_{E}^{*}(x+t\omega)=0\) for all \(t_{i}<t<t_{i+1}\), whence
\[\lim_{s\uparrow t_{i+1}}\,\chi_{E}^{*}(x+s\omega)=\lim_{s\uparrow t_{i+1}}\,(\chi_{E}^{*})_{x}^{\omega}(s)=0.\]
Another application of Claim 4 gives \(\langle\omega,\nu_{E}(y_{i+1})\rangle<0\), as desired. By an identical argument, we have \(\langle\omega,\nu_{E}(y_{i+1})\rangle>0\) whenever \(\langle\omega,\nu_{E}(y_{i})\rangle<0\), so the signs of the inner products against \(\omega\) alternate, as we sought to show.
Proof of the Corollary.: By Lemma 2, \((\Sigma,\nu)=(\partial^{*}E,\nu_{E})\) satisfies Equation (1.2), so the triangle inequality for integrals gives (1.3). Equality holds if and only if the integrand is nonnegative \(\mathcal{H}^{n-1}\)-a.e.; or, equivalently, if and only if \(\langle y-x,\nu_{E}(x)\rangle\leq 0\) for \(\mathcal{H}^{n-1}\)-a.e. \(x,y\in\partial^{*}E\). It is clear that, if \(E\) is convex, then this inequality holds for all \(x,y\in\partial^{*}E\). In this case,
\[\overline{E}=\bigcap_{x\in\partial^{*}E}\{z\in\mathbb{R}^{n}\colon\langle z-x,\nu_{E}(x)\rangle\leq 0\},\]
and the converse also holds: if \(\langle y-x,\nu_{E}(x)\rangle\leq 0\) for all \(x,y\in\partial^{*}E\), then this intersection defines a convex region of which \(\partial^{*}E\) is the reduced boundary.
## Acknowledgement
My gratitude goes to Stefan Steinerberger for his ideas and suggestions throughout the drafting of this article.
|
2302.09142
|
Quantum State Transfer Optimization: Balancing Fidelity and Energy
Consumption using Pontryagin Maximum Principle
|
In this study, we address a control-constrained optimal control problem
pertaining to the transformation of quantum states. Our objective is to
navigate a quantum system from an initial state to a desired target state while
adhering to the principles of the Liouville-von Neumann equation. To achieve
this, we introduce a cost functional that balances the dual goals of fidelity
maximization and energy consumption minimization. We derive optimality
conditions in the form of the Pontryagin Maximum Principle (PMP) for the
matrix-valued dynamics associated with this problem. Subsequently, we present a
time-discretized computational scheme designed to solve the optimal control
problem. This computational scheme is rooted in an indirect method grounded in
the PMP, showcasing its versatility and efficacy. To illustrate the
practicality and applicability of our methodology, we employ it to address the
case of a spin $\frac{1}{2}$ particle subjected to interaction with a magnetic
field. Our findings shed light on the potential of this approach to tackle
complex quantum control scenarios and contribute to the broader field of
quantum state transformations.
|
Nahid Binandeh Dehaghani, A. Pedro Aguiar
|
2023-01-30T12:53:48Z
|
http://arxiv.org/abs/2302.09142v2
|
# Optimal Control of Quantum State Transfer
###### Abstract
We consider a control-constrained optimal control problem of quantum state transformation from a given initial state to a desired target state satisfying the Liouville-von Neumann equation. The cost functional to be optimized is viewed as a trade-off between maximizing fidelity and minimizing energy consumption. As a new approach, we derive the optimality conditions in the form of the Pontryagin Maximum Principle (PMP) for the related matrix-valued dynamics, and next we present a time-discretized computational scheme to solve the proposed optimal control problem by using an indirect method based on the PMP. The algorithm is applied to a spin \(\frac{1}{2}\) particle interacting with a magnetic field.
+
Footnote †: footnote]The authors acknowledge the support of FCT for the grant 2021.07608.BD, the ARISE Associated Laboratory, Ref. LA/P/(1212/2020, and the R&D Unit SYSTEC-Base, Ref. UIDB/01047/2020, and Programmatic, Ref. UIDP/00147/2020 funds, and also the support of projects SNAP, Ref. NORTE-01-0145-FEDER-000085, and RELIABLE (PTDC/EEI-AUT/3522/2020) funded by national funds through FCT/MCTES. The work has been done in the honor and memory of Professor Fernando Lobo Pereira.
Quantum technology aspires to develop practical applications based on properties of the systems obeying the laws of quantum mechanics. This objective requires efficient manipulation of quantum objects in order to obtain desired behaviors. Quantum control comprises a number of techniques for shaping the time evolution of control parameters and enabling useful performance in applications ranging from quantum computing, (Palao and Kosloff, 2002; Xu, Li, Chen, and Xue, 2020) to sensing, (Rembold, Oshnik, Muller, Montangero, Calarco, and Neu, 2020; Poggali, Cappellaro, and Fabbri, 2018), simulation, (Holland, Wendt, Kravvaris, Wu, Ormand, DuBois, Quaglioni, and Pederiva, 2020), and metrology, (Lin, Ma, and Sels, 2021). Several quantum control methods, (Stefanatos and Paspalakis, 2021), such as brute-force optimization of a few pulse parameters, (Cheng, Deng, and Qia, 2020), Brumer-Shapiro coherent control, (Gruebele, 2001), pulse-timing control, (McDermott and Vavilov, 2014), stimulated-Raman-Adiabatic-Passage, (Vitanov, Rangelov, Shore, and Bergmann, 2017), genetic algorithms, (Lahoz-Beltra, 2016), and optimal control theory (OCT), (Glaser, Boscain, Calarco, Koch, Kockenberger, Kosloff, Kuprov, Luy, Schirmer, Schulte-Herbruggen et al., 2015; James, 2021; Werschnik, 2006), have been exploited in order to discover optimal pulse sequences. The studies on Quantum Optimal Control (QOC) began in the late 1980s, (Peirce, Dahleh, and Rabitz, 1988), and have undergone continuous development since then.
A wide range of problems arising in quantum technology, such as in quantum computing or nuclear magnetic resonance spectroscopy, can be formulated in the framework of OCT. Amongst the most dominant advances of QOC, we can point out the introduction of rapidly converging iterative algorithms, (Maday and Turinici, 2003; Zhu and Rabitz, 1998), and their generalization to dissipative systems, (Ohtsuki, Zhu, and Rabitz, 1999), while taking several control criteria into account, (Ohtsuki, Nakagami, Fujimura, Zhu, and Rabitz, 2001). In fact, QOC aims at developing an organized and rigorous design methodology in order to control the behavior of quantum systems so that a desired set of objectives can be obtained in an optimal way. The goal is a control field \(u(t)\) that produces the global maximum or minimum value of a performance index \(J(u(t))\), e.g. maximum fidelity, (Dehaghani and Pereira, 2022a) or minimum time, (Sugny, Rontz, and Jauslin, 2007), while overcoming decoherence and dissipation.
Despite the considerable research work that has been done in quantum optimal control theory, the Pontryagin maximum principle of optimal control is still far from being fully exploited in the quantum context. In the current state of the art, the quantum systems to be controlled are usually simple closed systems, in which the quantum state is expressed in the form of unit vectors, and evolves according to the Schrodinger equation, (Boscain, Sigalotti, and Sugny, 2021; Werschnik and Gross, 2007; D'alessandro and Dahleh, 2001). However, in practical applications, the quantum systems to be controlled are usually not simple closed systems. They may be quantum ensembles, and their states cannot be expressed in the form of unit vectors. In this paper, we address precisely this problem by considering the evolution of the matrix-valued probability density function, and describing the system dynamics in terms of a density operator, by means of, e.g., master equations. To the best of the authors' knowledge, such a study has not been done, with the exception of our previous recent
work in (Dehaghani and Pereira, 2022b). In this paper, we extend and generalize our previous results by applying the Maximum Principle of Pontryagin to the matrix-valued quantum dynamic control system, where the state of the quantum system is described through the density operator evolving according to the Liouville-von Neumann equation. We aim at a trade-off between attaining the maximum fidelity, which signifies a security level for the quantum state transformation and is of high importance notably in quantum information theory, and keeping the energy of the field small. The first-order necessary optimality conditions obtained from the application of the PMP result in a two-point boundary value problem, which we solve by proposing a shooting algorithm that also accounts for control constraints. Simulation results illustrate the effectiveness of the proposed methodology.
The paper is organized as follows: We first review the set of PMP optimality conditions for a general system. Then, we turn to the context of quantum systems, and we study the basis of a quantum optimal control problem, including the current state of the art. Next, we address the questions of existence of an optimal control and also controllability of a quantum system. Then, we present the physical description of a simple two-level quantum-mechanical system, which we have used in our work. In the next section, we describe the system under study, and, then, we formulate the optimal control problem under control constraints and obtain the necessary conditions of optimality. We later present the application of PMP by means of an indirect method through an algorithm, including simulation results. The paper ends with brief conclusions and an overview on prospective research challenges.
### Notation.
For a general continuous-time trajectory \(x\), the term \(x(t)\) indicates the trajectory assessed at a specific time \(t\). For writing partial differential equations (PDEs), we denote partial derivatives using subscripts. In the general situation where \(f\) denotes a function of \(n\) variables including \(x\), then \(f_{x}\) denotes the partial derivative relative to the \(x\) input. Throughout the paper, we have used \(i\) as the imaginary unit. For a matrix \(A\), \(A^{T}\), \(A^{\ast}\), and \(A^{\dagger}\) represent the transpose, complex conjugate, and conjugate transpose of matrix A, respectively. To denote the wavefunctions as vectors, we use the Dirac notation such that \(\left|\psi\right\rangle=\sum\limits_{j=1}^{n}\alpha_{j}\left|\hat{\psi}_{j}\right\rangle\), where \(\left|\psi\right\rangle\) indicates a state vector, \(\alpha_{j}\) are the complex-valued expansion coefficients, and \(\left|\hat{\psi}_{j}\right\rangle\) are basis vectors that are fixed.
We denote a finite-dimensional separable Hilbert space by \(\mathbb{H}\), and define it over the complex field \(\mathbb{C}\), so \(\mathbb{H}\simeq\mathbb{C}^{N}\), where \(N\) indicates the dimension of the space. We consider the set \(\mathcal{B}(\mathbb{H})\) as the set of linear operators on the Hilbert space and define it as \(\mathcal{B}(\mathbb{H})\simeq\mathbb{C}^{N\times N}\). The inner product in the set \(\mathcal{B}(\mathbb{H})\) is the Hilbert-Schmidt inner product, defined as \(\left\langle A,B\right\rangle=tr\left(A^{\dagger}B\right)\), where \(tr(A)\) indicates the trace of a square matrix \(A\). The commutator of two elements \(A\) and \(B\), as linear operators on the Hilbert space, is indicated by \([A,B]:=AB-BA\).
## 1. Pontryagin's maximum principle
In general terms, a fundamental optimal control problem is formulated as the following (Werschnik, 2006):
Given the dynamical system \(\dot{x}\left(t\right)=f\left(x\left(t\right),u\left(t\right)\right)\) and a set of admissible controls \(u(t)\in\mathcal{U}\), we have to determine an admissible control signal such that the objective functional
\[J=\Phi(x(T))+\int\limits_{0}^{T}L\left(x\left(t\right),u\left(t\right)\right)dt \tag{1}\]
is minimized. Let \(u^{\ast}(t)\in\mathcal{U}\) and \(x^{\ast}(t)\) represent the optimal control and state trajectory for the defined optimal control problem. Then, there is a time-varying adjoint trajectory \(\lambda(t)\) that together with \(u^{\ast}(t)\) and \(x^{\ast}(t)\) satisfy
_system equation and initial state condition_
\[\dot{x}^{\ast}\left(t\right)=f\left(x^{\ast}\left(t\right),u^{\ast}\left(t \right)\right) \tag{2}\]
_adjoint equation and transversality condition_
\[-\dot{\lambda}^{\dagger}\left(t\right)=\lambda^{\dagger}\left(t\right)f_{x} \left(x^{\ast}\left(t\right),u^{\ast}\left(t\right)\right)-L_{x}\left(x\left( t\right),u\left(t\right)\right) \tag{3}\]
\[\lambda^{\dagger}\left(T\right)=-\Phi_{x}\left(x^{\ast}\left(T\right)\right)\]
For all \(t\in[0,T]\) and \(u(t)\in\mathcal{U}\)
_Maximum condition_
\[\mathcal{H}\left(\lambda\left(t\right),x^{\ast}\left(t\right),u\left(t\right) \right)\leq\mathcal{H}\left(\lambda\left(t\right),x^{\ast}\left(t\right),u^{ \ast}\left(t\right)\right) \tag{4}\]
where \(\mathcal{H}\) is the Pontryagin Hamiltonian defined as
\[\mathcal{H}\left(\lambda\left(t\right),x\left(t\right),u\left(t\right)\right):= \lambda^{\dagger}\left(t\right)f\left(x\left(t\right),u\left(t\right)\right)-L \left(x\left(t\right),u\left(t\right)\right) \tag{5}\]
## 2. An overview on quantum optimal control
Quantum Optimal Control, (Glaser et al., 2015), can intuitively be formulated in the above-mentioned setting. For such problems, the state of the system may be described by the pure state vector, density operator (for both mixed and pure quantum states), or we can consider the dynamics of the evolution operator. The evolution of a pure state, which is not entangled with the environment, can be described by a wave function \(\left|\psi(t)\right\rangle\), which evolves in time according to a control-dependent Schrodinger equation, (Boscain et al., 2021; Cong, 2014; Werschnik, 2006),
\[i\hbar\!\!\left|\dot{\psi}\left(t\right)\right\rangle=H\left(u\left(t\right) \right)\left|\psi\left(t\right)\right\rangle,\quad\left|\psi(t=0)\right\rangle= \left|\psi_{0}\right\rangle \tag{6}\]
where \(H(u(t))\) is the quantum-mechanical Hamiltonian of the system, and \(\hbar\) is the reduced Planck constant, usually set as \(\hbar=1\) for convenience. The system control can be realized by an admissible set of external control signals \(u_{k}(t)\in\mathbb{R}\), which are coupled to the quantum system via time independent interaction Hamiltonians \(H_{k}\). Therefore, the total quantum Hamiltonian defined as
\[H\left(u\left(t\right)\right)=H_{0}+\sum\limits_{k=1}^{m}u_{k}(t)H_{k} \tag{7}\]
determines the controlled evolution, in which \(H_{0}\) indicates the time independent internal (free) Hamiltonian. In (6), both the system state and Hamiltonian are complex quantities. However, we can express the problem in terms of only real quantities by introducing \(x:=\left[\boldsymbol{\psi}_{R}^{T},\boldsymbol{\psi}_{I}^{T}\right]^{T}\), where \(\left|\psi_{R}(t)\right\rangle,\left|\psi_{I}(t)\right\rangle\in\mathbb{R}^{n}\), and separating the real and
imaginary parts of \(-iH\left(u(t)\right)=R\left(u(t)\right)+iI\left(u(t)\right)\), where \(R\left(u(t)\right),I\left(u(t)\right)\in\mathbb{R}^{n\times n}\) are skew-symmetric and symmetric matrices for all values of \(u(t)\), respectively. Now, we can rewrite the differential equation describing the dynamics of the system implying only real values by
\[\dot{x}=\tilde{H}\left(u(t)\right)x \tag{8}\]
in which
\[\tilde{H}\left(u(t)\right)=\begin{pmatrix}R\left(u(t)\right)&-I\left(u(t) \right)\\ I\left(u(t)\right)&R\left(u(t)\right)\end{pmatrix} \tag{9}\]
is both symplectic and skew-symmetric for all values of \(u(t)\). The cost in (1) can also be rewritten by introducing appropriate functions \(\tilde{\Phi}\) and \(\tilde{L}\). Hence, the Pontryagin Hamiltonian takes the form
\[\mathcal{H}\left(\lambda\left(t\right),x\left(t\right),u\left(t\right)\right) =\lambda^{T}\left(t\right)\tilde{H}\left(u\left(t\right)\right)x- \tilde{L}\left(x\left(t\right),u\left(t\right)\right) \tag{10}\]
from which the optimal control \(u(t)\) has to satisfy the maximum condition (4). Since \(\tilde{H}\left(u(t)\right)\) is skew-symmetric, (3) can be rewritten as (Dehaghani and Pereira, 2022c),
\[\begin{array}{c}\dot{\lambda}\left(t\right)=\tilde{H}\left(u\right)\lambda \left(t\right)+\tilde{L}_{x}^{T}\\ \lambda\left(T\right)=-\tilde{\Phi}_{x}^{T}\left(x(T)\right)\end{array} \tag{11}\]
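To make the real embedding of (8)-(9) concrete, the following minimal Python sketch (not part of the original derivation; the function name `real_embedding` and the example Hamiltonian are illustrative) builds \(\tilde{H}(u)\) from a Hermitian Hamiltonian, checks its skew-symmetry, and verifies that propagating the real vector \(x=[\boldsymbol{\psi}_{R}^{T},\boldsymbol{\psi}_{I}^{T}]^{T}\) reproduces the complex Schrodinger evolution.

```python
import numpy as np
from scipy.linalg import expm

def real_embedding(H):
    """Split -iH = R + iI for Hermitian H and return the real matrix of (9),
    which acts on x = [Re(psi); Im(psi)] as in (8)."""
    M = -1j * H
    R, I = M.real, M.imag
    return np.block([[R, -I], [I, R]])

# Illustrative two-level Hamiltonian of the form (7)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.001 * sz / 2 + 0.5 * sx / 2

Ht = real_embedding(H)
assert np.allclose(Ht, -Ht.T)                  # skew-symmetric, as noted after (9)

psi0 = np.array([1.0, 0.0], dtype=complex)
x0 = np.concatenate([psi0.real, psi0.imag])
t = 1.0
psi_t = expm(-1j * H * t) @ psi0               # complex Schrodinger propagation (6)
x_t = expm(Ht * t) @ x0                        # real propagation of (8)
assert np.allclose(x_t, np.concatenate([psi_t.real, psi_t.imag]))
```

The final check works because the map \(M\mapsto\begin{pmatrix}\operatorname{Re}M&-\operatorname{Im}M\\ \operatorname{Im}M&\operatorname{Re}M\end{pmatrix}\) is a real matrix representation of complex multiplication, so it commutes with the matrix exponential.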
## 3 Existence of the Optimal Control
Since we use the necessary optimality conditions, we also need to address the problem of existence of the optimal control. There are already some standard results, well explained in (Fleming and Rishel, 2012), Chapter III, that are applicable to the case considered in our work. According to (Fleming and Rishel, 2012), the existence theorem guarantees the existence of an optimal control in the Lebesgue Integrable (\(LI\)) set if a control steering a given initial state to a desired target state exists. The control functions \(u_{k}(t)\) in (8) are supposed to be \(LI\) as well. We will also suppose in this paper that the optimal controls are in the set \(LI\), according to the above-mentioned theorem. In order to guarantee the existence of a control, one needs to address the question of controllability. Controllability concerns the possibility of steering the system from one state to another for every pair of states.
The complex sphere \(\mathbb{S}^{2N-1}\subset\mathbb{H}\), representing the pure quantum states, is a homogeneous space of the Lie group \(U\left(N\right)=\left\{U\in GL\left(N,\mathbb{C}\right)\mid UU^{\dagger}=U^{\dagger}U=I\right\}\), and of its proper subgroup \(SU\left(N\right)\). The Lie algebras of \(U(N)\) and \(SU(N)\) are
\[u\left(N\right)=\left\{A\in C^{N\times N}\left|A^{\dagger}=-A\right.\right\} \tag{12}\]
and
\[su\left(N\right)=\left\{A\in u\left(N\right)\left|tr\left(A\right)=0\right. \right\}, \tag{13}\]
respectively. The Schrodinger equation stated in (6) can be lifted to the Lie group \(SU(N)\) to obtain the Schrodinger equation for the unitary propagator as
\[i\frac{d}{dt}U\left(t\right)=\left(H_{0}+\sum_{k=1}^{m}u_{k}\left(t\right)H_{ k}\right)U\left(t\right),\quad U(0)=I \tag{14}\]
where \(U\in SU(N)\). Equation (14) is a right-invariant control system on the compact Lie group \(SU(N)\), so
\[\text{\em Lie}\left\{iH_{0},\ldots,iH_{m}\right\}=su\left(N\right) \tag{15}\]
which represents a necessary and sufficient condition for the controllability of the system indicated in (14), (Boothby and Wilson, 1979). Overall, if a right invariant system is controllable, then the bilinear system is also controllable, (Sachkov, 1997). Therefore, if the evolution operator of the system satisfies a controllable equation, then the bilinear system described by the certain quantum-mechanical Hamiltonian \(H\left(u\left(t\right)\right)\) is controllable as well, so it is possible to design controls to steer the bilinear system from one state to another state in the state-space. Consequently, the system indicated in (6) is controllable provided that (15) holds. The controllability of two-level quantum systems, as the case considered in this paper, is analysed in more details in (D'Alessandro, 2000).
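The rank condition (15) can also be checked numerically. The sketch below is illustrative (the helper `lie_closure_dim` is not from the paper): it closes the span of the generators under commutators and reports the dimension of the resulting Lie algebra; for the spin-\(\frac{1}{2}\) generators used later, it returns \(3=\dim su(2)\), so the system is controllable.

```python
import numpy as np

def lie_closure_dim(generators, tol=1e-9, max_sweeps=6):
    """Real dimension of the Lie algebra generated by skew-Hermitian matrices,
    obtained by repeatedly adding commutators until the span stops growing."""
    def vec(M):                                   # view a complex matrix as a real vector
        return np.concatenate([M.real.ravel(), M.imag.ravel()])

    mats = [np.array(G, dtype=complex) for G in generators]
    for _ in range(max_sweeps):
        rank = np.linalg.matrix_rank(np.array([vec(M) for M in mats]), tol)
        mats = mats + [A @ B - B @ A for A in mats for B in mats]
        new_rank = np.linalg.matrix_rank(np.array([vec(M) for M in mats]), tol)
        if new_rank == rank:                      # span is closed under commutation
            return rank
    return new_rank

# Drift and one control Hamiltonian of a spin-1/2 system (cf. Section 4)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(lie_closure_dim([1j * sz / 2, 1j * sx / 2]))   # 3 = dim su(2), so (15) holds
```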
## 4 Physical Description of a Control System
In Nuclear Magnetic Resonance (NMR) experiments, a single spin \(\frac{1}{2}\) particle is controlled by means of an electromagnetic field \(\boldsymbol{B}\left(t\right)\), in which one component is kept constant in the \(z\) direction while the \(x\) and \(y\) components vary in time in order to change the direction of the spin, (D'Alessandro, 2021). The total quantum mechanical Hamiltonian is then described by the interaction of the external magnetic field \(\boldsymbol{B}\left(t\right)=\left(B_{x}\left(t\right),B_{y}\left(t\right),B_{z}\right)^{T}\) with the spin angular momentum \(\widehat{S}:=\left(\hat{S}_{x},\hat{S}_{y},\hat{S}_{z}\right)^{T}\) as
\[\begin{array}{c}H\left(t\right)=\gamma\hat{S}^{T}\boldsymbol{B}\left(t\right) \\ =\gamma\left(\hat{S}_{x}B_{x}\left(t\right)+\hat{S}_{y}B_{y}\left(t\right)+\hat{ S}_{z}B_{z}\right)\end{array} \tag{16}\]
where \(\gamma\) is the gyromagnetic ratio. Let us express the state as \(\left|\psi\left(t\right)\right\rangle=c_{1}\left(t\right)\left|\frac{1}{2}\right\rangle+c_{2}\left(t\right)\left|-\frac{1}{2}\right\rangle\). From (6) and (16), the differential equation for \(\boldsymbol{c}\left(t\right)=\left(c_{1}\left(t\right),c_{2}\left(t\right)\right)^{T}\) is expressed as
\[i\frac{d}{dt}\boldsymbol{c}=\frac{\gamma}{2}\left(\sigma_{x}B_{x}\left(t\right) +\sigma_{y}B_{y}\left(t\right)+\sigma_{z}B_{z}\right)\boldsymbol{c} \tag{17}\]
in which the matrix representation of the operators \(\sigma_{x}\), \(\sigma_{y}\), and \(\sigma_{z}\) is done by the so-called Pauli matrices. By appropriate scaling of time and setting the magnetic field arguments as controls we have
\[-iH\left(u\left(t\right)\right)=\left(\bar{\sigma}_{z}u_{z}+\bar{\sigma}_{x}u_{ x}\left(t\right)+\bar{\sigma}_{y}u_{y}\left(t\right)\right) \tag{18}\]
where
\[\bar{\sigma}_{x}=\frac{1}{2}\begin{pmatrix}0&i\\ i&0\end{pmatrix},\ \ \bar{\sigma}_{y}=\frac{1}{2}\begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\ \ \bar{\sigma}_{z}=\frac{1}{2}\begin{pmatrix}i&0\\ 0&-i\end{pmatrix}\]
span the Lie algebra \(su(2)\) of skew-Hermitian matrices with trace equal to zero, and satisfy the following commutation relations
\[\left[\bar{\sigma}_{x},\bar{\sigma}_{y}\right]=\bar{\sigma}_{z},\quad\left[ \bar{\sigma}_{y},\bar{\sigma}_{z}\right]=\bar{\sigma}_{x},\quad\left[\bar{ \sigma}_{z},\bar{\sigma}_{x}\right]=\bar{\sigma}_{y}. \tag{19}\]
Equation (18) can be cast in the real form (8), to obtain
\[\tilde{H}\left(u\left(t\right)\right)=T_{z}u_{z}+T_{y}u_{y}\left(t\right)+T_{x }u_{x}\left(t\right) \tag{20}\]
where
\[T_{x}=\frac{1}{2}\begin{pmatrix}0&0&0&-1\\ 0&0&-1&0\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix},\quad T_{y}=\frac{1}{2}\begin{pmatrix}0&-1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{pmatrix},\quad T_{z}=\frac{1}{2}\begin{pmatrix}0&0&-1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&-1&0&0\end{pmatrix}, \tag{21}\]
From now on, we denote the two time-varying control components by the vector \(\mathbf{u}(t)=\left[u_{x}\left(t\right)\ u_{y}\left(t\right)\right]^{T}\).
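As a quick numerical check (illustrative code, not part of the paper), the matrices \(\bar\sigma_{x},\bar\sigma_{y},\bar\sigma_{z}\) of (18) can be verified to satisfy the commutation relations (19), and applying the embedding (9) to them reproduces the matrices \(T_{x}\), \(T_{y}\) and \(T_{z}\) of (20)-(21); since the embedding is a real matrix representation, the \(T\) matrices inherit the same commutation relations.

```python
import numpy as np

# Scaled generators of eq. (18)
sb_x = 0.5 * np.array([[0, 1j], [1j, 0]])
sb_y = 0.5 * np.array([[0, -1], [1, 0]], dtype=complex)
sb_z = 0.5 * np.array([[1j, 0], [0, -1j]])

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(sb_x, sb_y), sb_z)     # commutation relations (19)
assert np.allclose(comm(sb_y, sb_z), sb_x)
assert np.allclose(comm(sb_z, sb_x), sb_y)

def embed(M):
    """Real 4x4 embedding of eq. (9): M = R + iI  ->  [[R, -I], [I, R]]."""
    return np.block([[M.real, -M.imag], [M.imag, M.real]])

T_x, T_y, T_z = embed(sb_x), embed(sb_y), embed(sb_z)
assert np.allclose(T_x, -T_x.T)                # skew-symmetric, as required by (8)-(9)
assert np.allclose(comm(T_x, T_y), T_z)        # relations (19) are preserved
```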
## 5 System Description
Every state vector \(\left|\psi_{j}\left(t\right)\right\rangle\) at time \(t\) can be obtained by
\[\left|\psi_{j}\left(t\right)\right\rangle=U(t)\left|\psi_{j}\left(0\right)\right\rangle \tag{22}\]
in which \(U(t)\) is the solution of (14) expressed as
\[U\left(t\right)=\exp\left(-i\int\limits_{0}^{t}H\left(s\right)ds\right) \tag{23}\]
called Dyson's series in the physics context and is similar to a Volterra series in control theory, (Altafini and Ticozzi, 2012). Hence, the outer product \(\left|\psi_{j}\left(t\right)\right\rangle\left\langle\psi_{j}\left(t\right)\right|\) results in
\[\left|\psi_{j}\left(t\right)\right\rangle\left\langle\psi_{j}\left(t\right) \right|=U\left(t\right)\left|\psi_{j}\left(0\right)\right\rangle\left\langle \psi_{j}\left(0\right)\right|U^{\dagger}\left(t\right) \tag{24}\]
and the same for any convex sum of \(\left|\psi_{j}\left(t\right)\right\rangle\left\langle\psi_{j}\left(t\right)\right|\). More precisely, let \(p_{j}\) be the fraction of population of an ensemble \(\left\{p_{j},\left|\psi_{j}\left(t\right)\right\rangle\right\},\) so the corresponding quantum density operator is expressed as
\[\rho=\sum\limits_{j}p_{j}\left|\psi_{j}\right\rangle\left\langle\psi_{j} \right|,\quad p_{j}\geq 0,\quad\sum\limits_{j}p_{j}=1 \tag{25}\]
which belongs to the set of Hermitian, positive semi-definite matrices with trace equal to one on the system's Hilbert space \(\mathbb{H}\). Hence, we can rewrite (25) as
\[\rho\left(t\right)=U\left(t\right)\rho\left(0\right)U^{\dagger}\left(t\right) \tag{26}\]
The infinitesimal version of (26) is the quantum Liouville-von Neumann equation, (Berman and Kosloff, 1991), expressed by
\[\dot{\rho}\left(t\right)=-i\left[H\left(u\left(t\right)\right),\rho\left(t \right)\right],\quad\rho(0)=\rho_{0} \tag{27}\]
The controllability of (27) can consequently be obtained from the necessary and sufficient condition indicated in (15). The main characteristic of the Liouville-von Neumann equation is that it generates isospectral evolutions, meaning that
\[sp\left(\rho\left(t\right)\right)=sp\left(\rho\left(0\right)\right)=\Phi \left(\rho\right)=\left\{\mu_{1},\ldots,\mu_{N}\right\} \tag{28}\]
where \(\mu_{1},\ldots,\mu_{N}\) are the eigenvalues of \(\rho(t)\). As a consequence of the isospectrality of (27), the set \(\Phi\left(\rho\right)\) forms a complete set of constants of motion of (27). Let us consider the set \(\mathcal{D}\left(\mathbb{H}\right)=\left\{\rho\in\mathcal{B}\left(\mathbb{H}\right)\mid\rho=\rho^{\dagger}\geq 0,\,tr\left(\rho\right)=1\right\}\), which is foliated into leaves uniquely determined through the set \(\Phi(\rho)\), and let \(\mathcal{C}\subset\mathcal{D}\left(\mathbb{H}\right)\) be such a leaf, where \(\rho_{0}\in\mathcal{C}\). Hence, \(\mathcal{C}=\left\{U\rho_{0}U^{\dagger},U\in SU\left(N\right)\right\}\) corresponds to the orbit of \(SU(N)\) under the action of conjugation passing through the initial density operator. Let \(j_{1},\ldots,j_{l}\) be the geometric multiplicities of the eigenvalues \(\Phi(\rho_{0})\), with \(j_{1}+\cdots+j_{l}=N\), \(2\leq l\leq N\); then \(\mathcal{C}\) is the homogeneous space \(\mathcal{C}=U\left(N\right)/\left(U\left(j_{1}\right)\times\ldots\times U\left(j_{l}\right)\right)\). As the eigenvalues of the density operator vary, the geometric multiplicities \(j_{1},\ldots,j_{l}\) form a flag, so the sets \(\mathcal{C}\) are called complex flag manifolds, (Bengtsson and Zyczkowski, 2017). The flag also determines the dimension of the flag manifold \(\mathcal{C}\), varying from \(2N-2\), for pure states, to \(N^{2}-N\), when all eigenvalues are distinct, (Altafini and Ticozzi, 2012). Equation (27) will be adopted as the model under investigation in the following sections.
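The isospectral character of (27)-(28) is easy to observe numerically. The following sketch (an illustration with an arbitrary control history, not taken from the paper) propagates a mixed state with piecewise-constant controls via (23) and (26) and checks that the trace and the spectrum of \(\rho(t)\) are preserved.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rho0 = np.diag([0.7, 0.3]).astype(complex)       # a mixed initial state
dt, steps = 0.01, 200
rho = rho0.copy()
for k in range(steps):
    u_x = np.sin(0.05 * k)                        # arbitrary control history
    H = 0.001 * sz / 2 + u_x * sx / 2             # Hamiltonian of the form (7)
    Uk = expm(-1j * H * dt)                       # one-step propagator, cf. (23)
    rho = Uk @ rho @ Uk.conj().T                  # eq. (26) over one step

assert np.isclose(np.trace(rho).real, 1.0)                    # unit trace preserved
assert np.allclose(np.sort(np.linalg.eigvalsh(rho)),
                   np.sort(np.linalg.eigvalsh(rho0)))          # isospectrality, eq. (28)
```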
## 6 Formulation of the Optimal Control Problem under Control Constraints
In optimal control of quantum state transfer problems, a natural cost of the form (1) to be minimized encodes a trade-off between attaining the maximum fidelity, which signifies a security level for the quantum state transformation, and keeping the energy of the field small. Motivated by this consideration, we propose the following optimal control problem (\(P_{1}\))
Minimize \[J=-\mathcal{F}(\rho(T),\sigma)+\eta\int\limits_{0}^{T}\mathbf{u}^{T}(t) \mathbf{u}(t)dt\] \[t\in[0,T],\quad\eta\in[0,1]\] subject to \[\text{\emph{dynamics}}:\dot{\rho}(t)=F\left(\rho(t),u(t)\right),\] \[\rho\left(t\right)\in\mathbb{C}^{n\times n}\] \[\text{\emph{initial condition}}:\rho\left(0\right)=\rho_{0}\] \[\text{\emph{control constraint}}:u(\cdot)\in\mathcal{U},\] \[\text{\emph{i.e.,}}\ u:[0,T]\rightarrow\Omega\subset\mathbb{R}^{2}\]
The coefficient \(\eta\geq 0\) is considered to signify the importance of energy minimization. The density operator \(\rho(t)\) is the quantum state variable supposed to satisfy the differential constraints according to the Liouville-von Neumann equation (27), and \(\rho_{0}\) is the so-called initial quantum state. The set of admissible controls \(\Omega\) is defined as
\[\Omega=\left\{\left[u_{x}(t)\ u_{y}(t)\right]\left|\left\|u\right\|^{2}\right. \leq\sqrt{2}u_{\max}\right\}\]
where \(u_{\max}\) is a given positive parameter. In \(P_{1}\), we have used the well-known Uhlmann-Jozsa definition of fidelity, (Jozsa, 1994), representing the maximal transition probability between the purifications of a pair of density matrices, here \(\rho(T)\) and the desired target state \(\sigma\), (Liang, Yeh, Mendonca, Teh, Reid, and Drummond, 2019), defined as
\[\mathcal{F}\left(\rho(T),\sigma\right):=\left(tr\sqrt{\sqrt{\rho(T)}\sigma\sqrt {\rho(T)}}\right)^{2}. \tag{29}\]
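A direct numerical implementation of (29) is straightforward; the sketch below (with illustrative helper names, not from the paper) uses an eigendecomposition-based matrix square root and evaluates the fidelity for the initial and target states used later in Section 9.

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semi-definite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann-Jozsa fidelity of eq. (29)."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

rho0  = np.array([[1, 0], [0, 0]], dtype=complex)   # initial state of Section 9
sigma = np.array([[0, 0], [0, 1]], dtype=complex)   # target state of Section 9
print(fidelity(rho0, sigma))   # 0.0: the two states are orthogonal pure states
print(fidelity(sigma, sigma))  # 1.0
```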
## 7 Necessary Conditions of Optimality in the Form of a Maximum Principle
The Pontryagin-Hamilton function \(\mathcal{H}\) is defined for almost all \(t\in[0,T]\) by introducing the matrix-valued time-varying multiplier \(\Lambda,\) designated by costate or adjoint variable of the system, (Dehaghani and Pereira, 2022b). Thus,
\[\mathcal{H}\left(\rho,u,\Lambda\right)=tr\left(\Lambda^{\dagger}F\left(\rho,u \right)\right)-L(u(t)) \tag{30}\]
According to the Pontryagin's Maximum Principle (4), for the optimal state trajectory \(\rho^{\ast}\) and the corresponding adjoint variable \(\Lambda,\) the optimal control \(u^{\ast}(t)\) maximizes the Pontryagin-Hamiltonian function \(\mathcal{H}\) for almost all \(t\in[0,T]\) and all admissible control values \(u\in\Omega\) such that
\[\mathcal{H}\left(\rho^{\ast}(t),u(t),\Lambda(t)\right)\leq\mathcal{H}\left( \rho^{\ast}(t),u^{\ast}(t),\Lambda(t)\right) \tag{31}\]
Consequently, the adjoint equation implies that
\[-\dot{\Lambda}^{\dagger}(t) =\mathcal{H}_{\rho}(\rho^{*}(t),u^{*}(t),\Lambda(t)) \tag{32}\] \[=i\left[H(u^{*}(t)),\Lambda^{\dagger}(t)\right]\]
in which \(H^{*}(t)\) denotes the quantum mechanical Hamiltonian evaluated at each time along the optimal control \(u^{*}\). Equation (32) has the formal solution
\[\Lambda^{\dagger}(t)=e^{i\int_{t}^{T}H^{*}(s)ds}\Lambda^{\dagger}(T)e^{-i\int_ {t}^{T}H^{*}(s)ds}. \tag{33}\]
The boundary condition at the final time for adjoint variable implies that
\[\Lambda^{\dagger}(T) =\nabla_{\rho}\left(tr\sqrt{\sqrt{\rho(T)}\sigma\sqrt{\rho(T)}} \right)^{2} \tag{34}\] \[=2tr\sqrt{\rho(T)\sigma}\sum\limits_{k=0}^{n-1}\alpha_{k}\sum \limits_{i=0}^{k-1}\bar{\rho}(T)^{i}\sqrt{\sigma}\bar{\rho}(T)^{k-i-1}\]
in which \(n\) indicates the dimension of the density matrix and \(\bar{\rho}(T)=\rho(T)-I\). The derivation of (34) can be found in (Dehaghani and Pereira, 2022b), where the coefficients \(\alpha_{k}\), \(k=0,\ldots,n-1\) are obtained from the application of the Cayley-Hamilton theorem.
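A useful sanity check on (34) — not part of the paper's derivation — is the special case of a pure target state: if \(\sigma=\left|\phi\right\rangle\left\langle\phi\right|\), then (29) collapses to \(\mathcal{F}(\rho(T),\sigma)=tr\left(\rho(T)\sigma\right)\), so the terminal costate should reduce to \(\Lambda^{\dagger}(T)=\sigma\). The snippet below (illustrative values) confirms the underlying identity numerically.

```python
import numpy as np

def psd_sqrt(M):
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, sigma):
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

rho   = np.array([[0.6, 0.1 - 0.05j], [0.1 + 0.05j, 0.4]])   # a generic mixed state
sigma = np.array([[0, 0], [0, 1]], dtype=complex)            # pure target state

# For a pure target, (29) is linear in rho, so its gradient (the terminal
# costate in (34)) is simply sigma.
assert np.isclose(fidelity(rho, sigma), np.real(np.trace(rho @ sigma)))
```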
## 8 Application of the Pontryagin Maximum Principle
In this section, we solve the optimal control problem defined in \((P_{1})\) by means of an indirect method based on the PMP. In the presented algorithm, we discretize the time interval \([0,T]\) into \(N\) subintervals such that \(t_{k}=\dfrac{k}{N}\) for \(k=0,\ldots,N-1\), and use \(j=0,1,\ldots\) as the iteration counter. Hence, the \(j\)-th iterate of a function \(f\) at time \(t_{k}\) is represented by \(f_{k}^{j}\). Here, both the system dynamics and the adjoint equation are treated with a first-order Euler approximation. The proposed algorithm is as follows:
**Step 1 - Initialization**.
Initialize the values of \(u_{xk}^{\ j}\) and \(u_{yk}^{\ j}\) for \(k=0,\ldots,N-1\), and go to the first iteration (j=1).
**Step 2 - Computation of the state trajectory**.
For \(k=0,\ldots,N-1\), compute \(U_{k}^{j}=e^{\sum\limits_{i=0}^{k-1}\frac{1}{N}\bar{H}_{i}^{j}}\), where \(\bar{H}_{k}^{j}\) is the implemented Hamiltonian of the system dynamics associated with the control \(u\) at time \(t_{k}\) and the \((j-1)\)-th iteration. Obtain
\[\rho_{k}^{j}=U_{k}^{j}\rho_{0}^{j}U_{k}^{j\dagger}\]
**Step 3 - Computation of the adjoint trajectory**
Compute \(\Lambda_{N}^{j}\) by using \(\rho_{N}^{j}\) computed in Step 2.
Compute \(\Lambda_{k}^{j}\) by using the discretized version of (33), that is, compute \(V_{k}^{j}=e^{\sum\limits_{i=k}^{N-1}\frac{1}{N}\bar{H}_{i}^{j}}\), and obtain
\[\Lambda_{k}^{j\dagger}=V_{k}^{j\dagger}\Lambda_{N}^{j\dagger}V_{k}^{j}\]
**Step 4 - Computation of the Pontryagin Hamilton function.**
For \(k=0,\ldots,N-1\), let
\[\mathcal{H}_{k}^{j}(u_{k}^{j})=tr\left(\Lambda_{k}^{j\dagger}F_{k}^{j}(\rho_{k}^{j},u_{k}^{j})\right)-L_{k}^{j}\]
**Step 5: Computation of the control function.**
For \(k=0,\ldots,N-1\), compute the temporary control values \(u_{temp,x_{k}^{j}}\), and \(u_{temp,y_{k}^{j}}\) that maximize the map
\[(u_{x},u_{y})\rightarrow\mathcal{H}_{k}^{j}(u_{x},u_{y}).\]
**Step 6: Apply the control constraints.**
\[u_{x,k}^{\ j}=\min(|u_{temp,x,k}^{\ j}|,u_{max})\,\text{sign}(u_{temp,x,k}^{\ j})\]
\[u_{y,k}^{\ j}=\min(|u_{temp,y,k}^{\ j}|,u_{max})\,\text{sign}(u_{temp,y,k}^{\ j})\]
**Step 7: Stopping test**.
For a determined tolerance error \(\varepsilon>0\), check the algorithm convergence by verifying if
\[\max_{k=0,\ldots,N-1}\{|u_{x,k}^{\ j}-u_{x,k}^{\ j-1}|\}<\varepsilon\] \[\max_{k=0,\ldots,N-1}\{|u_{y,k}^{\ j}-u_{y,k}^{\ j-1}|\}<\varepsilon\]
holds true. If yes, let \(u_{x}^{\ *}(t_{k})=u_{x,k}^{\ j}\) and \(u_{y}^{\ *}(t_{k})=u_{y,k}^{\ j}\) for \(k=0,\ldots,N-1\), and exit the algorithm.
Otherwise, update the temporary values for the control function according to
\[u_{temp,x,k}^{\ j}=u_{x,k}^{\ j-1}+\delta(u_{x,k}^{\ j}-u_{x,k}^{\ j-1})\] \[u_{temp,y,k}^{\ j}=u_{y,k}^{\ j-1}+\delta(u_{y,k}^{\ j}-u_{y,k}^{\ j-1})\]
where \(\delta>0\) is the learning rate coefficient.
Check the control constraints according to **Step 6**.
Then, let \(j=j+1\), go to **Step 2**.
Here, we check the convergence of the algorithm by verifying whether the control functions obtained in the current iteration are within an acceptable tolerance of those of the previous iteration. If not, we repeat the above steps until the desired convergence is achieved.
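The following condensed Python sketch illustrates Steps 1-7 for the spin-\(\frac{1}{2}\) example of Section 9. It is not the authors' code: parameter values follow Section 9, the terminal costate uses the pure-target simplification \(\Lambda^{\dagger}(T)=\sigma\) discussed above, one-step propagators are multiplied instead of exponentiating the summed generator (the two agree to first order in \(1/N\)), and Step 5 uses the closed-form maximizer of the quadratic Pontryagin Hamiltonian. The tolerances, iteration cap and learning rate may need tuning.

```python
import numpy as np
from scipy.linalg import expm

# Problem data (Section 9)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, Hx, Hy = 0.001 * sz / 2, sx / 2, sy / 2          # drift and control Hamiltonians
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.array([[0, 0], [0, 1]], dtype=complex)     # pure target state
T_f, N, eta, u_max, delta, eps_tol = 1.6, 20, 0.01, 1.5, 0.1, 1e-3
dt = T_f / N

def one_step_propagators(ux, uy):
    """Step 2/3: piecewise-constant one-step propagators U_k = exp(-i H(u_k) dt)."""
    return [expm(-1j * (H0 + ux[k] * Hx + uy[k] * Hy) * dt) for k in range(N)]

ux, uy = np.zeros(N), np.zeros(N)                     # Step 1: initialization
for j in range(500):
    U = one_step_propagators(ux, uy)
    rhos = [rho0]                                     # Step 2: forward state propagation
    for k in range(N):
        rhos.append(U[k] @ rhos[-1] @ U[k].conj().T)
    Lam = [None] * (N + 1)                            # Step 3: backward costate propagation
    Lam[N] = sigma.copy()                             # terminal costate for a pure target
    for k in range(N - 1, -1, -1):
        Lam[k] = U[k].conj().T @ Lam[k + 1] @ U[k]    # discretized eq. (33)
    # Steps 4-5: the Pontryagin Hamiltonian (30) is quadratic in (u_x, u_y); its
    # unconstrained maximizer per channel is tr(Lambda^dag (-i[H_x, rho])) / (2 eta)
    gx = np.array([np.real(np.trace(Lam[k] @ (-1j * (Hx @ rhos[k] - rhos[k] @ Hx))))
                   for k in range(N)])
    gy = np.array([np.real(np.trace(Lam[k] @ (-1j * (Hy @ rhos[k] - rhos[k] @ Hy))))
                   for k in range(N)])
    ux_cand = np.clip(gx / (2 * eta), -u_max, u_max)  # Step 6: control constraints
    uy_cand = np.clip(gy / (2 * eta), -u_max, u_max)
    # Step 7: relaxed update with learning rate delta, then convergence test
    ux_new = np.clip(ux + delta * (ux_cand - ux), -u_max, u_max)
    uy_new = np.clip(uy + delta * (uy_cand - uy), -u_max, u_max)
    converged = max(np.abs(ux_new - ux).max(), np.abs(uy_new - uy).max()) < eps_tol
    ux, uy = ux_new, uy_new
    if converged:
        break

rho_T = rho0.copy()                                   # final state with the final controls
for Uk in one_step_propagators(ux, uy):
    rho_T = Uk @ rho_T @ Uk.conj().T
print("iterations:", j + 1, "final fidelity:", np.real(np.trace(rho_T @ sigma)))
```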
## 9 Simulation Results
Let us consider the quantum state transfer problem with initial state \(\rho_{0}=\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\) and desired target state \(\sigma=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\). By implementing the algorithm explained in the previous section, we solve the quantum state transfer problem \(P_{1}\), driving the given initial quantum state to the desired target state while simultaneously minimizing the required control power with the factor \(\eta=0.01\). In this simulation, the control is constrained to \(u_{x},u_{y}\in[-1.5,1.5]\), and the system evolves over \(t\in[0,1.6]\) to accomplish the desired fidelity. The control signal in the \(z\) direction is kept constant at \(u_{z}=0.001\), and the \(x\) and \(y\) components are initialized as \(u_{x,k}^{\ 0}=0\) and \(u_{y,k}^{\ 0}=0\) for \(k=0,\ldots,N-1\). With the number of time slices \(N=20\), the learning coefficient \(\delta=0.1\), and the stopping threshold \(\varepsilon=10^{-3}\), the algorithm converges after \(114\) iterations. The residual graph showing the convergence of the algorithm is displayed in Fig. 1.
Figs. 2 and 3 present the evolution over time of the four elements of the density matrix at the final iteration; the real and imaginary parts of the state-trajectory elements are shown separately. As seen, a smooth evolution towards the desired state is obtained at the final iteration. The density matrix retains unit trace and remains Hermitian throughout its evolution. The resulting final state is \(\rho\left(T\right)=\begin{pmatrix}0.0005&0.0157+0.0145\mathrm{i}\\ 0.0157+0.0145\mathrm{i}&0.9995\end{pmatrix}\).
The \(x\) and \(y\) components of the control signal are depicted in Fig. 4 and Fig. 5. These control signals drive the time evolution of the state along the optimal trajectory towards maximum fidelity. Without them, i.e., considering only the drift part of the quantum-mechanical Hamiltonian, the fidelity would remain zero.
## 10 Conclusion
In this paper, we have shown the application of the Maximum Principle of Pontryagin to compute a constrained optimal control that trades off between maximizing fidelity and keeping the energy of the field relatively small. For the studied optimal control problem, we described the state of the quantum system by the density operator, evolving according to the Liouville-von Neumann equation. As a new approach in quantum optimal control, we obtained the first-order necessary optimality conditions for the matrix-valued dynamics. The application of the Maximum Principle of Pontryagin resulted in the proposed shooting algorithm, which has been used to solve the two-point boundary value problem. The suggested procedure can be readily applied to pure state vectors, or one can also consider the dynamics of the evolution operator. Future challenges consist in exploiting the versatility of the optimal control paradigm further by considering open quantum systems, as well as additional constraints, e.g., state constraints.
|
2301.10143
|
Certain graphs with exactly one irreducible $T$-module with endpoint
$1$, which is thin: the pseudo-distance-regularized case
|
Let $\Gamma$ denote a finite, simple and connected graph. Fix a vertex $x$ of
$\Gamma$ which is not a leaf and let $T=T(x)$ denote the Terwilliger algebra of
$\Gamma$ with respect to $x$. Assume that the unique irreducible $T$-module
with endpoint $0$ is thin, or equivalently that $\Gamma$ is
pseudo-distance-regular around $x$. We consider the property that $\Gamma$ has,
up to isomorphism, a unique irreducible $T$-module with endpoint $1$, and that
this $T$-module is thin. The main result of the paper is a combinatorial
characterization of this property.
|
Blas Fernández
|
2023-01-24T17:29:15Z
|
http://arxiv.org/abs/2301.10143v1
|
Certain graphs with exactly one irreducible \(T\)-module with endpoint \(1\), which is thin: the pseudo-distance-regularized case
###### Abstract
Let \(\Gamma\) denote a finite, simple and connected graph. Fix a vertex \(x\) of \(\Gamma\) which is not a leaf and let \(T=T(x)\) denote the Terwilliger algebra of \(\Gamma\) with respect to \(x\). Assume that the unique irreducible \(T\)-module with endpoint \(0\) is thin, or equivalently that \(\Gamma\) is pseudo-distance-regular around \(x\). We consider the property that \(\Gamma\) has, up to isomorphism, a unique irreducible \(T\)-module with endpoint \(1\), and that this \(T\)-module is thin. The main result of the paper is a combinatorial characterization of this property.
_Mathematics Subject Classifications: 05C25_
_Keywords: Terwilliger algebra; irreducible module_
## 1 Introduction
Throughout this section, let \(\Gamma\) denote a finite, simple and connected graph. Fix a vertex \(x\) of \(\Gamma\) which is not a leaf and let \(T=T(x)\) denote the Terwilliger algebra of \(\Gamma\) with respect to \(x\). The algebra \(T\) is non-commutative and semisimple as it is closed under conjugate-transpose. Therefore, in many instances this algebra can best be studied via its irreducible modules.
There has been a sizeable amount of research investigating (distance-regular) graphs that have a Terwilliger algebra \(T\) with, up to isomorphism, just a few \(T\)-modules of a certain endpoint, all of which are (non-)thin (with respect to a certain base vertex); see for example [5, 6, 7, 8, 16, 17, 18, 19, 20, 21, 22, 23]. These studies generally try to show that such algebraic conditions hold if and only if certain combinatorial conditions are satisfied. A natural follow-up to these results involving Terwilliger algebras of non-distance-regular graphs is presented here. For the most recent research where the Terwilliger algebra plays an important role, see for example [1, 12, 13, 14, 15, 24, 27, 28] and the references therein.
It turns out that there exists a unique irreducible \(T\)-module with endpoint \(0\). It was already proved in [25] that this irreducible \(T\)-module is thin if \(\Gamma\) is distance-regular around \(x\). The converse, however, is not true. Fiol and Garriga [10] later introduced the concept of _pseudo-distance-regularity_ around vertex \(x\), which is based on assigning weights to the vertices where the weights correspond to the entries of the (normalized) positive eigenvector.
They showed that the unique irreducible \(T\)-module with endpoint \(0\) is thin if and only if \(\Gamma\) is pseudo-distance-regular around \(x\) (see also [9, Theorem 3.1]). Moreover, Fernandez and Miklavic recently gave a purely combinatorial characterization of the property, that the irreducible \(T\)-module with endpoint \(0\) is thin (see [8, Theorem 6]). This characterization involves the number of walks of a certain shape between vertex \(x\) and vertices at some fixed distance from \(x\).
Assume that the unique irreducible \(T\)-module with endpoint \(0\) is thin, or equivalently that \(x\) is pseudo-distance-regularized. The main goal of this paper is to find a combinatorial characterization of graphs, which also have a unique irreducible \(T\)-module of endpoint \(1\) (up to isomorphism), and this module is thin. If \(\Gamma\) is distance-regular, then this situation occurs if and only if \(\Gamma\) is bipartite or almost-bipartite [4, Theorem 1.3]. If \(\Gamma\) is distance-biregular, then again \(\Gamma\) has (up to isomorphism) a unique irreducible \(T\)-module with endpoint \(1\), and this module is thin (see [6]). The case when \(\Gamma\) is distance-regular around \(x\) but not necessarily distance-regularized (distance-regular or distance-biregular) was recently considered in [5, 7]. Here we generalize the above results to the case when \(\Gamma\) is not necessarily distance-regular around \(x\) and thus solve [7, Problem 9.1]. The main result of the paper is a combinatorial characterization of such graphs that involves the number of some walks in \(\Gamma\) of a particular shape. Moreover, we give examples of graphs that possess the above mentioned combinatorial properties. We remark that this paper is a generalization of previous efforts in [2, 3, 4, 5, 6, 7] to understand and classify graphs which are pseudo-distance-regular around a fixed vertex and also have a unique irreducible \(T\)-module (up to isomorphism) with endpoint \(1\), and this module is thin.
Our paper is organized as follows. In Section 2 we recall basic definitions and results about Terwilliger algebras that we will find useful later in the paper. In Section 3 we then state our main result in Theorem 3.5. In Section 4, we prove that certain matrices of the Terwilliger algebra are linearly dependent, and we use this in Section 5 to prove the main result. In Section 6, we have some comments about certain distance partitions of graphs which are pseudo-distance-regular around a fixed vertex and also have a unique irreducible \(T\)-module (up to isomorphism) with endpoint \(1\), and this module is thin. We finish the article presenting some examples in Section 7.
## 2 Preliminaries
In this section we review some definitions and basic concepts. Throughout this paper, \(\Gamma=(X,\mathcal{R})\) will denote a finite, undirected, connected graph, without loops and multiple edges, with vertex set \(X\) and edge set \(\mathcal{R}\).
Let \(x,y\in X\). The **distance** between \(x\) and \(y\), denoted by \(\partial(x,y)\), is the length of a shortest \(xy\)-path. The **eccentricity of \(x\)**, denoted by \(\epsilon(x)\), is the maximum distance between \(x\) and any other vertex of \(\Gamma\): \(\epsilon(x)=\max\{\partial(x,z)\mid z\in X\}\). Let \(D\) denote the maximum eccentricity of any vertex in \(\Gamma\). We call \(D\) the **diameter of \(\Gamma\)**. For an integer \(i\) we define \(\Gamma_{i}(x)\) by
\[\Gamma_{i}(x)=\left\{y\in X\mid\partial(x,y)=i\right\}.\]
We will abbreviate \(\Gamma(x)=\Gamma_{1}(x)\). Note that \(\Gamma(x)\) is the set of neighbours of \(x\). Observe that \(\Gamma_{i}(x)\) is empty if and only if \(i<0\) or \(i>\epsilon(x)\).
We now recall some definitions and basic results concerning a Terwilliger algebra of \(\Gamma\). Let \(\mathbb{C}\) denote the complex number field. Let \(\operatorname{Mat}_{X}(\mathbb{C})\) denote the \(\mathbb{C}\)-algebra consisting of all matrices whose rows and columns are indexed by \(X\) and whose entries are in \(\mathbb{C}\). Let \(V\) denote the vector space over \(\mathbb{C}\) consisting of column vectors whose coordinates are indexed by \(X\) and whose entries are in \(\mathbb{C}\). We observe \(\operatorname{Mat}_{X}(\mathbb{C})\) acts on \(V\) by left multiplication.
We call \(V\) the **standard module**. We endow \(V\) with the Hermitian inner product \(\langle\,,\,\rangle\) that satisfies \(\langle u,v\rangle=u^{\top}\overline{v}\) for \(u,v\in V\), where \(\top\) denotes transpose and \(\overline{\,\cdot\,}\) denotes complex conjugation. For \(y\in X\), let \(\widehat{y}\) denote the element of \(V\) with a \(1\) in the \(y\)-coordinate and \(0\) in all other coordinates. We observe \(\{\widehat{y}\mid y\in X\}\) is an orthonormal basis for \(V\).
Let \(A\in\operatorname{Mat}_{X}(\mathbb{C})\) denote the adjacency matrix of \(\Gamma\):
\[\left(A\right)_{xy}=\left\{\begin{array}{ll}1&\text{if}\quad\partial(x,y)=1,\\ 0&\text{if}\quad\partial(x,y)\neq 1,\end{array}\right.\qquad(x,y\in X).\]
The **adjacency algebra of \(\Gamma\)** is a commutative subalgebra \(M\) of \(\operatorname{Mat}_{X}(\mathbb{C})\) generated by the adjacency matrix \(A\) of \(\Gamma\).
We now recall the dual idempotents of \(\Gamma\). To do this fix a vertex \(x\in X\) and let \(d=\epsilon(x)\). We view \(x\) as a _base vertex_. For \(0\leq i\leq d\), let \(E_{i}^{*}=E_{i}^{*}(x)\) denote the diagonal matrix in \(\operatorname{Mat}_{X}(\mathbb{C})\) with \((y,y)\)-entry as follows:
\[(E_{i}^{*})_{yy}=\left\{\begin{array}{ll}1&\text{if}\ \ \partial(x,y)=i,\\ 0&\text{if}\ \ \partial(x,y)\neq i\end{array}\right.\qquad(y\in X).\]
We call \(E_{i}^{*}\) the \(i\)**-th dual idempotent of \(\Gamma\) with respect to \(x\)**[26, p. 378]. We also observe (ei) \(\sum_{i=0}^{d}E_{i}^{*}=I\); (eii) \(\overline{E_{i}^{*}}=E_{i}^{*}\)\((0\leq i\leq d)\); (eiii) \(E_{i}^{*\top}=E_{i}^{*}\)\((0\leq i\leq d)\); (eiv) \(E_{i}^{*}E_{j}^{*}=\delta_{ij}E_{i}^{*}\)\((0\leq i,j\leq d)\) where \(I\) denotes the identity matrix in \(\operatorname{Mat}_{X}(\mathbb{C})\). By these facts, matrices \(E_{0}^{*},E_{1}^{*},\ldots,E_{d}^{*}\) form a basis for a commutative subalgebra \(M^{*}=M^{*}(x)\) of \(\operatorname{Mat}_{X}(\mathbb{C})\). Note that for \(0\leq i\leq d\) we have
\[E_{i}^{*}V=\operatorname{Span}\{\widehat{y}\mid y\in\Gamma_{i}(x)\}, \tag{2.1}\]
and that
\[V=E_{0}^{*}V+E_{1}^{*}V+\cdots+E_{d}^{*}V\qquad\qquad\text{(orthogonal direct sum)}.\]
We call \(E_{i}^{*}V\) the \(i\)**-th subconstituent of \(\Gamma\) with respect to \(x\)**. Moreover \(E_{i}^{*}\) is the projection from \(V\) onto \(E_{i}^{*}V\) for \(0\leq i\leq d\). For convenience we define \(E_{-1}^{*}\) and \(E_{d+1}^{*}\) to be the zero matrix of \(\operatorname{Mat}_{X}(\mathbb{C})\).
We next recall the definition of a Terwilliger algebra of \(\Gamma\) which was first studied in [26]. Let \(T=T(x)\) denote the subalgebra of \(\operatorname{Mat}_{X}(\mathbb{C})\) generated by \(M\), \(M^{*}\). We call \(T\) the **Terwilliger algebra of \(\Gamma\) with respect to \(x\)**. Recall \(M\) is generated by \(A\) so \(T\) is generated by \(A\) and the dual idempotents. We observe \(T\) has finite dimension. In addition, by construction \(T\) is closed under the conjugate-transpose map and so \(T\) is semi-simple. For a vector subspace \(W\subseteq V\), we denote by \(TW\) the subspace \(\{Bw\mid B\in T,w\in W\}\).
We now recall the lowering, the flat and the raising matrix of \(T\).
**Definition 2.1**.: _Let \(\Gamma=(X,\mathcal{R})\) denote a simple, connected, finite graph. Pick \(x\in X\). Let \(d=\epsilon(x)\) and let \(T=T(x)\) be the Terwilliger algebra of \(\Gamma\) with respect to \(x\). Define \(L=L(x)\), \(F=F(x)\) and \(R=R(x)\) in \(\operatorname{Mat}_{X}(\mathbb{C})\) by_
\[L=\sum_{i=1}^{d}E_{i-1}^{*}AE_{i}^{*},\qquad\ F=\sum_{i=0}^{d}E_{i}^{*}AE_{i}^ {*},\qquad\ R=\sum_{i=0}^{d-1}E_{i+1}^{*}AE_{i}^{*}.\]
_We refer to \(L\), \(F\) and \(R\) as the **lowering**, the **flat** and the **raising matrix with respect to \(x\)**, respectively. Note that \(L,F,R\in T\). Moreover, \(F=F^{\top}\), \(R=L^{\top}\) and \(A=L+F+R\)._
Observe that for \(y,z\in X\) we have the \((z,y)\)-entry of \(L\) equals \(1\) if \(\partial(z,y)=1\) and \(\partial(x,z)=\partial(x,y)-1\), and \(0\) otherwise. The \((z,y)\)-entry of \(F\) is equal to \(1\) if \(\partial(z,y)=1\) and \(\partial(x,z)=\partial(x,y)\), and \(0\) otherwise. Similarly, the \((z,y)\)-entry of \(R\) equals \(1\) if \(\partial(z,y)=1\) and \(\partial(x,z)=\partial(x,y)+1\), and \(0\) otherwise. Consequently, for \(v\in E_{i}^{*}V\) (\(0\leq i\leq d\)) we have
\[Lv\in E_{i-1}^{*}V,\qquad Fv\in E_{i}^{*}V,\qquad Rv\in E_{i+1}^{*}V. \tag{2.2}\]
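As a computational illustration of Definition 2.1 (not part of the paper; function and variable names are illustrative), the sketch below builds the dual idempotents \(E_{i}^{*}\) and the matrices \(L\), \(F\), \(R\) from an adjacency matrix and a base vertex, and checks the identities \(A=L+F+R\) and \(R=L^{\top}\).

```python
import numpy as np

def dual_idempotents_LFR(A, x):
    """Dual idempotents E_i^*(x) and the lowering, flat and raising matrices
    of Definition 2.1, computed from an adjacency matrix A and base vertex x."""
    n = A.shape[0]
    dist = np.full(n, -1); dist[x] = 0                 # BFS distances from x
    frontier = [x]
    while frontier:
        nxt = []
        for v in frontier:
            for w in np.flatnonzero(A[v]):
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    nxt.append(w)
        frontier = nxt
    d = dist.max()
    E = [np.diag((dist == i).astype(float)) for i in range(d + 1)]
    L = sum(E[i - 1] @ A @ E[i] for i in range(1, d + 1))
    F = sum(E[i] @ A @ E[i] for i in range(d + 1))
    R = sum(E[i + 1] @ A @ E[i] for i in range(d))
    return E, L, F, R, dist

# Example: the cycle C_6 with base vertex 0
n = 6
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1
E, L, F, R, dist = dual_idempotents_LFR(A, 0)
assert np.allclose(A, L + F + R) and np.allclose(R, L.T)
```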
By a \(T\)**-module** we mean a subspace \(W\) of \(V\), such that \(TW\subseteq W\). Let \(W\) denote a \(T\)-module. Then \(W\) is said to be **irreducible** whenever \(W\) is nonzero and \(W\) contains no \(T\)-modules other than \(0\) and \(W\). Since the algebra \(T\) is semi-simple, it turns out that any \(T\)-module is an orthogonal direct sum of irreducible \(T\)-modules.
Let \(W\) be an irreducible \(T\)-module. We observe that \(W\) is an orthogonal direct sum of the nonvanishing subspaces \(E_{i}^{*}W\) for \(0\leq i\leq d\). By the **endpoint** of \(W\) we mean \(r:=r(W)=\min\{i\mid 0\leq i\leq d,\ E_{i}^{*}W\neq 0\}\). Define the **diameter** of \(W\) by \(d^{\prime}:=d^{\prime}(W)=|\{i\mid 0\leq i\leq d,\ E_{i}^{*}W\neq 0\}|-1\). Using the idea from [26, Lemma 3.9(ii)] we have \(E_{i}^{*}W\neq 0\) if and only if \(r\leq i\leq r+d^{\prime}\) (\(0\leq i\leq d\)). We also say that \(W\) is **thin** whenever the dimension of \(E_{i}^{*}W\) is at most \(1\) for \(0\leq i\leq d\).
Let \(W\) and \(W^{\prime}\) denote two irreducible \(T\)-modules. By a \(T\)**-isomorphism** from \(W\) to \(W^{\prime}\) we mean a vector space isomorphism \(\sigma:W\to W^{\prime}\) such that \((\sigma B-B\sigma)\,W=0\) for all \(B\in T\). The \(T\)-modules \(W\) and \(W^{\prime}\) are said to be \(T\)**-isomorphic** (or simply **isomorphic**) whenever there exists a \(T\)-isomorphism \(\sigma:W\to W^{\prime}\). We note that isomorphic irreducible \(T\)-modules have the same endpoint. It turns out that two non-isomorphic irreducible \(T\)-modules are orthogonal.
Observe that the subspace \(T\widehat{x}=\{B\widehat{x}\mid B\in T\}\) is a \(T\)-module. Suppose that \(W\) is an irreducible \(T\)-module with endpoint \(0\). Then, \(\widehat{x}\in W\), which implies that \(T\widehat{x}\subseteq W\). Since \(W\) is irreducible, we therefore have \(T\widehat{x}=W\). Hence, \(T\widehat{x}\) is the unique irreducible \(T\)-module with endpoint \(0\). We refer to \(T\widehat{x}\) as the **trivial \(T\)-module**. If the trivial \(T\)-module is thin, then the vectors \(R^{i}\widehat{x}\) (\(0\leq i\leq d\)) form a basis of the trivial \(T\)-module (see [8] for more details). In the rest of this paper we will study irreducible \(T\)-modules of endpoint \(1\). Therefore, we will first characterize those vertices \(x\) of \(\Gamma\), for which the corresponding Terwilliger algebra \(T=T(x)\) has no irreducible \(T\)-modules with endpoint \(1\).
**Proposition 2.2**.: _Let \(\Gamma=(X,\mathcal{R})\) denote a simple, finite, connected graph. Pick a vertex \(x\in X\) and let \(T=T(x)\) denote the corresponding Terwilliger algebra. Then, there are no irreducible \(T\)-modules with endpoint \(1\) if and only if \(\dim(E_{1}^{*}T\widehat{x})=|\Gamma(x)|\). In particular, if the trivial module is thin, there are no irreducible \(T\)-modules with endpoint \(1\) if and only if \(|\Gamma(x)|=1\)._
Proof.: Let \(V\) denote the standard module, and let \(T\widehat{x}\) denote the trivial \(T\)-module. We observe \(E_{1}^{*}T\widehat{x}\subseteq E_{1}^{*}V\) and so, \(\dim(E_{1}^{*}T\widehat{x})\leq\dim(E_{1}^{*}V)=|\Gamma(x)|\).
Assume first that there are no irreducible \(T\)-modules with endpoint \(1\). Since \(V\) is orthogonal direct sum of irreducible \(T\)-modules and none of these \(T\)-modules has endpoint \(1\) we have \(E_{1}^{*}V=E_{1}^{*}T\widehat{x}\) which implies that \(\dim(E_{1}^{*}T\widehat{x})=\dim(E_{1}^{*}V)=|\Gamma(x)|\).
We next proceed by contraposition. Suppose there exists an irreducible \(T\)-module \(W\) with endpoint \(1\). Let \(V_{1}\) be the sum of all irreducible \(T\)-modules with endpoint \(1\). Note that \(E_{1}^{*}W\) is nonzero and since \(E_{1}^{*}W\subseteq E_{1}^{*}V_{1}\), we have \(\dim(E_{1}^{*}V_{1})>0\). We also have \(E_{1}^{*}V=E_{1}^{*}T\widehat{x}+E_{1}^{*}V_{1}\). This shows that
\[|\Gamma(x)|=\dim(E_{1}^{*}V)=\dim(E_{1}^{*}T\widehat{x})+\dim(E_{1}^{*}V_{1})> \dim(E_{1}^{*}T\widehat{x}).\]
To prove the second part of our assertion, recall that if \(T\widehat{x}\) is thin, by [8, Lemma 9], the subspace \(E_{1}^{*}T\widehat{x}\) is spanned by the nonzero vector \(R\widehat{x}\). This concludes the proof.
In view of Proposition 2.2, we will assume that \(|\Gamma(x)|\geq 2\) from now on.
## 3 The Main Result
Throughout this section let \(\Gamma=(X,\mathcal{R})\) denote a connected graph. Here we state our main result. To do this we need the following definitions.
We first define a certain partition of \(X\) that we will find useful later.
**Definition 3.1**.: _Let \(\Gamma=(X,\mathcal{R})\) denote a graph with diameter \(D\). Pick \(x,y\in X\), such that \(y\in\Gamma(x)\). For integers \(i,j\) we define sets \(D^{i}_{j}:=D^{i}_{j}(x,y)\) as follows:_
\[D^{i}_{j}=\Gamma_{i}(x)\cap\Gamma_{j}(y).\]
_Observe that \(D^{i}_{j}=\emptyset\) if \(i<0\) or \(j<0\). Similarly, \(D^{i}_{j}=\emptyset\) if \(i>\epsilon(x)\) or \(j>\epsilon(y)\). Furthermore, by the triangle inequality we have that \(D^{i}_{j}=\emptyset\) if \(|i-j|\geq 2\). Note also that if \(\Gamma\) is bipartite, the set \(D^{i}_{i}\) is empty for \(0\leq i\leq D\). The collection of all the subsets \(D^{i}_{i-1}\)\((1\leq i\leq\epsilon(x))\), \(D^{i}_{i}\)\((1\leq i\leq\min\left\{\epsilon(x),\epsilon(y)\right\})\) and \(D^{i-1}_{i}\)\((1\leq i\leq\epsilon(y))\) is called the **distance partition of \(\Gamma\) with respect to the edge \(\{x,y\}\)**._
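To illustrate Definition 3.1 (code not from the paper; names are illustrative), the following lines compute the distance partition of the cycle \(C_6\) with respect to the edge \(\{0,1\}\); for a bipartite graph such as \(C_6\), the sets \(D^{i}_{i}\) are indeed empty.

```python
import numpy as np
from collections import deque

def distances_from(A, s):
    """BFS distances from vertex s in the graph with adjacency matrix A."""
    n = A.shape[0]
    dist = np.full(n, -1); dist[s] = 0
    q = deque([s])
    while q:
        v = q.popleft()
        for w in np.flatnonzero(A[v]):
            if dist[w] < 0:
                dist[w] = dist[v] + 1
                q.append(w)
    return dist

n = 6                                    # the cycle C_6
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1

x, y = 0, 1                              # an edge {x, y}
dx, dy = distances_from(A, x), distances_from(A, y)
D = {(i, j): set(np.flatnonzero((dx == i) & (dy == j))) for i in range(n) for j in range(n)}
print({k: v for k, v in D.items() if v})      # only the nonempty cells D^i_j
assert all(not D[(i, i)] for i in range(n))   # C_6 is bipartite, so D^i_i is empty
```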
Next, we consider walks of a certain shape with respect to a given vertex in \(\Gamma\).
**Definition 3.2**.: _Let \(\Gamma=(X,\mathcal{R})\) denote a connected graph. Pick \(x,y,z\in X\) and let \(P=[y=x_{0},x_{1},\ldots,x_{j}=z]\) denote a \(yz\)-walk. The **shape of \(P\) with respect to \(x\)** is a sequence of symbols \(t_{1}t_{2}\ldots t_{j}\), where \(t_{i}\in\{f,\ell,r\}\), and such that \(t_{i}=r\) if \(\partial(x,x_{i})=\partial(x,x_{i-1})+1\), \(t_{i}=f\) if \(\partial(x,x_{i})=\partial(x,x_{i-1})\) and \(t_{i}=\ell\) if \(\partial(x,x_{i})=\partial(x,x_{i-1})-1\)\((1\leq i\leq j)\). We use exponential notation for shapes containing several consecutive identical symbols. For instance, instead of \(rrrrff\ell\ell r\) we simply write \(r^{4}f^{3}\ell^{2}r\). Analogously, \(r^{0}f=f\) and \(r^{0}\ell=\ell r^{0}=\ell\) is also conventional. For a non-negative integer \(m\), let \(\ell r^{m}(y,z)\), \(r^{m}\ell(y,z)\), \(r^{m}f(y,z)\) and \(r^{m}(y,z)\) respectively denote the number of \(yz\)-walks of the shape \(\ell r^{m}\), \(r^{m}\ell\), \(r^{m}f\) and \(r^{m}\) with respect to \(x\) where \(r^{0}(y,z)=1\) if \(y=z\) and \(r^{0}(y,z)=0\) otherwise. We abbreviate \(r^{m}\ell(z)=r^{m}\ell(x,z)\), \(r^{m}f(z)=r^{m}f(x,z)\) and \(r^{m}(z)=r^{m}(x,z)\)._
The following observation is straightforward to prove (using elementary matrix multiplication and (2.2)).
**Lemma 3.3**.: _Let \(\Gamma=(X,\mathcal{R})\) denote a connected graph. Pick \(x\in X\) and let \(T=T(x)\) denote the Terwilliger algebra of \(\Gamma\) with respect to \(x\). Let \(L=L(x)\), \(F=F(x)\) and \(R=R(x)\) denote the lowering, the flat and the raising matrix of \(T\), respectively. Pick \(y,z\in X\) and let \(m\) be a non-negative integer. Then the following (i)-(iv) hold:_
1. _The_ \((z,y)\)_-entry of_ \(R^{m}\) _is equal to the number_ \(r^{m}(y,z)\) _with respect to_ \(x\)_._
2. _The_ \((z,y)\)_-entry of_ \(LR^{m}\) _is equal to the number_ \(r^{m}\ell(y,z)\) _with respect to_ \(x\)_._
3. _The_ \((z,y)\)_-entry of_ \(R^{m}L\) _is equal to the number_ \(\ell r^{m}(y,z)\) _with respect to_ \(x\)_._
4. _The_ \((z,y)\)_-entry of_ \(FR^{m}\) _is equal to the number_ \(r^{m}f(y,z)\) _with respect to_ \(x\)_._
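Lemma 3.3 can be verified directly on a small example. The self-contained sketch below (illustrative, not from the paper) recomputes \(L\) and \(R\) for the cycle \(C_6\) with base vertex \(x=0\) and checks that the \((z,y)\)-entries of \(R^{m}L\) count the \(yz\)-walks of shape \(\ell r^{m}\): from the neighbour \(y=1\) of \(x\), the only walks of shape \(\ell r^{2}\) are \(1\to 0\to 1\to 2\) and \(1\to 0\to 5\to 4\), ending at \(z=2\) and \(z=4\).

```python
import numpy as np

n = 6                                    # the cycle C_6, base vertex x = 0
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1
x = 0
dist = np.full(n, n); dist[x] = 0        # distance from x via powers of A
P = np.eye(n)
for k in range(1, n):
    P = P @ A
    dist[(P[x] > 0) & (dist == n)] = k
d = dist.max()
E = [np.diag((dist == i).astype(float)) for i in range(d + 1)]
L = sum(E[i - 1] @ A @ E[i] for i in range(1, d + 1))
R = sum(E[i + 1] @ A @ E[i] for i in range(d))

RRL = R @ R @ L                           # R^2 L, cf. Lemma 3.3(iii) with m = 2
assert RRL[2, 1] == 1 and RRL[4, 1] == 1  # the two walks of shape l r^2 from y = 1
assert RRL[3, 1] == 0                     # no such walk ends at the antipodal vertex
```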
For the rest of the paper we adopt the following notation.
**Notation 3.4**.: _Let \(\Gamma=(X,\mathcal{R})\) denote a finite, simple, connected graph with vertex set \(X\), edge set \(\mathcal{R}\) and diameter \(D\). Let \(A\in\operatorname{Mat}_{X}(\mathbb{C})\) denote the adjacency matrix of \(\Gamma\). Fix a vertex \(x\in X\) with \(|\Gamma(x)|\geq 2\). Let \(d\) denote the eccentricity of \(x\). Let \(E^{*}_{i}\in\operatorname{Mat}_{X}(\mathbb{C})\)\((0\leq i\leq d)\) denote the dual idempotents of \(\Gamma\) with respect to \(x\). Let \(V\) denote the standard module of \(\Gamma\) and let \(T=T(x)\) denote the Terwilliger algebra of \(\Gamma\) with respect to \(x\). Let \(L=L(x)\), \(F=F(x)\) and \(R=R(x)\) denote the lowering, the flat and the raising matrix of \(T\), respectively. Assume that the unique irreducible \(T\)-module with endpoint \(0\) is thin. We denote this \(T\)-module by \(T\widehat{x}\). For \(y\in\Gamma(x)\) let the sets \(D^{i}_{j}=D^{i}_{j}(x,y)\) be as defined in Definition 3.1. For \(w,z\in X\) let the numbers \(r^{m}\ell(w,z)\), \(r^{m}f(w,z)\) and \(r^{m}(w,z)\) be as defined in Definition 3.2._
We are now ready to state our main result.
**Theorem 3.5**.: _With reference to Notation 3.4, the following (i)-(ii) are equivalent:_
1. \(\Gamma\) _has, up to isomorphism, a unique irreducible_ \(T\)_-module with endpoint_ \(1\)_, and this module is thin._
2. _For every integer_ \(i\)__ \((1\leq i\leq d)\) _there exist scalars_ \(\kappa_{i},\mu_{i}\)_,_ \(\theta_{i},\rho_{i}\)_, such that for every_ \(y\in\Gamma(x)\) _the following (a), (b) hold:_ 1. _For every_ \(z\in D^{i}_{i+1}(x,y)\cup D^{i}_{i}(x,y)\) _we have_ \[r^{i}\ell(y,z) = \mu_{i}\;\ell r^{i}(y,z),\] \[r^{i-1}f(y,z) = \rho_{i}\;\ell r^{i}(y,z).\] 2. _For every_ \(z\in D^{i}_{i-1}(x,y)\) _we have_ \[r^{i}\ell(y,z) = \kappa_{i}\;r^{i-1}(y,z)+\mu_{i}\;\ell r^{i}(y,z),\] \[r^{i-1}f(y,z) = \theta_{i}\;r^{i-1}(y,z)+\rho_{i}\;\ell r^{i}(y,z).\] _Moreover,_ \(\rho_{i}=0\) _whenever the set_ \(D^{i}_{i+1}(x,y)\) _is nonempty for some_ \(y\in\Gamma(x)\)_._
With reference to Notation 3.4, assume that \(\Gamma\) satisfies part \((ii)\) of Theorem 3.5. The proof that in this case \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin is omitted as it can be carried out using similar arguments as in the proof of [5, Theorem 4.4] (see [5, Section 6]). Therefore, in the rest of this article, we will focus on the proof that part \((i)\) of Theorem 3.5 implies the combinatorial conditions \((a),(b)\) described in part \((ii)\) of Theorem 3.5.
With reference to Notation 3.4, assume that \(\Gamma\) is distance-regular around \(x\) (see [11] for the definition of distance-regularity around a vertex). In this case, it was proved in [8, 25] that the unique irreducible \(T\)-module with endpoint \(0\) is thin. In addition, for an integer \(i\)\((1\leq i\leq d)\) and vertices \(y\in\Gamma(x),z\in\Gamma_{i}(x)\), we observe the number of \(yz\)-walks of the shape \(\ell r^{i}\) with respect to \(x\) is equal to the number of paths of length \(i\) from \(z\) to \(x\). Since \(x\) is distance-regularized, there are precisely \(c_{i}(x)c_{i-1}(x)\cdots c_{1}(x)\) such paths. Consequently, \(\ell r^{i}(y,z)=c_{i}(x)c_{i-1}(x)\cdots c_{1}(x)\) and so, \(\ell r^{i}(y,z)\) is independent of the choice of \(y\) and \(z\). Therefore, [5, Theorem 4.4] and [7, Theorem 4.4] immediately follow from Theorem 3.5 and the above comments.
We finish this section with the following observations which will be needed later for the proof of Theorem 3.5.
**Proposition 3.6**.: _With reference to Notation 3.4, the following holds for \(0\leq i\leq d\):_
\[\left(E^{*}_{i}R^{i}LE^{*}_{1}\right)_{zy}=\left\{\begin{array}{ll}\ell r^ {i}(y,z)&\mbox{if }\;y\in\Gamma(x)\mbox{ and }z\in\Gamma_{i}(x),\\ 0&\mbox{otherwise}.\end{array}\right.\]
_In particular, \(E^{*}_{i}R^{i}LE^{*}_{1}\) is nonzero._
Proof.: It is straightforward to check that the \((z,y)\)-entry of \(E^{*}_{i}R^{i}LE^{*}_{1}\) is zero if either \(y\not\in\Gamma(x)\) or \(z\not\in\Gamma_{i}(x)\). It is also straightforward to check that the result is true if \(i=0\). Suppose now that \(y\in\Gamma(x)\) and \(z\in\Gamma_{i}(x)\) with \(i\geq 1\). Then \(\left(E^{*}_{i}R^{i}LE^{*}_{1}\right)_{zy}=\left(R^{i}L\right)_{zy}\) and the result follows from Lemma 3.3. Note also that in this case we have that \(\ell r^{i}(y,z)>0\) and so, \(E^{*}_{i}R^{i}LE^{*}_{1}\) is nonzero.
**Proposition 3.7**.: _With reference to Notation 3.4, the following holds for \(1\leq i\leq d\):_
\[\left(E_{i}^{*}R^{i-1}E_{1}^{*}\right)_{zy}=\left\{\begin{array}{ll}r^{i-1}(y, z)&\mbox{if \ $y\in\Gamma(x)$ and $z\in\Gamma_{i}(x)$},\\ 0&\mbox{otherwise}.\end{array}\right.\]
_In particular, \(E_{i}^{*}R^{i-1}E_{1}^{*}\) is nonzero._
Proof.: Similar to the proof of Proposition 3.6.
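Propositions 3.6 and 3.7 lend themselves to a direct computational check. The following sketch (Python with NumPy) is only an editorial illustration and not part of the original argument; the 6-vertex test graph, the base vertex and all variable names are ours. It builds the dual idempotents and the matrices \(L\), \(F\), \(R\) with respect to a base vertex and compares the entries of \(E^{*}_{i}R^{i}LE^{*}_{1}\) and \(E^{*}_{i}R^{i-1}E^{*}_{1}\) with brute-force counts of walks of the shapes \(\ell r^{i}\) and \(r^{i-1}\).

```python
import numpy as np
from collections import deque

# Illustrative 6-vertex test graph (any finite, simple, connected graph with |Gamma(x)| >= 2 works)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (1, 4), (2, 4), (2, 5)]
n = 6
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

x = 0                                    # base vertex
dist = np.full(n, -1); dist[x] = 0       # BFS distances from x
q = deque([x])
while q:
    u = q.popleft()
    for v in range(n):
        if A[u, v] and dist[v] == -1:
            dist[v] = dist[u] + 1
            q.append(v)
d = dist.max()                           # eccentricity of x

# dual idempotents and the lowering, flat and raising matrices with respect to x
Es = [np.diag((dist == i).astype(int)) for i in range(d + 1)]
Lo = sum(Es[i - 1] @ A @ Es[i] for i in range(1, d + 1))
Fl = sum(Es[i] @ A @ Es[i] for i in range(d + 1))
Ra = sum(Es[i + 1] @ A @ Es[i] for i in range(d))
assert (Lo + Fl + Ra == A).all()

def walks(start, end, shape):
    """Number of start-end walks whose steps, read left to right, decrease ('l'),
    keep ('f') or increase ('r') the distance to the base vertex x."""
    delta = {'l': -1, 'f': 0, 'r': 1}
    counts = {start: 1}
    for step in shape:
        nxt = {}
        for u, c in counts.items():
            for v in range(n):
                if A[u, v] and dist[v] == dist[u] + delta[step]:
                    nxt[v] = nxt.get(v, 0) + c
        counts = nxt
    return counts.get(end, 0)

mp = np.linalg.matrix_power
for i in range(d + 1):
    M36 = Es[i] @ mp(Ra, i) @ Lo @ Es[1]                 # Proposition 3.6
    for y in np.flatnonzero(dist == 1):
        for z in np.flatnonzero(dist == i):
            assert M36[z, y] == walks(y, z, 'l' + 'r' * i)
    if i >= 1:
        M37 = Es[i] @ mp(Ra, i - 1) @ Es[1]              # Proposition 3.7
        for y in np.flatnonzero(dist == 1):
            for z in np.flatnonzero(dist == i):
                assert M37[z, y] == walks(y, z, 'r' * (i - 1))
print("Propositions 3.6 and 3.7 verified on the test graph.")
```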
## 4 Linear dependency
With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. In this section we show that certain matrices of \(T\) are linearly dependent.
**Theorem 4.1**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin with diameter \(d^{\prime}\). Pick matrices \(F_{1},F_{2},F_{3}\in T\). Then the following (i), (ii) hold:_
1. _For every integer_ \(i\)__\((1\leq i\leq d^{\prime}+1)\) _the matrices_ \(E_{i}^{*}F_{1}E_{1}^{*}\)_,_ \(E_{i}^{*}F_{2}E_{1}^{*}\) _and_ \(E_{i}^{*}F_{3}E_{1}^{*}\) _are linearly dependent._
2. _For every integer_ \(i\)__\((d^{\prime}+1<i\leq d)\) _the matrices_ \(E_{i}^{*}F_{1}E_{1}^{*}\) _and_ \(E_{i}^{*}F_{2}E_{1}^{*}\) _are linearly dependent._
Proof.: Recall that \(T\widehat{x}\) is thin and by [8, Lemma 9], the subspace \(E_{1}^{*}T\widehat{x}\) is spanned by the nonzero vector \(R\widehat{x}\) and so, \(\dim(E_{1}^{*}T\widehat{x})=1\).
Let \(W\) be a thin irreducible \(T\)-module with endpoint \(1\) and diameter \(d^{\prime}\). Firstly, we observe that \(d^{\prime}+1\leq d\) and so, \((i)\) immediately follows from [7, Theorem 5.3]. We would like to point out that the conclusions of [7, Theorem 5.3] remain true without assuming that \(\Gamma\) is bipartite and distance-regular around \(x\). Namely, in the proof of [7, Theorem 5.3], the hypothesis that \(\Gamma\) is bipartite is never applied, and local distance-regularity around \(x\) is only used to conclude that \(\dim\left(E_{1}^{*}T\widehat{x}\right)=1\), which is also true in our case.
We now proceed to prove the second assertion. To do that, pick an integer \(i\)\((d^{\prime}+1<i\leq d)\). We claim there exist scalars \(\lambda_{1},\lambda_{2}\), not both zero, such that \(\lambda_{1}E_{i}^{*}F_{1}E_{1}^{*}v+\lambda_{2}E_{i}^{*}F_{2}E_{1}^{*}v=0\) for every \(v\in E_{1}^{*}T\widehat{x}\). To see this, pick nonzero vectors \(v_{0}\in E_{1}^{*}T\widehat{x}\) and \(v_{1}\in E_{1}^{*}W\). Let \(u_{0}\) be an arbitrary nonzero vector of \(E_{i}^{*}T\widehat{x}\). As the trivial module is thin, there exist scalars \(r_{0,1}\), \(r_{0,2}\) such that
\[E_{i}^{*}F_{1}E_{1}^{*}v_{0}=r_{0,1}\,u_{0}\qquad\mbox{and}\qquad E_{i}^{*}F_{ 2}E_{1}^{*}v_{0}=r_{0,2}\,u_{0}. \tag{4.1}\]
It is clear that the linear equation \(r_{0,1}\)\(x_{1}+r_{0,2}\)\(x_{2}=0\) with unknowns \(x_{1},x_{2}\) has a nontrivial solution, and so there exist scalars \(\lambda_{1},\lambda_{2}\), not both zero, such that
\[r_{0,1}\;\lambda_{1}+r_{0,2}\;\lambda_{2}=0. \tag{4.2}\]
Pick a vector \(v\in E_{1}^{*}T\widehat{x}\). Since the trivial \(T\)-module is thin, there exists a scalar \(\lambda\) such that \(v=\lambda v_{0}\). Therefore, by (4.1) and (4.2) we have
\[\lambda_{1}E_{i}^{*}F_{1}E_{1}^{*}v+\lambda_{2}E_{i}^{*}F_{2}E_{1 }^{*}v = \lambda\left(\lambda_{1}E_{i}^{*}F_{1}E_{1}^{*}v_{0}+\lambda_{2}E _{i}^{*}F_{2}E_{1}^{*}v_{0}\right)\] \[= \lambda\left(\lambda_{1}\;r_{0,1}u_{0}+\lambda_{2}\;r_{0,2}u_{0}\right)\] \[= \lambda\left(r_{0,1}\;\lambda_{1}+r_{0,2}\;\lambda_{2}\right)u_{ 0}=0.\]
This proves our claim. Let \(V_{1}\) denote the sum of all irreducible \(T\)-modules with endpoint \(1\) and let \(\{W^{t}\mid t\in\mathcal{I}\}\) be the set of all irreducible \(T\)-modules with endpoint \(1\), where \(\mathcal{I}\) is an index set. Pick a vector \(v\in E_{1}^{*}V_{1}\). Observe that \(v\) can be written as a sum
\[v=\sum_{t\in\mathcal{I}}v_{t}, \tag{4.3}\]
where \(v_{t}\in E_{1}^{*}W^{t}\) for every \(t\in\mathcal{I}\). Pick now a \(T\)-module \(W^{s}\), \(s\in\mathcal{I}\). As any two irreducible \(T\)-modules with endpoint \(1\) are isomorphic, it follows that \(d^{\prime}\left(W^{s}\right)=d^{\prime}\left(W\right)=d^{\prime}\). So, we observe that in this case \(E_{i}^{*}W^{s}\) is zero. In addition, for every \(t\in\mathcal{I}\) there exists a \(T\)-isomorphism \(\sigma_{t}:W^{s}\to W^{t}\). Let \(w_{t}\in W^{s}\) be such that \(v_{t}=\sigma_{t}(w_{t})\). Then, we notice that for every \(t\in\mathcal{I}\),
\[E_{i}^{*}F_{j}E_{1}^{*}v_{t}=E_{i}^{*}F_{j}E_{1}^{*}\sigma_{t}(w_{t})=\sigma_ {t}\left(E_{i}^{*}F_{j}E_{1}^{*}w_{t}\right)=0.\]
Hence, by (4.3) we have that \(E_{i}^{*}F_{j}E_{1}^{*}v=0\) for every \(v\in E_{1}^{*}V_{1}\).
To conclude the proof, pick now an arbitrary vector \(w\in V\) and observe that \(E_{1}^{*}w=w_{0}+w_{1}\) for some \(w_{0}\in T\widehat{x}\) and \(w_{1}\in V_{1}\). It follows from the above comments that there exist scalars \(\lambda_{1},\lambda_{2}\), not both zero, such that
\[\lambda_{1}E_{i}^{*}F_{1}E_{1}^{*}w+\lambda_{2}E_{i}^{*}F_{2}E_{1 }^{*}w = \lambda_{1}E_{i}^{*}F_{1}E_{1}^{*}(w_{0}+w_{1})+\lambda_{2}E_{i}^{ *}F_{2}E_{1}^{*}(w_{0}+w_{1})=0.\]
As \(w\) was arbitrary, the result follows. \(\blacksquare\)
Observe that the conclusion of Theorem 4.1 is equivalent to the fact that the dimension of \(E_{i}^{*}TE_{1}^{*}\) (\(1\leq i\leq d^{\prime}+1\)) is at most \(2\) and that the dimension of \(E_{i}^{*}TE_{1}^{*}\) (\(d^{\prime}+1<i\leq d\)) is at most \(1\).
## 5 Algebraic condition implies combinatorial properties
With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. In this section we prove that in this case combinatorial conditions \((a),(b)\) described in part \((ii)\) of Theorem 3.5 hold.
**Lemma 5.1**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. Then for every \(i\)\((1\leq i\leq d)\) there exist scalars \(\kappa_{i},\mu_{i},\theta_{i},\rho_{i}\), such that_
\[E_{i}^{*}LR^{i}E_{1}^{*} = \kappa_{i}E_{i}^{*}R^{i-1}E_{1}^{*}+\mu_{i}E_{i}^{*}R^{i}LE_{1}^{*}, \tag{5.1}\] \[E_{i}^{*}FR^{i-1}E_{1}^{*} = \theta_{i}E_{i}^{*}R^{i-1}E_{1}^{*}+\rho_{i}E_{i}^{*}R^{i}LE_{1}^{*}. \tag{5.2}\]
Proof.: Pick \(i\) (\(1\leq i\leq d\)) and observe that by Definition 2.1, the matrices \(LR^{i}\), \(R^{i-1}\), \(FR^{i-1}\) and \(R^{i}L\) are elements of algebra \(T\). Consequently, by Theorem 4.1, there exist scalars \(\alpha_{j}^{(i)}\) (\(1\leq j\leq 3\)), not all zero, and \(\beta_{j}^{(i)}\) (\(1\leq j\leq 3\)), not all zero, such that
\[\alpha_{1}^{(i)}E_{i}^{*}LR^{i}E_{1}^{*}+\alpha_{2}^{(i)}E_{i}^{* }R^{i-1}E_{1}^{*}+\alpha_{3}^{(i)}E_{i}^{*}R^{i}LE_{1}^{*}=0, \tag{5.3}\] \[\beta_{1}^{(i)}E_{i}^{*}FR^{i-1}E_{1}^{*}+\beta_{2}^{(i)}E_{i}^{* }R^{i-1}E_{1}^{*}+\beta_{3}^{(i)}E_{i}^{*}R^{i}LE_{1}^{*}=0. \tag{5.4}\]
Assume for the moment that \(\alpha_{1}^{(i)}\beta_{1}^{(i)}\neq 0\). Then (5.1) and (5.2) hold with \(\kappa_{i}=-\alpha_{2}^{(i)}/\alpha_{1}^{(i)}\), \(\mu_{i}=-\alpha_{3}^{(i)}/\alpha_{1}^{(i)}\), \(\theta_{i}=-\beta_{2}^{(i)}/\beta_{1}^{(i)}\), and \(\rho_{i}=-\beta_{3}^{(i)}/\beta_{1}^{(i)}\).
Now, assume that \(\alpha_{1}^{(i)}\beta_{1}^{(i)}=0\). Let \(W\) denote an irreducible \(T\)-module with endpoint \(1\). Let \(k\) denote the least integer such that \(\alpha_{1}^{(k)}\beta_{1}^{(k)}=0\). We observe \(k\leq i\). Assume for a moment that \(k=1\). Without loss of generality assume that \(\alpha_{1}^{(1)}=0\). Pick \(y,z\in\Gamma(x)\)
\(y\neq z\). As the \((z,y)\)-entries of \(E^{*}_{1}\) and \(E^{*}_{1}RLE^{*}_{1}\) are \(0\) and \(1\) respectively, (5.3) implies that \(\alpha^{(1)}_{3}=0\). As \(E^{*}_{1}\) is nonzero, we get that \(\alpha^{(1)}_{2}=0\) as well, a contradiction. Therefore, \(k\geq 2\). Pick a nonzero vector \(w\in E^{*}_{1}W\) and let \(W^{\prime}\) denote the vector subspace of \(V\) spanned by the vectors \(R^{i}w\)\((0\leq i\leq d)\). Note that \(W^{\prime}\) is nonzero and \(W^{\prime}\subseteq W\). Observe also that by (2.2) and by (eiv) from Section 2, the subspace \(W^{\prime}\) is invariant under the action of the dual idempotents. Since \(\alpha^{(k)}_{1}\beta^{(k)}_{1}=0\) and by Proposition 3.6 the matrix \(E^{*}_{k}R^{k}LE^{*}_{1}\) is nonzero, it follows from (5.3) and (5.4) that there exists \(\gamma\in\mathbb{C}\) such that \(E^{*}_{k}R^{k-1}E^{*}_{1}=\gamma E^{*}_{k}R^{k}LE^{*}_{1}\). Now, from (2.2) we notice that \(Lw=0\) and so, \(R^{k-1}w=0\). This implies \(FR^{j}w=LR^{j}w=R^{j}w=0\) for \(k-1\leq j\leq d\). Therefore, by construction and by (2.2), it is also clear that \(W^{\prime}\) is closed under the action of \(R\). Moreover, for every \(1\leq j\leq k-1\) the scalar \(\alpha^{(j)}_{1}\beta^{(j)}_{1}\) is nonzero. Therefore, from (5.3) and (5.4), we have that (5.1) and (5.2) hold for \(1\leq j\leq k-1\) with \(\kappa_{j}=-\alpha^{(j)}_{2}/\alpha^{(j)}_{1}\), \(\mu_{j}=-\alpha^{(j)}_{3}/\alpha^{(j)}_{1}\), \(\theta_{j}=-\beta^{(j)}_{2}/\beta^{(j)}_{1}\), and \(\rho_{j}=-\beta^{(j)}_{3}/\beta^{(j)}_{1}\). So, \(LR^{j}w=\kappa_{j}R^{j-1}w\) and \(FR^{j-1}w=\theta_{j}R^{j-1}w\) for \(1\leq j\leq k-1\). This implies that \(W^{\prime}\) is invariant under the action of \(L\) and \(F\). Since \(A=L+F+R\), it turns out that \(W^{\prime}\) is \(A\)-invariant as well. Recall that the algebra \(T\) is generated by \(A\) and the dual idempotents. Therefore, \(W^{\prime}\) is a \(T\)-module and \(W^{\prime}=W\) as \(W\) is irreducible. Notice that by construction and (2.2), the subspace \(E^{*}_{i}W\) is generated by \(R^{i-1}w\). This shows \(E^{*}_{i}W=0\) since \(k\leq i\). We thus have \(d^{\prime}+1<i\leq d\) where \(d^{\prime}\) denotes the diameter of \(W\). Hence, by Theorem 4.1\((ii)\), any two matrices in \(E^{*}_{i}TE^{*}_{1}\) are linearly dependent. Consequently, there exist scalars \(\alpha,\beta\) (not both zero) and \(\alpha^{\prime},\beta^{\prime}\) (not both zero), such that
\[\alpha E^{*}_{i}LR^{i}E^{*}_{1}+\beta E^{*}_{i}R^{i-1}E^{*}_{1}=0, \tag{5.5}\] \[\alpha^{\prime}E^{*}_{i}FR^{i-1}E^{*}_{1}+\beta^{\prime}E^{*}_{i} R^{i-1}E^{*}_{1}=0. \tag{5.6}\]
If \(\alpha\) (\(\alpha^{\prime}\), respectively) is zero, then \(\beta\) (\(\beta^{\prime}\), respectively) is also zero by Proposition 3.7, a contradiction. This shows that \(E^{*}_{i}LR^{i}E^{*}_{1}=-\frac{\beta}{\alpha}E^{*}_{i}R^{i-1}E^{*}_{1}\) and \(E^{*}_{i}FR^{i-1}E^{*}_{1}=-\frac{\beta^{\prime}}{\alpha^{\prime}}E^{*}_{i}R^{i-1}E^{*}_{1}\). Similarly we show that \(E^{*}_{i}R^{i}LE^{*}_{1}=\lambda E^{*}_{i}R^{i-1}E^{*}_{1}\) for some nonzero scalar \(\lambda\in\mathbb{C}\). It is now clear that (5.1) and (5.2) hold for any \(\kappa_{i},\mu_{i},\theta_{i},\rho_{i}\) satisfying \(\kappa_{i}+\lambda\mu_{i}=-\beta/\alpha\) and \(\theta_{i}+\lambda\rho_{i}=-\beta^{\prime}/\alpha^{\prime}\). This finishes the proof. \(\blacksquare\)
We are now ready to prove the main result of this section.
**Theorem 5.2**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. For every integer \(i\)\((1\leq i\leq d)\) there exist scalars \(\kappa_{i},\mu_{i}\), \(\theta_{i},\rho_{i}\), such that for every \(y\in\Gamma(x)\) the following (a), (b) hold:_
* _For every_ \(z\in D^{i}_{i+1}(x,y)\cup D^{i}_{i}(x,y)\) _we have_ \[r^{i}\ell(y,z) = \mu_{i}\ \ell r^{i}(y,z),\] \[r^{i-1}f(y,z) = \rho_{i}\ \ell r^{i}(y,z).\]
* _For every_ \(z\in D^{i}_{i-1}(x,y)\) _we have_ \[r^{i}\ell(y,z) = \kappa_{i}\ r^{i-1}(y,z)+\mu_{i}\ \ell r^{i}(y,z),\] \[r^{i-1}f(y,z) = \theta_{i}\ r^{i-1}(y,z)+\rho_{i}\ \ell r^{i}(y,z).\]
_Moreover, \(\rho_{i}=0\) if the set \(D^{i}_{i+1}(x,y)\) is nonempty for some \(y\in\Gamma(x)\)._
Proof.: Pick an integer \(i\)\((1\leq i\leq d)\) and recall that by Lemma 5.1 equations (5.1) and (5.2) hold. Pick \(y\in\Gamma(x)\).
\((a)\) Pick \(z\in D^{i}_{i+1}(x,y)\cup D^{i}_{i}(x,y)\) and observe that by Lemma 3.3 the \((z,y)\)-entry of the left-hand side of (5.1) ((5.2), respectively) equals \(r^{i}\ell(y,z)\) (\(r^{i-1}f(y,z)\), respectively). On the other hand, again by Lemma 3.3, the \((z,y)\)-entry of \(E^{*}_{i}R^{i-1}E^{*}_{1}\) (\(E^{*}_{i}R^{i}LE^{*}_{1}\), respectively) equals \(0\) (\(\ell r^{i}(y,z)\), respectively). Therefore, the \((z,y)\)-entry of the right-hand side of (5.1) ((5.2), respectively) equals \(\mu_{i}\ \ell r^{i}(y,z)\) (\(\rho_{i}\ \ell r^{i}(y,z)\), respectively).
\((b)\) Pick now \(z\in D^{i}_{i-1}(x,y)\) and observe that by Lemma 3.3 the \((z,y)\)-entry of the left-hand side of (5.1) ((5.2), respectively) equals \(r^{i}\ell(y,z)\) (\(r^{i-1}f(y,z)\), respectively). On the other hand, again by Lemma 3.3, the \((z,y)\)-entry of \(E^{*}_{i}R^{i-1}E^{*}_{1}\) (\(E^{*}_{i}R^{i}LE^{*}_{1}\), respectively) equals \(r^{i-1}(y,z)\) (\(\ell r^{i}(y,z)\), respectively). Therefore, the \((z,y)\)-entry of the right-hand side of (5.1) ((5.2), respectively) equals \(\kappa_{i}\ r^{i-1}(y,z)+\mu_{i}\ \ell r^{i}(y,z)\) (\(\theta_{i}\ r^{i-1}(y,z)+\rho_{i}\ \ell r^{i}(y,z)\), respectively).
Moreover, if the set \(D^{i}_{i+1}(x,y)\) is nonempty for some \(y\in\Gamma(x)\), then for \(z\in D^{i}_{i+1}(x,y)\) there is no \(yz\)-walk of the shape \(r^{i-1}f\), while \(\ell r^{i}(y,z)>0\); by part (a) this forces \(\rho_{i}=0\). The result follows.
## 6 The distance partition
Throughout this section let \(\Gamma=(X,\mathcal{R})\) denote a connected graph. Let \(x\in X\) and let \(T=T(x)\). Suppose that the unique irreducible \(T\)-module with endpoint \(0\) is thin. Assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), which is thin. In this section we make some comments on the combinatorial structure of the intersection diagrams of \(\Gamma\) with respect to the edge \(\{x,y\}\), for every \(y\in\Gamma(x)\). In particular, we will discuss which of the sets \(D^{i}_{j}(x,y)\) are (non)empty.
**Lemma 6.1**.: _With reference to Notation 3.4, the set \(D^{i}_{i-1}(x,y)\) is nonempty for every \(i\)\((1\leq i\leq d)\) and for all \(y\in\Gamma(x)\)._
Proof.: Suppose there exist \(i\ \ (1\leq i\leq d)\) and \(y\in\Gamma(x)\) such that the set \(D^{i}_{i-1}(x,y)\) is empty. Since \(D^{1}_{0}(x,y)=\{y\}\) we observe that \(i\geq 2\). Moreover, we notice that \(D^{i}_{i+1}(x,y)\neq\emptyset\) or \(D^{i}_{i}(x,y)\neq\emptyset\), as otherwise, the set \(\Gamma_{i}(x)=D^{i}_{i+1}(x,y)\cup D^{i}_{i}(x,y)\cup D^{i}_{i-1}(x,y)\) is empty, contradicting that the eccentricity of \(x\) equals \(d\). Let \(k\) be the greatest integer such that \(D^{k}_{k-1}(x,y)\neq\emptyset\). Note that \(1\leq k\leq i-1\). Since the set \(D^{i}_{i+1}(x,y)\cup D^{i}_{i}(x,y)\) is nonempty, it is easy to see that there exists an \(xw\)-path for some \(w\in D^{i}_{i+1}(x,y)\cup D^{i}_{i}(x,y)\), passing through a vertex \(z\in D^{k}_{k+1}(x,y)\cup D^{k}_{k}(x,y)\). So, \(r^{k+1}\ell(z)>0\) and \(r^{k}(z)>0\). Moreover, for \(u\in D^{k}_{k-1}(x,y)\) we observe that \(r^{k+1}\ell(u)=0\) and \(r^{k}(u)>0\). As the trivial module is thin, this contradicts [8, Theorem 6]. The result follows.
The proofs of the next results are omitted, as they can be carried out using similar ideas to those in the proofs of [5, Lemma 7.1], [5, Proposition 7.2] and [5, Proposition 7.3], respectively.
**Lemma 6.2**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. Pick an integer \(i\)\((1\leq i\leq d)\) and assume for some \(y\in\Gamma(x)\), the set \(D^{i}_{i+1}(x,y)\neq\emptyset\). Then the set \(D^{j}_{j}(x,y)\) is empty for every \(j\ \ (1\leq j\leq i)\) and for all \(y\in\Gamma(x)\)._
The above lemma together with the fact that the set \(D^{0}_{1}(x,y)\) is nonempty for every \(y\in\Gamma(x)\) motivate the next result.
**Proposition 6.3**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. Pick \(y\in\Gamma(x)\) and let \(D^{i}_{j}=D^{i}_{j}(x,y)\). Then there exists an integer \(t:=t(y)\ (0\leq t\leq d)\) such that the following \((i),(ii)\) hold:_
1. _For every_ \(i\)__\((0\leq i\leq t)\) _the set_ \(D^{i}_{i+1}\) _is nonempty and the set_ \(D^{i}_{i}(x,z)\) _is empty for every_ \(z\in\Gamma(x)\)_._
2. _For every_ \(i\)__\((t<i\leq d)\) _the set_ \(D^{i}_{i+1}\) _is empty._
_Moreover, \(\Gamma_{i}(x)=D^{i}_{i+1}\cup D^{i}_{i-1}\) for every \(0\leq i\leq t\)._
**Proposition 6.4**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. Pick \(y\in\Gamma(x)\). Let the sets \(D^{i}_{j}=D^{i}_{j}(x,y)\) and let \(t(y)\) be as in Proposition 6.3. If there exists \(j\)\((1\leq j\leq d)\) such that \(D^{j}_{j}\) is nonempty then \(D^{i}_{i}\) is nonempty for every \(t(y)<i\leq j\)._
Propositions 6.3 and 6.4 help us to understand the combinatorial structure of graphs which have, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), which is thin.
We now consider the possible intersection diagrams of \(\Gamma\) with respect to the edge \(\{x,y\}\), for every \(y\in\Gamma(x)\). Let \(d\) denote the eccentricity of \(x\). Then, for every \(y\in\Gamma(x)\), we observe \(\epsilon(y)\in\{d-1,d,d+1\}\). Fix now \(y\in\Gamma(x)\) arbitrarily. We have two cases.
With reference to Proposition 6.3, it is easy to see the following \((i)\)-\((ii)\) are equivalent:
1. The integer \(t:=t(y)\) is independent of the choice of \(y\in\Gamma(x)\).
2. For each \(i\)\((1\leq i\leq d)\), if for some \(y\in\Gamma(x)\) the set \(D^{i}_{i+1}(x,y)\neq\emptyset\) then for every \(y\in\Gamma(x)\) the set \(D^{i}_{i+1}(x,y)\neq\emptyset\).
At this point, the next question naturally arises.
**Question 6.5**.: _With reference to Notation 3.4 and Proposition 6.3, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. Does the integer \(t:=t(y)\) depend on the choice of \(y\in\Gamma(x)\)?_
The following results partially answer the above question. However, a proof of the general case seems to need a nontrivial approach.
**Proposition 6.6**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. For \(y\in\Gamma(x)\), let \(t(y)\) be as in Proposition 6.3. If for some \(z\in\Gamma(x)\) the set \(D^{1}_{1}(x,z)\) is nonempty then the integer \(t:=t(y)\) does not depend on the choice of \(y\in\Gamma(x)\)._
Proof.: Suppose for some \(z\in\Gamma(x)\) the set \(D^{1}_{1}(x,z)\) is nonempty. Then, by Lemma 6.2, the set \(D^{1}_{2}(x,y)\) is empty for every \(y\in\Gamma(x)\). This shows that \(t(y)=0\) for every \(y\in\Gamma(x)\). The result follows.
**Proposition 6.7**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. For \(y\in\Gamma(x)\), let \(t(y)\) be as in Proposition 6.3. If for every \(y\in\Gamma(x)\) there exists an integer \(i\)\((1\leq i\leq d)\) such that the set \(D^{i}_{i}(x,y)\) is nonempty then the integer \(t:=t(y)\) does not depend on the choice of \(y\in\Gamma(x)\)._
Proof.: Pick \(w\in\Gamma(x)\) such that \(t(w)=\min\{t(y)\mid y\in\Gamma(x)\}\). Then, by the choice of \(w\in\Gamma(x)\), we have that \(t(w)\leq t(y)\) for all \(y\in\Gamma(x)\). Let \(k\) be the least integer such that \(D^{k}_{k}(x,w)\neq\emptyset\). We assert that \(t(w)=k-1\). To prove our claim, we first observe that, by Lemma 6.2, we have \(D^{k}_{k+1}(x,w)=\emptyset\). This shows that \(t(w)\leq k-1\). Suppose now that \(t(w)<k-1\). Then, \(t(w)+1<k\) and, by the choice of \(k\), \(D^{t(w)+1}_{t(w)+1}(x,w)=\emptyset\), contradicting Proposition 6.4. Therefore, we have that \(t(w)=k-1\). Moreover, by Lemma 6.2, the set \(D^{k}_{k+1}(x,y)=\emptyset\) for all \(y\in\Gamma(x)\). This yields that \(t(y)\leq t(w)\) for all \(y\in\Gamma(x)\). Consequently, \(t(y)=t(w)\) for all \(y\in\Gamma(x)\). The result follows.
**Proposition 6.8**.: _With reference to Notation 3.4, assume that \(\Gamma\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and that this module is thin. For \(y\in\Gamma(x)\), let \(t(y)\) be as in Proposition 6.3. If \(\Gamma\) is a tree then the integer \(t:=t(y)\) does not depend on the choice of \(y\in\Gamma(x)\)._
Proof.: Pick \(y\in\Gamma(x)\). Suppose there exists an integer \(i\) (\(1\leq i\leq d\)) such that the set \(D^{i}_{i+1}(x,y)\) is empty. Let \(k\) be the least integer such that \(D^{k}_{k+1}(x,y)\) is empty. Since \(\Gamma\) is bipartite and \(x\) has valency at least \(2\), we observe \(D^{1}_{2}(x,y)\) is not empty. This implies that \(k\geq 2\). By the choice of \(k\), we have that the set \(D^{k-1}_{k}(x,y)\) is nonempty. Then, since \(\Gamma\) has no cycles, for a vertex \(z\in D^{k-1}_{k}(x,y)\) we have \(b_{k-1}(x,z)=0\). By Lemma 6.1, the set \(D^{j}_{j-1}(x,y)\) is nonempty for every \(j\) (\(1\leq j\leq d\)) and so, for \(w\in D^{k-1}_{k-2}(x,y)\), the scalar \(b_{k-1}(x,w)>0\). This shows that \(\Gamma\) is not distance-regular around \(x\). Therefore, by [8, Corollary 12] the trivial module \(T\widehat{x}\) is not thin, a contradiction. Hence, for every integer \(i\) (\(1\leq i\leq d\)) the set \(D^{i}_{i+1}(x,y)\) is not empty. This yields \(t(y)=d\). The result follows.
## 7 Examples
In this section we present some examples of graphs for which the equivalent conditions of Theorem 3.5 hold for a certain vertex \(x\). Several examples of such graphs, where \(x\) is distance-regularized, are presented in [5, 7]. We therefore turn our attention to the case when \(x\) is not necessarily distance-regularized. Recall that we are still referring to Definition 3.2 and Notation 3.4 throughout this section.
**Example 7.1**.: _Let \(\Gamma\) be the connected graph with vertex set \(X=\left\{1,2,3,4,5,6\right\}\) and edge set \(\mathcal{R}=\left\{\left\{1,2\right\},\left\{1,3\right\},\left\{2,3\right\}, \left\{2,4\right\},\left\{2,5\right\},\left\{3,5\right\},\left\{3,6\right\}\right\}\). See also Figure 1 and observe that \(\Gamma\) is not bipartite. Fix vertex \(1\in X\) and note that \(\epsilon(1)=2\). Notice that \(\Gamma\) is not distance-regular around \(1\). Consider the Terwilliger algebra of \(\Gamma\) with respect to vertex \(1\). It is now easy to verify that for every integer \(i\)\((0\leq i\leq 2)\) there exist scalars \(\alpha_{i},\beta_{i}\), such that for every \(y\in\Gamma_{i}(x)\) the following hold:_
\[r^{i+1}\ell(y)=\alpha_{i}\;r^{i}(y),\qquad r^{i}f(y)=\beta_{i}\;r^{i}(y),\]
_with the values of \(\alpha_{i},\beta_{i}\;(0\leq i\leq 2)\) as presented in Table 1._
_Therefore, by [8, Theorem 6] the trivial \(T\)-module is thin. Moreover, properties \((a),(b)\) described in part \((ii)\) of Theorem 3.5 are satisfied with the values of \(\kappa_{i},\mu_{i},\theta_{i},\rho_{i}\;(1\leq i\leq 2)\) as presented in Table 2. Consequently, by Theorem 3.5, it holds that \(\Gamma\) has, up to isomorphism, a unique irreducible \(T\)-module with endpoint \(1\), and this module is thin. Moreover, since \(\dim(E^{*}_{1}V)=|\Gamma(x)|=2\), it is easy to see that there is actually only one irreducible \(T\)-module with endpoint \(1\). This \(T\)-module has dimension \(s=2\) and is spanned by \(w=\widehat{3}-\widehat{2}\) and \(Rw=\widehat{6}-\widehat{4}\). Note also that the partitions given by the intersection diagrams of \(\Gamma\) with respect to the edges \(\left\{1,2\right\}\) and \(\left\{1,3\right\}\) are not equitable._
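The values of \(\alpha_{i}\) and \(\beta_{i}\) in Table 1 can be reproduced by a direct computation. Below is a minimal Python/NumPy sketch (ours, purely illustrative and not part of the paper) that builds the graph of Example 7.1, the matrices \(L\), \(F\), \(R\) with respect to vertex \(1\), and checks that \(r^{i+1}\ell(y)=\alpha_{i}\,r^{i}(y)\) and \(r^{i}f(y)=\beta_{i}\,r^{i}(y)\) for every \(y\in\Gamma_{i}(1)\), which by [8, Theorem 6] confirms that the trivial \(T\)-module is thin.

```python
import numpy as np
from collections import deque

# Graph of Example 7.1: vertices 1..6 are stored as indices 0..5
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (2, 5), (3, 5), (3, 6)]
n = 6
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u - 1, v - 1] = A[v - 1, u - 1] = 1

x = 0                                   # vertex 1
dist = np.full(n, -1); dist[x] = 0
q = deque([x])
while q:
    u = q.popleft()
    for v in range(n):
        if A[u, v] and dist[v] == -1:
            dist[v] = dist[u] + 1
            q.append(v)
d = dist.max()                          # eccentricity of vertex 1, here d = 2

Es = [np.diag((dist == i).astype(int)) for i in range(d + 1)]
Lo = sum(Es[i - 1] @ A @ Es[i] for i in range(1, d + 1))
Fl = sum(Es[i] @ A @ Es[i] for i in range(d + 1))
Ra = sum(Es[i + 1] @ A @ Es[i] for i in range(d))

xhat = np.zeros(n, dtype=int); xhat[x] = 1
alpha, beta = [2, 3, 0], [0, 1, 0]      # the values reported in Table 1
mp = np.linalg.matrix_power
for i in range(d + 1):
    r_i = mp(Ra, i) @ xhat              # r^i(y):       walks x -> y of shape r^i
    r_l = Lo @ mp(Ra, i + 1) @ xhat     # r^{i+1} l(y): walks x -> y of shape r^{i+1} l
    r_f = Fl @ mp(Ra, i) @ xhat         # r^i f(y):     walks x -> y of shape r^i f
    for y in np.flatnonzero(dist == i):
        assert r_l[y] == alpha[i] * r_i[y] and r_f[y] == beta[i] * r_i[y]
print("Table 1 reproduced: vertex 1 satisfies the local condition of [8, Theorem 6].")
```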
We next give another example of a non-bipartite graph where the equivalent conditions of Theorem 3.5 hold for a non-distance-regularized vertex \(x\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(i\) & 0 & 1 & 2 \\ \hline \hline \(\alpha_{i}\) & 2 & 3 & 0 \\ \hline \(\beta_{i}\) & 0 & 1 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Values of scalars \(\alpha_{i}\) and \(\beta_{i}\), \((0\leq i\leq 2)\).
### A construction
Our next goal is to focus on the construction of infinitely many new graphs that satisfy the equivalent conditions of Theorem 3.5 for a certain vertex. To do this, we will need the following notation.
**Notation 7.2**.: _Let \(\Gamma\) and \(\Sigma\) denote finite, simple graphs with vertex set \(X\) and \(Y\), respectively. Assume that \(\Gamma\) is a connected graph which is pseudo-distance-regular around a vertex \(x\in X\). Assume also that \(\Sigma\) is regular with order at least \(2\). Consider the Cartesian product \(\Gamma\square\Sigma\). Namely, the graph with vertex set \(X\times Y\) where two vertices \((x,y)\) and \((x^{\prime},y^{\prime})\) are adjacent if and only if \(x=x^{\prime}\) and \(y\) is adjacent to \(y^{\prime}\), or \(y=y^{\prime}\) and \(x\) is adjacent to \(x^{\prime}\). Let \(H=H(\Gamma,\Sigma)\) denote the graph obtained by adding a new vertex \(w\) to the graph \(\Gamma\square\Sigma\), and connecting this new vertex \(w\) with all vertices \((x,y)\), where \(y\) is an arbitrary vertex of \(\Sigma\); see for example Figures 1 and 1._
With reference to Notation 7.2, we observe that for an arbitrary vertex \((x^{\prime},y^{\prime})\) of \(H\) different from \(w\), the distance between \(w\) and \((x^{\prime},y^{\prime})\) satisfies \(\partial_{H}(w,(x^{\prime},y^{\prime}))=\partial_{\Gamma}(x,x^{\prime})+1\). It thus follows that \(d_{H}=d+1\), where \(d_{H}\) is the eccentricity of \(w\) in \(H\) and \(d\) is the eccentricity of \(x\) in \(\Gamma\). Moreover, for \(1\leq i\leq d_{H}\) we have
\[H_{i}(w)=\Gamma_{i-1}(x)\times Y=\{(u,y)\mid u\in\Gamma_{i-1}(x),y\in Y\}.\]
In addition, it is easy to see that \(H\) is distance-regular around \(w\) if and only if \(\Gamma\) is distance-regular around \(x\).
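These distance facts are straightforward to confirm numerically. The sketch below (Python/NumPy; an illustration of ours, not part of the paper) assembles the adjacency matrix of \(H(\Gamma,\Sigma)\) from those of \(\Gamma\) and \(\Sigma\), taking \(\Gamma\) to be the graph of Example 7.1 with \(x\) the vertex labelled \(1\), and checks that \(\partial_{H}(w,(x^{\prime},y^{\prime}))=\partial_{\Gamma}(x,x^{\prime})+1\) both for \(\Sigma=S_{2}\) and for \(\Sigma=K_{2}\).

```python
import numpy as np
from collections import deque

def bfs_dist(A, s):
    """Graph distances from vertex s, given an adjacency matrix A."""
    dist = np.full(len(A), -1); dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(len(A)):
            if A[u, v] and dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def apex_graph(Ag, As, x):
    """H(Gamma, Sigma): the Cartesian product Gamma x Sigma plus a new vertex w
    joined to every (x, y); vertex (u, y) is encoded as u*|Y| + y, w is the last index."""
    nG, nS = len(Ag), len(As)
    P = np.kron(Ag, np.eye(nS, dtype=int)) + np.kron(np.eye(nG, dtype=int), As)
    H = np.zeros((nG * nS + 1, nG * nS + 1), dtype=int)
    H[:nG * nS, :nG * nS] = P
    w = nG * nS
    for y in range(nS):
        H[w, x * nS + y] = H[x * nS + y, w] = 1
    return H, w

# Gamma: the graph of Example 7.1 (0-indexed), base vertex x = 0 (i.e. vertex 1)
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (1, 4), (2, 4), (2, 5)]
Ag = np.zeros((6, 6), dtype=int)
for u, v in edges:
    Ag[u, v] = Ag[v, u] = 1
x = 0
dG = bfs_dist(Ag, x)

S2 = np.zeros((2, 2), dtype=int)                  # empty graph on two vertices
K2 = np.array([[0, 1], [1, 0]], dtype=int)        # complete graph on two vertices
for As in (S2, K2):
    H, w = apex_graph(Ag, As, x)
    dH = bfs_dist(H, w)
    for u in range(6):
        for y in range(2):
            assert dH[u * 2 + y] == dG[u] + 1     # the distance relation stated above
print("Distance relation verified for H(Gamma, S_2) and H(Gamma, K_2).")
```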
We are now ready to give some constructions of infinitely many graphs that satisfy the equivalent conditions of Theorem 3.5 for a certain vertex.
**Proposition 7.3**.: _With reference to Notation 7.2, pick vertex \(w\) in \(H\) and consider the Terwilliger algebra \(T=T(w)\). Then, the trivial \(T\)-module is thin._
Proof.: Immediate from [8, Section 6.5].
With reference to Notation 7.2, in what follows, we use subscripts to distinguish the number of walks of a particular shape in \(H\) and in \(\Gamma\). For example, for \(x^{\prime}\in\Gamma_{i}(x)\), we denote the number of walks from \(x\) to \(x^{\prime}\) of the shape \(r^{i+1}\ell\) with respect to \(x\) by \(r^{i+1}\ell_{\Gamma}(x^{\prime})\). For \((x^{\prime},y^{\prime})\in H_{i}(w)\), we denote the number of walks from \(w\) to \((x^{\prime},y^{\prime})\) of the shape \(r^{i+1}\ell\) with respect to \(w\) by \(r^{i+1}\ell_{H}((x^{\prime},y^{\prime}))\). We next study the instances when \(\Sigma\) is either an empty or a complete graph.
**Proposition 7.4**.: _With reference to Notation 7.2, pick vertex \(w\) in \(H\) and consider the Terwilliger algebra \(T=T(w)\). If \(\Sigma\) is isomorphic to the empty graph \(S_{n}\)\((n\geq 2)\) then graph \(H\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), which is thin._
Proof.: By Proposition 7.3, we first observe the trivial module is thin. We will next show that \(H\) satisfies the combinatorial conditions of Theorem 3.5. Suppose that \(\Sigma\) is isomorphic to the empty graph \(S_{n}\)\((n\geq 2)\). Pick \((x,y)\in H(w)\) and consider the sets \(D_{j}^{i}=D_{j}^{i}(w,(x,y))\). Since the eccentricity of \(x\) equals \(d\) it is easy to see the sets \(D_{j+1}^{j}\)\((0\leq j\leq d_{H})\) and \(D_{j-1}^{j}\)\((1\leq j\leq d_{H})\) are all nonempty for all \((x,y)\in H(w)\). Consequently, by Lemma 6.2, the set \(D_{j}^{j}\) is empty for every \(j\)\((1\leq j\leq d_{H})\) and for all \((x,y)\in H(w)\). In addition, we also notice
\[D_{j+1}^{j}(w,(x,y)) = \Gamma_{j-1}(x)\times(Y\setminus\{y\})\] \[= \{(u,y^{\prime})\mid u\in\Gamma_{j-1}(x),y^{\prime}\in Y,y^{ \prime}\neq y\},\]
\[D_{j-1}^{j}(w,(x,y)) = \Gamma_{j-1}(x)\times\{y\}\] \[= \{(u,y)\mid u\in\Gamma_{j-1}(x)\}.\]
Pick \((x^{\prime},y^{\prime})\in H_{i}(w)\) for \(1\leq i\leq d_{H}\). We observe that
\[\ell r_{H}^{i}\left((x,y),(x^{\prime},y^{\prime})\right)=r_{\Gamma}^{i-1}(x^{ \prime}) \tag{7.1.1}\]
which is a positive integer since \(\partial_{\Gamma}(x,x^{\prime})=i-1\) implies \(r_{\Gamma}^{i-1}(x^{\prime})>0\). Moreover, for \((x^{\prime},y^{\prime})\in D_{i+1}^{i}\)\((1\leq i\leq d_{H})\) we have
\[r^{i}\ell_{H}\left((x,y),(x^{\prime},y^{\prime})\right)=r^{i-1}f_{H}\left((x, y),(x^{\prime},y^{\prime})\right)=0. \tag{7.1.2}\]
Similarly, for \((x^{\prime},y^{\prime})\in D_{i-1}^{i}\)\((1\leq i\leq d_{H})\) we have
\[r^{i}\ell_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i}\ell_{\Gamma}(x^{\prime}), \tag{7.1.3}\] \[r^{i-1}f_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i-1}f_{\Gamma}(x^{\prime}),\] (7.1.4) \[r_{H}^{i-1}\left((x,y),(x^{\prime},y^{\prime})\right) = r_{\Gamma}^{i-1}(x^{\prime}). \tag{7.1.5}\]
Since vertex \(x\) is pseudo-distance-regularized, by [8, Theorem 6], we know that for every integer \(i\)\((0\leq i\leq d)\) there exist scalars \(\alpha_{i},\beta_{i}\), such that for every \(z\in\Gamma_{i}(x)\) the following hold:
\[r^{i+1}\ell_{\Gamma}(z)=\alpha_{i}\;r_{\Gamma}^{i}(z),\qquad r^{i}f_{\Gamma}( z)=\beta_{i}\;r_{\Gamma}^{i}(z). \tag{7.1.6}\]
It follows from (7.1.3), (7.1.4), (7.1.5) and (7.1.6) that for \(1\leq i\leq d_{H}\) and for every \((x^{\prime},y^{\prime})\in D_{i-1}^{i}\) we have
\[r^{i}\ell_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i}\ell_{\Gamma}(x^{\prime}) \tag{7.1.7}\] \[= \alpha_{i-1}r_{\Gamma}^{i-1}(x^{\prime})\] \[= \alpha_{i-1}r_{H}^{i-1}\left((x,y),(x^{\prime},y^{\prime})\right),\]
\[r^{i-1}f_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i-1}f_{\Gamma}(x^{\prime}) \tag{7.1.8}\] \[= \beta_{i-1}r_{\Gamma}^{i-1}(x^{\prime})\] \[= \beta_{i-1}r_{H}^{i-1}\left((x,y),(x^{\prime},y^{\prime})\right).\]
Therefore, from (7.1.1), (7.1.2), (7.1.7) and (7.1.8), we see that vertex \(w\) of \(H\) satisfies the combinatorial conditions of Theorem 3.5 with the values of \(\kappa_{i}=\alpha_{i-1},\mu_{i}=0,\theta_{i}=\beta_{i-1},\rho_{i}=0\) for every integer \(i\)\((1\leq i\leq d_{H})\). Consequently, \(H\) has, up to isomorphism, a unique irreducible \(T\)-module with endpoint \(1\), and this module is thin.
**Example 7.5**.: _Let \(\Gamma\) be the connected graph presented in Example 7.1 and let \(S_{n}\) denote the empty graph of \(n\) vertices, for some integer \(n\geq 2\). Let \(H=H(\Gamma,S_{n})\); see for example Figure 10 for the case \(n=2\). Consider the Terwilliger algebra \(T=T(w)\) of \(H\) with respect to \(w\). Notice that \(H\) is not distance-regular around \(w\) since \(\Gamma\) is not distance-regular around \(x\). However, the trivial module is thin by Proposition 7.3. It follows from Table 1 and the above comments that the properties \((a),(b)\) described in part \((ii)\) of Theorem 3.5 hold with the values of \(\kappa_{i},\mu_{i},\theta_{i},\rho_{i}\)\((1\leq i\leq 3)\) as presented in Table 3. Consequently, by Theorem 3.5, it holds that \(H\) has, up to isomorphism, a unique irreducible \(T\)-module with endpoint \(1\), and this module is thin. Moreover, since \(\dim(E_{1}^{*}V)=|H(w)|=n\), it is easy to see that there are actually \(n-1\) irreducible \(T\)-modules with endpoint \(1\) and these isomorphic \(T\)-modules have dimension \(s=3\)._
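Example 7.5 can also be checked by machine. The following self-contained sketch (Python/NumPy, an illustration of ours) builds \(H(\Gamma,S_{2})\) for the graph \(\Gamma\) of Example 7.1 and verifies equations (5.1)-(5.2) with respect to \(w\), with \(\kappa_{i}=\alpha_{i-1}\), \(\mu_{i}=0\), \(\theta_{i}=\beta_{i-1}\) and \(\rho_{i}=0\) as obtained in the proof of Proposition 7.4, where \(\alpha_{i},\beta_{i}\) are the values of Table 1.

```python
import numpy as np
from collections import deque

def bfs_dist(A, s):
    dist = np.full(len(A), -1); dist[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(len(A)):
            if A[u, v] and dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Gamma of Example 7.1 (vertices 1..6 stored as 0..5), base vertex x = vertex 1
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (1, 4), (2, 4), (2, 5)]
Ag = np.zeros((6, 6), dtype=int)
for u, v in edges:
    Ag[u, v] = Ag[v, u] = 1

# H = H(Gamma, S_2): two disjoint copies of Gamma plus the apex vertex w joined to (x,0), (x,1)
nS = 2
H = np.zeros((6 * nS + 1, 6 * nS + 1), dtype=int)
H[:6 * nS, :6 * nS] = np.kron(Ag, np.eye(nS, dtype=int))
w = 6 * nS
for y in range(nS):
    H[w, 0 * nS + y] = H[0 * nS + y, w] = 1

dist = bfs_dist(H, w)
dH = dist.max()                                      # d_H = 3
Es = [np.diag((dist == i).astype(int)) for i in range(dH + 1)]
Lo = sum(Es[i - 1] @ H @ Es[i] for i in range(1, dH + 1))
Fl = sum(Es[i] @ H @ Es[i] for i in range(dH + 1))
Ra = sum(Es[i + 1] @ H @ Es[i] for i in range(dH))
mp = np.linalg.matrix_power

alpha, beta = [2, 3, 0], [0, 1, 0]                   # Table 1, computed for Gamma around vertex 1
for i in range(1, dH + 1):
    kappa, mu, theta, rho = alpha[i - 1], 0, beta[i - 1], 0   # scalars claimed in Proposition 7.4
    lhs1 = Es[i] @ Lo @ mp(Ra, i) @ Es[1]
    rhs1 = kappa * (Es[i] @ mp(Ra, i - 1) @ Es[1]) + mu * (Es[i] @ mp(Ra, i) @ Lo @ Es[1])
    lhs2 = Es[i] @ Fl @ mp(Ra, i - 1) @ Es[1]
    rhs2 = theta * (Es[i] @ mp(Ra, i - 1) @ Es[1]) + rho * (Es[i] @ mp(Ra, i) @ Lo @ Es[1])
    assert (lhs1 == rhs1).all() and (lhs2 == rhs2).all()
print("Equations (5.1)-(5.2) hold for H(Gamma, S_2) with the scalars of Proposition 7.4.")
```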
**Proposition 7.6**.: _With reference to Notation 7.2, pick vertex \(w\) in \(H\) and consider the Terwilliger algebra \(T=T(w)\). If \(\Sigma\) is isomorphic to the complete graph \(K_{n}\)\((n\geq 2)\) then graph \(H\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), which is thin._
Proof.: By Proposition 7.3, we first observe the trivial module is thin. We will next show that \(H\) satisfies the combinatorial conditions of Theorem 3.5. Suppose that \(\Sigma\) is isomorphic to the complete graph \(K_{n}\)\((n\geq 2)\). Pick \((x,y)\in H(w)\) and consider the sets \(D_{j}^{i}=D_{j}^{i}(w,(x,y))\). Since the eccentricity of \(x\) equals \(d\) it is easy to see the sets \(D_{j}^{j}\)\((1\leq j\leq d_{H})\) and \(D_{j-1}^{j}\)\((1\leq j\leq d_{H})\) are all nonempty for all \((x,y)\in H(w)\). Consequently, by Lemma 6.2 the set \(D_{j+1}^{j}\) is empty for every \(j\)\((1\leq j\leq d_{H})\) and for all \((x,y)\in H(w)\). In addition, we also notice
\[D_{j}^{j}(w,(x,y)) = \Gamma_{j-1}(x)\times(Y\setminus\{y\}) \tag{7.1.9}\] \[= \{(u,y^{\prime})\mid u\in\Gamma_{j-1}(x),y^{\prime}\in Y,y^{ \prime}\neq y\},\]
\[D_{j-1}^{j}(w,(x,y)) = \Gamma_{j-1}(x)\times\{y\} \tag{7.1.10}\] \[= \{(u,y)\mid u\in\Gamma_{j-1}(x)\}.\]
Pick \((x^{\prime},y^{\prime})\in H_{i}(w)\) for \(1\leq i\leq d_{H}\). We observe that
\[\ell r_{H}^{i}\left((x,y),(x^{\prime},y^{\prime})\right)=r_{\Gamma}^{i-1}(x^{ \prime}) \tag{7.1.11}\]
which is a positive integer since \(\partial_{\Gamma}(x,x^{\prime})=i-1\) implies \(r_{\Gamma}^{i-1}(x^{\prime})>0\). Moreover, since every vertex in \(D_{i}^{i}\) has no neighbours in \(D_{i}^{i+1}\), for \((x^{\prime},y^{\prime})\in D_{i}^{i}\)\((1\leq i\leq d_{H})\), it follows that
\[r^{i}\ell_{H}\left((x,y),(x^{\prime},y^{\prime})\right)=0. \tag{7.1.12}\]
In addition, from the definition of \(H\), (7.1.9) and (7.1.10), it is easy to see every vertex \((x^{\prime},y^{\prime})\in D_{i}^{i}\)\((1\leq i\leq d_{H})\) has exactly one neighbour in \(D_{i-1}^{i}\) which is the vertex \((x^{\prime},y)\). This
implies the number of walks from \((x,y)\) to \((x^{\prime},y^{\prime})\) of the shape \(r^{i-1}f\) with respect to \(w\) is equal to the number of walks from \(x\) to \(x^{\prime}\) of the shape \(r^{i-1}\) with respect to \(x\). Therefore, from the above comments and (7.1.11), for \((x^{\prime},y^{\prime})\in D_{i}^{i}\)\((1\leq i\leq d_{H})\),
\[r^{i-1}f_{H}\left((x,y),(x^{\prime},y^{\prime})\right)=r_{\Gamma}^{i-1}(x^{ \prime})=\ell r_{H}^{i}\left((x,y),(x^{\prime},y^{\prime})\right). \tag{7.1.13}\]
Similarly, for \((x^{\prime},y^{\prime})\in D_{i-1}^{i}\)\((1\leq i\leq d_{H})\) we have
\[r^{i}\ell_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i}\ell_{\Gamma}(x^{\prime}), \tag{7.1.14}\] \[r^{i-1}f_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i-1}f_{\Gamma}(x^{\prime}),\] (7.1.15) \[r_{H}^{i-1}\left((x,y),(x^{\prime},y^{\prime})\right) = r_{\Gamma}^{i-1}(x^{\prime}). \tag{7.1.16}\]
Since vertex \(x\) is pseudo-distance-regularized, by [8, Theorem 6], we know that for every integer \(i\)\((0\leq i\leq d)\) there exist scalars \(\alpha_{i},\beta_{i}\), such that for every \(z\in\Gamma_{i}(x)\) the following hold:
\[r^{i+1}\ell_{\Gamma}(z)=\alpha_{i}\;r_{\Gamma}^{i}(z),\qquad r^{i}f_{\Gamma}(z )=\beta_{i}\;r_{\Gamma}^{i}(z). \tag{7.1.17}\]
It follows from (7.1.14), (7.1.15), (7.1.16) and (7.1.17) that for \(1\leq i\leq d_{H}\) and for every \((x^{\prime},y^{\prime})\in D_{i-1}^{i}\) we have
\[r^{i}\ell_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i}\ell_{\Gamma}(x^{\prime}) \tag{7.1.18}\] \[= \alpha_{i-1}r_{\Gamma}^{i-1}(x^{\prime})\] \[= \alpha_{i-1}r_{H}^{i-1}\left((x,y),(x^{\prime},y^{\prime})\right),\]
\[r^{i-1}f_{H}\left((x,y),(x^{\prime},y^{\prime})\right) = r^{i-1}f_{\Gamma}(x^{\prime}) \tag{7.1.19}\] \[= \beta_{i-1}r_{\Gamma}^{i-1}(x^{\prime})\] \[= \beta_{i-1}r_{H}^{i-1}\left((x,y),(x^{\prime},y^{\prime})\right).\]
Therefore, from (7.1.11), (7.1.12), (7.1.13), (7.1.18) and (7.1.19), we see that vertex \(w\) of \(H\) satisfies the combinatorial conditions of Theorem 3.5 with the values of \(\kappa_{i}=\alpha_{i-1},\mu_{i}=0,\theta_{i}=\beta_{i-1}-1,\rho_{i}=1\) for every integer \(i\)\((1\leq i\leq d_{H})\). Consequently, \(H\) has, up to isomorphism, a unique irreducible \(T\)-module with endpoint \(1\), and this module is thin. \(\blacksquare\)
**Example 7.7**.: _Let \(\Gamma\) be the connected graph presented in Example 7.1 and let \(K_{n}\) denote the complete graph of \(n\) vertices, for some integer \(n\geq 2\). Let \(H=H(\Gamma,K_{n})\); see for example Figure 10 for the case \(n=2\). Consider the Terwilliger algebra \(T=T(w)\) of \(H\) with respect to \(w\). Notice that \(H\) is not distance-regular around \(w\) since \(\Gamma\) is not distance-regular around \(x\). However, the trivial module is thin by Proposition 7.3. It follows from Table 1 and the above comments that the properties \((a),(b)\) described in part \((ii)\) of Theorem 3.5 hold with the values of \(\kappa_{i},\mu_{i},\theta_{i},\rho_{i}\;(1\leq i\leq 3)\) as presented in Table 4. Consequently, by Theorem 3.5, it holds that \(H\) has, up to isomorphism, a unique irreducible \(T\)-module with endpoint \(1\), and this module is thin. Moreover, since \(\dim(E_{1}^{*}V)=|H(w)|=n\), it is easy to see that there are actually \(n-1\) irreducible \(T\)-modules with endpoint \(1\) and these isomorphic \(T\)-modules have dimension \(s=3\)._
We are now ready to prove the main result of this subsection.
**Theorem 7.8**.: _With reference to Notation 7.2, pick vertex \(w\) in \(H\) and consider the Terwilliger algebra \(T=T(w)\). The graph \(H\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), and this module is thin, if and only if \(\Sigma\) is isomorphic either to the empty graph \(S_{n}\;(n\geq 2)\) or to the complete graph \(K_{n}\;(n\geq 2)\)._
Proof.: By Proposition 7.3, we observe that the trivial module is thin. Assume first that \(H\) has, up to isomorphism, exactly one irreducible \(T\)-module with endpoint \(1\), which is thin. We next claim that \(\Sigma\) is either isomorphic to the empty graph \(S_{n}\) (\(n\geq 2\)) or to the complete graph \(K_{n}\) (\(n\geq 2\)). Let \(Y\) denote the vertex set of \(\Sigma\). If \(|Y|=2\) then the statement trivially follows. So, to prove this assertion, assume that \(|Y|>2\). Pick any three vertices \(y,y^{\prime},y^{\prime\prime}\in Y\) and suppose that among them there exist both a pair of adjacent vertices and a pair of nonadjacent vertices. Without loss of generality we may assume that \(y\) is adjacent to \(y^{\prime}\) but not to \(y^{\prime\prime}\). Since \(y\) and \(y^{\prime}\) are adjacent, we thus have that \((x,y^{\prime})\) is a common neighbour of both \(w\) and \((x,y)\) in \(H\). Moreover, note that \(\partial_{H}(w,(x,y^{\prime\prime}))=1\) and since \(y\) and \(y^{\prime\prime}\) are not adjacent, \(\partial_{H}((x,y),(x,y^{\prime\prime}))=2\). Hence, the sets \(D^{1}_{2}(w,(x,y))\) and \(D^{1}_{1}(w,(x,y))\) are both nonempty, contradicting Lemma 6.2. Consequently, any three vertices in \(Y\) either form a stable set or a clique. This clearly implies that \(\Sigma\) is either isomorphic to the empty graph \(S_{n}\) (\(n\geq 2\)) or to the complete graph \(K_{n}\) (\(n\geq 2\)), which proves our claim. Notice also that the second part of the result immediately follows from Proposition 7.4 and Proposition 7.6. This finishes the proof.
## 8 Acknowledgement
The author would like to thank Stefko Miklavic for helpful and constructive comments that greatly contributed to improving the final version of this article. This work is supported in part by the Slovenian Research Agency (research program P1-0285, research project J1-2451 and Young Researchers Grant).
## 9 Data availability
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
|
2304.02399
|
Gravitational echoes of lepton number symmetry breaking with light and
ultralight Majorons
|
We formulate a version of the low-scale Majoron model equipped with an
inverse seesaw mechanism featuring lepton-number preserving dimension-6
operators in the scalar potential. Contrary to its dimension-4 counterpart, we
find that the model can simultaneously provide light and ultralight Majorons,
neutrino masses and their mixing, while featuring strong first-order
cosmological phase transitions associated to the spontaneous breaking of the
lepton number and the electroweak symmetries in the early Universe. We show by
a detailed numerical analysis that under certain conditions on the parameter
space accounted for in collider physics, the model can be probed via the
primordial gravitational wave spectrum potentially observable at LISA and other
planned facilities.
|
Andrea Addazi, Antonino Marcianò, António P. Morais, Roman Pasechnik, João Viana, Hao Yang
|
2023-04-05T12:18:06Z
|
http://arxiv.org/abs/2304.02399v2
|
# Gravitational echoes of lepton number symmetry breaking with light and ultralight Majorons
###### Abstract
We formulate a version of the low-scale Majoron model equipped with an inverse seesaw mechanism featuring lepton-number preserving dimension-6 operators in the scalar potential. Contrary to its dimension-4 counterpart, we find that the model can simultaneously provide light and ultralight Majorons, neutrino masses and their mixing, while featuring strong first-order cosmological phase transitions associated to the spontaneous breaking of the lepton number and the electroweak symmetries in the early Universe. We show by a detailed numerical analysis that under certain conditions on the parameter space accounted for in collider physics, the model can be probed via the primordial gravitational wave spectrum potentially observable at LISA and other planned facilities.
###### Contents
* 1 Introduction
* 2 Setting the stage: Which Majoron model?
* 3 The 6D Majoron model
* 3.1 The scalar potential
* 3.2 Inverted equations for scalar couplings
* 3.3 Majoron decays
* 3.4 Higgs trilinear coupling
* 4 Thermal effective potential and Gravitational Waves
* 4.1 The one-loop \(T\)-dependent effective potential
* 4.2 Dynamics of the Phase Transition
* 4.3 Primordial Gravitational Waves: a semi-analytical approximation
* 5 Results and discussion
* 5.1 Revisiting the 4D limit
* 5.2 SGWB in the 6D EIS model
* 5.3 Connection to collider observables
* 5.4 Connection to the neutrino sector
* 6 Conclusions
* A One-loop expressions for the physical trilinear couplings
## 1 Introduction
With the discovery of Gravitational Waves (GWs) by the LIGO detectors in 2015 [1] a new era of multi-messenger astronomy has started. The most popular example to date was the observation of gravitational ripples by the LIGO and Virgo collaborations resulting from a neutron star binary merger [2], accompanied by a faint electromagnetic counterpart detected seconds later by the gamma-ray telescopes Fermi-GRB and INTEGRAL [3]. However, the possibilities for multi-messenger signals are by no means exhausted with the gravitational and electromagnetic (EM) interactions. Furthermore, the observation of a neutrino flux together with GWs and an EM signal from a supernova explosion would offer a weak interaction component to the observed event, enabling us to collect more information and broadening the scope of our understanding about such phenomena.
Another type of gravitational footprint that current and future experiments will be looking for traces back to the early moments of our Universe and is expected to appear in the form of a stochastic background. This can be a manifestation of e.g. inflationary dynamics [4], cosmic strings [5] or strong first-order phase transitions (SFOPTs) [6], which we study in detail in this article. The latter can be responsible for a stochastic GW background generated by expanding and colliding vacuum bubbles of an energetically favoured vacuum configuration, growing in a Universe filled with a false vacuum phase. If the bubble wall velocity does not run away (runaway behaviour can occur for very strong FOPTs, where the released latent heat during the transition, or equivalently the difference in the trace anomaly \(\alpha\), is very large [7] and the wall velocity \(v_{\rm w}\to 1\)), the dominant contribution to the primordial GW spectrum results from the sound-wave (SW) component. Furthermore, if \(v_{\rm w}\) is larger than the Chapman-Jouguet limit, i.e. \(v_{\rm w}>v_{\rm J}\), the peak frequency and amplitude of the primordial GW background are largely dominated by supersonic detonations [8], as we will consider in our numerical analysis.
The details of the phase-transitions are dependent on the underlying Particle Physics model. For example, in the Standard Model (SM), with the Higgs boson mass observed by the ATLAS and CMS experiments [9], an electroweak (EW) scale SFOPT is not possible. However, the inclusion of a SM singlet in the scalar potential is sufficient to enhance the electroweak phase-transition to a strong one. Typically, the stronger the transition, the larger the GW peak amplitude and for scales not too far from the EW/TeV scale the frequency range can be in the reach of the _Laser Interferometer Space Antenna_ (LISA) mission [10, 11]. While collider experiments are undoubtedly the preferred source for directly measuring new particles and couplings, GW interferometers such as LISA can potentially offer a gravitational portal to probe New Physics (NP) scenarios complementary to collider experiments. In some cases, GW detectors can even go beyond the reach of current and future collider experiments, both in the very-high or ultra-low energy limits [12, 13, 14].
In this article we study a class of Majoron models equipped with a spontaneously broken global lepton-number symmetry \(\mathrm{U}(1)_{\mathrm{L}}\) and a neutrino inverse seesaw mechanism. We investigate under which circumstances the Majoron is long-lived or even stable, thus becoming a potential Dark Matter (DM) candidate. We attribute the emergence of a seesaw scale \(\Lambda\) to some unknown physics, which is currently beyond the reach of collider experiments, parameterizing the effects of such an unknown physics via \(\mathrm{U}(1)_{\mathrm{L}}\)-preserving dimension-6 operators in the scalar potential. For a possible explicit breaking of a global \(\mathrm{U}(1)_{\mathrm{L}}\) via higher-dimensional operators induced by gravitational effects, see e.g. Ref. [15].
In this work, however, we solely consider a constrained set of the \(\mathrm{U}(1)_{\mathrm{L}}\)-preserving dimension-6 operators, such that the only explicit \(\mathrm{U}(1)_{\mathrm{L}}\)-breaking effects come in the form of a soft Majoron mass term. The model yields a generic low-scale inverse seesaw mechanism, with a possibility to explain the smallness of the light active neutrino mass scale. As previously discussed by some of the authors, the standard dimension-4 Majoron model [16] cannot simultaneously predict visible GWs within the reach of LISA while offering a good candidate for DM. This follows from the fact that the size of the scalar quartic portal coupling needed to enhance the potential barrier between false and true vacua is too large to comply with invisible Higgs decays at the LHC [17], if the Majoron is lighter than half of the Higgs boson mass. On the contrary, due to the richer structure of the dimension-6 operator extension of the model, a light Majoron state is found to be in full consistency with experimental bounds on invisible Higgs boson decays. Finally, the model features SFOPTs, which can produce primordial stochastic GW signals potentially observable at future experimental facilities such as LISA, or a next generation of interferometers such as BBO or DECIGO.
The article is organised as follows. In Sec. 2, we review three canonical Majoron models, highlight their features and discuss whether low-scale lepton number symmetry breaking is to be expected. In Sec. 3 we discuss the extended inverse seesaw model with dimension-six operators in the scalar sector, focusing on the observables that will be studied in the numerical analysis. In Sec. 4 we review the basic details of the thermal effective potential and the spectrum of Gravitational Waves, in Sec. 5 we present and discuss our results before concluding in Sec. 6.
## 2 Setting the stage: Which Majoron model?
Majoron models are well motivated frameworks engineered to offer a mechanism for neutrino mass generation. These are typically equipped with a global \(\mathrm{U}(1)_{\mathrm{L}}\) lepton number symmetry, broken by the VEV of a complex SM singlet scalar, \(\sigma\), whose CP-odd component is the Majoron. Among the possible realizations it is worth briefly discussing the basic properties of the type-I (T1S), inverse (IS) and extended inverse seesaw (EIS) scenarios whose quantum numbers and particle content are shown in Tab. 1.
For each of the three Majoron models [18], the corresponding neutrino-sector Lagrangian can be written as
\[\mathcal{L}_{\nu}^{\mathrm{T1S}}= y_{\nu}^{ij}\overline{L}_{i}\tilde{H}\nu_{\mathrm{R}j}+y_{\sigma}^{ ij}\bar{\nu}_{\mathrm{R}i}^{c}\nu_{\mathrm{R}j}\sigma+\mathrm{h.c.}\,, \tag{1}\] \[\mathcal{L}_{\nu}^{\mathrm{IS}}= y_{\nu}^{ij}\overline{L}_{i}\tilde{H}\nu_{\mathrm{R}j}+y_{ \sigma}^{ij}\bar{S}_{i}^{c}\nu_{\mathrm{R}j}\sigma+\Lambda^{ij}\bar{S}_{i}^{c} S_{j}+\mathrm{h.c.}\,,\] \[\mathcal{L}_{\nu}^{\mathrm{EIS}}= y_{\nu}^{ij}\overline{L}_{i}\tilde{H}\nu_{\mathrm{R}j}+y_{\sigma}^{ij}\bar{S}_{i}^{c} S_{j}\sigma+y_{\sigma}^{\prime ij}\bar{\nu}_{\mathrm{R}i}^{c}\nu_{\mathrm{R}j} \sigma^{*}+\Lambda^{ij}\bar{\nu}_{\mathrm{R}i}^{c}S_{j}+\mathrm{h.c.}\,,\]
where
\[L_{i}=\begin{pmatrix}\nu_{\rm Li}\\ e_{\rm Li}\end{pmatrix}\qquad\text{and}\qquad\tilde{H}\equiv i\tau_{2}H^{\dagger}\,. \tag{2}\]
The mass matrices, written in the basis \(\left\{\bar{\nu}_{\rm Li},\bar{\nu}_{\rm R}^{c},\bar{S}_{i}^{c}\right\}\otimes \left\{\nu_{\rm Li},\nu_{\rm Rj},S_{j}\right\}\) are given, in a block compact form, as
\[\mathbf{M}_{\mathbf{\nu}}^{\rm T1S}=\left(\begin{array}{cc}0&\frac{v_{h}}{\sqrt{2}}\mathbf{y}_{\mathbf{\nu}}\\ \frac{v_{h}}{\sqrt{2}}\mathbf{y}_{\mathbf{\nu}}&\frac{v_{\sigma}}{\sqrt{2}}\mathbf{y}_{\mathbf{\sigma}}\end{array}\right)\,,\quad\mathbf{M}_{\mathbf{\nu}}^{\rm IS}=\left(\begin{array}{ccc}0&\frac{v_{h}}{\sqrt{2}}\mathbf{y}_{\mathbf{\nu}}&0\\ \frac{v_{h}}{\sqrt{2}}\mathbf{y}_{\mathbf{\nu}}&0&\frac{v_{\sigma}}{\sqrt{2}}\mathbf{y}_{\mathbf{\sigma}}\\ 0&\frac{v_{\sigma}}{\sqrt{2}}\mathbf{y}_{\mathbf{\sigma}}&\mathbf{\Lambda}\end{array}\right)\,,\quad\mathbf{M}_{\mathbf{\nu}}^{\rm EIS}=\left(\begin{array}{ccc}0&\frac{v_{h}}{\sqrt{2}}\mathbf{y}_{\mathbf{\nu}}&0\\ \frac{v_{h}}{\sqrt{2}}\mathbf{y}_{\mathbf{\nu}}&\frac{v_{\sigma}}{\sqrt{2}}\mathbf{y}_{\mathbf{\sigma}}^{\prime}&\mathbf{\Lambda}\\ 0&\mathbf{\Lambda}&\frac{v_{\sigma}}{\sqrt{2}}\mathbf{y}_{\mathbf{\sigma}}\end{array}\right)\,, \tag{3}\]
such that the light neutrino masses scale as
\[\mathbf{m}_{\nu}^{\rm T1S}\approx\frac{1}{\sqrt{2}}\frac{\mathbf{y}_{\mathbf{\nu}}^{2}}{ \mathbf{y}_{\mathbf{\sigma}}}\frac{v_{h}^{2}}{v_{\sigma}}\,,\qquad\mathbf{m}_{\nu}^{\rm IS }\approx\frac{\mathbf{y}_{\mathbf{\nu}}^{2}}{\mathbf{y}_{\mathbf{\sigma}}^{2}}\frac{\mathbf{ \Lambda}v_{h}^{2}}{v_{\sigma}^{2}}\,,\qquad\mathbf{m}_{\nu}^{\rm EIS}\approx\frac {\mathbf{y}_{\mathbf{\nu}}^{2}\mathbf{y}_{\mathbf{\sigma}}}{2\sqrt{2}}\frac{v_{h}^{2}v_{\sigma} }{\mathbf{\Lambda}^{2}}\,, \tag{4}\]
where matrix product is implicit. If one assumes that the Yukawa couplings are all of a comparable size, in particular that the \(\mathbf{y}_{\mathbf{\nu}}\) are not tuned to be extremely small, the scale of active neutrino masses will be essentially driven by \(v_{\sigma}\) and \(\Lambda\) (the latter for the IS and EIS). This implies that the following relations are in order:
* \(v_{\sigma}\gg v_{h}\) for the T1S;
* \(v_{\sigma}\gg v_{h}\) and/or \(\Lambda\ll v_{h}\) for the IS;
* \(v_{\sigma}\sim v_{h}\) and \(\Lambda\gg v_{h}\) for the EIS.
The T1S with a Majoron requires a U(1)\({}_{\rm L}\) breaking scale well above the electroweak one, which is out of the sensitivity reach of LISA frequencies and possibly even of future experiments such as the Einstein Telescope or the Cosmic Explorer. For the considered IS model, a small \(\Lambda\) could relax the size of \(v_{\sigma}\); however, since the \(\Lambda^{ij}\bar{S}_{i}^{c}S_{j}\) operator is U(1)\({}_{\rm L}\)-preserving, one does not expect it to be tiny. Therefore, it is natural and rather more elegant to consider \(v_{\sigma}\) in the multi-TeV regime, such that GWs from FOPTs in the IS model are likely above the sensitivity reach of LISA. Last but not least, the EIS model is the best candidate scenario to naturally accommodate U(1)\({}_{\rm L}\) lepton number symmetry breaking at the EW-TeV scale, and thus within the reach of the LISA frequency range, provided that either a large \(\Lambda\) scale or a small \(\mathbf{y}_{\mathbf{\nu}}^{2}\mathbf{y}_{\mathbf{\sigma}}\) product, or a combination of both, offers the needed suppression to generate the neutrino mass scale. Furthermore, with \(\Lambda>v_{h}\) the inclusion of dimension-6 operators is well motivated in the EIS model, as opposed to the IS one where \(\Lambda\) is preferred to be lighter than \(v_{h}\).
In the remainder of this article we will then focus on the EIS model as a well motivated scenario to study the interplay between gravitational echoes at LISA and their implications for collider physics and the properties of the neutrino sector.
In addition to three light neutrinos, the EIS model features six heavy ones \(N_{1,2,3}^{\pm}\), whose masses are
\[\mathbf{m}_{N^{\pm}}\approx\mathbf{\Lambda}\pm\frac{v_{\sigma}}{2\sqrt{2}}\left(\mathbf{y }_{\mathbf{\sigma}}+\mathbf{y}_{\mathbf{\sigma}}^{\prime}\right)\,, \tag{5}\]
\begin{table}
\begin{tabular}{c|c c c c|c} & \(L^{i}\) & \(\nu_{\rm R}^{i}\) & \(S^{i}\) & \(\sigma\) & \(H\) & Model \\ \hline & 1 & 1 & \(\times\) & \(-2\) & 0 & T1S \\ U(1)\({}_{\rm L}\) & 1 & 1 & 0 & \(-1\) & 0 & IS \\ & 1 & 1 & \(-1\) & 2 & 0 & EIS \\ \end{tabular}
\end{table}
Table 1: Quantum numbers of the scalar and neutrino sectors of three distinct realizations of Majoron models. In the first line neutrinos masses are generated via a standard type-I seesaw mechanism whereas the second and third depict the inverse and extended inverse seesaw cases respectively. The \(\times\) in the first line indicates the absence of the \(S\) field while \(i=1,2,3\) is a family index. As usual, \(L\) and \(H\) are electroweak lepton and Higgs doublets, \(\nu_{\rm R}\) and \(S\) denote SM-singlet neutrinos, whereas \(\sigma\) is a SM complex scalar singlet.
when expanded to second order in \(v_{h}\ll\Lambda\) and first order in \(v_{\sigma}\ll\Lambda\). Without loss of generality for this article's goals and for cleaner numerical calculations, one chooses \(y_{\sigma\,i}\gg y^{\prime}_{\sigma\,i}\to 0\) as well as a flavour diagonal basis.
While the heavy neutrino masses are essentially dominated by the \(\Lambda\)-scale, the lightness of the active neutrinos results from a combination of such a new-physics scale and the sizes of the Yukawa couplings \(y_{\nu}\) and \(y_{\sigma}\). In our analysis we have chosen to assign the normal hierarchy among light neutrinos as a phenomenological input.\({}^{1}\) Last but not least, it is convenient to invert \(m_{\nu}^{\text{EIS}}\) in Eq. (4), recasting it as
Footnote 1: We have also checked the case of the inverted hierarchy and found no difference in the results. Therefore, all our conclusions in this article hold for the inverted scenario.
\[y^{i}_{\sigma}=2\sqrt{2}\frac{m_{\nu_{i}}\Lambda^{2}}{v_{h}^{2}v_{\sigma}y_{ \nu_{i}}^{2}}\,, \tag{6}\]
where the parameters on the right-hand side are used as input in our numerical analysis. Note that, for ease of notation, we have dropped the EIS label in \(m_{\nu_{i}}\) in Eq. (6) and anywhere else in the remainder of this article.
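As a quick numerical cross-check of Eqs. (3)-(6), one can diagonalize the one-generation EIS mass matrix directly. The short Python/NumPy sketch below is only illustrative: the parameter values are arbitrary rather than the benchmark points scanned in this work, and it assumes \(y^{\prime}_{\sigma}\to 0\) as discussed above.

```python
import numpy as np

# Illustrative one-generation inputs (not the benchmark values used in the scan)
v_h, v_sigma = 246.0, 2000.0           # GeV
Lam          = 1.0e5                   # GeV, the Lambda scale
y_nu         = 0.1
m_nu_target  = 0.05e-9                 # GeV, i.e. 0.05 eV
y_sigma_p    = 0.0                     # y'_sigma -> 0, as assumed in the text

# Eq. (6): invert the EIS relation for y_sigma
y_sigma = 2 * np.sqrt(2) * m_nu_target * Lam**2 / (v_h**2 * v_sigma * y_nu**2)

m_D  = y_nu * v_h / np.sqrt(2)             # Dirac entry
mu_R = y_sigma_p * v_sigma / np.sqrt(2)    # nu_R - nu_R entry
mu_S = y_sigma * v_sigma / np.sqrt(2)      # S - S entry

# One-generation EIS mass matrix in the basis (nu_L, nu_R, S), cf. Eq. (3)
M = np.array([[0.0, m_D, 0.0],
              [m_D, mu_R, Lam],
              [0.0, Lam, mu_S]])

# The light eigenvalue sits ~15 orders of magnitude below Lambda, so it is obtained
# from |det M| divided by the two heavy eigenvalues instead of being read off directly,
# which would be limited by double-precision round-off.
heavy   = np.sort(np.abs(np.linalg.eigvalsh(M)))[1:]
m_light = abs(np.linalg.det(M)) / (heavy[0] * heavy[1])

print(f"y_sigma          = {y_sigma:.3e}")
print(f"m_nu (numerical) = {m_light:.3e} GeV  vs  target {m_nu_target:.3e} GeV")
# heavy pair ~ Lambda +/- v_sigma (y_sigma + y'_sigma)/(2 sqrt 2), cf. Eq. (5),
# up to O(v_h^2/Lambda) corrections neglected there
delta = v_sigma * (y_sigma + y_sigma_p) / (2 * np.sqrt(2))
print(f"heavy pair       = {heavy[0]:.6e}, {heavy[1]:.6e} GeV")
print(f"Eq. (5) estimate = {Lam - delta:.6e}, {Lam + delta:.6e} GeV")
```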
## 3 The 6D Majoron model
The standard dimension-4 Majoron model, equipped with an extended inverse seesaw mechanism, offers a good description of the generation of neutrino masses and mixing. One of the allowed lepton-number-conserving terms is the fermion bilinear \(\Lambda\bar{\nu}_{\text{R}}^{c}S\), according to the transformation properties in Tab. 1. It is therefore legitimate to ask what the origin and size of the \(\Lambda\) scale are, given its crucial role in the generation of the active neutrino masses and, in particular, in their smallness. While in our previous work we simply assumed such a parameter to lie in the sub-TeV regime, in the current analysis we go one step further, allowing it to range from 10 TeV up to 1 PeV. The unknown nature of the UV physics beyond such a scale is encoded in effective \(\text{U}(1)_{\text{L}}\)-preserving dimension-6 operators in the scalar sector.
One of the main goals of this work is to understand whether a light or ultralight Majoron can coexist with an observable spectrum of primordial GWs, triggered by the breaking of the \(\text{U}(1)_{\text{L}}\) symmetry at finite temperature. Our aim is to show how future GW information can be used to extract the preferred sizes of the \(\Lambda\) scale and the Yukawa couplings. In other words, the potential observation (or lack thereof) of primordial GWs at LISA in the coming decade can provide us with concrete hints about the scale of new physics and, in particular, of neutrino mass generation. The same philosophy can be applied to other observables, such as the trilinear Higgs coupling, the mass of new scalars and the \(\text{U}(1)_{\text{L}}\) breaking scale, as we discuss in Sec. 5.
### The scalar potential
Our model contains two scalar fields, an electroweak doublet \(H\) and a complex singlet \(\sigma\) whose quantum numbers under the \(\text{U}(1)_{\text{L}}\) symmetry can be found in Tab. 1 (third line). The tree-level scalar potential can then be written as
\[V_{\text{\tiny{0}}}(H,\sigma)=V_{\text{\tiny{SM}}}(H)+V_{\text{\tiny{4D}}}(H, \sigma)+V_{\text{\tiny{6D}}}(H,\sigma)+V_{\text{\tiny{soft}}}(\sigma)\,, \tag{7}\]
with
\[\begin{split} V_{\text{\tiny{SM}}}(H)&=\mu_{h}^{2}H^ {\dagger}H+\lambda_{h}(H^{\dagger}H)^{2}\,,\\ V_{\text{\tiny{4D}}}(H,\sigma)&=\mu_{\sigma}^{2} \sigma^{\dagger}\sigma+\lambda_{\sigma}(\sigma^{\dagger}\sigma)^{2}+\lambda_{ \sigma h}H^{\dagger}H\sigma^{\dagger}\sigma\,,\\ V_{\text{\tiny{6D}}}(H,\sigma)&=\frac{\delta_{0}}{ \Lambda^{2}}(H^{\dagger}H)^{3}+\frac{\delta_{2}}{\Lambda^{2}}(H^{\dagger}H)^{2 }\sigma^{\dagger}\sigma+\frac{\delta_{4}}{\Lambda^{2}}H^{\dagger}H(\sigma^{ \dagger}\sigma)^{2}+\frac{\delta_{6}}{\Lambda^{2}}(\sigma^{\dagger}\sigma)^{3 }\,,\\ V_{\text{\tiny{soft}}}(\sigma)&=\frac{1}{2}\mu_{b}^{2 }\left(\sigma^{2}+\sigma^{*2}\right)\,.\end{split} \tag{8}\]
The \(H\) and \(\sigma\) fields can be expanded in terms of their real valued components as
\[H=\frac{1}{\sqrt{2}}\begin{pmatrix}\omega_{1}+i\omega_{2}\\ \phi_{h}+h+i\eta\end{pmatrix}\,,\qquad\sigma=\frac{1}{\sqrt{2}}\left(\phi_{ \sigma}+h^{\prime}+iJ\right)\,, \tag{9}\]
with \(h\) and \(h^{\prime}\) denoting radial quantum fluctuations around the classical field configurations \(\phi_{h}\) and \(\phi_{\sigma}\), whereas \(\omega_{1,2}\), \(\eta\) and \(J\) represent Goldstone modes. While \(\omega_{1,2}\) and \(\eta\) are eaten by longitudinal degrees of freedom of the \(W\) and \(Z\) bosons upon electroweak symmetry breaking, the U(1)\({}_{\rm L}\) generators are global, implying that \(J\), the Majoron, is a physical real scalar field present in the particle spectrum. As we will see below, the Majoron explicitly acquires its mass from the last term in Eq. (8), \(V_{\text{soft}}(\sigma)\), which preserves a remnant \(\mathbb{Z}_{2}\subset\text{U}(1)_{\rm L}\) symmetry in the scalar sector such that the potential becomes invariant under the transformation \(J\to-J\).
The EFT description presented in this article is valid at energy scales below \(\Lambda\), such that vacuum stability conditions are considered only in the range of applicability of the EFT, _i.e._ for field values below \(\Lambda\). In our numerical analysis, we use the public software tool CosmoTransitions[19], tailored to find global and local minima via a phase tracing algorithm. In the case of parameter space points with unbounded-from-below directions, the action of the tunneling path does not converge and the point is rejected. In order to maximize the number of viable points, we use as a first guess the usual tree-level boundedness-from-below conditions \(\lambda_{h}>0\), \(\lambda_{\sigma}>0\) and \(\lambda_{\sigma h}>-2\sqrt{\lambda_{h}\lambda_{\sigma}}\).
In our previous work [16] we concluded that observable GWs within the reach of forthcoming space-based interferometers are typically favoured by a not too small quartic portal coupling \(\lambda_{\sigma h}\gtrsim 0.1\). However, LHC constraints from invisible Higgs decays [17] disfavour \(\lambda_{\sigma h}\) values larger than order \(\mathcal{O}(0.01)\), posing strong constraints on Majoron dark matter production via the freeze-out mechanism [20]. In this article we investigate whether effects coming from new physics above the electroweak (EW) scale and associated to the neutrino sector modify our previous conclusions, allowing for visible GWs at LISA or future planned experiments such as BBO or DECIGO. In our analysis we consider that the scale \(\mu_{\sigma}\) is not too far from the EW scale and below 1 TeV, while \(\Lambda\) is set to lie between 10 TeV and 1000 TeV. In the scalar sector such effects are parametrized by \(V_{{}_{\rm{6D}}}(H,\sigma)\), while in the fermion sector the scale \(\Lambda\) is linked to the generation of neutrino masses as we discuss in Sec. 2.
The classical field configurations \(\phi_{h}\) and \(\phi_{\sigma}\) acquire their vacuum expectation values (VEVs) when the scalar potential \(V_{{}_{\rm{0}}}(\phi_{h},\phi_{\sigma})\) is extremized
\[\left\langle\frac{\partial V_{{}_{\rm{0}}}}{\partial\phi_{\alpha}}\right\rangle _{\rm{vac}}=0\,,\qquad\left\langle\phi_{h}\right\rangle_{\rm{vac}}\equiv v_{ h}\simeq 246\,\text{GeV}\,,\qquad\left\langle\phi_{\sigma}\right\rangle_{\rm{ vac}}\equiv v_{\sigma}\,, \tag{11}\]
from where we obtain the minimization conditions that read as
\[\begin{split}\mu_{h}^{2}&=-v_{h}^{2}\lambda_{h}- \frac{1}{2}v_{\sigma}^{2}\lambda_{\sigma h}-\frac{3}{4}\frac{v_{h}^{4}\delta_ {0}}{\Lambda^{2}}-\frac{1}{2}\frac{v_{h}^{2}v_{\sigma}^{2}\delta_{2}}{\Lambda ^{2}}-\frac{1}{4}\frac{v_{\sigma}^{4}\delta_{4}}{\Lambda^{2}}\,,\\ \mu_{\sigma}^{2}&=-v_{\sigma}^{2}\lambda_{\sigma}- \mu_{b}^{2}-\frac{1}{2}v_{h}^{2}\lambda_{\sigma h}-\frac{1}{4}\frac{v_{h}^{4} \delta_{2}}{\Lambda^{2}}-\frac{1}{2}\frac{v_{h}^{2}v_{\sigma}^{2}\delta_{4}}{ \Lambda^{2}}-\frac{3}{4}\frac{v_{\sigma}^{4}\delta_{6}}{\Lambda^{2}}\,.\end{split} \tag{12}\]
Taking the Hessian matrix and using the tadpole expressions in Eq. (12), we can cast the mass matrix of the CP-even states as
\[\mathbf{M}^{2}=\left(\begin{array}{cc}M_{hh}^{2}&M_{\sigma h}^{2}\\ M_{\sigma h}^{2}&M_{\sigma\sigma}^{2}\end{array}\right)\,, \tag{13}\]
with
\[\begin{split} M_{hh}^{2}&=2v_{h}^{2}\lambda_{h}+\frac{3v_{h}^{4 }\delta_{0}}{\Lambda^{2}}+\frac{v_{h}^{2}v_{\sigma}^{2}\delta_{2}}{\Lambda^{2} }\,,\qquad M_{\sigma\sigma}^{2}=2v_{\sigma}^{2}\lambda_{\sigma}+\frac{v_{h}^{ 2}v_{\sigma}^{2}\delta_{4}}{\Lambda^{2}}+\frac{3v_{\sigma}^{4}\delta_{6}}{ \Lambda^{2}}\,,\\ M_{\sigma h}^{2}&=v_{h}v_{\sigma}\lambda_{\sigma h}+\frac{v_{h}^{3}v_{ \sigma}\delta_{2}}{\Lambda^{2}}+\frac{v_{h}v_{\sigma}^{3}\delta_{4}}{\Lambda^ {2}}\,.\end{split} \tag{14}\]
We can now rotate \(\mathbf{M}\) to the mass eigenbasis as follows
\[\mathbf{m}^{2}={O^{\dagger}}_{i}{}^{m}M_{mn}^{2}{O^{n}}_{j}=\begin{pmatrix}m_{h_{1 }}^{2}&0\\ 0&m_{h_{2}}^{2}\end{pmatrix}\,,\qquad\text{with}\qquad\mathbf{O}=\begin{pmatrix} \cos\alpha_{h}&\sin\alpha_{h}\\ -\sin\alpha_{h}&\cos\alpha_{h}\end{pmatrix}\,, \tag{15}\]
such that the physical basis vectors \(h_{1}\) and \(h_{2}\) are obtained in terms of the gauge eigenbasis ones, \(h\) and \(h^{\prime}\), as follows:
\[\begin{pmatrix}h_{1}\\ h_{2}\end{pmatrix}=\mathbf{O}\begin{pmatrix}h\\ h^{\prime}\end{pmatrix}\,. \tag{16}\]
In what follows we identify \(h_{1}\) with the SM-like Higgs boson with mass \(125.09\) GeV, while \(h_{2}\) is a new visible scalar that can either be heavier or lighter than the Higgs. Upon rotation to the mass eigenbasis one obtains
\[m_{h_{1,2}}^{2}=\frac{1}{2}\left[M_{hh}^{2}+M_{\sigma\sigma}^{2}\pm\left(M_{hh} ^{2}-M_{\sigma\sigma}^{2}\right)\sec(2\alpha_{h})\right]\quad\text{with}\quad \cot(2\alpha_{h})=\frac{1}{2}\frac{M_{hh}^{2}-M_{\sigma\sigma}^{2}}{M_{\sigma h }^{2}}\,. \tag{23}\]
As in our numerical analysis we will use both the scalar mixing angle \(\cos\alpha_{h}\) and the physical masses \(m_{h_{1,2}}\) as input parameters, it is convenient to invert Eq. (23) and recast it as
\[M_{hh,\sigma\sigma}^{2}=\frac{1}{2}\left[m_{h_{1}}^{2}+m_{h_{2}}^{2}\pm\left( m_{h_{1}}^{2}-m_{h_{2}}^{2}\right)\cos(2\alpha_{h})\right]\quad\text{and} \quad M_{\sigma h}^{2}=\frac{1}{2}\left(m_{h_{1}}^{2}-m_{h_{2}}^{2}\right) \sin(2\alpha_{h})\,, \tag{24}\]
which will be used to determine the elements of the Hessian matrix in terms of the physical masses and mixing angle. In the CP-odd sector, the mass of the pseudo-Goldstone boson, the Majoron, is simply given by
\[m_{J}^{2}=-2\mu_{b}^{2}\,, \tag{25}\]
implying that \(\mu_{b}^{2}<0\).
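The inversion in Eq. (24), together with Eq. (25), is straightforward to implement numerically. The short sketch below (our own illustrative helper, not part of CosmoTransitions) reconstructs the gauge-basis Hessian from the physical masses and mixing angle and checks that diagonalising it returns the inputs:

```python
import numpy as np

def hessian_from_physical(m_h1, m_h2, alpha_h):
    """Invert Eq. (24): gauge-basis Hessian entries from the physical masses
    and the scalar mixing angle (masses in GeV, alpha_h in radians)."""
    c2a, s2a = np.cos(2 * alpha_h), np.sin(2 * alpha_h)
    M_hh2 = 0.5 * (m_h1**2 + m_h2**2 + (m_h1**2 - m_h2**2) * c2a)
    M_ss2 = 0.5 * (m_h1**2 + m_h2**2 - (m_h1**2 - m_h2**2) * c2a)
    M_sh2 = 0.5 * (m_h1**2 - m_h2**2) * s2a
    return M_hh2, M_ss2, M_sh2

def mu_b_squared(m_J):
    """Soft U(1)_L-breaking mass parameter from the Majoron mass, Eq. (25)."""
    return -0.5 * m_J**2

# Consistency check: diagonalising the reconstructed Hessian reproduces the input masses.
M = hessian_from_physical(125.09, 200.0, np.arcsin(0.1))
mass2 = np.linalg.eigvalsh([[M[0], M[2]], [M[2], M[1]]])
print(np.sqrt(mass2))   # ~ [125.09, 200.0]
```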
### Inverted equations for scalar couplings
The main objective of this article is to show that it is possible to reconcile light and ultralight Majorons with observable GWs at LISA when effects from dimension-6 operators are sizeable. It is therefore necessary to reevaluate how such operators modify the invisible Higgs decay rate and confront it with experimental data.
The invisible Higgs decay rate to a pair of Majorons reads as [21]
\[\Gamma\left(h_{1}\to JJ\right)=\frac{1}{32\pi}\frac{\left(\lambda_{JJh_{1}}^{ (0)}\right)^{2}}{m_{h_{1}}}\sqrt{1-4\frac{m_{J}^{2}}{m_{h_{1}}^{2}}}\,, \tag{26}\]
with \(\lambda_{JJh_{1}}^{(0)}\) the effective triple Higgs-Majoron coupling expressed in the mass eigenbasis as
\[\lambda_{JJh_{1}}^{(0)}=\frac{v_{h}}{\Lambda^{2}}\left[(v_{h}^{2}\delta_{2}+v _{\sigma}^{2}\delta_{4}+\Lambda^{2}\lambda_{\sigma h})\cos\alpha_{h}+v_{\sigma }(v_{h}^{2}\delta_{4}+3v_{\sigma}^{2}\delta_{6}+2\Lambda^{2}\lambda_{\sigma}) \sin\alpha_{h}\right]\,, \tag{27}\]
and where the superscript \((0)\) denotes tree-level accuracy. The latest observed (expected) upper bound on the Higgs invisible decay branching ratio as reported by the CMS experiment [17] is \(0.18\) (\(0.10\)) at the 95% confidence level, and can be written in terms of the decay width as
\[\text{Br}(h_{1}\to JJ)=\frac{\Gamma\left(h_{1}\to JJ\right)}{\Gamma\left(h_{1 }\to JJ\right)+\Gamma\left(h_{1}\to\text{SM}\right)}\,, \tag{28}\]
where \(\Gamma\left(h_{1}\to\text{SM}\right)=4.07\text{ MeV}\). In the absence of dimension-6 operators, that is \(\delta_{0,2,4,6}=0\), Eq. (27) would simply reduce to \(\lambda_{JJh_{1}}^{(0)}=\frac{1}{2}v_{h}\lambda_{\sigma h}\cos\alpha_{h}\), requiring small values of the portal coupling \(\lambda_{\sigma h}\lesssim\mathcal{O}(0.01)\) in order to comply with experimental data. On the contrary, for the considered model, it is the combination \(\frac{v_{h}^{2}}{\Lambda^{2}}\delta_{2}+\frac{v_{\sigma}^{2}}{\Lambda^{2}}\delta_{4}+\lambda_{\sigma h}\) that has the dominant role instead of the individual couplings. As shown in the numerical analysis, this feature is crucial for the generation of observable primordial GW signals compatibly with light/ultralight Majorons. Such an analysis also indicates the preferred scale \(\Lambda\), identified with the heavy neutrino masses. In other words, a potential observation (or lack of it) of primordial GWs at LISA can be seen as a gravitational probe for the underlying mechanism of neutrino mass generation and its scale.
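For orientation, a minimal numerical sketch of Eqs. (26) and (28) is given below; the function names and the sample couplings are illustrative only, and simply show the size of the dimensionful coupling \(\lambda_{JJh_{1}}\) (in GeV) needed to saturate the CMS bound quoted above:

```python
import numpy as np

GAMMA_H_SM = 4.07e-3   # GeV, SM-like Higgs width quoted in the text

def gamma_h_to_JJ(lam_JJh1, m_h1=125.09, m_J=0.0):
    """Tree-level h1 -> JJ width, Eq. (26); the coupling lam_JJh1 and masses are in GeV."""
    return lam_JJh1**2 / (32.0 * np.pi * m_h1) * np.sqrt(1.0 - 4.0 * m_J**2 / m_h1**2)

def br_invisible(lam_JJh1, m_h1=125.09, m_J=0.0):
    """Invisible branching ratio, Eq. (28)."""
    g = gamma_h_to_JJ(lam_JJh1, m_h1, m_J)
    return g / (g + GAMMA_H_SM)

for lam in (0.1, 1.0, 3.35):   # GeV; Br ~ 0.18 is reached around lam ~ 3.3 GeV
    print(lam, br_invisible(lam))
```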
In total, one considers five physical observables as input parameters, \(m_{h_{1}}\), \(m_{h_{2}}\), \(\alpha_{h}\), \(\text{Br}\left(h_{1}\to JJ\right)\) and \(m_{J}\), which means that five Lagrangian parameters can be determined in terms of the physical ones. One such parameter is \(\mu_{b}^{2}\), which is related to the physical Majoron mass \(m_{J}\) via Eq. (25). One can also express \(\lambda_{\sigma h}\), \(\lambda_{\sigma}\), \(\lambda_{h}\), and \(\delta_{6}\) in terms of the above physical
observables. To do this we first use the expression for \(\lambda^{(0)}_{h_{1}JJ}\) in Eq. (3.14), replacing it in \(\Gamma\left(h_{1}\to JJ\right)\) as given in Eq. (3.13). Substituting now the decay width in Eq. (3.15), it is convenient to recast \(\text{Br}(h_{1}\to JJ)\) as follows
\[A(\text{Br})=\left[\delta_{2}+\frac{v_{h}\left(v_{\sigma}^{2}\delta_{4}+\Lambda ^{2}\lambda_{\sigma h}\right)+v_{\sigma}\left(v_{h}^{2}\delta_{4}+3v_{\sigma}^ {3}\delta_{6}+2\Lambda^{2}\lambda_{\sigma}\right)\tan\alpha_{h}}{v_{h}^{3}} \right]\cos\alpha_{h}\,, \tag{3.16}\]
where, on the left-hand-side \(A(\text{Br})\) is defined as
\[A(\text{Br})\equiv\pm 4\sqrt{2\pi}\left(1-4\frac{m_{J}^{2}}{m_{h}^{2}}\right)m_{ h}^{3/2}\frac{\Lambda^{2}}{v_{h}^{3}}\sqrt{\frac{\text{Br}(h\to JJ)\Gamma(h \to\text{SM})}{\left[1-\text{Br}(h\to JJ)\right]\left(m_{h}^{2}-4m_{J}^{2} \right)}}\,. \tag{3.17}\]
If we now combine Eq. (3.16) with Eqs. (3.7) and (3.10), we obtain the following closed set of formulas for the theory couplings:
\[\lambda_{\sigma h}= \frac{\tan\left(2\alpha_{h}\right)\left(M_{hh}^{2}-M_{\sigma \sigma}^{2}\right)}{2v_{h}v_{\sigma}}-\frac{\delta_{2}v_{h}^{2}+\delta_{4}v_{ \sigma}^{2}}{\Lambda^{2}}\,,\] \[\lambda_{\sigma}= -\frac{2A(\text{Br})v_{h}^{3}v_{\sigma}\csc\left(\alpha_{h}\right) +\Lambda^{2}\sec\left(2\alpha_{h}\right)\left(M_{\sigma\sigma}^{2}-M_{hh}^{2} \right)+\Lambda^{2}\left(-M_{hh}^{2}+M_{\sigma\sigma}^{2}-2M_{\sigma\sigma}^{2 }v_{\sigma}\right)}{4\Lambda^{2}\left(v_{\sigma}-1\right)v_{\sigma}^{2}}\] \[+\frac{\delta_{4}v_{h}^{2}}{2\Lambda^{2}}\,, \tag{3.18}\] \[\lambda_{h}= \frac{1}{2}\left(\frac{M_{hh}^{2}}{v_{h}^{2}}-\frac{3\delta_{0}v _{h}^{2}+\delta_{2}v_{\sigma}^{2}}{\Lambda^{2}}\right)\,,\] \[\delta_{6}= \frac{2A(\text{Br})v_{h}^{3}v_{\sigma}\csc\left(\alpha_{h}\right) -\Lambda^{2}\left(\sec\left(2\alpha_{h}\right)\left(M_{hh}^{2}-M_{\sigma \sigma}^{2}\right)+M_{hh}^{2}+M_{\sigma\sigma}^{2}\right)}{6(v_{\sigma}-1)v_{ \sigma}^{4}}\,,\]
with the elements of the Hessian matrix \(M_{hh,\sigma\sigma}^{2}\) expressed in terms of the physical scalar masses \(m_{h_{1,2}}\) and the mixing angle \(\alpha_{h}\), as in Eq. (3.11). The remaining parameters of the scalar potential, including the singlet VEV \(v_{\sigma}\), are kept unconstrained.
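As a simple numerical aid (our own helper, with the overall sign ambiguity of Eq. (3.17) left as an explicit user choice), the quantity \(A(\text{Br})\) entering the inverted relations of Eq. (3.18) can be evaluated as follows:

```python
import numpy as np

def A_of_Br(Br, m_h=125.09, m_J=0.0, Lam=1.0e5, v_h=246.22,
            Gamma_SM=4.07e-3, sign=+1):
    """Auxiliary quantity A(Br) of Eq. (3.17); masses, widths and scales in GeV."""
    kin = 1.0 - 4.0 * m_J**2 / m_h**2
    root = np.sqrt(Br * Gamma_SM / ((1.0 - Br) * (m_h**2 - 4.0 * m_J**2)))
    return sign * 4.0 * np.sqrt(2.0 * np.pi) * kin * m_h**1.5 * Lam**2 / v_h**3 * root

# Illustrative call saturating the observed CMS bound, with Lambda = 100 TeV
print(A_of_Br(0.18))
```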
The inverted equations obtained above are derived from tree-level relations. They are used as a first approximation in order to keep the invisible Higgs decay branching fraction under control, in the sense that they allow one to use \(\text{Br}\left(h_{1}\to JJ\right)\) as an input parameter. However, one-loop corrections to the \(\lambda^{(0)}_{h_{1}JJ}\) coupling can be important [22] and modify \(\text{Br}(h_{1}\to JJ)\). These are calculated in the effective potential approach using the formalism developed in [22]. This is justified in the zero external momentum limit, \(p^{2}\to 0\), [23] when the kinematical suppression of the \(p^{2}\neq 0\) corrections from heavy states is \(m_{h}^{2}/(4m_{\text{heavy}}^{2})<0.1\). The one-loop corrected coupling reads as
\[\lambda_{JJh_{1}}=\lambda^{(0)}_{JJh_{1}}+\frac{1}{16\pi^{2}}\left(\lambda^{N} _{JJh_{1}}+\lambda^{h_{2}}_{JJh_{1}}\right)\,, \tag{3.19}\]
with the superscript \(N\) and \(h_{2}\) denoting contributions from heavy neutrinos and the CP-even heavy Higgs boson when \(m_{h_{1}}^{2}/(4m_{h_{2}}^{2})<0.1\)[23]. Expressions for \(\lambda^{N}_{JJh_{1}}\) and \(\lambda^{h_{2}}_{JJh_{1}}\) are given in Appendix A.
### Majoron decays
It is beyond the scope of this article to study the dark-matter properties of the Majoron; however, it is instructive to identify whether it is stable, unstable or long-lived by evaluating its decay rates to photons [24; 25] and neutrinos [26].
At two-loop level, the decay rate of the Majoron to a pair of photons can be expressed as [25]
\[\Gamma(J\to\gamma\gamma)=\frac{|g_{J\gamma\gamma}|^{2}m_{J}^{3}}{64\pi}\,, \qquad\text{with}\qquad g_{J\gamma\gamma}=g_{J\gamma\gamma}^{(1)}+g_{J\gamma \gamma}^{(2)}\,. \tag{3.20}\]
There are two contributions to the coupling, which take the form
\[\begin{split} g^{(1)}_{J\gamma\gamma}=&\frac{\alpha}{8 \pi^{3}v_{h}^{2}v_{\sigma}}\sum_{l}(\mathbf{M_{D}M_{D}^{\dagger}})_{ll}F\left(\frac{ m_{J}^{2}}{4m_{l}^{2}}\right)\,,\\ g^{(2)}_{J\gamma\gamma}=&\frac{\alpha}{8\pi^{3}v_{h} ^{2}v_{\sigma}}\text{tr}(\mathbf{M_{D}M_{D}^{\dagger}})\sum_{f}N_{c}^{f}Q_{f}^{2}T _{3}^{f}F\left(\frac{m_{J}^{2}}{4m_{f}^{2}}\right)\,,\end{split} \tag{31}\]
where \(N_{c}^{f}\), \(T_{3}^{f}\), \(Q_{f}\) denote the color, isospin and electric charge of the fermion \(f\), respectively. Note that the index \(l\) runs over lepton flavors while \(f\) runs over charged fermion flavors. The Dirac mass matrix is \(\mathbf{M_{D}}=\frac{v_{h}}{\sqrt{2}}\mathbf{y_{\nu}}\), and the loop function reads as
\[F(x)\equiv-\frac{1}{4x}\left\{\log\left[1-2x+2\sqrt{x(x-1)}\right]\right\}^{2 }-1. \tag{32}\]
In a flavour diagonal basis, the interaction between Majorons and neutrinos can be parameterized as [18; 27]
\[\mathcal{L}=\frac{i}{2}\lambda_{\nu_{j}}J\overline{\nu}_{j}\gamma_{5}\nu_{j}\,, \tag{33}\]
with \(j=1,2,3\) denoting active neutrinos from the lightest, \(m_{1}\), to the heaviest, \(m_{3}\), and \(\lambda_{\nu_{j}}\equiv m_{j}/v_{\sigma}\). One of the predictions of Majoron models is that it can also decay to neutrinos if kinematically allowed. Such a decay takes place at tree-level and, for the case of the extended inverse seesaw model, its total decay width to neutrinos is given by
\[\Gamma\left(J\rightarrow\nu\nu\right)=\frac{m_{J}}{16\pi v_{\sigma}^{2}}\sum_ {i}\left(m_{\nu_{i}}^{2}\sqrt{1-\frac{4m_{\nu_{i}}^{2}}{m_{J}^{2}}}\right)\,. \tag{34}\]
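To give a feeling for the numbers involved, the sketch below evaluates Eq. (3.24) for an illustrative parameter point (the constants and the chosen point are ours, not an output of the scan) and converts the width into a rate in \(s^{-1}\), which can be compared with the cosmological bounds on the Majoron lifetime discussed later in the text:

```python
import numpy as np

HBAR_GEV_S = 6.582e-25   # hbar in GeV*s, converting a width in GeV into a rate in 1/s

def gamma_J_to_nunu(m_J, v_sigma, m_nu):
    """Tree-level Majoron width into light neutrinos, Eq. (3.24).
    m_J and v_sigma in GeV; m_nu is an iterable of light-neutrino masses in GeV.
    Only kinematically open channels (m_J > 2 m_nu_i) contribute."""
    m_nu = np.asarray(m_nu)
    open_ch = m_J > 2.0 * m_nu
    phase = np.sqrt(1.0 - 4.0 * m_nu[open_ch]**2 / m_J**2)
    return m_J / (16.0 * np.pi * v_sigma**2) * np.sum(m_nu[open_ch]**2 * phase)

# Illustrative point: m_J = 1 keV, v_sigma = 500 GeV, normal-ordered spectrum with m_1 = 1e-3 eV
m_nu = np.sqrt(np.array([1.0e-6, 1.0e-6 + 8.0e-5, 1.0e-6 + 8.0e-5 + 3.0e-3])) * 1.0e-9  # GeV
print(gamma_J_to_nunu(1.0e-6, 500.0, m_nu) / HBAR_GEV_S, "s^-1")
```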
### Higgs trilinear coupling
The presence of SFOPTs can result from sizeable scalar trilinear interactions [28] responsible for inducing a potential barrier between the false and true vacua. Particularly relevant is the Higgs boson triple coupling, \(\lambda_{h_{1}h_{1}h_{1}}\), which can simultaneously be probed at colliders and have an impact on the spectrum of primordial GWs [29]. For instance, at the LHC, the dominant process to measure \(\lambda_{h_{1}h_{1}h_{1}}\) is the gluon fusion into a pair of Higgs bosons [30; 31; 32; 33; 34].
Similarly to \(\lambda_{h_{1}JJ}\), the calculation of the Higgs trilinear coupling is performed at one-loop in the effective potential approach. One can express it according to
\[\lambda_{h_{1}h_{1}h_{1}}=\lambda_{h_{1}h_{1}h_{1}}^{(0)}+\frac{1}{16\pi^{2}} \left(\lambda_{h_{1}h_{1}h_{1}}^{t}+\lambda_{h_{1}h_{1}h_{1}}^{N}+\lambda_{h_{ 1}h_{1}h_{1}}^{h_{2}}\right) \tag{35}\]
with the superscripts \(t\), \(N\) and \(h_{2}\) denoting one-loop contributions from the top quark, heavy neutrinos and the second CP-even Higgs boson when \(m_{h_{1}}^{2}/(4m_{h_{2}}^{2})<0.1\). Note that we also include the top quark in the calculation since the kinematical suppression from the \(p^{2}\neq 0\) corrections is \(m_{h}^{2}/(4m_{t}^{2})\approx 0.1\). We show in Appendix A the expressions for the right-hand side of Eq. (35).
## 4 Thermal effective potential and Gravitational Waves
As the universe expands and cools down, the configuration of its temperature dependent vacuum state changes, typically giving rise to transitions between symmetric and broken phases. In this article we specialize to the case of the electroweak and lepton number symmetries, and study the impact on the stochastic primordial gravitational wave background.
### The one-loop \(T\)-dependent effective potential
The shape of the effective potential depends on the symmetries and on the field content of the underlying theory. Therefore, for the purpose of exploring the features of phase-transitions in the model under consideration, we construct the one-loop temperature dependent effective potential in the following form [35; 36]:
\[V_{\rm eff}(T)=V_{0}+V_{\rm CW}^{(1)}+\Delta V(T)+V_{\rm ct}\,, \tag{10}\]
where \(V_{0}\) denotes the classical (tree-level) potential as given in Eq. (11), \(V_{\rm CW}^{(1)}\) is the zero-temperature, one-loop, Coleman-Weinberg (CW) potential, \(\Delta V(T)\) describes the leading order thermal corrections and \(V_{\rm ct}\) is the counter-term potential.
The CW potential, expressed in the Landau gauge, reads as
\[V_{\rm CW}^{(1)}=\sum_{i}(-1)^{F_{i}}n_{i}\frac{m_{i}^{2}(\phi_{\alpha})}{64 \pi^{2}}\left(\log\left[\frac{m_{i}^{2}(\phi_{\alpha})}{Q^{2}}\right]-c_{i} \right)\,, \tag{11}\]
where \(m_{i}^{2}(\phi_{\alpha})\) is the \(\phi_{\alpha}\)-field dependent mass of the particle \(i\), \(n_{i}\) is the number of degrees of freedom (d.o.f.'s) for a given particle \(i\), \(F=0(1)\) for bosons (fermions), \(Q\) is the renormalization scale and, in the \(\overline{\rm MS}\)-scheme, the constant \(c_{i}\) is equal to \(3/2\) for each d.o.f. of scalars, fermions and longitudinally polarised gauge bosons, and to \(1/2\) for transversely polarised gauge boson d.o.f.'s.
In what follows we fix the renormalization scale to the EW VEV, \(Q\equiv v_{h}\). In our analysis the scalar boson masses, the U(1)\({}_{\rm L}\) breaking VEV \(v_{\sigma}\) and the nucleation temperatures are not far from \(v_{h}\). Therefore, a renormalization-group improved treatment, as required in scenarios with a large separation of scales, e.g. models with classical conformal symmetry [37; 38], is unnecessary in the EIS model.
For a simpler analysis we require that the tree-level minimum conditions and masses are identical to their one-loop values. The counterterm potential is then introduced as
\[V_{\rm ct}=\frac{\partial V_{0}}{\partial p_{i}}\delta_{p_{i}}\,, \tag{12}\]
with \(p_{i}\) denoting the parameters in \(V_{0}\) and \(\delta_{p_{i}}\) the corresponding parameter counterterms. The renormalization conditions necessary to fulfil the above requirements read as
\[\left\langle\frac{\partial V_{\rm ct}}{\partial\phi_{i}}\right\rangle=\left \langle-\frac{\partial V_{\rm CW}^{(1)}}{\partial\phi_{i}}\right\rangle\,, \qquad\left\langle\frac{\partial^{2}V_{\rm ct}}{\partial\phi_{i}\partial\phi _{j}}\right\rangle=\left\langle-\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial \phi_{i}\partial\phi_{j}}\right\rangle\,,\qquad{\rm with}\qquad\phi_{i}=\{ \phi_{h},\phi_{\sigma}\}\;, \tag{13}\]
with the following solutions
\[\begin{split}\delta_{\mu_{h}^{2}}&=\frac{1}{2}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{h}^{2}}+\frac{1}{2}\frac{v_{\sigma}}{v_{h}}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{h}\partial v_{\sigma}}-\frac{3}{2}\frac{1}{v_{h}}\frac{\partial V_{\rm CW}^{(1)}}{\partial v_{h}}+a\frac{3}{4}\frac{v_{h}^{4}}{\Lambda^{2}}+b\frac{1}{2}\frac{v_{h}^{2}v_{\sigma}^{2}}{\Lambda^{2}}+c\frac{1}{4}\frac{v_{\sigma}^{4}}{\Lambda^{2}}\,,\\ \delta_{\mu_{\sigma}^{2}}&=\frac{1}{2}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{\sigma}^{2}}+\frac{1}{2}\frac{v_{h}}{v_{\sigma}}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{h}\partial v_{\sigma}}-\frac{3}{2}\frac{1}{v_{\sigma}}\frac{\partial V_{\rm CW}^{(1)}}{\partial v_{\sigma}}+b\frac{1}{4}\frac{v_{h}^{4}}{\Lambda^{2}}+c\frac{1}{2}\frac{v_{h}^{2}v_{\sigma}^{2}}{\Lambda^{2}}+d\frac{3}{4}\frac{v_{\sigma}^{4}}{\Lambda^{2}}-f\,,\\ \delta_{\lambda_{h}}&=-\frac{1}{2}\frac{1}{v_{h}^{2}}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{h}^{2}}+\frac{1}{2}\frac{1}{v_{h}^{3}}\frac{\partial V_{\rm CW}^{(1)}}{\partial v_{h}}-a\frac{3}{2}\frac{v_{h}^{2}}{\Lambda^{2}}-b\frac{1}{2}\frac{v_{\sigma}^{2}}{\Lambda^{2}}\,,\\ \delta_{\lambda_{\sigma}}&=-\frac{1}{2}\frac{1}{v_{\sigma}^{2}}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{\sigma}^{2}}+\frac{1}{2}\frac{1}{v_{\sigma}^{3}}\frac{\partial V_{\rm CW}^{(1)}}{\partial v_{\sigma}}-c\frac{1}{2}\frac{v_{h}^{2}}{\Lambda^{2}}-d\frac{3}{2}\frac{v_{\sigma}^{2}}{\Lambda^{2}}\,,\\ \delta_{\lambda_{\sigma h}}&=-\frac{1}{v_{h}v_{\sigma}}\frac{\partial^{2}V_{\rm CW}^{(1)}}{\partial v_{h}\partial v_{\sigma}}-b\frac{v_{h}^{2}}{\Lambda^{2}}-c\frac{v_{\sigma}^{2}}{\Lambda^{2}}\,,\\ \delta_{\delta_{0}}&=a\,,\quad\delta_{\delta_{2}}=b\,,\quad\delta_{\delta_{4}}=c\,,\quad\delta_{\delta_{6}}=d\,,\quad\delta_{\mu_{b}^{2}}=f\,,\end{split} \tag{14}\]
with \(a\), \(b\), \(c\), \(d\) and \(f\) being arbitrary dimensionless constants. In our numerical analysis we fix \(a=b=c=d=f=0\).
One-loop thermal corrections are given by [35]
\[\Delta V(T)=\frac{T^{4}}{2\pi^{2}}\left\{\sum_{b}n_{b}J_{B}\left[\frac{m_{b}^{2}( \phi_{\alpha})}{T^{2}}\right]-\sum_{f}n_{f}J_{F}\left[\frac{m_{f}^{2}(\phi_{ \alpha})}{T^{2}}\right]\right\}\,, \tag{4.6}\]
where \(J_{B}\) and \(J_{F}\) are the thermal integrals for bosons and fermions, respectively, provided by
\[J_{B/F}(y^{2})=\int_{0}^{\infty}dx\,x^{2}\log\left(1\mp\exp[-\sqrt{x^{2}+y^{2} }]\right)\,. \tag{4.7}\]
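For reference, the thermal integrals of Eq. (4.7) can be evaluated by direct numerical quadrature; the short sketch below (a simple stand-in for the interpolations or expansions normally used in production codes) also checks the familiar massless limits:

```python
import numpy as np
from scipy.integrate import quad

def J_thermal(y2, boson=True):
    """Thermal integrals J_B (bosons) and J_F (fermions) of Eq. (4.7), as a function of y^2 = m^2/T^2."""
    sign = -1.0 if boson else +1.0   # bosons: log(1 - exp(...)); fermions: log(1 + exp(...))
    integrand = lambda x: x**2 * np.log(1.0 + sign * np.exp(-np.sqrt(x**2 + y2)))
    return quad(integrand, 0.0, np.inf, limit=200)[0]

print(J_thermal(0.0, boson=True),  -np.pi**4 / 45.0)       # both ~ -2.1646
print(J_thermal(0.0, boson=False), 7.0 * np.pi**4 / 360.0)  # both ~ +1.8941
```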
Corrections for the first non-trivial order of the thermal expansion \(\sim(m/T)^{2}\) have the following form
\[\Delta V^{(1)}(T)|_{\text{L.O.}}=\frac{T^{2}}{24}\left\{\text{Tr}\left[M_{ \alpha\beta}^{2}(\phi_{\alpha})\right]+\sum_{i=W,Z,\gamma}n_{i}m_{i}^{2}(\phi _{\alpha})+\sum_{i=f_{i}}\frac{n_{i}}{2}m_{i}^{2}(\phi_{\alpha})\right\}\,, \tag{4.8}\]
where in the last sum all SM fermions plus six heavy neutrinos, \(N_{1,2,3}^{+},N_{1,2,3}^{-}\), are implicit. The first term in Eq. (4.8) denotes the trace of the field-dependent scalar Hessian matrix \(M_{\alpha\beta}^{2}(\phi_{\alpha})\), which is a basis invariant quantity. In practical calculations, it is convenient to use the diagonal elements of the gauge eigenbasis mass form that, for the considered model, are given by \(M_{hh}^{2}\) and \(M_{\sigma\sigma}^{2}\) in Eq. (3.7) with the replacement of the VEVs by the classical field configurations \(v_{h}\to\phi_{h}\) and \(v_{\sigma}\to\phi_{\sigma}\). The \(n_{i}\) coefficients in Eq. (4.8) represent the number of d.o.f for a given particle, as indicated by the sums. In particular, for the SM gauge bosons (\(W,Z\) and transversely polarised photon \(\gamma\)) we have
\[n_{W}=6,\qquad n_{Z}=3,\qquad n_{\gamma}=2\,, \tag{4.9}\]
whereas for scalars and the longitudinally polarized photon (\(A_{L}\)) we have
\[n_{s}=6,\qquad n_{A_{L}}=1\,, \tag{4.10}\]
while for fermions
\[n_{u,d,c,s,t,b}=12,\qquad n_{e,\mu,\tau}=4\,,\qquad n_{\nu_{1,2,3}}=n_{N_{1,2,3}^{\pm}}=2\,. \tag{4.11}\]
The appearance of \(T^{2}\) terms in the thermal expansion signals the restoration of symmetries broken at zero temperature. Furthermore, such \(T^{2}\) contributions to the effective potential can lead to a breakdown of perturbation theory in the close vicinity of the critical temperature. To address this, an all-order resummation via the so-called daisy or ring diagrams is required [39; 40; 41; 42]. In practice, such a procedure can be achieved by taking into account finite temperature corrections to the field-dependent masses entering the effective potential of Eq. (4.1) as follows
\[\mu_{\alpha}^{2}(T)=\mu_{\alpha}^{2}+c_{\alpha}T^{2}\,. \tag{4.12}\]
The \(c_{\alpha}\) coefficients can be calculated from Eq. (4.8) as
\[c_{\alpha}=\frac{1}{T^{2}}\frac{\partial^{2}\Delta V^{(1)}(T,\phi_{h},\phi_{\sigma})\big|_{\text{L.O.}}}{\partial\phi_{\alpha}^{2}}\,, \tag{4.13}\]
where for the 6D EIS model one has
\[\begin{split} c_{h}&=\frac{3}{16}g^{2}+\frac{1}{16}{ g^{\prime}}^{2}+\frac{1}{2}\lambda_{h}+\frac{1}{12}\lambda_{\sigma h}+\frac{1}{4} \sum_{q}y_{q}^{2}+\frac{1}{12}\sum_{\ell}y_{\ell}^{2}+\frac{1}{24}K_{\nu}+K_{ \Lambda}^{h}\,,\\ c_{\sigma}&=\frac{1}{3}\lambda_{\sigma}+\frac{1}{6} \lambda_{\sigma h}+\frac{1}{24}K_{\sigma}+K_{\Lambda}^{\sigma}\,,\end{split} \tag{4.14}\]
with \(q\) and \(\ell\) denoting SM quarks and charged leptons respectively. Notice that, in practice, only the third generation Yukawa couplings play a sizeable role. In Eq. (4.14) one also defines
\[K_{\nu} =\sum_{i=1}^{3}y_{\nu_{i}}^{\rm eff}\quad\text{with}\quad y_{\nu_{ i}}^{\rm eff}=\frac{\phi_{h}\phi_{\sigma}}{2}\frac{y_{\nu_{i}}^{2}y_{\sigma_{i}}}{ \Lambda^{2}}\quad\text{and}\quad m_{\nu_{i}}(\phi_{h})=\frac{\phi_{h}}{\sqrt{2 }}y_{\nu_{i}}^{\rm eff}\qquad K_{\sigma}=\sum_{i=1}^{3}y_{\sigma_{i}}^{2} \tag{4.15}\] \[K_{\Lambda}^{h} =\frac{3\phi_{h}^{2}}{\Lambda^{2}}\delta_{0}+\frac{\phi_{h}^{2}+ \phi_{\sigma}^{2}}{4\Lambda^{2}}\delta_{2}+\frac{\phi_{\sigma}^{2}}{6\Lambda^ {2}}\delta_{4}\qquad K_{\Lambda}^{\sigma}=\frac{\phi_{h}^{2}}{4\Lambda^{2}} \delta_{2}+\frac{\phi_{h}^{2}}{6\Lambda^{2}}\delta_{4}+\frac{\phi_{\sigma}^{2 }}{2\Lambda^{2}}\delta_{4}+\frac{9\phi_{\sigma}^{2}}{4\Lambda^{2}}\delta_{6}\,.\]
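A compact implementation of Eqs. (4.14) and (4.15) might look as follows; this is a sketch with our own function names, in which only the third-generation SM Yukawa couplings are retained, as stated above:

```python
def K_Lambda(phi_h, phi_s, Lam, d0, d2, d4, d6):
    """Dimension-6 contributions to the Debye coefficients, Eq. (4.15)."""
    KLh = 3.0 * phi_h**2 / Lam**2 * d0 + (phi_h**2 + phi_s**2) / (4.0 * Lam**2) * d2 \
        + phi_s**2 / (6.0 * Lam**2) * d4
    KLs = phi_h**2 / (4.0 * Lam**2) * d2 + phi_h**2 / (6.0 * Lam**2) * d4 \
        + phi_s**2 / (2.0 * Lam**2) * d4 + 9.0 * phi_s**2 / (4.0 * Lam**2) * d6
    return KLh, KLs

def debye_coefficients(g, gp, lam_h, lam_s, lam_sh, y_t, y_b, y_tau,
                       K_nu, K_sigma, K_Lam_h, K_Lam_s):
    """Leading T^2 (Debye) coefficients c_h and c_sigma of Eq. (4.14);
    K_nu, K_sigma and the K_Lambda blocks of Eq. (4.15) are passed in pre-computed."""
    c_h = (3.0 / 16.0) * g**2 + (1.0 / 16.0) * gp**2 + 0.5 * lam_h \
        + lam_sh / 12.0 + 0.25 * (y_t**2 + y_b**2) + y_tau**2 / 12.0 \
        + K_nu / 24.0 + K_Lam_h
    c_s = lam_s / 3.0 + lam_sh / 6.0 + K_sigma / 24.0 + K_Lam_s
    return c_h, c_s
```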
Similarly, the temperature dependence of the vector boson masses at leading order is introduced by adding \(T^{2}\) corrections to the diagonal terms of the gauge boson Hessian matrix. Note that only the longitudinal polarizations, including that of the photon, \(\{W_{L}^{+},W_{L}^{-},Z_{L},A_{L}\}\), receive finite corrections in a thermal medium, such that the mass spectrum is obtained upon diagonalization of the \(T^{2}\)-corrected mass form
\[M_{\rm gauge}^{2}(\phi_{h};T)=M_{\rm gauge}^{2}(\phi_{h})+\frac{11}{6}T^{2} \left(\begin{array}{cccc}g^{2}&0&0&0\\ 0&g^{2}&0&0\\ 0&0&g^{2}&0\\ 0&0&0&g^{\prime 2}\end{array}\right)\,, \tag{4.16}\]
whose eigenvalues of the zero-temperature mass matrix \(M_{\rm gauge}^{2}(\phi_{h})\) read as
\[m_{W}^{2}(\phi_{h})=\frac{\phi_{h}^{2}}{4}g^{2}\,,\quad m_{Z}^{2}(\phi_{h})= \frac{\phi_{h}^{2}}{4}(g^{2}+{g^{\prime}}^{2})\,. \tag{4.17}\]
Rotating to the physical basis one obtains the following mass spectrum
\[m_{W_{L}}^{2}(\phi_{h};T)=m_{W}^{2}(\phi_{h})+\frac{11}{6}g^{2}T ^{2}\,, \tag{4.18}\] \[m_{Z_{L},A_{L}}^{2}(\phi_{h};T)=\frac{1}{2}m_{Z}^{2}(\phi_{h})+ \frac{11}{12}(g^{2}+{g^{\prime}}^{2})T^{2}\pm\mathcal{D}\,, \tag{4.19}\]
with the field-dependent \(W,Z\) boson masses given in Eq. (4.17), and
\[\mathcal{D}^{2}=\Big{(}\frac{1}{2}m_{Z}^{2}(\phi_{h})+\frac{11}{12}(g^{2}+{g^ {\prime}}^{2})T^{2}\Big{)}^{2}-\frac{11}{12}g^{2}{g^{\prime}}^{2}T^{2}\Big{(} \phi_{h}^{2}+\frac{11}{3}T^{2}\Big{)}\,. \tag{4.20}\]
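For completeness, Eqs. (4.17)-(4.20) can be coded up directly; the snippet below uses illustrative default values for the electroweak gauge couplings (our choice, not a fit) and returns the three longitudinal thermal masses squared:

```python
import numpy as np

def gauge_thermal_masses(phi_h, T, g=0.65, gp=0.36):
    """Longitudinal W, Z and photon thermal masses squared, Eqs. (4.18)-(4.20)."""
    mW2 = 0.25 * phi_h**2 * g**2                 # Eq. (4.17)
    mZ2 = 0.25 * phi_h**2 * (g**2 + gp**2)
    mWL2 = mW2 + (11.0 / 6.0) * g**2 * T**2      # Eq. (4.18)
    a = 0.5 * mZ2 + (11.0 / 12.0) * (g**2 + gp**2) * T**2
    D = np.sqrt(a**2 - (11.0 / 12.0) * g**2 * gp**2 * T**2 * (phi_h**2 + (11.0 / 3.0) * T**2))
    return mWL2, a + D, a - D                    # m^2_{W_L}, m^2_{Z_L}, m^2_{A_L}

print(gauge_thermal_masses(246.22, 100.0))
```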
New techniques that significantly improve the calculation of sizeable higher-order thermal effects, in particular the resummed effective field theory constructed via dimensional reduction [43; 44; 45; 46; 47], have recently been developed. We leave the inclusion of such methods in our studies for future work.
### Dynamics of the Phase Transition
Phase transitions can be described as dynamical processes occurring via non-perturbative solutions of the equations of motion. For the low-\(T\) regime they are essentially realized through quantum tunneling, or instantons [48; 49], whereas at high temperature these processes are dominated by thermal jumps. The formalism behind both cases is identical and can be described by a classical motion in Euclidean space. In particular, the classical action reads as [50]
\[\hat{S}_{3}(\hat{\phi},T)=4\pi\int_{0}^{\infty}\mathrm{d}r\,r^{2}\left\{\frac{ 1}{2}\left(\frac{\mathrm{d}\hat{\phi}}{\mathrm{d}r}\right)^{2}+V_{\rm eff}( \hat{\phi},T)\right\}\,, \tag{4.21}\]
where the full one-loop effective potential is specified in Eq. (4.1) and \(\hat{\phi}\) is the solution along the path that minimizes the action [19; 50].
The transition rate from the false to the true vacuum is, to a good approximation, given by [51, 52]
\[\Gamma(T)\approx T^{4}\left(\frac{\hat{S}_{3}}{2\pi T}\right)^{3/2}e^{-\hat{S}_{3 }/T}\,. \tag{4.22}\]
There are three relevant temperatures characterizing the phase transition. When the Universe reaches a temperature for which multiple minima are degenerate, one defines the critical temperature \(T_{c}\). As the Universe cools down below \(T_{c}\), thermal fluctuations can become large enough to nucleate one true vacuum bubble per cosmological horizon. The nucleation temperature, \(T_{n}\), is then defined as the solution of
\[\int_{T_{n}}^{T_{c}}\frac{dT}{T}\frac{\Gamma(T)}{H(T)^{4}}=1\,, \tag{4.23}\]
with \(H(T)\) the Hubble rate at temperature \(T\). In an alternative definition [53], \(T_{n}\) is determined as the temperature at which the transition rate matches the Hubble rate, i.e.
\[\frac{\Gamma(T_{n})}{H(T_{n})^{4}}=1\,, \tag{4.24}\]
with the total Hubble rate accounting for the radiation and vacuum energy density contributions given by [54]
\[H(T)^{2}=\frac{g_{*}(T)\pi^{2}T^{4}}{90M_{\rm Pl}^{2}}+\frac{\Delta V}{3M_{\rm Pl }^{2}}\,, \tag{4.25}\]
where \(g_{*}(T)\) represents the number of relativistic degrees of freedom at a temperature \(T\). In the presence of strong supercooling, where the strength of the phase transition \(\alpha\) can be several orders of magnitude above 1, the vacuum energy contribution dominates [37]. Conversely, for scenarios with slight or no supercooling, where according to the definition in [53]\(\alpha\lesssim 0.1\), the radiation component will dominate. Eq. (4.24) can then be approximated to the well known relation
\[\frac{\hat{S}_{3}(T_{n})}{T_{n}}\approx 140\,, \tag{4.26}\]
valid for temperatures of the order of the EW scale and small \(\alpha\). We implement Eq. (4.24) in our calculations neglecting the vacuum contribution when \(\alpha\lesssim\mathcal{O}(0.1)\).
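As an illustration of how the nucleation condition of Eq. (4.24) can be solved in practice, a minimal sketch is shown below; the action used here is a hand-written toy function rather than one obtained from the potential of Sec. 4.1, and the constant vacuum-energy term is an assumption made for brevity:

```python
import numpy as np
from scipy.optimize import brentq

M_PL = 2.435e18   # reduced Planck mass in GeV

def hubble2(T, dV, g_star=108.75):
    """Hubble rate squared, Eq. (4.25); dV is the vacuum-energy difference in GeV^4."""
    return g_star * np.pi**2 * T**4 / (90.0 * M_PL**2) + dV / (3.0 * M_PL**2)

def log_gamma(T, S3_over_T):
    """Logarithm of the thermal tunnelling rate per unit volume, Eq. (4.22)."""
    s = S3_over_T(T)
    return 4.0 * np.log(T) + 1.5 * np.log(s / (2.0 * np.pi)) - s

def nucleation_T(S3_over_T, dV, T_lo, T_hi):
    """Solve Gamma(T_n) = H(T_n)^4, Eq. (4.24), by bracketed root finding;
    S3_over_T is a callable returning the (smoothed) euclidean action over T."""
    f = lambda T: log_gamma(T, S3_over_T) - 2.0 * np.log(hubble2(T, dV))
    return brentq(f, T_lo, T_hi)

# Toy action diverging at T_c = 120 GeV (illustrative only, not from a real potential)
S3T = lambda T: 140.0 * (0.6 / (1.0 - T / 120.0))**2 * (T / 100.0)
print(nucleation_T(S3T, dV=1.0e8, T_lo=40.0, T_hi=119.0))
```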
Finally, the percolation temperature \(T_{*}\) is defined when \(34\%\) of the false vacuum has transited to the true one. This condition results in the presence of a large structure, denoted as percolation cluster, where the true vacuum spans over the whole Universe such that it cannot collapse back into the false vacuum. The probability of finding a point in the false vacuum reads as [51]
\[P(T)=e^{-I(T)},\qquad\qquad\qquad I(T)=\frac{4\pi v_{b}^{3}}{3}\int_{T}^{T_{c} }\frac{\Gamma(T^{\prime})dT^{\prime}}{T^{\prime 4}H(T^{\prime})}\left(\int_{T}^{T^{ \prime}}\frac{d\tilde{T}}{H(\tilde{T})}\right)^{3}\,, \tag{4.27}\]
and the percolation temperature is obtained by solving \(I(T_{*})=0.34\) or, equivalently, \(P(T_{*})=0.7\).
One of the key quantities relevant for the study of primordial GWs is the so called order parameter. It can be defined as
\[\frac{\Delta v_{\alpha}}{T_{*}}=\eta_{\alpha}\,,\qquad\text{with}\qquad\Delta v _{\alpha}=|v_{\alpha}(T_{*}+\delta T)-v_{\alpha}(T_{*}-\delta T)|\,\qquad\alpha=\{h,\sigma\}\,, \tag{4.28}\]
such that \(\Delta v_{\alpha}\) is the absolute value of the difference between \(v_{\alpha}(T)\) computed before and after a phase transition, with \(\delta T\) taken to be sufficiently small, i.e. \(\delta T\ll T_{n}\). Note that \(\eta_{\alpha}\) can be regarded as a measure of the strength of the phase transition. In particular, in the context of EW baryogenesis, a phase transition is said to be strong whenever the criterion \(\eta_{h}\gtrsim 1\) is obeyed. However, for the
case of GWs the most commonly used parameter to define the strength of the phase transition is the difference in the trace anomaly [8; 55]
\[\alpha=\frac{1}{\rho_{\gamma}}\left[\Delta V-\frac{T}{4}\left(\frac{\partial \Delta V}{\partial T}\right)\right]\,, \tag{4.29}\]
where \(\Delta V=V_{i}-V_{f}\) with \(V_{i}\equiv V_{\rm eff}(\phi^{i}_{h,\sigma};T_{*})\) and \(V_{f}\equiv V_{\rm eff}(\phi^{f}_{h,\sigma};T_{*})\) the values of the effective potential in the initial (metastable) and final (stable) phases respectively, and
\[\rho_{\gamma}=g_{*}\frac{\pi^{2}}{30}T_{n}^{4}\,,\qquad g_{*}\simeq 108.75\,, \tag{4.30}\]
is the energy density of the radiation medium at the bubble nucleation epoch found in terms of the number of relativistic d.o.f. In the determination of \(g_{*}\), besides the SM particles we have only considered the Majoron \(J\) and the new CP-even Higgs boson \(h_{2}\) provided that heavy neutrinos are non-relativistic at \(T_{n}\), i.e. \(m_{N_{i}}\gg T_{n}\).
Another relevant parameter characterizing a phase transition is the inverse time scale, which, in units of the Hubble parameter \(H\) reads as
\[\frac{\beta}{H}=T_{*}\left.\frac{\partial}{\partial T}\left(\frac{\dot{S}_{3}} {T}\right)\right|_{T_{*}}\,. \tag{4.31}\]
The \(\beta/H\) parameter quantifies the inverse duration of the phase transition: smaller values of \(\beta/H\) typically correspond to a larger energy density amplitude of the stochastic spectrum of primordial GWs.
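Both quantities are straightforward to evaluate once the effective potential and a smoothed \(\hat{S}_{3}/T\) curve are available. A schematic implementation is shown below, where a central finite difference stands in for the polynomial fit mentioned in Sec. 5 and, for simplicity, the same temperature is used in Eqs. (4.29) and (4.30):

```python
import numpy as np

def alpha_strength(dV, ddV_dT, T_star, g_star=108.75):
    """Transition strength from the trace anomaly, Eqs. (4.29)-(4.30);
    dV = V_i - V_f and its temperature derivative are evaluated at T_star."""
    rho_gamma = g_star * np.pi**2 / 30.0 * T_star**4
    return (dV - 0.25 * T_star * ddV_dT) / rho_gamma

def beta_over_H(S3_over_T, T_star, eps=1.0e-3):
    """Inverse duration in Hubble units, Eq. (4.31), via a central finite difference."""
    dT = eps * T_star
    return T_star * (S3_over_T(T_star + dT) - S3_over_T(T_star - dT)) / (2.0 * dT)
```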
### Primordial Gravitational Waves: a semi-analytical approximation
Strong FOPTs are violent processes occurring in the early Universe and are expected to leave a signature in the form of a stochastic background of primordial GWs. In the first approximation, the primordial stochastic GW background is statistically isotropic, stationary and Gaussian. Furthermore, both the \(+\) and the \(\times\) polarizations are assumed to have the same spectrum and are mutually uncorrelated. The GW power spectrum is given in terms of the energy-density of the gravitational radiation per logarithmic frequency as
\[h^{2}\Omega_{\rm GW}(f)\equiv\frac{h^{2}}{\rho_{c}}\frac{\partial\rho_{\rm GW }}{\partial\log f}\,, \tag{4.32}\]
with \(\rho_{c}\) the critical energy density today. This observable is independent of uncertainties in the measurement of the Hubble parameter. For details on the derivation of Eq. (4.32) see for example [56; 57; 58] and references therein.
In our analysis we are interested in a regime where vacuum bubbles undergo supersonic expansion. In turn, the leading contribution to the GW power spectrum results from sound waves (SW) [59] generated by the so-called supersonic detonations. The most up to date understanding of the SW contribution to the SGWB production is discussed in [60]. Other sources of GW production are collisions between bubble walls [61] and magnetohydrodynamic (MHD) turbulence in the plasma [62]. However, following the discussion in [7; 8], the collision component is typically very inefficient except for the case of very strong FOPTs with runaway bubbles, where the wall velocity undergoes unbounded acceleration, i.e. \(v_{w}\to 1\)[63; 64; 37]. In such scenarios \(\alpha\) may become orders of magnitude larger than unity and the efficiency of bubble collisions in producing gravitational radiation surpasses that of sound waves. The impact of MHD turbulence is also neglected in the current work and left for future studies, when a better understanding of the importance of such a component becomes available. Nonetheless, using the formulas in [65], the MHD component is found to have no impact on the peak amplitude and frequency of the GW power spectrum. Therefore we will solely consider the sound wave contribution in the remainder of our analysis.
While it is quite challenging to provide a precise estimate of the bubble wall velocity [66, 67], in our numerical studies we fix it to \(v_{w}=0.95\). The regime of supersonic detonations is realized by further requiring \(v_{w}>v_{\rm J}\), with \(v_{\rm J}\) the Chapman-Jouguet velocity defined as
\[v_{\rm J}=\frac{1}{1+\alpha}\left(c_{s}+\sqrt{\alpha^{2}+\frac{2}{3}\alpha} \right)\,. \tag{4.33}\]
The SGWB spectrum expressed in terms of the peak amplitude \(h^{2}\Omega_{\rm GW}^{\rm peak}\) and the spectral function reads as
\[h^{2}\Omega_{\rm GW}=h^{2}\Omega_{\rm GW}^{\rm peak}\left(\frac{4}{7}\right)^{-\frac{7}{2}}\left(\frac{f}{f_{\rm peak}}\right)^{3}\left[1+\frac{3}{4}\left(\frac{f}{f_{\rm peak}}\right)^{2}\right]^{-\frac{7}{2}}\,, \tag{4.34}\]
where \(f_{\rm peak}\) is the peak-frequency. Semi-analytic expressions for peak-amplitude and peak-frequency in terms of \(\beta/H\) and \(\alpha\) can be found in Ref. [60] and can be summarised as follows
\[\begin{split}& f_{\rm peak}=26\times 10^{-6}\left(\frac{1}{HR}\right)\left(\frac{T_{*}}{100~{\rm GeV}}\right)\left(\frac{g_{*}}{100}\right)^{\frac{1}{6}}{\rm Hz}\,,\\ & h^{2}\Omega_{\rm GW}^{\rm peak}=1.159\times 10^{-7}\left(\frac{100}{g_{*}}\right)\left(\frac{HR}{\sqrt{c_{s}}}\right)^{2}K^{\frac{3}{2}}\qquad{\rm for}\qquad H\tau_{\rm sh}=\frac{2}{\sqrt{3}}\frac{HR}{K^{1/2}}<1\,,\\ & h^{2}\Omega_{\rm GW}^{\rm peak}=1.159\times 10^{-7}\left(\frac{100}{g_{*}}\right)\left(\frac{HR}{c_{s}}\right)^{2}K^{2}\qquad{\rm for}\qquad H\tau_{\rm sh}=\frac{2}{\sqrt{3}}\frac{HR}{K^{1/2}}\simeq 1\,,\end{split} \tag{4.35}\]
where \(\tau_{\rm sh}\) is the fluid turnover time or the shock formation time, which quantifies the time the GW source was active. In these expressions, \(c_{s}=1/\sqrt{3}\) is the speed of sound, \(R\) is the mean bubble separation,
\[K=\frac{\kappa\alpha}{1+\alpha} \tag{4.36}\]
is the fraction of the kinetic energy in the fluid to the total bubble energy, and
\[HR=\frac{H}{\beta}\left(8\pi\right)^{\frac{1}{3}}\max\left(v_{b},c_{s}\right)\,. \tag{4.37}\]
The efficiency factor \(\kappa\) is taken from the numerical fits in the appendix of [68]. One can also express the peak amplitude in terms of the peak frequency, i.e. \(h^{2}\Omega_{\rm GW}^{\rm peak}(f_{\rm peak})\), by solving Eq. (4.35) with respect to \(HR\). The recasted form of the peak energy density amplitude reads as
\[\begin{split} h^{2}\Omega_{\rm GW}^{\rm peak}(f_{\rm peak})& =7.835\times 10^{-17}f_{\rm peak}^{-2}\left(\frac{100}{g_{*}}\right)^{2/3} \left(\frac{T_{*}}{100}\right)^{2}\frac{K^{\frac{3}{2}}}{c_{s}}\quad{\rm for} \quad{\rm H}\tau_{\rm sh}=\frac{2}{\sqrt{3}}\frac{{\rm HR}}{{\rm K}^{1/2}}<1 \,,\\ h^{2}\Omega_{\rm GW}^{\rm peak}(f_{\rm peak})&=7.83 5\times 10^{-17}f_{\rm peak}^{-2}\left(\frac{100}{g_{*}}\right)^{2/3} \left(\frac{T_{*}}{100}\right)^{2}\frac{K^{2}}{c_{s}^{2}}\quad{\rm for}\quad{ \rm H}\tau_{\rm sh}=\frac{2}{\sqrt{3}}\frac{{\rm HR}}{{\rm K}^{1/2}}\simeq 1\,, \end{split} \tag{4.38}\]
which is more conveniently written for our numerical analysis.
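A compact numerical sketch of Eqs. (4.33)-(4.37) is given below; the efficiency factor \(\kappa\) is passed in by hand (in the actual analysis it is taken from the fits of Ref. [68]), and the printed example point is purely illustrative:

```python
import numpy as np

C_S = 1.0 / np.sqrt(3.0)   # speed of sound in the plasma

def chapman_jouguet(alpha):
    """Chapman-Jouguet velocity, Eq. (4.33)."""
    return (C_S + np.sqrt(alpha**2 + 2.0 * alpha / 3.0)) / (1.0 + alpha)

def sw_peak(alpha, beta_over_H, T_star, kappa, v_w=0.95, g_star=108.75):
    """Sound-wave peak frequency [Hz] and peak amplitude h^2 Omega_GW, Eqs. (4.35)-(4.37)."""
    HR = (8.0 * np.pi)**(1.0 / 3.0) * max(v_w, C_S) / beta_over_H
    K = kappa * alpha / (1.0 + alpha)
    f_peak = 26.0e-6 * (1.0 / HR) * (T_star / 100.0) * (g_star / 100.0)**(1.0 / 6.0)
    if 2.0 / np.sqrt(3.0) * HR / np.sqrt(K) < 1.0:   # acoustic source shuts off within a Hubble time
        amp = 1.159e-7 * (100.0 / g_star) * (HR / np.sqrt(C_S))**2 * K**1.5
    else:                                             # long-lasting acoustic source
        amp = 1.159e-7 * (100.0 / g_star) * (HR / C_S)**2 * K**2
    return f_peak, amp

def sw_spectrum(f, f_peak, amp):
    """Spectral shape of Eq. (4.34)."""
    x = f / f_peak
    return amp * (4.0 / 7.0)**(-3.5) * x**3 * (1.0 + 0.75 * x**2)**(-3.5)

print(sw_peak(alpha=0.1, beta_over_H=50.0, T_star=100.0, kappa=0.15))   # ~ (5e-4 Hz, 1e-12)
```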
## 5 Results and discussion
We perform our numerical calculations with CosmoTransitions[19], using the method discussed in the appendix of [69] in order to obtain a smooth effective action \(S_{3}/T\). In particular, for every single point in parameter space, we interpolate the action around the nucleation temperature with a polynomial fit in order to mitigate numerical instabilities in the determination of the \(\beta/H\), \(T_{n}\) and \(T_{*}\) parameters.
### Revisiting the 4D limit
The analysis developed in [16] revealed that for Majorons with mass \(m_{J}<m_{h_{1}}/2\), constraints from invisible Higgs decays imply a small portal coupling \(\lambda_{\sigma h}\lesssim\mathcal{O}(10^{-2})\), excluding SGWB signatures in the observable region. In this subsection we improve on that analysis by focusing on the 4D limit of the EIS model, taking \(\delta_{0,2,4,6}\to 0\) and using an inversion procedure to fix \(\text{Br}(h_{1}\to JJ)<0.18\) as input for all generated points. One can then define:
\[A^{\prime}(\text{Br})=\text{Sign}\left[\left(M_{hh}^{2}-M_{\sigma\sigma}^{2} \right)\right]4\sqrt{2\pi}\frac{m_{h}}{v_{h}}\sqrt{\frac{\text{Br}(h_{1}\to JJ) \Gamma(h\to\text{SM})}{\left[1-\text{Br}(h\to JJ)\right](m_{h}^{2}-4m_{J}^{2} )^{1/2}}}\,. \tag{11}\]
in order to obtain the 4D version of the inverted equations
\[\lambda_{\sigma h}= A^{\prime}(\text{Br})\sec\alpha_{h}\,, \tag{12}\] \[\lambda_{\sigma}= \tfrac{1}{2}A^{\prime}(\text{Br})^{2}\frac{M_{\sigma\sigma}^{2} v_{h}^{2}\cos(2\alpha_{h})^{2}\csc(\alpha_{h})^{2}\sec(\alpha_{h})^{4}}{ \left(M_{hh}^{2}-M_{\sigma\sigma}^{2}\right)^{2}}\,,\] \[\lambda_{h}= \tfrac{1}{2}\frac{M_{hh}^{2}}{v_{h}^{2}}\,,\] \[v_{\sigma}= \frac{\left(M_{hh}^{2}-M_{\sigma\sigma}^{2}\right)\cos(\alpha_{h })^{2}\sec(2\alpha_{h})\sin(\alpha_{h})}{A^{\prime}(\text{Br})v_{h}}\,.\]
In Tab. 2 we show the ranges of the input parameters used in the scan of the revisited 4D scenario.
We show in Fig. 1 the results obtained for the peak energy density amplitude \(h^{2}\Omega_{\rm GW}^{\rm peak}\) of the SGWB
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Range & Distribution \\ \hline \(m_{h_{2}}\) & [60, 1000] GeV & linear \\ \(m_{J}\) & [\(10^{-10}\) eV, 100 keV] & exponential \\ \(\text{Br}(h_{1}\to JJ)\) & [\(10^{-15}\), 0.18] & exponential \\ \(\sin\left(\alpha_{h}\right)\) & \(\pm\)[0, 0.24] & linear \\ \hline \end{tabular}
\end{table}
Table 2: Ranges of the input parameters in the scans for the 4D limit of the EIS model. We fix \(m_{h_{1}}=125.01\) GeV and \(v_{h}=246.22\) GeV.
Figure 1: Scatter plots showing the strength (left) and inverse duration (right) of the phase transition in terms of the peak frequency and peak energy density amplitude of the SGWB for the 4D limit of the EIS model.
in terms of the peak frequency. The colour scale on the left panel represents the strength of the phase transition \(\alpha\), while on the right panel it describes its inverse duration \(\beta/H\). With the current improved analysis we confirm our previous results where all FOPTs are rather weak, \(\alpha<10^{-4}\), and very short lasting, \(\beta/H>10^{5}\). Indeed, none of the generated points is within the sensitivity reach of possible future GW detectors such as BBO or DECIGO. The maximum absolute value of the portal coupling \(\lambda_{\sigma h}\) was indeed 0.01, once again consistent with the conclusions in [16].
### SGWB in the 6D EIS model
In this section we study how the effect of UV physics, encoded in the form of dimension-6 operators in the scalar potential, can influence the properties of the phase transitions described in Sec. 5.1. Using the inversion equations in Eq. (3.18) we have performed a parameter space scan with the ranges shown in Tab. 3. In the last three lines of Tab. 3 we sample the values of \(\delta_{0,2,4}\) requiring that the effective
quartic couplings \(\frac{\delta_{0}v_{h}^{2}}{2\Lambda^{2}}\), \(\frac{\delta_{2}\max(v_{h}^{2},v_{\sigma}^{2})}{2\Lambda^{2}}\) and \(\frac{\delta_{4}v_{\sigma}^{2}}{2\Lambda^{2}}\) are perturbative. Notice that the negative signs in the sampling ranges of such dimension-six couplings are not problematic provided that all FOPT scenarios found with CosmoTransitions feature a stable vacuum, at least for energy scales below \(\Lambda\). The neutrino masses are sampled according to a normal ordering, _i.e._
\[\begin{split} m_{\nu_{2}}^{2}&=m_{\nu_{1}}^{2}+ \Delta m_{21}^{2}\qquad\text{with}\qquad\Delta m_{21}^{2}=8\times 10^{-5}\, \,\text{eV}^{2}\,,\\ m_{\nu_{3}}^{2}&=m_{\nu_{2}}^{2}+\Delta m_{32}^{2} \qquad\text{with}\qquad\Delta m_{32}^{2}=3\times 10^{-3}\,\,\text{eV}^{2}\,. \end{split} \tag{5.3}\]
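A minimal sketch of how a single scan point could be drawn is given below; the helper names and the random seed are ours, and the actual analysis runs CosmoTransitions on top of such a sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_linear(lo, hi):
    return rng.uniform(lo, hi)

def sample_exponential(lo, hi, signed=False):
    """Log-uniform ('exponential') sampling between lo and hi, optionally with a random sign."""
    val = 10.0**rng.uniform(np.log10(lo), np.log10(hi))
    return val * rng.choice([-1.0, 1.0]) if signed else val

# One random point following the ranges of Tab. 3 (values in GeV unless stated otherwise)
point = {
    "m_h2":      sample_linear(60.0, 1000.0),
    "m_J":       sample_exponential(1.0e-19, 1.0e-4),         # 1e-10 eV ... 100 keV, in GeV
    "m_nu1":     sample_exponential(1.0e-6, 1.0e-1) * 1.0e-9,  # eV -> GeV
    "Br_inv":    sample_exponential(1.0e-15, 0.18),
    "sin_alpha": sample_linear(-0.24, 0.24),
    "v_sigma":   sample_linear(100.0, 1000.0),
    "Lambda":    sample_exponential(1.0e4, 1.0e6),
    "eff_d0":    sample_exponential(1.0e-10, 4.0 * np.pi, signed=True),
}

# Normal-ordered light-neutrino spectrum, Eq. (5.3) (masses in eV)
m1 = point["m_nu1"] * 1.0e9
m2 = np.sqrt(m1**2 + 8.0e-5)
m3 = np.sqrt(m2**2 + 3.0e-3)
print(point, m1, m2, m3)
```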
We show in Fig. 2 the SGWB peak amplitude and frequency for all generated points, with the phase transition parameters \(\alpha\) and \(\beta/H\) in the colour scale. Notice that the linear distribution of points is explained by Eq. (4.38): on a log-log scale, \(h^{2}\Omega_{\text{GW}}^{\text{peak}}\propto f_{\text{peak}}^{-2}\) indeed depends linearly on \(\log_{10}f_{\text{peak}}\) with slope \(-2\). The effect of including dimension-6 operators fundamentally
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Range & Distribution \\ \hline \(m_{h_{2}}\) & [60, 1000] GeV & linear \\ \(m_{J}\) & [\(10^{-10}\) eV, 100 keV] & exponential \\ \(m_{\nu_{1}}\) & [\(10^{-6}\), \(10^{-1}\)] eV & exponential \\ \(\text{Br}(h_{1}\to JJ)\) & [\(10^{-15}\), 0.18] & exponential \\ \(\sin\left(\alpha_{h}\right)\) & \(\pm\)[0, 0.24] & linear \\ \(v_{\sigma}\) & [100, 1000] GeV & linear \\ \(\Lambda\) & [10, 1000] TeV & exponential \\ \(\frac{\delta_{0}v_{h}^{2}}{2\Lambda^{2}}\) & \(\pm\)[\(10^{-10}\), \(4\pi\)] & exponential \\ \(\frac{\delta_{2}\max(v_{h}^{2},v_{\sigma}^{2})}{2\Lambda^{2}}\) & \(\pm\)[\(10^{-10}\), \(4\pi\)] & exponential \\ \(\frac{\delta_{4}v_{\sigma}^{2}}{2\Lambda^{2}}\) & \(\pm\)[\(10^{-10}\), \(4\pi\)] & exponential \\ \hline \end{tabular}
\end{table}
Table 3: Randomly sampled ranges of the input parameters in the scans for the 6D EIS model. We fix \(m_{h_{1}}=125.01\) GeV and \(v_{h}=246.22\) GeV. In the last three lines we show the expressions implemented in our code used to calculate the parameters highlighted in red in terms of the \(\Lambda\) scale and the VEVs.
changes the conclusions revisited in Sec. 5.1. In particular, for peak frequencies \(f_{\rm peak}\lesssim 10^{-3}\) Hz, the SGWB becomes observable at LISA. For such scenarios both the strength and the duration of the phase transition are considerably larger, i.e. \(\alpha\sim 0.1\) and \(10\lesssim\beta/H\lesssim 100\).
In Fig. 3 we show the Signal to Noise Ratio (SNR) for LISA with the colour scale indicating the electroweak (left) and lepton number symmetry (right) order parameters defined in Eq. (4.28).
The colored isolines represent the expected SNR values for a five-year exposure time. The dashed contours display the shock formation time \(\tau_{\rm sh}\), and the grey shaded area corresponds to an acoustic period lasting longer than the Hubble time, where the sound-wave treatment is mostly reliable [70, 8]. Conversely, if \(\tau_{\rm sh}\ll 1\), turbulence effects may become important. However, none of the generated points features a too small shock formation time, with the majority having \(\tau_{\rm sh}>0.1\). The order parameters in the colour scales indicate that both the EW and the U(1)\({}_{\rm L}\) phase transitions must be simultaneously strong, with \(\Delta v_{h}/T_{*}\approx 4\) and \(\Delta v_{\sigma}/T_{*}\approx 2\), such that the SNR at LISA is larger than 10.
The Majoron masses considered in this study span over 15 orders of magnitude, equally distributed as shown in Tab. 3. Whenever \(m_{J}>2m_{\nu}\) the Majoron can decay in a pair of neutrinos with a rate given by Eq. (3.24) [71, 72, 73, 74, 18]. According to the combined analysis of Planck, WMAP, WiggleZ and BOSS [75], the Majoron is very long lived and a dark matter candidate if \(\Gamma(J\to\nu\nu)<1.9\times 10^{-19}s^{-1}\) @ 95% C.L.. This is typically achieved for large U(1)\({}_{\rm L}\) breaking scales, several orders of magnitude above \(v_{\sigma}\sim\mathcal{O}\)(TeV) as considered in this work and motivated by what we believe being a natural scale in the EIS model (see Eq. (2.4) and the discussion below). On the other hand, ultralight Majorons, \(m_{J}<2m_{\nu}\), with negligible decay rates to photons are stable. Whether
Figure 3: SNR plots showing the order parameters of the EW (left) and lepton number symmetry (right) phase transitions. The coloured isolines represent the SNR at LISA while the grey dashed ones denote the shock formation time. The grey shaded area represents the region where the sound-wave treatment is mostly reliable. These plots were produced using the public software PTPlot[60].
Figure 2: Scatter plots showing the strength (left) and inverse duration (right) of the phase transition in terms of the peak frequency and peak energy density amplitude of the SGWB.
they can offer a good dark matter candidate is not studied here and is left for future work.
### Connection to collider observables
The potential discovery of a SGWB can become the first direct measurement of the Universe prior to the Big Bang Nucleosynthesis era and a breakthrough comparable to the Cosmic Microwave Background detection [76]. Such an observation (or the lack of it) will pose constraints on NP models in the form of bounds (or upper limits) on the amount of allowed gravitational radiation in the early Universe. It is therefore legitimate to expect correlations between collider and GW observables, in particular for models featuring Higgs portal interactions. In what follows we study how the scalar mixing angle \(\sin\alpha_{h}\), the Higgs trilinear coupling modifier \(\kappa_{\lambda}\) and the mass of a second visible scalar \(m_{h_{2}}\) are related to the phase transition parameters and the peak amplitude \(h^{2}\Omega_{\rm GW}^{\rm peak}\) of the associated SGWB.
The presence of the 6D operator \((H^{\dagger}H)^{3}\), parameterized by \(\delta_{0}\) in this work, can on its own induce FOPTs as discussed in [77; 30; 78]. However, recall that from Fig. 3, an observable SGWB requires both \(\Delta v_{h}/T_{*}>1\) and \(\Delta v_{\sigma}/T_{*}>1\), suggesting sizeable \(\delta_{2}\) and/or \(\delta_{4}\).
For a finer scrutiny we show in Fig. 4 two selections of data, where in the left panel \(\delta_{0}=0\) and in the right we require \(\delta_{0}\neq 0\). The magenta points represent the \((\kappa_{\lambda},m_{h_{2}})\) region with SGWBs potentially observable at LISA, BBO and DECIGO. The values of \(\kappa_{\lambda}\) comply with current CMS constraints [79], while the new CP-even Higgs boson couplings to the SM are suppressed by a small mixing angle \(|\sin\alpha_{h}|<0.23\)[80; 81]. As anticipated, \(\delta_{0}\neq 0\) significantly increases the area populated with FOPTs. However, a considerable subset of such points fills out the green region on the right panel, where phase transitions are not strong enough to be within the sensitivity reach of future GW detectors. In both scenarios we have found that testable SGWB signals prefer \(100\lesssim m_{h_{2}}/{\rm GeV}\lesssim 300\) and \(0\lesssim\kappa_{\lambda}\lesssim 2\).
For completeness, we show in Fig. 5 how the GW peak amplitude and the 6D Higgs self coupling \(\frac{v_{h}^{2}\delta_{0}}{2\Lambda^{2}}\) are related to the effective portal interactions \(\frac{v_{h}^{2}\delta_{2}}{2\Lambda^{2}}\) (top-left panel), \(\frac{v_{\sigma}^{2}\delta_{4}}{2\Lambda^{2}}\) (top-right panel) and \(\lambda_{\sigma h}\) (bottom-right). In the bottom-left panel we show the same parameter space projection as in the top-left, with the Higgs trilinear coupling modifier in the colour scale. The brighter magenta points, where the SGWB peak amplitude is maximized, populate a region with small \(\frac{v_{h}^{2}\delta_{0}}{2\Lambda^{2}}\). These also correspond to both magenta blobs in Fig. 4, with the left panel highlighting those scenarios which overlap the \(\frac{v_{h}^{2}\delta_{0}}{2\Lambda^{2}}=0\) axis. For larger absolute values of the 6D Higgs self interaction, in particular in the sparse purple region where \(-1.0\lesssim\frac{v_{h}^{2}\delta_{0}}{2\Lambda^{2}}\lesssim-0.5\), the peak amplitude of the SGWB was found to satisfy \(h^{2}\Omega_{\rm GW}^{\rm peak}\lesssim\mathcal{O}(10^{-16})\), at least three orders of magnitude below the LISA reach. However, such points are possibly accessible to a next generation of GW detectors such as BBO or DECIGO, sensitive to peak frequencies in the mHz to Hz range.
Larger absolute values of \(\delta_{0}\) typically enhance the Higgs trilinear self coupling, as one can see in the term proportional to \(\cos^{3}\alpha_{h}\) in the tree-level expression for \(\lambda^{(0)}_{h_{1}h_{1}h_{1}}\) given in Appendix A. Such a leading order effect is visible in the colour
Figure 4: Scatter plots showing the dependence of the Higgs trilinear coupling modifier on the second CP-even Higgs boson mass, with the energy density amplitude of the SGWB in the colour scale. In the left panel \(\delta_{0}=0\) whereas in the right panel \(\delta_{0}\neq 0\).
gradient of the bottom-left panel of Fig. 5. The vertical separation between the sparsely and densely populated regions is due to the current CMS upper bound where \(\kappa_{\lambda}<6.5\)[79]. Future measurements at colliders can impose stronger constraints as further discussed below.
The approximately symmetric distribution of points along the horizontal axis in the top and bottom-right panels of Fig. 5, particularly evident in the densely populated magenta-blue region, is a reflection of the combined effect of sizeable portal interactions \(\frac{v_{h}^{2}\delta_{2}}{2\Lambda^{2}}\), \(\frac{v_{\sigma}^{2}\delta_{4}}{2\Lambda^{2}}\) and \(\lambda_{\sigma h}\), needed to induce FOPTs, with the required cancellation among them to keep the invisible Higgs decay branching ratio under control. We refer to the discussion related to Eqs. (3.13) to (3.15) for further details. We also find that strong FOPTs with gravitational radiation observable in the form of a SGWB at LISA require non-vanishing \(\delta_{2}<0\) and \(\delta_{4}>0\).
In Fig. 6 we show the dependence of the Higgs trilinear self coupling modifier, \(\kappa_{\lambda}\), on the sine of the scalar mixing angle, \(\sin\alpha_{h}\). On both left panels the colour scale represents the peak amplitude of the SGWB, \(h^{2}\Omega_{\rm GW}^{\rm peak}\), with the top-left panel including only those points generated with \(\delta_{0}=0\), while for the bottom-left one no cut in \(\delta_{0}\) was applied. Notice that the \(\delta_{0}\to 0\) limit offers cleaner results while capturing the key features relevant for our discussion. On the top-right panel the colour gradation describes the mass of the new CP-even Higgs boson, while on the bottom-right it represents the size of the one-loop corrections to the trilinear Higgs coupling, defined as
\[\Delta\kappa_{\lambda}(\%)=\left|\frac{\kappa_{\lambda}-\kappa_{\lambda}^{ \rm tree}}{\kappa_{\lambda}^{\rm tree}}\right|\times 100\,. \tag{5.4}\]
For the considered model, the vast majority of the scenarios testable at LISA populate the dense magenta band in the left panels with \(0<\kappa_{\lambda}<2\). Comparing with the right plots we observe that such a region coincides with the green band on the top-right panel where \(150\lesssim m_{h_{2}}/{\rm GeV}\lesssim 250\). The one-loop contributions to \(\kappa_{\lambda}\) are of a sub-percent level as indicated by the red points in the bottom-right panel overlapping the magenta and green bands. For scenarios falling outside this region, sizeable
one-loop corrections, modifying the Higgs trilinear coupling by up to a factor of four, can enhance the strength of the phase transition. However, the latter can be significantly constrained, if not entirely ruled out, by future measurements of the scalar mixing angle (red vertical lines) and the Higgs trilinear self coupling (blue horizontal lines). Notice that in Fig. 6, the shaded regions between the red and blue lines correspond to the allowed parameter space projected for future measurements.
The results in Fig. 6 are a good illustration of the potential interplay between collider and astrophysical measurements. For example, the hypothetical observation of a SGWB at LISA cannot, on its own, offer conclusive information about \(\kappa_{\lambda}\), \(\sin\alpha\) or \(m_{h_{2}}\). However, with an increased precision in the determination of the mixing angle and trilinear Higgs coupling bounds at the high-luminosity (HL) or high-energy (HE) LHC upgrades, the viable parameter space becomes largely reduced, as indicated by the shaded blue and red regions. Last but not least, in the decoupling limit, _i.e._ \(\sin\alpha_{h}\to 0\) and \(\kappa_{\lambda}\to 1\), only through GW experiments can one possibly test the considered Majoron model via parameter inference [83; 84].
### Connection to the neutrino sector
Comparing with a number of astrophysical observations [85; 74; 86], we observe in our numerical results that there are several scenarios for which the Majoron is stable or long-lived. This is possible either when all \(J\to\nu_{j}\nu_{j}\) channels are kinematically forbidden, or due to a large suppression caused by the lightest neutrino mass, _i.e._\(\lambda_{\nu_{1}}=m_{1}/v_{\sigma}\), with \(m_{1}\ll m_{2}<m_{3}\). For scenarios with larger \(\lambda_{\nu_{j}}\), constraints from CMB data must be taken into account. In particular, it was demonstrated in [27] that current Planck2018 results [87; 88] can provide an indirect probe of the U(1)\({}_{\rm L}\) lepton number symmetry breaking scale in the range 100 GeV to 1 TeV, which precisely coincides with the values of
Figure 6: Scatter plots showing the correlations between the Higgs trilinear coupling modifier \(\kappa_{\lambda}\) and the scalar mixing angle \(\sin\alpha_{h}\). On both left panels, where the top one includes only \(\delta_{0}=0\) data while the bottom one features all generated viable points, the colour scale denotes the energy density peak amplitude of the SGWB. On the top-right panel the colour gradient describes the mass of the new CP-even Higgs boson, \(h_{2}\), and on the bottom-right one it quantifies the size of the one-loop contribution to the Higgs trilinear self coupling as defined in Eq. (11). The vertical red lines represent future constraints on \(\sin\alpha_{h}\) assuming a precision of 1% (solid lines) and 0.1% (dashed lines) at future colliders [80]. The blue horizontal lines indicate the projected 95% CL limits in \(\kappa_{\lambda}\) measurements for the high-luminosity LHC (dot-dashed lines) and the future \(\sqrt{s}=27\) TeV high-energy upgrade (dotted lines) [82]. The regions under the darker blue and red shades correspond to the least constrained ones upon future measurements.
\(v_{\sigma}\) relevant for our analysis as shown in the colour scale of the top-right panel of Fig. 7. The excluded region at 95% CL, illustrated with a blue contour, in which neutrinos and Majorons thermalize after BBN, features Majoron masses of approximately 0.1 eV to 100 eV and couplings to neutrinos in the range \(10^{-13}\lesssim\lambda_{\nu_{i}}\lesssim 10^{-12}\). There are three distinct regions in the top-right panel whose separation results from kinematical thresholds. The rightmost one corresponds to \(m_{J}>2m_{\nu_{3}}\) with the upper and lower flat boundaries explained by \(\lambda_{\nu_{3}}^{\rm upper}=\frac{\max m_{\nu_{3}}}{\min v_{\sigma}}\approx \frac{0.1\;{\rm eV}}{60\;{\rm GeV}}\approx 1.7\times 10^{-12}\) and \(\lambda_{\nu_{3}}^{\rm lower}=\frac{\min m_{\nu_{3}}}{\max v_{\sigma}}\approx \frac{0.06\;{\rm eV}}{1\;{\rm TeV}}\approx 6\times 10^{-14}\), respectively. In the second domain \(2m_{\nu_{2}}<m_{J}<2m_{\nu_{3}}\) whereas in the third one \(2m_{\nu_{1}}<m_{J}<2m_{\nu_{2}}\). Scenarios with \(m_{J}<2m_{\nu_{1}}\) are not represented in both top panels of Fig. 7.
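The quoted boundary values are simple ratios of the extreme neutrino masses and symmetry-breaking scales. A minimal Python sketch reproducing this arithmetic (the numerical inputs are the values quoted above, used here purely as an illustration):

```python
# Rough check of the quoted kinematic boundaries for lambda_nu3 (inputs taken from the text).
eV, GeV, TeV = 1.0, 1e9, 1e12          # express all masses/scales in eV

lam_upper = 0.1 * eV / (60 * GeV)      # max m_nu3 / min v_sigma  ~ 1.7e-12
lam_lower = 0.06 * eV / (1 * TeV)      # min m_nu3 / max v_sigma  ~ 6e-14
print(f"lambda_nu3 upper ~ {lam_upper:.1e}, lower ~ {lam_lower:.1e}")
```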
In the top-left plot of Fig. 7 we show the Majoron decay lifetime \(\tau_{{}_{J\to\nu\nu}}=\Gamma^{-1}(J\to\nu\nu)\) normalized to the age of the Universe \(\tau_{0}=13.787\) Gyr [89]. The Majoron decay width to a pair of neutrinos is given in Eq. (3.24). Note that Planck2018 only marginally constrains our parameter space and does not affect the magenta band, where the SGWB peak amplitudes are within the LISA sensitivity reach.
Last but not least, the bottom panel in Fig. 7 shows the size of the product of third-generation Yukawa couplings \(y_{\nu_{3}}^{2}y_{\sigma_{3}}\) against the heavy neutrino mass scale \(\Lambda\). These parameters provide the leading contribution to the active neutrino mass \(m_{\nu_{3}}\) according to Eq. (2.6). One can see that the two regions previously identified, the magenta band with strong FOPTs and observable SGWB, and the blue one where FOPTs are weak and beyond reach, partially overlap, with the former typically favouring smaller \(y_{\nu_{3}}^{2}y_{\sigma_{3}}\). A similar behaviour was found for the first- and second-generation Yukawa couplings.
## 6 Conclusions
With the LISA mission scheduled to begin operations during the 2030s, and with the high-luminosity phase of the LHC expected to deliver at least an order of magnitude more data than collected so far, an opportunity to scrutinize a wealth of New Physics models in multiple channels is opening
Figure 7: Scatter plots showing the Majoron decay lifetime normalized to the age of the Universe \(\tau_{0}\) in terms of the Majoron mass and the SGWB peak amplitude (top-left), the strength of the neutrino coupling to Majorons versus the Majoron mass and the U(1)\({}_{\rm L}\) lepton number symmetry breaking scale (top-right), and the third-generation neutrino Yukawa couplings in terms of the heavy neutrino mass scale and the amplitude of the SGWB (bottom).
up. The key goal, and yet a great challenge, is to understand how to obtain information relevant for our favourite HEP models from SGWB measurements. In this article we have focused on a well-motivated Majoron scenario equipped with an extended inverse seesaw mechanism and dimension-six effective operators in the scalar sector as a benchmark example for the type of physics to explore with LISA.
We have first verified that, in the absence of dimension-six operators, the only portal coupling in the theory is bounded to be small due to severe constraints from invisible Higgs decays. This results in a small potential barrier between the true and false vacua, leading to weak FOPTs whose SGWB peak amplitudes are too small to be observable in the foreseeable future. A model with purely renormalizable operators is justified either if it is assumed to be UV complete or if new particles are several orders of magnitude heavier, thus decoupled. However, if New Physics is not significantly heavier, say 1 to 3 orders of magnitude larger, higher-dimensional effective operators in the scalar sector play a game-changing role, as we have demonstrated. In particular, the emergence of new portal-like interactions offers extra freedom to sufficiently enhance the potential barrier between the true and false vacua while keeping the invisible Higgs decay rate under control. This is achieved with not too small couplings of order \(\mathcal{O}(0.1)\) to \(\mathcal{O}(1)\), and a mild 1% to 10% cancellation among them.
We have also searched for correlations between collider and astrophysical observables and found that, for the EIS model, observable SGWB signatures at LISA indicate a preference for a trilinear Higgs self-coupling modifier within the range \(0<\kappa_{\lambda}<2\) and a new CP-even Higgs boson mass \(m_{h_{2}}\approx(200\pm 50)~{}\mathrm{GeV}\). Direct searches for new scalars at the LHC, as well as improved cosmological bounds from CMB data, can further constrain the allowed parameter space and shed new light on the current picture, with relevance for the future LISA mission.
A.A. work is supported by the Talent Scientific Research Program of College of Physics, Sichuan University, Grant No.1082204112427 & the Fostering Program in Disciplines Possessing Novel Features for Natural Science of Sichuan University, Grant No. 2020SCUNL209 & 1000 Talent program of Sichuan province 2021. A.M. wishes to acknowledge support by the Shanghai Municipality, through the grant No. KBH1512299, by Fudan University, through the grant No. JJH1512105, the Natural Science Foundation of China, through the grant No. 11875113, and by the Department of Physics at Fudan University, through the grant No. IDH1512092/001. This work was supported by the grants CERN/FIS-PAR/0021/2021, CERN/FIS-PAR/0019/2021, CERN/FIS-PAR/0024/2021, CERN/FIS-PAR/0025/2021 and PTDC/FIS-AST/3041/2020. A.P.M. is supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology (FCT - Fundacao para a Ciencia e a Tecnologia), references UIDB/04106/2020 and UIDP/04106/2020, and by national funds (OE), through FCT, I.P., in the scope of the framework contract foreseen in the numbers 4, 5 and 6 of the article 23, of the Decree-Law 57/2016, of August 29, changed by Law 57/2017, of July 19. A.P.M. also wants to thank Gongjun Choi, Jeremie Quevillon and Miguel Escudero Abenza for insightful discussions about the cosmology of Majorons and axion-like particles in general. A.P.M. also acknowledges Rui Santos, Tania Robens and Johannes Braathen for discussions about the trilinar Higgs coupling and invisible Higgs decays. R.P. is supported in part by the Swedish Research Council grant, contract number 2016-05996, as well as by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 668679). J.V. is supported by FCT under contracts UIDB/00618/2020, UIDP/00618/2020, PTDC/FIS-PAR/31000/2017, PRT/BD/154191/2022, CERN/FIS-PAR/0025/2021.
## One-loop expressions for the physical trilinear couplings
One-loop corrections to the coupling of the Higgs boson with a pair of Majorons solely result from heavy CP-even Higgs bosons in the zero external momentum approach. These are given by
\[\begin{split}\lambda_{h_{1}JJ}^{h_{2}}=& f_{JJh_{2}}\lambda_{JJh_{1}}^{(0)}\left(\lambda_{JJh_{2}}^{(0)} \right)^{2}+2f_{Jh_{1}h_{2}}\lambda_{JJh_{1}}^{(0)}\lambda_{JJh_{2}}^{(0)} \lambda_{h_{1}h_{1}h_{2}}^{(0)}+f_{Jh_{2}h_{2}}\lambda_{h_{1}h_{2}h_{2}}^{(0)} \left(\lambda_{JJh_{2}}^{(0)}\right)^{2}\\ &+2(f_{Jh_{2}}-1)\lambda_{JJh_{2}}^{(0)}\lambda_{JJh_{1}h_{2}}^{(0) }+(f_{h_{1}h_{2}}-1)\lambda_{h_{1}h_{1}h_{2}}^{(0)}\lambda_{JJh_{1}h_{2}}^{(0) }+\frac{1}{2}(f_{h_{2}h_{2}}-1)\lambda_{h_{1}h_{2}h_{2}}^{(0)}\lambda_{JJh_{2 }h_{2}}^{(0)}\end{split} \tag{104}\]
For the trilinear Higgs coupling we start with the fermionic contributions from top-quark and heavy neutrino loops:
\[\begin{split}\lambda_{h_{1}h_{1}h_{1}}^{t}=&-\left[ \frac{12}{6}f_{ttt}\lambda_{tth_{1}}^{(0)}+\frac{18}{6}(f_{tt}-1)\lambda_{tth_ {1}}^{(0)}\lambda_{tth_{1}h_{1}}^{(0)}\right]\,,\\ \lambda_{h_{1}h_{1}h_{1}}^{N}\approx&-2f_{NNN}\left[ \sum_{j=1}^{6}\left(\lambda_{N_{j}N_{j}h_{1}}^{(0)}\right)^{3}+3\sum_{j=1}^{3} \lambda_{N_{2j-1}N_{2j-1}h_{1}}^{(0)}\left(\lambda_{N_{2j-1}N_{2j}h_{1}}^{(0)} \right)^{2}\right.\\ &+\left.3\sum_{j=1}^{3}\lambda_{N_{2j}N_{2j}h_{1}}^{(0)}\left( \lambda_{N_{2j-1}N_{2j}h_{1}}^{(0)}\right)^{2}\right]-3\left(f_{NN}-1\right) \left[\sum_{j=1}^{6}\lambda_{N_{j}N_{j}h_{1}}^{(0)}\lambda_{N_{j}N_{j}h_{1}h_{ 1}}^{(0)}\right.\\ &+\left.2\sum_{j=1}^{3}\lambda_{N_{2j-1}N_{2j}h_{1}}^{(0)}\lambda _{N_{2j-1}N_{2j}h_{1}h_{1}}^{(0)}\right]\,.\end{split} \tag{105}\]
Note that in Eq. (105) we have considered, to a good approximation, the limit of degenerate heavy neutrino masses motivated by \(m_{N_{1,\ldots,6}}\approx\Lambda\). Last but not least, the scalar contributions to the Higgs trilinear coupling read as
\[\begin{split}\lambda_{h_{1}h_{1}h_{1}}^{h_{2}}=& 3f_{h_{1}h_{1}h_{2}}\lambda_{h_{1}h_{1}h_{1}}^{(0)} \left(\lambda_{h_{1}h_{1}h_{2}}^{(0)}\right)^{2}+3f_{h_{1}h_{2}h_{2}}\lambda_ {h_{1}h_{2}}^{(0)}\left(\lambda_{h_{1}h_{1}h_{1}}^{(0)}\right)^{2}+f_{h_{2}h_ {2}h_{2}}\left(\lambda_{h_{2}h_{2}}^{(0)}\right)^{2}\\ +& 3\left(f_{h_{1}h_{2}}-1\right)\lambda_{h_{1}h_{1}h_{2}} ^{(0)}\lambda_{h_{1}h_{1}h_{1}h_{2}}^{(0)}+\frac{3}{2}\left(f_{h_{2}h_{2}}-1 \right)\lambda_{h_{1}h_{2}h_{2}}^{(0)}\lambda_{h_{1}h_{1}h_{2}h_{2}}^{(0)}\,. \end{split} \tag{106}\]
The loop functions, as defined in [22], read as
\[f_{a_{1}\ldots a_{N}}=\sum_{x=1}^{N}\frac{m_{a_{x}}^{2}\log\left(\frac{m_{a_{x }}^{2}}{\mu^{2}}\right)}{\Pi_{y\neq x}\left(m_{a_{x}}^{2}-m_{a_{y}}^{2}\right)}\,, \tag{107}\]
with \(\mu\) the renormalization scale. While \(\lambda_{JJh_{1}}^{(0)}\) is given in Eq. (3.14), all remaining tree-level couplings entering Eqs. (104) to (106) read as:
\[\lambda_{JJh_{2}}^{(0)}=\frac{\cos(\alpha_{h})v_{\sigma}\left(\delta_{4}v_{h}^{ 2}+2\Lambda^{2}\lambda_{\sigma}+3\delta_{6}v_{\sigma}^{2}\right)-\sin(\alpha_ {h})v_{h}\left(\delta_{2}v_{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+\delta_{4}v_{ \sigma}^{2}\right)}{\Lambda^{2}} \tag{108}\]
\[\begin{split}\lambda_{JJh_{1}h_{2}}^{(0)}=&\ \frac{\sin \left(2\alpha_{h}\right)\left(\left(\delta_{4}-3\delta_{2}\right)v_{h}^{2}+ \Lambda^{2}\left(2\lambda_{\sigma}-\lambda_{\sigma h}\right)-\left(\delta_{4}- 9\delta_{6}\right)v_{\sigma}^{2}\right)+4\delta_{4}v_{h}v_{\sigma}\cos\left(2 \alpha_{h}\right)}{2\Lambda^{2}}\\ \lambda_{JJh_{2}h_{2}}^{(0)}=&\ \frac{1}{\Lambda^{2}} \left[\sin^{2}\left(\alpha_{h}\right)\left(3\delta_{2}v_{h}^{2}+\Lambda^{2} \lambda_{\sigma h}+\delta_{4}v_{\sigma}^{2}\right)+\cos^{2}\left(\alpha_{h} \right)\left(\delta_{4}v_{h}^{2}+2\Lambda^{2}\lambda_{\sigma}+9\delta_{6}v_{ \sigma}^{2}\right)\right.\\ &-\left.4\delta_{4}v_{h}v_{\sigma}\sin\left(\alpha_{h}\right)\cos \left(\alpha_{h}\right)\right]\\ \lambda_{h_{1}h_{1}h_{1}}^{(0)}=&\ \frac{3}{\Lambda^{2}} \left[v_{\sigma}\sin\left(\alpha_{h}\right)\cos^{2}\left(\alpha_{h}\right) \left(3\delta_{2}v_{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+\delta_{4}v_{\sigma}^{2} \right)+v_{h}\cos^{3}\left(\alpha_{h}\right)\left(2\Lambda^{2}\lambda_{h}+5 \delta_{0}v_{h}^{2}+\delta_{2}v_{\sigma}^{2}\right)\right.\\ &+v_{h}\sin^{2}\left(\alpha_{h}\right)\cos\left(\alpha_{h}\right) \left(\delta_{2}v_{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+3\delta_{4}v_{\sigma}^{2} \right)+v_{\sigma}\sin^{3}\left(\alpha_{h}\right)\left(\delta_{4}v_{h}^{2}+2 \Lambda^{2}\lambda_{\sigma}+5\delta_{6}v_{\sigma}^{2}\right)\right]\end{split} \tag{109}\]
\[\lambda^{(0)}_{h_{1}h_{1}h_{2}} =\frac{3}{\Lambda^{2}}\left[-v_{h}\sin\left(\alpha_{h}\right)\cos^{2 }\left(\alpha_{h}\right)\left(\delta_{2}v_{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+3 \delta_{4}v_{\sigma}^{2}\right)+v_{\sigma}\cos^{3}\left(\alpha_{h}\right)\left( \delta_{4}v_{h}^{2}+2\Lambda^{2}\lambda_{\sigma}+5\delta_{6}v_{\sigma}^{2} \right)\right.\] (111) \[+\left.v_{\sigma}\sin^{2}\left(\alpha_{h}\right)\cos\left(\alpha_ {h}\right)\left(3\delta_{2}v_{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+\delta_{4} v_{\sigma}^{2}\right)-v_{h}\sin^{3}\left(\alpha_{h}\right)\left(2\Lambda^{2} \lambda_{h}+5\delta_{6}v_{\sigma}^{2}\right)\right]\] \[\lambda^{(0)}_{h_{1}h_{1}h_{2}} =\frac{1}{\Lambda^{2}}\left[-v_{h}\sin^{3}\left(\alpha_{h} \right)\left(\delta_{2}v_{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+3\delta_{4}v_{ \sigma}^{2}\right)+v_{\sigma}\cos^{3}\left(\alpha_{h}\right)\left(3\delta_{2}v _{h}^{2}+\Lambda^{2}\lambda_{\sigma h}+\delta_{4}v_{\sigma}^{2}\right)\right.\] (112) \[+\left.v_{h}\sin\left(\alpha_{h}\right)\cos^{2}\left(\alpha_{h} \right)\left(2\Lambda^{2}\left(3\lambda_{h}-\lambda_{\sigma h}\right)+\left(1 5\delta_{0}-2\delta_{2}\right)v_{h}^{2}+3\left(\delta_{2}-2\delta_{4}\right)v _{\sigma}^{2}\right)\right]\] \[\lambda^{(0)}_{h_{1}h_{1}h_{2}} =\frac{3}{8\Lambda^{2}}\left[-\sin\left(4\alpha_{h}\right)\left( 2\Lambda^{2}\left(\lambda_{\mu}+\lambda_{\sigma}-\lambda_{\sigma h}\right)+ \left(15\delta_{0}-6\delta_{2}+\delta_{4}\right)v_{h}^{2}\right.\right.\] (113) \[+\left.\left.\left.\left.\left.\left(\delta_{2}-6\delta_{4}+15 \delta_{6}\right)v_{\sigma}^{2}\right)+2\sin\left(2\alpha_{h}\right)\left(2 \Lambda^{2}\left(\lambda_{\sigma}-\lambda_{\lambda}\right)+\left(\delta_{4}-15 \delta_{0}\right)v_{h}^{2}-\left(\delta_{2}-15\delta_{6}\right)v_{\sigma}^{2}\right)\right.\] \[+8\left(\delta_{2}-\delta_{4}\right)v_{h}v_{\sigma}\cos\left(4 \alpha_{h}\right)+8\left(\delta_{2}+\delta_{4}\right)v_{h}v_{\sigma}\cos\left( 2\alpha_{h}\right)\right]\] \[\lambda^{(0)}_{h_{1}h_{1}h_{2}} =\frac{1}{\Lambda^{2}}\left[2\Lambda^{2}\left(3\left(\lambda_{h}+ \lambda_{\sigma}\right)+\lambda_{\sigma h}\right)-3\cos\left(4\alpha_{h} \right)\left(2\Lambda^{2}\left(\lambda_{h}+\lambda_{\sigma}-\lambda_{\sigma h }\right)\right.\right.\] (114) \[+\left.\left.\left.\left(15\delta_{0}-6\delta_{2}+\delta_{4} \right)v_{h}^{2}+\left(\delta_{2}-6\delta_{4}+15\delta_{6}\right)v_{\sigma}^{2 }\right)+24\left(\delta_{4}-\delta_{2}\right)v_{h}v_{\sigma}\sin\left(4\alpha_ {h}\right)\right.\] \[+3\left(15\delta_{0}+2\delta_{2}+\delta_{4}\right)v_{h}^{2}+3 \left(\delta_{2}+2\delta_{4}+15\delta_{6}\right)v_{\sigma}^{2}\right]\] \[\lambda^{(0)}_{th_{1}h_{1}} =v_{h}y_{t}^{2}\cos\left(\alpha_{h}\right)\] (115) \[\lambda^{(0)}_{th_{1}h_{1}} =v_{t}^{2}\cos^{2}\left(\alpha_{h}\right)\] (116) \[\lambda^{(0)}_{h_{1}h_{1}h_{1}} =8v_{g}y_{\sigma_{1}}^{2}\sin\left(\alpha_{h}\right)\left(1-\frac{v_ {g}y_{\sigma_{1}}}{\sqrt{2\Lambda^{2}+v_{g}^{2}y_{\sigma_{1}}^{2}}}\right)- \frac{8\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{ 1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}+v_{g}^{2}y_{\sigma_{2}}^{2}\right)}{v_{ \sigma}\sqrt{\frac{v_{g}y_{\sigma_{1}}\left(\sqrt{2\Lambda^{2}+v_{g}^{2}y_{ \sigma_{1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}+v_{g}^{2}y_{\sigma_{2}}^{2}}{ \Lambda^{2}}}\right)+2\sqrt{\frac{\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{1}}^{2} +v_{g}^{4}y_{\sigma_{1}}^{2}+v_{g}^{2}y_{\sigma_{2}}^{2}}{\Lambda^{2}}}}\] (117) \[\lambda^{(0)}_{h_{1}h_{1}} =2v_{g}y_{\sigma_{1}}^{2}\sin\left(\alpha_{h}\right)\left(1-\frac{v_ 
{g}y_{\sigma_{1}}}{\sqrt{2\Lambda^{2}+v_{g}^{2}y_{\sigma_{1}}^{2}}}\right)- \frac{2\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{ 1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}+v_{g}^{2}y_{\sigma_{2}}^{2}\right)}{v_{ \sigma}\sqrt{\frac{v_{g}y_{\sigma_{1}}\left(\sqrt{2\Lambda^{2}+v_{g}^{2}y_{ \sigma_{1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}+v_{g}^{2}y_{\sigma_{2}}^{2}}{ \Lambda^{2}}}\right)+2\sqrt{\frac{\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{1}}^{2}+v _{g}^{4}y_{\sigma_{1}}^{2}+v_{g}^{2}y_{\sigma_{2}}^{2}}{\Lambda^{2}}}}+2\] (118) \[\lambda^{(0)}_{h_{1}h_{1}} =2v_{g}y_{\sigma_{1}}^{2}\sin\left(\alpha_{h}\right)\left(1+\frac{v_ {g}y_{\sigma_{1}}}{\sqrt{2\Lambda^{2}+v_{g}^{2}y_{\sigma_{1}}^{2}}}\right)+ \frac{2\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{ 1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}-v_{g}^{2}y_{\sigma_{2}}^{2}\right)}{v_{ \sigma}\sqrt{\frac{v_{g}y_{\sigma_{1}}\left(v_{g}y_{\sigma_{1}}\left(v_{g}y_{ \sigma_{1}}-\sqrt{2\Lambda^{2}+v_{g}^{2}y_{\sigma_{1}}^{2}}\right)}{\Lambda^{2}}} \right)+2\sqrt{\frac{\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{1}}^{2}-\sqrt{2 \Lambda^{2}v_{g}^{2}y_{\sigma_{1}}^{2}+v_{g}^{2}y_{\sigma_{1}}^{2}}}{\Lambda^{2}}}}+2\] (119) \[\lambda^{(0)}_{h_{1}h_{1}} =2v_{g}y_{\sigma_{1}}^{2}\sin\left(\alpha_{h}\right)\left(1- \frac{v_{g}y_{\sigma_{1}}}{\sqrt{2\Lambda^{2}+v_{g}^{2}y_{\sigma_{1}}^{2}}} \right)-\frac{2\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{g}^{2}y_{ \sigma_{1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}+v_{g}^{2}y_{\sigma_{1}}^{2}\right)}{v_{ \sigma}\sqrt{\frac{v_{g}y_{\sigma_{1}}\left(\sqrt{2\Lambda^{2}+v_{g}^{2}y_{ \sigma_{1}}^{2}+v_{g}^{4}y_{\sigma_{1}}^{2}}+v_{g}^{2}y_{\sigma_{1}}^{2}}{ \Lambda^{2}}}}+2\sqrt{\frac{\sqrt{2\Lambda^{2}v_{g}^{2}y_{\sigma_{1}}^{2}+v_{g}^
\[\lambda^{(0)}_{N_{3}N_{4}h_{1}} =\frac{16v_{\sigma}y_{\sigma_{2}}^{2}\sin\left(\alpha_{h}\right)}{ \sqrt{\frac{v_{\sigma}y_{\sigma_{2}}\left(v_{\sigma}y_{\sigma_{2}}-\sqrt{2 \Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{2}}^{2}}\right)}{\Lambda^{2}}+2\sqrt{ \frac{v_{\sigma}y_{\sigma_{2}}\left(\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma _{2}}^{2}}+v_{\sigma}y_{\sigma_{2}}\right)}{\Lambda^{2}}+2}}+2 \tag{103}\] \[+\frac{4\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{ \sigma}^{2}y_{\sigma_{2}}^{2}+v_{\sigma}^{4}y_{\sigma_{2}}^{4}}-v_{\sigma}^{2} y_{\sigma_{2}}^{2}\right)}{v_{\sigma}\sqrt{\frac{v_{\sigma}y_{\sigma_{2}}\left( \sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{2}}^{2}}+v_{\sigma}y_{\sigma_{2}} \right)}{\Lambda^{2}}+2\sqrt{\frac{v_{\sigma}y_{\sigma_{2}}^{2}-\sqrt{2 \Lambda^{2}v_{\sigma}^{2}y_{\sigma_{2}}^{2}}+v_{\sigma}y_{\sigma_{2}}^{2}}{ \Lambda^{2}}+2}}}\] \[-\frac{4\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{ \sigma}^{2}y_{\sigma_{2}}^{2}+v_{\sigma}^{4}y_{\sigma_{2}}^{4}}+v_{\sigma}^{2} y_{\sigma_{2}}^{2}\right)}{\sqrt{\frac{v_{\sigma}y_{\sigma_{2}}\left(v_{\sigma}y_{ \sigma_{2}}-\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{2}}^{2}}\right)}{ \Lambda^{2}}+2\sqrt{\frac{\sqrt{2\Lambda^{2}v_{\sigma}^{2}y_{\sigma_{2}}^{2}+v _{\sigma}^{2}y_{\sigma_{2}}^{2}}+v_{\sigma}y_{\sigma_{2}}^{2}}{\Lambda^{2}}+2}}\] \[\lambda^{(0)}_{N_{5}N_{6}h_{1}} =\frac{16v_{\sigma}y_{\sigma_{3}}^{2}\sin\left(\alpha_{h}\right)} {\sqrt{\frac{v_{\sigma}y_{\sigma_{3}}\left(v_{\sigma}y_{\sigma_{3}}-\sqrt{2 \Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}\right)}{\Lambda^{2}}+2\sqrt{ \frac{v_{\sigma}y_{\sigma_{3}}\left(\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma _{3}}^{2}}+v_{\sigma}y_{\sigma_{3}}\right)}{\Lambda^{2}}+2}}}\] (104) \[+\frac{4\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{ \sigma}^{2}y_{\sigma_{3}}^{2}+v_{\sigma}^{4}y_{\sigma_{3}}^{4}}-v_{\sigma}^{2} y_{\sigma_{3}}^{2}\right)}{v_{\sigma}\sqrt{\frac{v_{\sigma}y_{\sigma_{3}}\left( \sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}+v_{\sigma}y_{\sigma_{3}} \right)}{\Lambda^{2}}+2\sqrt{\frac{v_{\sigma}^{2}y_{\sigma_{3}}^{2}-\sqrt{2 \Lambda^{2}v_{\sigma}^{2}y_{\sigma_{3}}^{2}}+v_{\sigma}y_{\sigma_{3}}^{2}}{ \Lambda^{2}}}}}\] \[-\frac{4\sin\left(\alpha_{h}\right)\left(\sqrt{2\Lambda^{2}v_{ \sigma}^{2}y_{\sigma_{3}}^{2}+v_{\sigma}^{4}y_{\sigma_{3}}^{4}}+v_{\sigma}^{2} y_{\sigma_{3}}^{2}\right)}{v_{\sigma}\sqrt{\frac{v_{\sigma}y_{\sigma_{3}}\left(v_{ \sigma}y_{\sigma_{3}}-\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}\right) }{\Lambda^{2}}+2\sqrt{\frac{\sqrt{2\Lambda^{2}v_{\sigma}^{2}y_{\sigma_{3}}^{2} }+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}{\Lambda^{2}}+2}}}\]
\[\lambda^{(0)}_{N_{1}N_{1}h_{1}h_{1}} =8y_{\sigma_{1}}^{2}\sin^{2}\left(\alpha_{h}\right)\left(1-\frac{ v_{\sigma}y_{\sigma_{1}}}{\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{1}}^{2}}}\right) \tag{105}\] \[\lambda^{(0)}_{N_{2}N_{2}h_{1}h_{1}} =8y_{\sigma_{1}}^{2}\sin^{2}\left(\alpha_{h}\right)\left(1+\frac{ v_{\sigma}y_{\sigma_{1}}}{\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{1}}^{2}}}\right)\] (106) \[\lambda^{(0)}_{N_{3}N_{3}h_{1}h_{1}} =8y_{\sigma_{2}}^{2}\sin^{2}\left(\alpha_{h}\right)\left(1-\frac{ v_{\sigma}y_{\sigma_{2}}}{\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{2}}^{2}}}\right)\] (107) \[\lambda^{(0)}_{N_{4}N_{4}h_{1}h_{1}} =8y_{\sigma_{2}}^{2}\sin^{2}\left(\alpha_{h}\right)\left(1+\frac{ v_{\sigma}y_{\sigma_{2}}}{\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{2}}^{2}}}\right)\] (108) \[\lambda^{(0)}_{N_{5}N_{5}h_{1}h_{1}} =8y_{\sigma_{3}}^{2}\sin^{2}\left(\alpha_{h}\right)\left(1-\frac{ v_{\sigma}y_{\sigma_{3}}}{\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}}\right)\] (109) \[\lambda^{(0)}_{N_{6}N_{6}h_{1}h_{1}} =8y_{\sigma_{3}}^{2}\sin^{2}\left(\alpha_{h}\right)\left(1+\frac{ v_{\sigma}y_{\sigma_{3}}}{\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}}\right)\] (110) \[\lambda^{(0)}_{N_{1}N_{2}h_{1}h_{1}} =\frac{16y_{\sigma_{1}}^{2}\sin^{2}\left(\alpha_{h}\right)}{\sqrt {\frac{v_{\sigma}y_{\sigma_{1}}\left(v_{\sigma}y_{\sigma_{1}}-\sqrt{2\Lambda^{2}+v_{ \sigma}^{2}y_{\sigma_{1}}^{2}}\right)}{\Lambda^{2}}+2\sqrt{\frac{v_{\sigma}y_{ \sigma_{1}}\left(\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{1}}^{2}}+v_{\sigma}y_{ \sigma_{1}}\right)}{\Lambda^{2}}+2}}}\] (111) \[\lambda^{(0)}_{N_{3}N_{4}h_{1}h_{1}} =\frac{16y_{\sigma_{2}}^{2}\sin^{2}\left(\alpha_{h}\right)}{\sqrt {\frac{v_{\sigma}y_{\sigma_{2}}\left(v_{\sigma}y_{\sigma_{2}}-\sqrt{2\Lambda^{2}+v_{ \sigma}^{2}y_{\sigma_{2}}^{2}}\right)}{\Lambda^{2}}+2\sqrt{\frac{v_{\sigma}y_{ \sigma_{2}}\left(\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{2}}^{2}}+v_{\sigma}y_{ \sigma_{2}}\right)}{\Lambda^{2}}+2}}}\] (112) \[\lambda^{(0)}_{N_{5}N_{6}h_{1}h_{1}} =\frac{16y_{\sigma_{3}}^{2}\sin^{2}\left(\alpha_{h}\right)}{\sqrt {\frac{v_{\sigma}y_{\sigma_{3}}\left(v_{\sigma}y_{\sigma_{3}}-\sqrt{2\Lambda^{2}+v_ {\sigma}^{2}y_{\sigma_{3}}^{2}}\right)}{\Lambda^{2}}+2\sqrt{\frac{v_{\sigma}y_{ \sigma_{3}}\left(\sqrt{2\Lambda^{2}+v_{\sigma}^{2}y_{\sigma_{3}}^{2}}+v_{\sigma}y_{ \sigma_{3}}\right)}{\Lambda^{2}}+2}}} \tag{113}\]
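For readers who wish to evaluate the one-loop expressions above numerically, the generic loop function of Eq. (107) is straightforward to code. The sketch below (Python; the function name is ours, and it assumes non-degenerate masses so that no denominator vanishes; repeated-index cases such as \(f_{JJh_{2}}\) require the corresponding limit of the expression instead):

```python
import numpy as np

def loop_function(masses, mu):
    """Generic loop function f_{a1...aN} = sum_x m_x^2 log(m_x^2/mu^2) / prod_{y != x} (m_x^2 - m_y^2).

    Assumes all masses are distinct; degenerate arguments need the appropriate
    limit of the expression rather than this direct formula.
    """
    m2 = np.asarray(masses, dtype=float) ** 2
    total = 0.0
    for x in range(len(m2)):
        denom = np.prod([m2[x] - m2[y] for y in range(len(m2)) if y != x])
        total += m2[x] * np.log(m2[x] / mu**2) / denom
    return total

# Example with three distinct masses (in GeV) at a renormalization scale mu = 125 GeV.
print(loop_function([200.0, 300.0, 400.0], mu=125.0))
```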
|
2306.14382
|
Central Limit Theorems and Approximation Theory: Part II
|
In Part I of this article (Banerjee and Kuchibhotla (2023)), we have
introduced a new method to bound the difference in expectations of an average
of independent random vector and the limiting Gaussian random vector using
level sets. In the current article, we further explore this idea using finite
sample Edgeworth expansions and also established integral representation
theorems.
|
Arun Kumar Kuchibhotla
|
2023-06-26T02:19:02Z
|
http://arxiv.org/abs/2306.14382v1
|
# Central Limit Theorems and Approximation Theory: Part II
###### Abstract
In Part I of this article (Banerjee and Kuchibhotla, 2023), we introduced a new method to bound the difference in expectations of an average of independent random vectors and the limiting Gaussian random vector using level sets. In the current article, we further explore this idea using finite sample Edgeworth expansions and also establish integral representation theorems.
## 1 Introduction
A sequence \(\{W_{n}\}\) of random variables in a measurable space \(\mathcal{W}\) converges weakly to another random variable \(W\) if and only if \(\mathbb{E}f(W_{n})\) converges to \(\mathbb{E}f(W)\) for every bounded continuous function \(f:\mathcal{W}\to\mathbb{R}\)(Bhattacharya and Rao, 2010, Theorem 1.3). In other words,
\[\Delta_{f}(W_{n},W):=|\mathbb{E}[f(W_{n})]-\mathbb{E}[f(W)]|,\]
converges to zero for every bounded continuous function \(f:\mathcal{W}\to\mathbb{R}\). One can think of this as an asymptotic statement and ask if there is a finite sample version of this result that implies a tractable bound on \(\Delta_{f}(W_{n},W)\) that depends on \(n\), the distribution of \(W_{n},W,\) and \(f\) and converges to zero as \(n\to\infty\).
The classical central limit theorem implies under a variety of regularity conditions that \(W_{n}=n^{-1/2}\sum_{i=1}^{n}X_{i}\) for independent and identically distributed random variables \(X_{i}\) with mean zero and variance \(\Sigma\) converges in distribution to a mean zero Gaussian random variable \(W\) with variance \(\Sigma\). In this setting, there do exist some results that bound \(\Delta_{f}(W_{n},W)\) for a finite sample size \(n\). Bhattacharya and Rao (2010, Chapter 13) is a classical reference for this when the random variables \(X_{i}\) belong to the Euclidean space \(\mathbb{R}^{k}\) and \(f\) is an arbitrary Borel measurable function. Of course, \(\Delta_{f}(W_{n},W)\) cannot converge to zero for all Borel measurable functions and the bounds presented in Chapter 13 of Bhattacharya and Rao (2010) involve an oscillation function of \(f\) that implies a regularity condition on \(f\). Formally, with \(\|\cdot\|_{2}\) representing the Euclidean norm, define the oscillation function of \(f:\mathbb{R}^{d}\to\mathbb{R}\) at a point \(x\) with radius \(\varepsilon\) as
\[\omega_{f}(x;\varepsilon):=\sup\{|f(x)-f(y)|:\,\|x-y\|_{2}\leq\varepsilon\}.\]
One of the key quantities in the bound on \(\Delta_{f}(W_{n},W)\) is \(\mathbb{E}[\omega_{f}(W,\varepsilon_{n})]\) for some \(\varepsilon_{n}\) converging to zero; see Bhattacharya and Rao (2010, Corollary 11.2, Thms 13.2 - 13.3) for details. Unfortunately, the dependence on the dimension in these bounds is polynomial and hence, these results are not particularly useful for high-dimensional settings where the dimension can grow faster than the sample size.
Moving beyond the Euclidean space (\(\mathbb{R}^{k}\)), for smooth (differentiable) functions \(f\), one can bound \(\Delta_{f}(W_{n},W)\) even in Hilbert and Banach spaces. See, for example, Bentkus and Gotze (1993); Rachev and Yukich (1989) and references therein.
What is lacking in the literature, to our knowledge, is a bound on \(\Delta_{f}\) that is locally adaptive. This means that the bound holds for all Borel measurable functions but at the same time yields the correct rate bound if the function \(f\) is intrinsically low-dimensional or highly smooth. This is the main motivation for the current work. The idea we propose is that if the function \(f\) can be written as \(f(x)=\sum_{j=1}^{\infty}f_{j}(x)\) for a sequence of functions \(f_{j},j\geqslant 1\) with \(R_{J}=\sup_{x}|\sum_{j=J}^{\infty}f_{j}(x)|\to 0\) as \(J\to\infty\), then we get \(\Delta_{f}\leqslant\sum_{j=1}^{J-1}\Delta_{f_{j}}+2R_{J}\). By minimizing the bound over all \(J\), one can bound \(\Delta_{f}\). The resulting bounds need not be optimal in terms of the dependence on \(n\) with this approach, but they will often be adaptive. Moreover, most of the techniques discussed herein are applicable to non-iid random variables.
Organization. The remainder of the article is organized as follows. In Section 2, we summarize the results of Part I along with some more exploration of those results. In Section 3, we describe and review integral representations for functions under several regularity conditions. In Section 4, we discuss non-uniform Edgeworth-type expansions for sums of independent univariate random variables and apply these results to the integral representations. In Section 5, we summarize the article and mention potential future directions.
## 2 Preliminaries
In part I (Banerjee and Kuchibhotla, 2023),1 we have developed several simple results that can bound \(\Delta_{f}\). In this section, we recall those results briefly. For any Borel measurable function \(f:\mathcal{W}\to\mathbb{R}\),
Footnote 1: From now, Banerjee and Kuchibhotla (2023) is referred to as part I.
\[\Delta_{f}(W_{n},W)=\left|\int_{-\infty}^{\infty}[\mathbb{P}(W_{n}\in\mathcal{U}_{f,t})-\mathbb{P}(W\in\mathcal{U}_{f,t})]\,dt\right|, \tag{1}\]
where \(\mathcal{U}_{f,t}=\{w\in\mathcal{W}:\,f(w)\geqslant t\}\) is the upper level set of \(f\). Equality (1) implies that one can control \(\Delta_{f}(W_{n},W)\) in terms of the difference between probabilities of the level sets. Differences between probabilities for sums of independent/dependent random variables/vectors can be controlled using traditional Berry-Esseen bounds. For example, the results of Bentkus (2003b, 2004), Yaroslavtseva (2008), Raic (2019), Gotze (1986), and Paulauskas and Rackauskas (1989), among others, provide such results for a large class of sets. (The last two references also contain results specific to lower-level sets of smooth functions.)
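To illustrate (1) concretely, the following Python sketch (an illustrative Monte Carlo check, not part of the original argument; the test function and sample sizes are arbitrary choices) compares the direct difference of expectations with the level-set integral for a bounded function taking values in \((0,1]\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 30, 50_000
# W_n: standardized mean of n iid centered exponentials; W: standard normal
Wn = (rng.exponential(size=(reps, n)) - 1.0).sum(axis=1) / np.sqrt(n)
W = rng.standard_normal(reps)

f = lambda x: 1.0 / (1.0 + x**2)       # bounded test function with values in (0, 1]
fWn, fW = f(Wn), f(W)
direct = fWn.mean() - fW.mean()

# Level-set form of Delta_f: integrate P(f(W_n) >= t) - P(f(W) >= t) over t in (0, 1].
t_grid = np.linspace(0.0, 1.0, 401)[1:]
diffs = np.array([(fWn >= t).mean() - (fW >= t).mean() for t in t_grid])
level_set = diffs.mean()               # Riemann approximation of the integral over (0, 1]

print(f"direct difference:  {direct:+.5f}")
print(f"level-set integral: {level_set:+.5f}   (agree up to grid discretization)")
```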
It is not difficult to find functions whose level sets belong to these favorable classes. Consider the following simple examples.
Convex sets. If \(f:\mathcal{W}\to\mathbb{R}\) is a quasiconcave function (i.e., \(f(\lambda w_{1}+(1-\lambda)w_{2})\geqslant f(w_{1})\wedge f(w_{2})\) for all \(\lambda\in[0,1]\)), then \(\mathcal{U}_{f,t}\) is a convex set. In fact, this is the characterization of quasiconcave functions (Diewert et al., 1981). Michel (1978) provides bounds on \(\mathbb{P}(W_{n}\in A)-\mathbb{P}(W\in A)\) that depend on \(A\) and converge to zero as \(n\to\infty\), if \(W_{n}\) is an average of \(n\) independent random vectors in \(\mathbb{R}^{d}\) and \(W\) is a Gaussian random vector (\(W\sim N(\mathbb{E}[W_{n}],\mathrm{Var}(W_{n}))\)). A similar result is provided in Hipp (1979) if \(W_{n}\) is an average of mixing random vectors. Also, see Rotar' (1970), Sazonov (1974), Senatov (1982), Fomin (1983), and Jirak (2015) for more results of this flavor.
Euclidean balls. If \(f(w)=g(\|w-a\|_{2}),w\in\mathbb{R}^{d}\) for some non-increasing function \(g\), then the upper level sets of \(f\) are Euclidean balls centered at \(a\in\mathbb{R}^{d}\)2. Bogatyrev et al. (2006) provides bounds on \(\mathbb{P}(W_{n}\in A)-\mathbb{P}(W\in A)\) that depend on the Euclidean ball \(A\) and converge to zero as \(n\to\infty\), if \(W_{n}\) is an average of \(n\) independent random variables in a Hilbert space and \(W\) is the corresponding Gaussian random variable; here \(\|\cdot\|_{2}\) represents the Hilbert space norm. Further results can be found in Ulyanov (1986), Yurinskii (1983), Tikhomirov (1994), Senatov (2011) and Paulauskas and Rackauskas (1989). The class of all functions with Euclidean balls centered at \(a\) as upper level sets is the same as the class of all functions of the form \(w\mapsto g(\|w-a\|_{2})\) for some non-increasing function \(g\).
Footnote 2: \(\|\cdot\|_{2}\) throughout represents the Euclidean norm
Half-spaces. If \(f(w)=g(\langle w,a\rangle-b),w\in\mathcal{W}\) for some inner product space \(\mathcal{W}\) and some non-increasing function \(g\), then the upper level sets of \(f\) are half-spaces (i.e., of the form \(\mathcal{U}_{f,t}:=\{w\in\mathbb{R}^{d}:\,\langle a,w\rangle\geqslant b+g^{-1}(t)\}\)). Bounds on the difference of probabilities for half-spaces can be obtained from univariate Berry-Esseen bounds even if \(W_{n},W\) are multivariate/Hilbert/Banach space valued random variables. This is because \(\mathbb{P}(W_{n}\in\mathcal{U}_{f,t})=\mathbb{P}(\langle a,W_{n}\rangle\geqslant b+g^{-1}(t))\) and hence, if \(W_{n}\) is an average of random variables in some measurable space, then \(\langle a,W_{n}\rangle\) is an average of univariate random variables depending on \(a\); although a simple fact, this was also mentioned in the remark following Corollary 3 of Paulauskas (1976). Such bounds for independent or dependent data can be found in the literature; see, for example, Michel (1978), Heinrich (1985), and Jirak (2015). In all these cases, one can also obtain asymptotic expansions to get precise bounds on \(\Delta_{f}\). We will present an analysis of this kind in the following sections.
The discussion above allows us to get bounds for functions whose level sets belong to a favorable class. This may not always be the case. For example, the sum of two quasiconcave functions need not be quasiconcave and hence, the upper level sets of such a sum need not be convex. However, we can use the simple fact that \(\Delta_{f_{1}+f_{2}}(W_{n},W)\leqslant\Delta_{f_{1}}(W_{n},W)+\Delta_{f_{2}}(W_{n},W)\) to get a bound. This relation need not be restricted to the sum of two functions but can be extended to the sum of uncountably many functions. Formally, we have the following result (proved in Appendix A).
**Proposition 2.1**.: _Suppose \(\{f_{\lambda}:\,\lambda\in\Lambda\}\) is a parametrized class of functions and \(\mu(\cdot)\) is a finite signed measure on \(\Lambda\). Then, for \(g(x)=\int_{\Lambda}f_{\lambda}(x)\mu(d\lambda)\) and for any random variables \(W_{n},W\), we have_
\[\Delta_{g}(W_{n},W)\ \leqslant\ \int_{\Lambda}|\Delta_{f_{\lambda}}(W_{n},W)|\, |\mu|(d\lambda),\]
_where \(|\mu|(\cdot)\) is the variation of \(\mu\)._
There are several simple applications of Proposition 2.1, even with discrete measures \(\mu\) supported on finite or countable sets. For example, it is well-known that any differentiable function with an \(L\)-Lipschitz derivative can be written as a difference between a convex function with \(2L\)-Lipschitz derivative and a quadratic function (Zlobec, 2006).
Proposition 2.1, in particular, applies to functions of the form \(f(x)=\sum_{j=1}^{\infty}\theta_{j}\phi_{j}(x)\) for some coefficients such that \(\sum_{j=1}^{\infty}|\theta_{j}|<\infty\). This is the classical case of basis expansion and for functions defined on Euclidean spaces, it is well-known that Fourier/spline/wavelet bases can approximate any function in \(L_{2}\), at least for functions with bounded support. In these cases, however, the coefficients are only implied to be square summable and not absolutely summable. For Haar bases (which are an example of wavelet bases), \(\phi_{j}(x)\) has level sets that are hyperrectangles, and for
hyperrectangles, several bounds for the difference of probabilities exist (Chernozhukov et al., 2023; Bong et al., 2022; Fang et al., 2023).
Proposition 2.1 plays a crucial role in this paper and we will present several applications of it where \(f_{\lambda}(\cdot)\) are functions whose level sets are either Euclidean balls or half-spaces. In the following section, we provide examples of integral representations of the type \(g(x)=\int_{\Lambda}f_{\lambda}(x)\mu(d\lambda)\) based on the Fourier transform of \(g\).
## 3 Integral Representations
In this section, we present some sufficient conditions under which functions can be represented as finite integrals of functions whose level sets belong to a favorable class (e.g., Euclidean balls or half-spaces).
### Norm balls
In this section, we use the approximation of functions \(f\) by integrals of radial functions (i.e., functions whose level sets are Euclidean balls) to bound \(\Delta_{f}(W_{n},W)\). It is a well-known fact that any function can be approximated by convolving the function with a mollifier (i.e., an infinitely differentiable function that approximates the Dirac delta). Formally, in the finite-dimensional Euclidean space \(\mathbb{R}^{d}\), any function \(f:\mathbb{R}^{d}\to\mathbb{R}\) can be approximated by \(f_{h}\) where \(f_{h}(x)=h^{-d}\int_{\mathbb{R}^{d}}K(x/h,y/h)f(y)dy\), for any function \(K(\cdot,\cdot)\). In particular, Proposition 4.3.31 of Gine and Nickl (2021) states that if \(\int_{\mathbb{R}^{d}}\sup_{v\in\mathbb{R}^{d}}|K(v,v-u)|du<\infty\) and \(\int_{\mathbb{R}^{d}}K(x,y)dy=1\) for all \(x\in\mathbb{R}^{d}\), then \(\|f_{h}-f\|_{\infty}=\sup_{x\in\mathbb{R}^{d}}|f_{h}(x)-f(x)|\) converges to zero. We can use this approximation result with Proposition 2.1 to get a bound for \(\Delta_{f}(W_{n},W)\). Note that
\[|\mathbb{E}[f(W_{n})]-\mathbb{E}[f_{h}(W_{n})]|\leq\mathbb{E}|f(W_{n})-f_{h}(W_{n})|\leq\|f_{h}-f\|_{\infty},\]
and
\[\mathbb{E}[f_{h}(W_{n})]=\frac{1}{h^{d}}\int_{\mathbb{R}^{d}}\mathbb{E}\left[K\left(\frac{W_{n}}{h},\frac{y}{h}\right)\right]f(y)dy,\]
assuming that the right hand side exists. Therefore,
\[\Delta_{f}(W_{n},W)\leq 2\|f_{h}-f\|_{\infty}+\Delta_{f_{h}}(W_{n},W). \tag{2}\]
Because the right hand side depends on the kernel \(K(\cdot,\cdot)\) and bandwidth \(h\), but the left hand side does not, we get
\[\Delta_{f}(W_{n},W)\leq\inf_{K,h}\left\{2\|f_{h}-f\|_{\infty}+\Delta_{f_{h}}( W_{n},W)\right\}. \tag{3}\]
Proposition 4.3.33 of Gine and Nickl (2021) also states that if \(f\in C^{m}(\mathbb{R}^{d})\) (the space of all functions whose \([m]\)-th derivative is bounded and is \(m-[m]\)-Holder continuous), then there exists a (higher-order) kernel such that \(\|f_{h}-f\|_{\infty}\leq C_{f}h^{m}\); a higher-order kernel with order \(\ell\) means that \(\int_{\mathbb{R}^{d}}K(v,u+v)u^{\alpha}du=0\) for all \(v\in\mathbb{R}^{d}\) for every monomial \(u^{\alpha}\) with total degree \(|\alpha|\) less than \(\ell\). Such higher-order kernels can be constructed easily using a technique called twicing (Stuetzle and Mittal, 2006): if \(K\) is a kernel of order \(\ell\), then \(K_{2}=2K-K*K\) is a kernel of order \(2\ell\).3
Footnote 3: We use the notation ’\(*\)’ to denote convolution.
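The twicing construction is easy to check numerically in one dimension. In the sketch below (Python; an illustrative check of the order-doubling claim, not taken from the cited reference), \(K\) is the standard Gaussian kernel, its self-convolution \(K*K\) is the \(N(0,2)\) density, and the twiced kernel \(K_{2}=2K-K*K\) is seen to integrate to one with vanishing first, second, and third moments, i.e., it is a kernel of order 4:

```python
import numpy as np

u = np.linspace(-12.0, 12.0, 24001)
du = u[1] - u[0]

K = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)    # standard Gaussian kernel (order 2)
KK = np.exp(-u**2 / 4) / np.sqrt(4 * np.pi)   # K * K: the N(0, 2) density
K2 = 2 * K - KK                               # twiced kernel

for name, kern in [("K", K), ("K2 = 2K - K*K", K2)]:
    moments = [float(np.sum(u**p * kern) * du) for p in range(5)]
    print(name, [round(m, 6) for m in moments])
# K has a nonzero second moment, while K2 integrates to 1 and its first three moments vanish.
```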
Calculating the infimum in (3) can be difficult in general. To illustrate the usefulness, we consider the Gaussian kernel to obtain a concrete bound, i.e., \(K(u,v)=(2\pi)^{-d/2}\exp(-\|u-v\|_{2}^{2}/2)\).
With this Gaussian kernel, we have
\[\mathbb{E}[f_{h}(W_{n})]=\frac{1}{h^{d}(2\pi)^{d/2}}\int_{\mathbb{R}^{d}}\mathbb{E }\left[\exp\left(-\frac{\|W_{n}-y\|_{2}^{2}}{2h}\right)\right]f(y)dy.\]
Applying (1), we get
\[\mathbb{E}\left[\exp\left(-\frac{\|W_{n}-y\|_{2}^{2}}{2h}\right)\right]=\int_{0}^{1}\mathbb{P}\left(\|W_{n}-y\|_{2}\leqslant\sqrt{2h\log(1/t)}\right)dt.\]
Therefore,
\[\Delta_{f_{h}}(W_{n},W)\leqslant\frac{(2\pi)^{-d/2}}{h^{d}}\int_{\mathbb{R}^{d }\times[0,1]}\left|\mathbb{P}\left(\frac{\|W_{n}-y\|_{2}}{\sqrt{2h\log(1/t)}} \leqslant 1\right)-\mathbb{P}\left(\frac{\|W-y\|_{2}}{\sqrt{2h\log(1/t)}} \leqslant 1\right)\right|f(y)dtdy.\]
This inequality allows us to control \(\Delta_{f_{h}}(W_{n},W)\) using Berry-Esseen bounds for Euclidean balls. The calculations presented in this section can be easily generalized to Hilbert space random variables. As an example, we now proceed to control the right hand side when \(W_{n}\) is a scaled average of \(n\) iid random vectors in \(\mathbb{R}^{d}\) for \(d\geqslant 6\). Following the results of Senatov (1992, 1993), we obtain that if \(W_{n}=n^{-1/2}\sum_{i=1}^{n}X_{i}\) for mean zero random vectors \(X_{i}\in\mathbb{R}^{d}\), then with \(r_{h}(y)=|\sqrt{2h\log(1/t)}-\|y\|_{2}|\),
\[\left|\mathbb{P}\left(\frac{\|W_{n}-y\|_{2}}{\sqrt{2h\log(1/t)}} \leqslant 1\right)-\mathbb{P}\left(\frac{\|W-y\|_{2}}{\sqrt{2h\log(1/t)}} \leqslant 1\right)\right|\] \[\leqslant\frac{C\beta_{3}/n^{1/2}}{1+r_{h}^{3}(y)/\sigma^{3}} \left\{\frac{(2h\log(1/t))^{3/2}}{\sigma_{1}\cdots\sigma_{6}}\exp\left(-\frac {cr_{h}^{2}(y)}{\sigma^{2}}\right)+\frac{1}{\sigma^{3}}+\frac{1}{\sqrt{\sigma _{1}\cdots\sigma_{6}}}\exp\left(-\frac{cr_{h}^{2}(y)}{\sigma^{2}}\right) \right\},\]
where \(\beta_{3}=\mathbb{E}[\|X\|_{2}^{3}]\), \(\sigma^{2}=\mathbb{E}[\|X\|_{2}^{2}]\), and \(\sigma_{1}^{2}\geqslant\sigma_{2}^{2}\geqslant\cdots\geqslant\sigma_{6}^{2}>0\) are the largest \(6\) eigenvalues of \(\Sigma=\mathbb{E}[XX^{\top}].\) Clearly, if \(\|y\|_{2}\geqslant 2\sqrt{2h\log(1/t)}\), then \(r_{h}(y)\geqslant\|y\|_{2}/2\) and hence, the difference between the probabilities can be bounded by
\[\frac{C\beta_{3}n^{-1/2}}{(1+\|y\|_{2}^{3}\mathbf{1}\{\|y\|_{2}\geqslant\sqrt{ 8h\log(1/t)}\}/(8\sigma^{3}))}\left\{\frac{(2h\log(1/t))^{3/2}}{(\sigma_{1} \cdots\sigma_{6})}+\frac{1}{\sigma^{3}}+\frac{1}{(\sigma_{1}\cdots\sigma_{6})^ {1/2}}\right\}.\]
Therefore, to bound \(\Delta_{f_{h}}(W_{n},W)\), it suffices to compute the integral of the right hand side weighted by \(f(y)\) over \(\mathbb{R}^{d}\times[0,1].\) A simple calculation (presented in Appendix B) yields
\[\Delta_{f_{h}}(W_{n},W)\] \[\leqslant\frac{C\beta_{3}}{n^{1/2}(2\pi)^{d/2}}\left[\frac{h^{3/ 2}}{(\sigma_{1}\cdots\sigma_{6})}+\frac{1}{\sigma^{3}}+\frac{1}{(\sigma_{1} \cdots\sigma_{6})^{1/2}}\right]\left(\int_{\mathbb{R}^{d}}\left\{\frac{8 \sigma^{3}}{8\sigma^{3}+h^{3}\|y\|_{2}^{3}}+\exp\left(-\frac{\|y\|_{2}^{2}}{1 6}\right)\right\}f(yh)dy\right).\]
As long as the integrals on the right hand side are finite (which holds, for example, if \(f\) belongs to the Schwartz class), the right hand side converges to zero as \(n\to\infty\) for every \(h>0\). In particular, if we have \(g(x)=\int\!\exp(-\|x-y\|_{2}^{2}/(2h))f(y)dy/(\sqrt{2\pi}h)^{d}\), then \(\Delta_{g}(W_{n},W)=\Delta_{f_{h}}(W_{n},W)\) and so the bound above applies. The same bound also applies to higher-order twicing kernels starting with a Gaussian kernel, because the convolution of a Gaussian with itself is another Gaussian. Finally, one can extend the result above to obtain bounds for \(\Delta_{g}(W_{n},W)\) for functions \(g\) of the form \(g(x)=\int_{\mathbb{R}^{d}}\int_{0}^{\infty}h^{-d}K(\|x-y\|_{2}/h)f(y)\pi(h)dydh\), for some function \(\pi(\cdot)\); this follows by integrating the above bound with respect to \(\pi(h)\). For a class of functions that satisfy such a representation
for some choices of \(K(\cdot)\) and \(\pi(\cdot)\), see Girosi and Anzellotti (1992) and Girosi (1994). In general, for any function \(f\), we apply inequality (2) and minimize over \(h>0\).
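To make the smoothing-error term in (2) concrete, the following Python sketch (illustrative only; the test function is arbitrary and \(h\) is used here as an ordinary bandwidth, which differs from the parametrization in the displays above by a reparametrization) evaluates a one-dimensional Gaussian smooth \(f_{h}\) on a grid and reports \(\sup_{x}|f_{h}(x)-f(x)|\) for a few bandwidths:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) * np.cos(3 * x)          # a smooth, rapidly decaying test function

def gaussian_smooth(f_vals, x, h):
    # f_h(x) = (1 / (sqrt(2*pi) * h)) * integral of exp(-(x - y)^2 / (2 h^2)) f(y) dy, on the grid
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2) / (np.sqrt(2 * np.pi) * h)
    return K @ f_vals * dx

for h in [0.5, 0.2, 0.1, 0.05]:
    fh = gaussian_smooth(f, x, h)
    interior = slice(200, -200)                # avoid truncation effects near the grid boundary
    print(f"h = {h:4.2f}   sup|f_h - f| ~ {np.max(np.abs(fh - f)[interior]):.4f}")
```

As expected, the approximation error decreases as \(h\) shrinks, which is the term traded off against \(\Delta_{f_{h}}(W_{n},W)\) in (3).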
We end this section with a discussion of the relation to reproducing kernel Hilbert spaces (RKHS). For any positive definite function \(K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\)(Wainwright, 2019, Definition 12.6), the RKHS corresponding to \(K\) is defined as the unique Hilbert space \(\mathbb{H}_{K}=\{f:\,f(x)=\int_{\mathcal{X}}K(x,y)f(y)dy\;\forall\;x\in \mathcal{X}\}\). Here \(\mathcal{X}\) is an arbitrary metric space. For any function \(f\in\mathbb{H}_{K}\), the techniques developed in this section can be readily applied. In particular, we have
\[\Delta_{f}(W_{n},W) =\left|\int_{\mathcal{X}}\mathbb{E}[K(W_{n},y)-K(W,y)]f(y)dy \right|, \tag{4}\] \[\leqslant\int_{\mathcal{X}}|\mathbb{E}[K(W_{n},y)-K(W,y)]|\;|f(y) |dy.\]
The RKHS induced by the Gaussian kernel \(K(x,y)=\exp(-\|x-y\|_{2}^{2}/\gamma)\) is explicitly described in Minh (2010, Theorem 1). For all such functions, the bound \(\Delta_{f_{h}}(W_{n},W)\) can be used with a fixed \(h\); it is also known that the Gaussian RKHS is dense in \(L_{p}\) for any \(p\geqslant 1\); see Steinwart and Christmann (2008, Sec. 4.6 and Thm. 4.63). Furthermore, the classical smoothness class of Sobolev space \(W_{2}^{m}(\mathbb{R}^{d})\) is also an RKHS with \(m>d/2\)(Novak et al., 2018). The corresponding kernel can also be written as a radial function that involves Bessel functions of the third kind; see Schaback (2007, Theorem 9.9) for more details. Also, see De Vito et al. (2021) for extensions of RKHS defined on manifolds.
What is the main takeaway from the results in this section? We have provided a general recipe to obtain bounds for \(\Delta_{f}(W_{n},W)\) for general \(f\). If \(f\) is approximable by a smooth function, then one can get good bounds on \(\Delta_{f}(W_{n},W)\). In particular, when \(f\) belongs to an RKHS with radial kernel \(K(\cdot,\cdot)\), then the bounds obtained on \(\Delta_{f}(W_{n},W)\) do not have excessive dependence on the dimension and on the covariance matrix of the underlying random variables. In fact, the explicit dependence on the dimension \(d\) actually decreases with increasing dimension. The results also apply to functions defined on a Hilbert space.
### Half-spaces
In this section, we provide integral representations of functions \(f\) in terms of functions whose level sets are half-spaces. As mentioned before, in this case, we can obtain results for high-/infinite-dimensional random variables using results for univariate random variables. Following part I, we start with results from the neural networks literature to provide sufficient conditions on functions for integral representations.
We start with the recent work of Klusowski and Barron (2018) and extend it to present integral representations for functions defined on \(\mathbb{R}^{d}\) (instead of \([-1,1]^{d}\) used in Klusowski and Barron (2018)). In this section, we restrict to functions \(f\) for which there is a Fourier representation of the form
\[f(x)=\int_{\mathbb{R}^{d}}e^{i\langle\omega,x\rangle}\widetilde{f}(\omega)d\omega,\]
for some complex function \(\widetilde{f}(\cdot)\). In what follows, we show that under certain weighted integrability assumptions on \(\widetilde{f}\), \(f\) satisfies an integral representation in terms of functions whose level sets are half-spaces. Theorem 2 of Klusowski and Barron (2018) proved that if \(f\) is defined on \([-1,1]^{d}\) and \(v_{f,2}=\int_{\mathbb{R}^{d}}\|\omega\|_{1}^{2}|\widetilde{f}(\omega)|d\omega<\infty\), then there exists a probability measure \(P\) on \(\{-1,1\}\times[0,1]\times\mathbb{R}^{d}\) such that
\[f(x)=f(0)+\langle x,\nabla f(0)\rangle-v\int_{\{-1,1\}\times[0,1]\times\mathbb{ R}^{d}}(z\langle a,x\rangle-t)_{+}\text{sgn}(\cos(\|\omega\|_{1}zt+b(\omega)))dP(z,t, \omega),\]
for all \(x\in[-1,1]^{d}\), where \(b(\omega)\) is defined by the relation \(\widetilde{f}(\omega)=e^{ib(\omega)}|\widetilde{f}(\omega)|\). The choice of the \(\|\cdot\|_{1}\)-norm here is rather arbitrary for the purpose of central limit theorems (but is well-motivated in the context of neural network learning). The predecessor work Barron (1993) actually uses the \(\|\cdot\|_{2}\)-norm in place of the \(\|\cdot\|_{1}\)-norm and derives a similar representation when \(v_{f,1}^{(2)}=\int_{\mathbb{R}^{d}}\|\omega\|_{2}|\widetilde{f}(\omega)|d\omega<\infty\). The following theorem presents a minor extension of the results of Klusowski and Barron (2018) that does not require the specification of the norm or restrict the domain.
**Theorem 3.1**.: _Let \(f:\mathbb{R}^{d}\to\mathbb{R}\) be a function with Fourier transform \(\widetilde{f}:\mathbb{R}^{d}\to\mathbb{C}\). Then for all \(x\in\mathbb{R}^{d}\) such that_
\[\int_{\mathbb{R}^{d}}\min\left\{2|\langle\omega,x\rangle|,\,|\langle\omega,x \rangle|^{2}/2\right\}|\widetilde{f}(\omega)|d\omega\ <\ \infty, \tag{5}\]
_we have_
\[f(x)=f(0)+f^{\prime}(0)[x]-\int_{\mathbb{R}^{d}}\int_{0}^{\infty}\mathbb{E}_{ \varepsilon}[(\varepsilon\langle\omega,x\rangle-u)_{+}\cos(u+\varepsilon b( \omega))]|\widetilde{f}(\omega)|dud\omega,\]
_where \(b(\cdot)\) is defined via \(\widetilde{f}(\omega)=|\widetilde{f}(\omega)|e^{ib(\omega)}\) and \(\mathbb{E}_{\varepsilon}[g(\varepsilon,u,\omega)]=(g(1,u,\omega)+g(-1,u, \omega))/2\) for any function \(g\). Moreover, if for any two random variables \(W,W_{n}\in\mathbb{R}^{d}\) with \(\mathbb{E}[W]=\mathbb{E}[W_{n}]=0\),_
\[\int_{\mathbb{R}^{d}}\mathbb{E}[\min\{2|\langle\omega,W_{n} \rangle|,|\langle\omega,W_{n}\rangle|^{2}/2\}]|\widetilde{f}(\omega)|d\omega <\infty, \tag{6}\] \[\int_{\mathbb{R}^{d}}\mathbb{E}[\min\{2|\langle\omega,W\rangle|, |\langle\omega,W\rangle|^{2}/2\}]|\widetilde{f}(\omega)|d\omega <\infty,\]
_then_
\[\Delta_{f}(W,W_{n}) =\left|\int_{\mathbb{R}^{d}}\int_{0}^{\infty}\mathbb{E}_{ \varepsilon}[\mathbb{E}\{(\varepsilon\langle\omega,W_{n}\rangle-u)_{+}-( \varepsilon\langle\omega,W\rangle-u)_{+}\}\cos(u+\varepsilon b(\omega))]| \widetilde{f}(\omega)|dud\omega\right|, \tag{7}\] \[\leq\mathbb{E}_{\varepsilon}\left[\int_{\mathbb{R}^{d}}\int_{0}^ {\infty}|\mathbb{E}[(\varepsilon\langle\omega,W_{n}\rangle-u)_{+}-( \varepsilon\langle\omega,W\rangle-u)_{+}]|\widetilde{f}(\omega)|dud\omega \right].\]
Theorem 3.1 is stated to represent (or bound) \(\Delta_{f}(W_{n},W)\) in terms of \(\Delta_{g}(W_{n},W)\) where \(g(x)=\langle\omega,x\rangle\) for some \(\omega\in\mathbb{R}^{d}\). Because \(g(W_{n})\) is an average of univariate random variables if \(W_{n}\) is an average of random variables in \(\mathbb{R}^{d}\), we can control \(\Delta_{g}(W_{n},W)\) easily even for dependent random variables, and without any explicit dimension dependence. The condition (5) stems from the inequality \(|e^{iz}-1-iz|\leq\min\{2|z|,|z|^{2}/2\}.\) Note that assumption (5) implies that the function \(f\) is differentiable at zero. One can also obtain a result similar to Theorem 3.1 with \((\varepsilon\langle\omega,x\rangle-u)_{+}\) replaced by \((\varepsilon\langle\omega,x\rangle-u)_{+}^{s}\) for \(s\geq 0\) by replacing the assumption (5) with \(\int_{\mathbb{R}^{d}}\min\{|\langle\omega,x\rangle|^{s},|\langle\omega,x\rangle|^{s+1}\}|\widetilde{f}(\omega)|d\omega<\infty\). This revised assumption implies that the function is \(s\)-times differentiable at zero.
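The one-dimensional building block behind this representation can be checked numerically: integrating by parts shows that, for real \(z\) and \(b\), \(\int_{0}^{\infty}(z-u)_{+}\cos(u+b)\,du+\int_{0}^{\infty}(-z-u)_{+}\cos(u-b)\,du=\cos b-z\sin b-\cos(z+b)\), which is the identity that produces the ReLU-type integrand once \(z=\langle\omega,x\rangle\) is inserted and the result is integrated against \(|\widetilde{f}(\omega)|\). A short Python check (illustrative only):

```python
import numpy as np

def lhs(z, b, umax=10.0, num=200_001):
    # trapezoid approximation of
    # \int_0^umax (z - u)_+ cos(u + b) du + \int_0^umax (-z - u)_+ cos(u - b) du
    u = np.linspace(0.0, umax, num)
    du = u[1] - u[0]
    integrand = np.maximum(z - u, 0.0) * np.cos(u + b) + np.maximum(-z - u, 0.0) * np.cos(u - b)
    return float(np.sum((integrand[1:] + integrand[:-1]) * 0.5) * du)

def rhs(z, b):
    return np.cos(b) - z * np.sin(b) - np.cos(z + b)

for z, b in [(1.3, 0.4), (-2.1, 1.0), (0.0, -0.7), (5.0, 2.2)]:
    print(f"z = {z:+.1f}, b = {b:+.1f}:  lhs = {lhs(z, b):+.6f}   rhs = {rhs(z, b):+.6f}")
```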
In what follows, we provide further integral representations for functions under regularity conditions akin to (5). The following result is modeled after the result of Irie and Miyake (1988); also see Eqs. (19)-(21) of Siegel and Xu (2020).
**Theorem 3.2**.: _Suppose \(h:\mathbb{R}\to\mathbb{R}\) is an integrable function with Fourier transform \(\widetilde{h}(\cdot)\) such that \(\widetilde{h}(a)\neq 0\) for some \(a\in\mathbb{R}\). Then any function \(f:\mathbb{R}^{d}\to\mathbb{R}\) with an integrable Fourier transform \(\widetilde{f}(\cdot)\) can be represented as_
\[f(x)=\int_{\mathbb{R}^{d}}\int_{\mathbb{R}}\frac{1}{2\pi|\widetilde{h}(a)|}h\left(a^{-1}\langle\omega,x\rangle+u\right)|\widetilde{f}(\omega)|\cos(au+c_{h}-b(\omega))dud\omega,\quad\text{for all}\quad x\in\mathbb{R}^{d},\]
_where \(\widetilde{h}(a)=|\widetilde{h}(a)|e^{ic_{h}}\), and \(\widetilde{f}(\omega)=|\widetilde{f}(\omega)|e^{ib(\omega)}\)._
Theorem 3.2 can be used in the same way as Theorem 3.1 to get bounds on \(\Delta_{f}\). In this result also, we only need to study \(\mathbb{E}[h(\langle\omega,W_{n}\rangle+u)-h(\langle\omega,W\rangle+u)]\) for fixed \(\omega\) and \(u\). If \(W_{n}\) is an average of random variables in \(\mathbb{R}^{d}\), then \(\langle\omega,W_{n}\rangle\) is also an average of univariate random variables and hence, univariate central limit theorems can be used to bound \(\Delta_{f}\) for arbitrary dimension \(d\geqslant 1\). A limitation of Theorem 3.2 is that the activation function \(h(\cdot)\) is required to be integrable and several commonly used functions such as logistic, ReLU, or Heaviside functions do not satisfy this condition. Interestingly, there is a simple way to rectify this limitation. Lemma 1 of Funahashi (1989) shows that for any non-constant, bounded, monotone increasing continuous function \(\phi:\mathbb{R}\to\mathbb{R}\), \(h(t)=\phi(t+\alpha)-\phi(t-\alpha)\) is an integrable function for any \(\alpha>0.\) Furthermore, there exists an \(a\in\mathbb{R}\) such that \(\widetilde{h}(a)\neq 0\). The advantage of using a monotonically increasing bounded activation function \(h\) is that the right side of (1) is a finite integral. A similar integral representation is also obtained in Makovoz (1998) under the assumption that \(\sup_{u:|u|_{2}=1}\int_{0}^{\infty}r^{d}|\widetilde{f}(ru)|dr<\infty\); see Remark 1 of Klusowski and Barron (2018).
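As a small illustration of this construction (Python; the logistic function is used as \(\phi\) with \(\alpha=1\), an arbitrary choice), the difference \(h(t)=\phi(t+\alpha)-\phi(t-\alpha)\) is a nonnegative integrable bump whose integral equals \(2\alpha\), and its Fourier transform is numerically nonzero at, e.g., \(a=1\), so it can serve as the activation in Theorem 3.2:

```python
import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
alpha = 1.0

t = np.linspace(-60.0, 60.0, 240001)
dt = t[1] - t[0]
h = sigmoid(t + alpha) - sigmoid(t - alpha)     # integrable bump built from a bounded monotone phi

print("integral of |h| :", float(np.sum(np.abs(h)) * dt))      # equals 2*alpha = 2 up to truncation
a = 1.0
h_tilde = np.sum(h * np.exp(-1j * a * t)) * dt                  # Fourier transform of h at frequency a
print("|h_tilde(1)|    :", float(abs(h_tilde)))                 # nonzero, as required by Theorem 3.2
```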
Similar to the use of the Fourier transform, one can obtain integral representations of functions using other transforms such as the Radon transform. We present one such example below and leave a detailed study for future work. The following result is taken from Kainen et al. (2010). For any \(u\in\mathbb{R}^{d}\) such that \(\|u\|_{2}=1\) and \(b\in\mathbb{R}\), define
\[H^{-}_{u,b}=\{x\in\mathbb{R}^{d}:\,\langle u,x\rangle+b\leqslant 0\}.\]
The Laplacian operator \(\Delta\) is defined by \(\Delta g=\sum_{j=1}^{d}\partial^{2}g/\partial x_{j}^{2}\), for a twice differentiable function \(g:\mathbb{R}^{d}\to\mathbb{R}\). For a positive integer \(m\geqslant 1\), \(\Delta^{m}\) denotes the Laplacian iterated \(m\) times, while \(\Delta^{0}\) is the identity operator. Define \(k_{d}=2[(d+1)/2]\). A function \(f:\mathbb{R}^{d}\to\mathbb{R}\) is said to have controlled decay if \(f\) is \(k_{d}\)-times continuously differentiable and for each multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{d})\) with \(\sum_{j=1}^{d}|\alpha_{j}|\leqslant k_{d}\), there exists \(\varepsilon>0\) such that \(\lim_{\|x\|_{2}\to\infty}\partial^{\alpha}f(x)\|x\|_{2}^{|\alpha|+\varepsilon}=0\); this condition means that the derivatives of order at most \(k_{d}\) converge to zero at infinity at least as fast as a polynomial. For any function \(f\) of controlled decay, define
\[w_{f}(u,b)=a_{d}\times\left\{\begin{aligned} &\int_{H^{-}_{u,b}}\Delta^{k_{d}/2}f(y)dy, &\text{if $d$ is odd},\\ &\int_{\mathbb{R}^{d}}\Delta^{k_{d}/2}f(y)\alpha(\langle u,y \rangle+b)dy,&\text{if $d$ is even},\end{aligned}\right.\]
where \(a_{d}=(-1)^{(d-1)/2}/(2(2\pi)^{d-1})\) if \(d\) is odd and \(a_{d}=(-1)^{(d-2)/2}/(2\pi)^{d}\) if \(d\) is even, and \(\alpha(t)=t\log(e/|t|)\) for \(t\neq 0\) with \(\alpha(0)=0\). With these notations at hand, Theorem 4.2 of Kainen et al. (2010) states that any function \(f\) of controlled decay satisfies
\[f(x)=\int_{S^{d-1}\times\mathbb{R}}w_{f}(u,b)\mathbf{1}\{\langle u,x\rangle+b \geqslant 0\}dudb, \tag{8}\]
where \(S^{d-1}=\{u\in\mathbb{R}^{d}:\,\|u\|_{2}=1\}\). For more representations related to the Radon transform, see Petrosyan et al. (2020) and Abdeljawad and Grohs (2022).
Inequalities of the type (4) and (7) allow one to easily bound \(\Delta_{f}(W_{n},W)\) for a large class of functions (in fact, many of which are dense in \(L_{2}\)) in terms of \(\Delta_{g}(W_{n},W)\) for a small class of functions \(g\). (For (4), it suffices to bound \(\mathbb{E}[K(W_{n},y)-K(W,y)]\) for each \(y\) and for (7), it suffices to bound \(\mathbb{E}[(\varepsilon\langle\omega,W_{n}\rangle-u)_{+}-(\varepsilon\langle\omega,W\rangle-u)_{+}]\) for each \(\varepsilon\in\{-1,1\},u\in[0,\infty)\), and \(\omega\in\mathbb{R}^{d}\)). Moreover, the equalities in (4), (7), Theorem 3.2, and (8) also allow one to get precise asymptotics of \(\Delta_{f}(W_{n},W)\) via upper and lower bounds.
## 4 Expansions and Applications
In Section 3, we provided several integral representations for functions in terms of Euclidean balls and half-spaces. In this section, we review non-uniform expansions for the difference of probabilities for averages of univariate random variables and consider the application of these expansions for precise bounds on \(\Delta_{f}\). In the discussion to follow, we restrict ourselves to independent and identically distributed random variables. Extensions to non-independent and/or non-identically distributed random variables are definitely of interest but will be explored elsewhere.
Let \(W_{1},\ldots,W_{n}\) be a sequence of independent and identically distributed real-valued random variables with mean zero and variance \(\sigma^{2}>0\). Set \(v(b)=\mathbb{E}[e^{ibW_{1}}]\), and \(\beta_{3}=\mathbb{E}[|W_{1}|^{3}]\). Theorem 5.18 of Petrov (1995) (with \(k=3\)) implies the following result.
**Theorem 4.1**.: _Let \(Z\sim N(0,1)\). If \(\beta_{3}<\infty\), then there exists a constant \(C>0\) such that for all \(x\in\mathbb{R}\),_
\[\begin{split}&\left|\mathbb{P}\left(\frac{1}{\sqrt{n\sigma^{2}}} \sum_{i=1}^{n}W_{i}\leqslant x\right)-\mathbb{P}(Z\leqslant x)-\frac{(1-x^{2} )e^{-x^{2}/2}\mathbb{E}[W_{1}^{3}]}{6\sqrt{2\pi n}\sigma^{3}}\right|\\ &\leqslant C\frac{\mathbb{E}[|W_{1}|^{3}\mathbf{1}\{|W_{1}| \geqslant\sigma(1+|x|)n^{1/2}\}]}{\sigma^{3}n^{1/2}(1+|x|)^{3}}\\ &\quad+C\frac{\mathbb{E}[|W_{1}|^{4}\mathbf{1}\{|W_{1}|<\sigma(1 +|x|)n^{1/2}\}]}{\sigma^{4}n(1+|x|)^{4}}\\ &\quad+C\left(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+ \frac{1}{2n}\right)^{n}\frac{n^{6}}{(1+|x|)^{4}}.\end{split} \tag{9}\]
The assumption of \(\beta_{3}<\infty\) implies that the right hand side converges to zero as \(n\to\infty\) as long as \(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|<1\), but cannot guarantee any specific rate of convergence. Stronger assumptions such as \(\mathbb{E}[|W_{1}|^{3+\gamma}]<\infty\) can allow one to obtain a specific decay rate (with respect to \(n\)) for the right hand side. In particular, if \(\mathbb{E}[|W_{1}|^{4}]<\infty\), then
\[\begin{split}&\left|\mathbb{P}\left(\frac{1}{\sqrt{n\sigma^{2}}} \sum_{i=1}^{n}W_{i}\leqslant x\right)-\mathbb{P}(Z\leqslant x)-\frac{(1-x^{2} )e^{-x^{2}/2}\mathbb{E}[W_{1}^{3}]}{6\sqrt{2\pi n}\sigma^{3}}\right|\\ &\quad\leqslant C\frac{\mathbb{E}[|W_{1}|^{4}]}{\sigma^{4}n(1+|x |)^{4}}+C\left(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+\frac{1}{2n} \right)^{n}\frac{n^{6}}{(1+|x|)^{4}}.\end{split} \tag{10}\]
The advantage of the results of the type (9) or (10) is that they provide a precise description of the difference of probabilities along with a bound that depends on \(x\). These features will turn out to be very useful when applying them to the integral representations from the previous section.
### Application to ReLU functions
For any random variable \(U\), we have
\[\mathbb{E}[(U-t)_{+}]=\int_{0}^{\infty}\mathbb{P}((U-t)_{+}>s)\,ds=\int_{0}^{\infty}\mathbb{P}(U>t+s)ds=\int_{0}^{\infty}(1-F_{U}(s+t))ds,\]
where \(F_{U}(\cdot)\) is the cumulative distribution function of \(U\). Therefore,
\[\Delta_{n,\mathrm{ReLU}}(t) :=\mathbb{E}\left(\frac{1}{\sqrt{n\sigma^{2}}}\sum_{i=1}^{n}W_{i}-t \right)_{+}-\mathbb{E}[(Z-t)_{+}]\] \[=\int_{0}^{\infty}\left[\mathbb{P}(Z\leqslant t+s)-\mathbb{P} \left(\frac{1}{\sqrt{n\sigma^{2}}}\sum_{i=1}^{n}W_{i}\leqslant t+s\right) \right]ds.\]
Inequality (10) now implies that
\[\left|\Delta_{n,\mathrm{ReLU}}(t)+\frac{\mathbb{E}[W_{1}^{3}]}{6\sqrt{2\pi n}\sigma^{3}}\int_{0}^{\infty}{(1-(t+s)^{2})e^{-(t+s)^{2}/2}ds}\right|\] \[\quad\leqslant C\left[\frac{\mathbb{E}[|W_{1}|^{4}]}{\sigma^{4}n}+n^{6}\left(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+\frac{1}{2n}\right)^{n}\right]\int_{0}^{\infty}{\frac{1}{(1+|t+s|)^{4}}ds}.\]
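For completeness, the simplification from the display above to Proposition 4.2 below only uses the elementary calculation
\[\int_{0}^{\infty}(1-(t+s)^{2})e^{-(t+s)^{2}/2}ds=\int_{t}^{\infty}(1-u^{2})e^{-u^{2}/2}du=\big{[}u\,e^{-u^{2}/2}\big{]}_{t}^{\infty}=-t\,e^{-t^{2}/2},\]
together with the fact that \(\int_{0}^{\infty}(1+|t+s|)^{-4}ds\) equals the function \(\kappa(t)\) defined in Proposition 4.2 below.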
A similar result can be obtained from Theorem 4.1. Simplifying the bound above, we have proved the following result.
**Proposition 4.2**.: _Under the notation above, we have_
\[\left|\Delta_{n,\mathrm{ReLU}}(t)-\frac{te^{-t^{2}/2}\mathbb{E}[W_{1}^{3}]}{6\sqrt{2\pi n}\sigma^{3}}\right|\leqslant C\left[\frac{\mathbb{E}[|W_{1}|^{4}]}{\sigma^{4}n}+n^{6}\left(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+\frac{1}{2n}\right)^{n}\right]\kappa(t), \tag{11}\]
_where_
\[\kappa(t):=\left\{\begin{array}{ll}(3(1+t)^{3})^{-1},&\mbox{if }t\geqslant 0,\\ 2/3-(3(1-t)^{3})^{-1},&\mbox{if }t<0.\end{array}\right.\]
_Moreover,_
\[\left|\int_{0}^{\infty}\Delta_{n,\mathrm{ReLU}}(t)dt-\frac{\mathbb{E}[W_{1}^{3}]}{(\mathbb{E}[|W_{1}|^{2}])^{3/2}}\frac{1}{6\sqrt{2\pi n}}\right| \tag{12}\] \[\quad\leqslant\frac{C}{6}\left[\frac{\mathbb{E}[|W_{1}|^{4}]}{\sigma^{4}n}+n^{6}\left(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+\frac{1}{2n}\right)^{n}\right].\]
It is easy to see that \(\kappa(t)\to 0\) as \(t\to\infty\) and \(\kappa(t)\to 2/3\) as \(t\to-\infty\). Inequality (11) in Proposition 4.2 depends on the characteristic function \(v(\cdot)\) of the random variable \(W_{1}\). This dependence is a potential source of sub-optimality in the bound. Borisov and Skilyagina (1996) proved a bound for \(|\mathbb{E}[f((n\sigma^{2})^{-1/2}\sum_{i=1}^{n}W_{i})]-\mathbb{E}[f(Z)]|\) for twice differentiable functions \(f\) without any restriction on the characteristic function. We believe one can apply a similar technique to get a better bound for \(\Delta_{n,\mathrm{ReLU}}(t)\), but we leave this for future work.
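To illustrate (11) numerically, the following minimal sketch estimates \(\Delta_{n,\mathrm{ReLU}}(t)\) by Monte Carlo for centered exponential summands (so that \(\mathbb{E}[W_{1}^{3}]=2\) and \(\sigma=1\)) and compares it with the leading term \(te^{-t^{2}/2}\mathbb{E}[W_{1}^{3}]/(6\sqrt{2\pi n}\sigma^{3})\) in (11). The choice of distribution, the value of \(t\), and the sample sizes are illustrative assumptions only, and Monte Carlo noise becomes noticeable at the larger values of \(n\).

```python
import numpy as np
from math import erfc, exp, pi, sqrt

rng = np.random.default_rng(0)

def mc_relu_mean(t, n, num_mc=400_000, chunk=25_000):
    # Monte Carlo estimate of E[(S_n - t)_+] for the standardized sum S_n of
    # n i.i.d. centered Exp(1) variables (mean 0, variance 1, E[W^3] = 2).
    total, done = 0.0, 0
    while done < num_mc:
        m = min(chunk, num_mc - done)
        s = (rng.exponential(1.0, size=(m, n)).sum(axis=1) - n) / sqrt(n)
        total += np.maximum(s - t, 0.0).sum()
        done += m
    return total / num_mc

def gauss_relu_mean(t):
    # Closed form E[(Z - t)_+] = phi(t) - t * (1 - Phi(t)) for Z ~ N(0, 1).
    return exp(-t**2 / 2) / sqrt(2 * pi) - t * 0.5 * erfc(t / sqrt(2))

t, EW3, sigma = 0.5, 2.0, 1.0
for n in (25, 100, 400):
    delta = mc_relu_mean(t, n) - gauss_relu_mean(t)
    leading = t * exp(-t**2 / 2) * EW3 / (6 * sqrt(2 * pi * n) * sigma**3)
    print(f"n={n:4d}  Delta ~ {delta:+.4f}   leading term {leading:+.4f}")
```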
Inequality (12) is useful in the context of Theorem 3.1, and in particular, (6). Furthermore, inequality (12) is also related to the bounds for ideal metrics. For example, \(1\)-Wasserstein distance (or ideal metric of order \(1\)) between two random variables \(U\) and \(V\) is given by
\[d_{1}^{\mathrm{Wass}}(U,V)=\sup_{f\in\mathcal{F}_{1}}|\mathbb{E}[f(U)]-\mathbb{ E}[f(V)]|=\int_{\mathbb{R}}|\mathbb{P}(U\leqslant s)-\mathbb{P}(V\leqslant s)|ds,\]
where \(\mathcal{F}_{1}:=\{f:\mathbb{R}\to\mathbb{R}\,|\,\sup_{x\neq y}|f(x)-f(y)|/|x- y|\leqslant 1\}\) is the class of all \(1\)-Lipschitz functions. Clearly, \(\Delta_{n,\mathrm{ReLU}}(t)\leqslant d_{1}^{\mathrm{Wass}}((n\sigma^{2})^{-1/ 2}\sum_{i=1}^{n}W_{i},\,Z)\) for all \(t\in\mathbb{R}\). Bounds on \(d_{1}^{\mathrm{Wass}}(\cdot,\cdot)\) for
scaled averages of independent/dependent random variables are referred to as \(L_{1}\) Berry-Esseen bounds (Erickson, 1973). Although \(L_{1}\) Berry-Esseen bounds are not as precise as (12), they may suffice for the purpose of obtaining bounds for \(\Delta_{f}(W_{n},W)\) using Theorem 3.2 or (8). Optimal order \(L_{1}\) Berry-Esseen bounds for the average of independent/dependent random variables can be found in Erickson (1974), Chen (1986), Chen and Shao (2004), Goldstein (2010), Van Dung et al. (2014), Sunklodas (2007), and Fan and Ma (2020). In particular, Theorems 3.2-3.4 of Bentkus (2003a) imply bounds for ideal metrics of order \(k\geqslant 1\) for sums of independent random variables, Theorems 1-2 of Sunklodas (2007) imply bounds for the ideal metric of order \(1\) for sums of strongly mixing (or \(\alpha\)-mixing) random variables, Theorems 4-5 of Van Dung et al. (2014) imply bounds for the ideal metric of order \(1\) for martingales. (Note that \(L_{1}\) Berry-Esseen bounds can be obtained from non-uniform Berry-Esseen bounds.) Inequality (12) suffices for using Theorem 3.1. Inequality (12) is related to ideal metrics of order \(2\); see Section 2.10 of Senatov (2011). The ideal metric of order \(2\) is given by
\[\zeta_{2}(U,V):=\sup_{f\in\mathcal{F}_{2}}|\mathbb{E}[f(U)]-\mathbb{E}[f(V)] |=\int_{\mathbb{R}}|\mathbb{E}[(U-t)_{+}]-\mathbb{E}[(V-t)_{+}]|\,dt,\]
where \(\mathcal{F}_{2}=\{f:\mathbb{R}\rightarrow\mathbb{R}\,|\,\|f^{(2)}\|_{\infty}\leqslant 1\}\) is the class of all functions whose second derivative is uniformly bounded by \(1\).
What is interesting to note from (11) in Proposition 4.2 is that if either \(\mathbb{E}[W_{1}^{3}]=0\) or \(t=0\), we get
\[|\Delta_{n,\mathrm{ReLU}}(t)|\ \leqslant\ C\left[\frac{\mathbb{E}[|W_{1}|^{4}]}{ \sigma^{4}n}+n^{6}\left(\sup_{|b|\geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+ \frac{1}{2n}\right)^{n}\right]\kappa(t),\]
where the right hand side converges to zero at an \(n^{-1}\) rate as \(n\rightarrow\infty\). A simple implication concerns the rate of convergence of moments. For example, note that \(|x|=(x)_{+}+(-x)_{+}\) and hence,
\[\left|\mathbb{E}\left|\frac{1}{\sqrt{n\sigma^{2}}}\sum_{i=1}^{n}W_{i}\right|- \mathbb{E}[|Z|]\right|\ \leqslant\ 2C\left[\frac{\mathbb{E}[|W_{1}|^{4}]}{\sigma^{4}n}+n^{6}\left(\sup_{|b| \geqslant\sigma^{2}/(12\beta_{3})}|v(b)|+\frac{1}{2n}\right)^{n}\right]\kappa (0).\]
Lemma 6.2 of Kainen et al. (2010) and Example 4.8(2) of Weinan and Wojtowytsch (2022) prove that
\[\|x\|_{2}\ =\ c_{d}\int_{S^{d-1}}(\langle a,x\rangle)_{+}\pi^{0}(a)da,\quad\text{ for all}\quad x\in\mathbb{R}^{d},\]
where \(\pi^{0}(\cdot)\) is the uniform measure on the unit sphere (\(S^{d-1}=\{\theta\in\mathbb{R}^{d}:\,\|\theta\|_{2}=1\}\)) and
\[c_{d}=\left(\int_{S^{d-1}}(\langle e_{1},w\rangle)_{+}\pi^{0}(w)dw\right)^{-1 }\ \asymp\ 2\sqrt{\pi d}.\]
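The representation of \(\|x\|_{2}\) above is easy to check by Monte Carlo integration over the sphere; in the sketch below the dimension, the test vector, and the number of sampled directions are arbitrary illustrative choices, and \(c_{d}\) is estimated from the same sampled directions rather than computed in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
d, num_dirs = 10, 200_000

# Uniform directions on S^{d-1}, obtained by normalizing Gaussian vectors.
a = rng.standard_normal((num_dirs, d))
a /= np.linalg.norm(a, axis=1, keepdims=True)

# c_d = 1 / E[(<e_1, w>)_+], estimated over the same directions.
c_d = 1.0 / np.maximum(a[:, 0], 0.0).mean()

x = np.array([3.0, -1.0, 2.0, 0.5, 0.0, -2.0, 1.0, 4.0, -0.5, 1.5])
approx = c_d * np.maximum(a @ x, 0.0).mean()
print(approx, np.linalg.norm(x))   # the two values should be close
```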
Therefore, for mean zero independent and identically distributed random vectors \(X_{1},\ldots,X_{n}\in\mathbb{R}^{d}\) with covariance matrix \(\Sigma\) and \(Z^{\prime}\sim N(0,\Sigma)\), we have
\[\left|\mathbb{E}\left[\left\|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\right\|_{2}\right]-\mathbb{E}[\|Z^{\prime}\|_{2}]\right|\] \[\ \ \ \ \ \leqslant 2C\kappa(0)c_{d}\int_{S^{d-1}}\left[\frac{\mathbb{E}[|\langle a,X_{1}\rangle|^{4}]}{(\mathbb{E}[|\langle a,X_{1}\rangle|^{2}])^{3/2}n}+n^{6}(\mathbb{E}[|\langle a,X_{1}\rangle|^{2}])^{1/2}\left(\sup_{|b|\geqslant\gamma_{a}}|v_{a}(b)|+\frac{1}{2n}\right)^{n}\right]\pi^{0}(a)da,\]
where \(v_{a}(b)=\mathbb{E}[e^{ib\langle a,X_{1}\rangle}]\) and \(\gamma_{a}=\mathbb{E}[|\langle a,X_{1}\rangle|^{2}]/(\mathbb{E}[|\langle a,X_{1}\rangle|^{3}])^{3/2}\). One can simply bound the integral on the right hand side by the supremum over all \(a\in S^{d-1}\). If the random vector
\(X_{1}\in\mathbb{R}^{d}\) satisfies \(L_{4}\)-\(L_{2}\) moment equivalence (i.e., there exists a constant \(L\) such that \(\mathbb{E}[|\langle a,X_{1}\rangle|^{4}]\leqslant L(\mathbb{E}[|\langle a,X_{1} \rangle|^{2}])^{2}\) for all \(a\in S^{d-1}\)), then \(1/\gamma_{a}\leqslant L^{3}\|\Sigma\|_{op}^{1/2}\) which implies that
\[\sup_{|b|\geq\gamma_{a}}|v_{a}(b)|\leqslant\sup_{|b|\geq L^{-3}\|\Sigma\|_{op }^{-1/2}}|v_{a}(b)|,\]
and hence, we get
\[\begin{split}&\left|\mathbb{E}\left[\left\|\frac{1}{\sqrt{n}}\sum_{i=1}^{n}X_{i}\right\|_{2}\right]-\mathbb{E}[\|Z^{\prime}\|_{2}]\right|\\ &\quad\leqslant 2C\kappa(0)c_{d}\|\Sigma\|_{op}^{1/2}\left[\frac{L}{n}+n^{6}\int_{S^{d-1}}\left(\sup_{|b|\geqslant L^{-3}\|\Sigma\|_{op}^{-1/2}}|v_{a}(b)|+\frac{1}{2n}\right)^{\!\!n}\pi^{0}(a)da\right].\end{split} \tag{13}\]
The assumption of \(L_{4}\)-\(L_{2}\) moment equivalence is standard in robust covariance estimation as well as small ball property; see Oliveira (2016) and Mendelson and Zhivotovskiy (2020). One important class of distributions that satisfy \(L_{4}\)-\(L_{2}\) moment equivalence assumption is the class of log-concave distributions; see Remark 2.20 of Patil et al. (2022).
If the characteristic function \(v_{a}(b)\) is bounded away from \(1\) for almost all \(a\in S^{d-1}\), then the second term on the right hand side of (13) converges to zero exponentially in \(n\) and hence, we get that the difference of the expectations of \(\|\cdot\|_{2}\)-norms is of order \(c_{d}/n\asymp d^{1/2}/n\) as \(n\to\infty\). There are two interesting features of this result: (1) the rate of convergence is \(n^{-1}\), and (2) the right hand side converges to zero even with increasing dimension as long as \(d=o(n^{2})\). The rate of convergence of \(n^{-1}\) for the difference of probabilities of Euclidean balls was proved in Bentkus and Gotze (1996) and Gotze and Zaitsev (2014) for \(d\geqslant 5\). The dependence on dimension \(d\), however, is not known. To the best of our knowledge, an inequality similar to (13) is unknown in the literature. It may be worth noting here that \(f(x)=\|x\|_{2}\) is not a smooth function; it is not differentiable at \(x=0\).
## 5 Discussion
We have discussed ways to bound the differences of expectations of functions of averages of random variables using Berry-Esseen results for Euclidean balls or for univariate random variables. We have also summarized several integral representations of functions that can yield bounds that have no or minimal dependence on the dimension of the underlying random variables. Two interesting aspects of our bounds are that (1) they are equally applicable to both independent and dependent random variables and (2) they can be used with any limiting distribution (or any infinitely divisible distribution). All the results presented in this paper require some sort of smoothness on the functions, expressed either by weighted integrability of the Fourier transform or explicitly by a differentiability requirement. Obtaining bounds for \(\Delta_{f}(W_{n},W)\) for an arbitrary Borel measurable function \(f\) is usually done by smoothing with a mollifier, and it seems plausible that one can apply the results of this paper in that context.
Acknowledgments. This work is partially supported by NSF DMS-2113611.
|
2303.07246
|
Modelling self-consistently beyond General Relativity
|
The majority of extensions to General Relativity display mathematical
pathologies (higher derivatives, character change in equations that can be
classified within PDE theory, and even unclassifiable ones) that cause severe
difficulties to study them, especially in dynamical regimes. We present here an
approach that enables their consistent treatment and extraction of physical
consequences. We illustrate this method in the context of single and merging
black holes in a highly challenging beyond GR theory.
|
Ramiro Cayuso, Pau Figueras, Tiago França, Luis Lehner
|
2023-03-13T16:17:39Z
|
http://arxiv.org/abs/2303.07246v1
|
# Modelling self-consistently beyond General Relativity
###### Abstract
The majority of extensions to General Relativity display mathematical pathologies -higher derivatives, character change in equations that can be classified within PDE theory, and even unclassifiable ones- that cause severe difficulties to study them, especially in dynamical regimes. We present here an approach that enables their consistent treatment and extraction of physical consequences. We illustrate this method in the context of single and merging black holes in a highly challenging beyond GR theory.
Introduction.--The gravitational wave window provides exciting opportunities to further test General Relativity (GR), e.g., [1]. Especially in the context of compact binary mergers, gravitational waves produced by the strongest gravitational fields in highly dynamical settings arguably represent the best regime to explore deviations from GR, e.g., [2].
Such an effort, i.e., the ability to extract consequences and propel theory forward, relies on at least having some understanding of the characteristics of potential departures, so as to search for and interpret outcomes [3].
Unfortunately, the majority of proposed beyond GR theories have, at a formal level, mathematical pathologies which make their understanding in general scenarios difficult.1 Such pathologies may include loss of uniqueness, a dynamical change of character in the equations of motion (e.g., from hyperbolic to elliptic) or, even worse, equations of motion (EOMs) of unknown mathematical type (e.g., [5; 6; 7; 8; 9; 10; 11]). This, combined with the need to use computational simulations to study the (non-linear/dynamical) regime of interest, poses unique challenges. Of note is that the standard mathematical approach to analyse PDEs [12] -where the high-frequency limit is examined- cannot be applied, as it is in such a regime that the problems alluded to above arise. Further, such a regime is incompatible with the very assumptions made to formulate most GR extensions, which rely on Effective Field Theory (EFT) arguments [13]. Faced with this problem, solid novel ideas must be pursued to understand potential solutions.
Footnote 1: For a rather broad sample of beyond GR proposals, discussed in the context of cosmology, see [4].
We report here on a technique to _fix_ the underlying equations of motion to an extent that allows the viability of a given theory to be assessed.2 In particular, it allows exploring relevant theories within their regime of validity and monitoring whether the dynamics keeps the solution within it for cases of interest. This technique, partially explored in toy models [15; 16] and restricted settings (e.g., [17; 18]), is here developed for the general, and demanding, scenario of compact binary mergers. This requires further considerations that do not arise in the previously studied, simplified regimes. Specifically, we present the first self-consistent study of both single and merging black holes (BHs) in the context of an EFT of gravity where corrections to GR come through high powers (naturally argued for) of the curvature tensor, leading to EOMs with an a priori unclassifiable mathematical character.
Footnote 2: This approach is motivated in part by the Israel-Stewart formalism for viscous relativistic hydrodynamics [14], though in such case, higher order corrections are known and can be called for to motivate the strategy.
We adopt the following notation: Greek letters (\(\mu\), \(\nu\), \(\rho\),...) to denote full spacetime indices and Latin letters (\(i\), \(j\), \(k\),...) for the spatial ones. We use the mostly plus metric signature, and set \(c=1\).
Focusing on a specific theory.--While we could take any of a plethora of proposed beyond GR theories -almost all sharing the problems alluded to earlier-, for definiteness here we consider a specific extension to GR derived naturally from EFT arguments [19]. In this approach, high energy (i.e., above the cutoff scale) degrees of freedom are integrated out, and their effects are effectively accounted for through higher order operators acting on the lower energy ones. For the case of gravitational interactions, in vacuum assuming parity symmetry, and accounting for the simplest contribution, such an approach yields under natural assumptions:3
Footnote 3: Other operators at this (and even lower) orders can be considered, though without loss of generality with regards to our goals we ignore them here so as to not overly complicate the presentation.
\[I_{\text{eff}}=\frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\left(R-\frac{1}{\Lambda ^{6}}\,\mathcal{C}^{2}+\cdots\right)\,, \tag{1}\]
where \(\mathcal{C}=R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}\) and the coupling scale \(\Lambda\) has units of \([M_{S}]^{-1}\) for some scale \(M_{S}\). The equations of motion are \(G_{\mu\nu}=8\,\epsilon\,H_{\mu\nu}\), with \(G_{\mu\nu}\) the Einstein tensor,
|
2301.09412
|
Deep Learning Mental Health Dialogue System
|
Mental health counseling remains a major challenge in modern society due to
cost, stigma, fear, and unavailability. We posit that generative artificial
intelligence (AI) models designed for mental health counseling could help
improve outcomes by lowering barriers to access. To this end, we have developed
a deep learning (DL) dialogue system called Serena. The system consists of a
core generative model and post-processing algorithms. The core generative model
is a 2.7 billion parameter Seq2Seq Transformer fine-tuned on thousands of
transcripts of person-centered-therapy (PCT) sessions. The series of
post-processing algorithms detects contradictions, improves coherency, and
removes repetitive answers. Serena is implemented and deployed on
\url{https://serena.chat}, which currently offers limited free services. While
the dialogue system is capable of responding in a qualitatively empathetic and
engaging manner, occasionally it displays hallucination and long-term
incoherence. Overall, we demonstrate that a deep learning mental health
dialogue system has the potential to provide a low-cost and effective
complement to traditional human counselors with less barriers to access.
|
Lennart Brocki, George C. Dyer, Anna Gładka, Neo Christopher Chung
|
2023-01-23T13:10:23Z
|
http://arxiv.org/abs/2301.09412v1
|
# Deep Learning Mental Health Dialogue System
###### Abstract
Mental health counseling remains a major challenge in modern society due to cost, stigma, fear, and unavailability. We posit that generative artificial intelligence (AI) models designed for mental health counseling could help improve outcomes by lowering barriers to access. To this end, we have developed a deep learning (DL) dialogue system called _Serena_. The system consists of a core generative model and post-processing algorithms. The core generative model is a 2.7 billion parameter Seq2Seq Transformer [26] fine-tuned on thousands of transcripts of person-centered-therapy (PCT) sessions. The series of post-processing algorithms detects contradictions, improves coherency, and removes repetitive answers. Serena is implemented and deployed on [https://serena.chat](https://serena.chat), which currently offers limited free services. While the dialogue system is capable of responding in a qualitatively sympathetic and engaging manner, occasionally it displays hallucination and long-term incoherence. Overall, we demonstrate that a deep learning mental health dialogue system has the potential to provide a low-cost and effective complement to traditional human counselors with less barriers to access.
Deep Learning, Artificial Intelligence, Transformers, Mental Health, Chatbot, Dialogue System
## I Introduction
The lack of widespread access to mental health counseling remains one of the biggest challenges in the world. It is estimated that 658 million people in the world suffer from some form of psychological distress and this number grew by 50% in the last 30 years [4]. Yet only 35% of people with mental health disorders receive mental health treatment [3], and less than 25% have ever "seen someone" [7]. Psychological counseling and therapy are helpful in treating anxiety, depression, obsessive-compulsive disorder, personality disorders, eating disorders and a plethora of other conditions [14]. Around 48% of people experiencing a mental health crisis reported that talking with friends was helpful; however, 56% of them ended up handling their problems alone [6]. We propose that a virtual mental health counselor based on generative deep learning models could substantially improve mental health outcomes for many user profiles. In this paper we present our design and implementation of a deep learning dialogue system for psychological counseling.
Generative deep learning (DL) models may provide an answer to a simple yet tenacious question: how can we make mental health counseling more accessible? To effectively tackle the problem, we first need to consider why most people cannot or _do not want to_ access mental health counseling. The most obvious cause is the prohibitive cost of the type of regular, in-person counseling that is proven to be the most beneficial [11]. A similar obstacle is time. Those people who earn enough money to afford quality counseling may not, as a result, have enough time to dedicate to the process, which, in addition to the actual sessions, requires scheduling, commuting, arranging for the care of children, etc. Finally, there are the fear of counseling and its perceived stigma [22].
We designed _Serena_, a DL-based dialogue system, to address as many of these factors as possible, with an emphasis on filling the gaps left by traditional, in-person counseling. Stated differently, the proposed system is not designed as a replacement for traditional therapy. Rather, we conceive it as: 1) a fallback for those who are strictly unable to engage in traditional therapy because of money or time; 2) a catalyst for helping people warm up to the idea of sharing their thoughts through the process of dialogue, which may result in them setting up in-person sessions; 3) a tool for identifying therapy needs and measuring engagement with a virtual counseling model across a wide demographic, with the goal of improving quality of and access to mental health resources globally.
## II Related Work
Broadly speaking, dialogue systems (a.k.a. chatbots) can be divided into two groups: those that primarily use artificial intelligence or generative processes on the one hand, and those that primarily use symbolic methods or hard coding on the other. The use of virtual dialogue systems in the context of
Fig. 1: Overview of the Serena dialogue system. A large generative model outputs candidate responses conditioned on a user prompt and three smaller, more specialized NLP models are used to reject unsuitable responses.
mental health counseling dates back to the unveiling of the ELIZA program in 1964 [27], which used symbolic methods to deterministically process inputs and generate responses. Contemporary efforts such as Wysa, Woebot, and Joy use machine learning to process the user's input and to generate some dialogue, but the therapeutic suggestions are themselves constructed with symbolic methods [13]. Serena stands out as one of the few platforms that relies primarily on the generative approach for both dialogue simulation and the direction of the counseling session.
We can also differentiate mental health counseling systems according to the psychological methodologies they utilize. ELIZA was Joseph Weizenbaum's tongue-in-cheek replication of a stereotypical Rogerian therapist: _And how did that make you feel?_ Also called Person Centered Therapy (PCT), this methodology was chosen primarily because, with the technology of the time, it was relatively easy to mimic a Rogerian therapist by isolating keywords in the user prompt, and adding these keywords to a dictionary of predefined open-ended questions. Unclear or uncategorized keywords were handled with catch-all phrases. ELIZA is famous to this day because this crude approach had good results. Much to Weizenbaum's surprise, people actually enjoyed talking to his machine [20].
Many contemporary digital mental health dialogue systems, including Woebot, Joy, and Wysa, primarily or substantially use Cognitive Behavioral Therapy (CBT). In addition to the method's proven efficacy in addressing a wide range of mental health challenges [12], this therapy style lends itself well to being captured via symbolic methods, i.e. _If user reports feeling X, recommend Y_. Serena stands out from its contemporaries by its choice to provide PCT to its users. PCT is an effective means of treating a plethora of mental health ailments [19], and also provides the user with a totally open-ended and more life-like dialogue experience. We believe that the cutting edge of machine learning technology is especially well-suited to generating a person-centered therapy session that the user can direct in any way they choose. For example, it is also possible to interact with Serena outside of a therapeutic context simply by choosing another topic of conversation.
## III Methods and Models
Serena consists of multiple natural language processing (NLP) models and heuristics that work together sequentially to obtain the most suitable responses to the input messages (fig. 1). The first stage consists of a large Seq2Seq Transformer [26] model which generates a beam of candidate responses. In the second stage a number of smaller, more specialized Transformer-based models and several heuristic rules are applied to select the final response from the candidate list.
### Core Generative Model
The generative model at the core of Serena is based on the pre-trained generative model described in [25]. It is a 2.7 billion parameter Transformer architecture with 2 encoder layers, 24 decoder layers, 2560 dimensional embeddings, and 32 attention heads. The Transformer is a deep learning architecture that leverages the attention mechanism to provide contextual information for any position in the input sequence. This allows the Transformer to process the whole input sequence at once, in contrast to recurrent neural networks, which allows for a large increase in parallelization during training, increasingly making them the architecture of choice for NLP tasks. We are using the Transformer in the standard Seq2Seq setting, meaning that input sequences are transformed into output sequences. In our case we transform a user prompt into the model's response.
The model has been pre-trained on the Pushshift Reddit Dataset which includes 651 million submissions and 5.6 billion comments on Reddit between 2005 and 2019 [1]. The objective of the pre-training was to generate a comment conditioned on the full thread leading up to a particular comment.
We have fine-tuned this model on transcripts of counseling and psychotherapy sessions extracted from [17]. Since the context and response length was truncated at 128 tokens during pre-training, we only use samples of up to the same maximum length for fine-tuning. Our resulting counseling and psychotherapy dataset consisted of 14,300 pairs of patient prompts and counselor answers. To perform the fine-tuning, we make use of the ParlAI [18] platform with the default training parameters.
### Beam Search and Post-Processing
During decoding, ten candidate responses from the core generative model are processed and analyzed in a so-called beam search. Since we have observed that the quality of candidate responses can vary wildly, despite being similarly ranked by the generative model, we employ additional processing to choose the most suitable response. In particular, we are using three pre-trained Transformer-based models for the following tasks: (1) detecting contradictions, (2) recognizing toxic language and (3) obtaining semantic sentence embeddings to detect repetitive answers.
For the detection of contradictions we use a RoBERTa model [15] pre-trained [21] on several natural language inference datasets such as SNLI [2] and MNLI [29]. Given two input sentences the model predicts three categories: contradiction, neutral and entailment. We use this model to detect whether a) sentences in a single candidate response and b) the user prompt and a candidate response are contradictory.
Since our generative model has been pre-trained on a large collection of Reddit comments, it may have been exposed to toxic language. To recognize toxic speech and exclude it from the model's responses we use the pre-trained "unbiased" model made available in the Detoxify repository [10]. It is a RoBERTa model that has been trained on the Civil Comments [5] dataset, a large collection of annotated comments with classes such as threat, insult or obscene.
To obtain semantically meaningful sentence embeddings we are using a pre-trained SBERT model [23] and calculate the cosine-similarity to estimate how similar a candidate response
is to previous responses of the model. If the cosine-similarity is large, the response is considered to be repetitive and is rejected.
Additionally, we have curated a list of phrases that are undesirable and candidate responses containing them are avoided. Examples include phrases that are generally not helpful such as "I don't know what to say" or that are not considered to have any therapeutic value such as "you just have to get over it".
Finally, among the candidate responses that have not been rejected by any of the above post-processing steps, we choose as the final response the one that has been assigned the highest probability by the generative model.
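The selection logic of this section can be sketched roughly as follows. The embedding model name, the similarity threshold, and the `is_contradictory`/`is_toxic` stubs standing in for the RoBERTa- and Detoxify-based checks are illustrative placeholders rather than the deployed configuration.

```python
from sentence_transformers import SentenceTransformer, util

EXCLUDE_PHRASES = ["i don't know what to say", "you just have to get over it"]
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedding model

def is_contradictory(prompt: str, candidate: str) -> bool:
    # Stub for the NLI-based contradiction detector (assumed interface).
    return False

def is_toxic(candidate: str) -> bool:
    # Stub for the toxicity classifier (assumed interface).
    return False

def select_response(prompt, candidates, history, sim_threshold=0.85):
    # Candidates are assumed to be ordered by the generative model's score;
    # `history` holds the model's previous responses in the conversation.
    history_emb = embedder.encode(history, convert_to_tensor=True) if history else None
    for cand in candidates:
        text = cand.lower()
        if any(phrase in text for phrase in EXCLUDE_PHRASES):
            continue
        if is_contradictory(prompt, cand) or is_toxic(cand):
            continue
        if history_emb is not None:
            cand_emb = embedder.encode(cand, convert_to_tensor=True)
            if util.cos_sim(cand_emb, history_emb).max().item() > sim_threshold:
                continue   # too similar to an earlier reply, i.e. repetitive
        return cand
    return candidates[0]   # fall back to the top-ranked candidate
```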
## IV Results
### Deployment
We have deployed the model using Google Kubernetes Engine (GKE), and it can be interacted with on our website1. Our implementation makes use of the ParlAI platform2, leveraging the abstractions it offers for setting up an interactive dialogue model. Our website fetches responses from the model via a REST API using FastAPI3. For deployment with GKE, the model had to be containerized and is running on a single Nvidia T4 GPU.
Footnote 1: [https://serena.chat](https://serena.chat)
Footnote 2: [https://parl.ai](https://parl.ai)
Footnote 3: [https://fastapi.tiangolo.com](https://fastapi.tiangolo.com)
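A minimal sketch of such a serving layer is shown below; the endpoint name, the request schema, and the `generate_reply` wrapper around the ParlAI agent and post-processing pipeline are hypothetical and only illustrate how responses could be exposed over a REST API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    user_id: str
    message: str

def generate_reply(user_id: str, message: str) -> str:
    # Placeholder for the ParlAI interactive agent plus the post-processing
    # pipeline described in Section III (assumed interface).
    return "..."

@app.post("/respond")
def respond(prompt: Prompt):
    # Returns the selected counselor response for the given user message.
    return {"reply": generate_reply(prompt.user_id, prompt.message)}
```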
Our deployment contains a survey that users can fill in after they have interacted with the model for some time. Users are queried to rate the degree to which the model understands their messages and whether they find the generated responses engaging and helpful.
### Behaviors
Our dialogue model displays coherent understanding of the user's prompts and is capable of responding in a seemingly empathetic way (see the example in fig. 2). The model interacts with the user in an engaging way by asking relevant questions, which encourages further introspection.
One of the main issues with Serena is that it often hallucinates knowledge about the user, which is a well-known problem with transformer-based generative models [16]. She will, for instance, claim that she has seen the user before or pretend to have detailed knowledge about their personal background. At the moment, we are trying to mitigate this issue by adding phrases indicative of such hallucinations to the aforementioned exclude list. It has been hypothesized that hallucinations arise due to noise in the data, such as information in the output which cannot be found in the input, and we plan to explore a possible solution proposed in [8].
Another issue is that Serena tends to respond to the user's prompts only using questions. While this is desirable in terms of engaging the user in the conversation, early feedback from test users indicates that this behavior is perceived as annoying and even rude. Our current approach to this problem is to hard-code rules for choosing from the candidate responses such that the amount of generated questions is limited. This approach easily fails, however, since responses that are not questions are often missing in the candidate list. We plan to tackle this issue by carefully balancing the amount of questions and statements in the data used for fine-tuning the generative model.
## V Discussion
Recognizing that mental health care is often too expensive, too inconvenient, or too stigmatized, our team created a generative deep learning dialogue model that acts as a real-time companion and mental health counselor called _Serena_. As described above, the combination of a core generative model with targeted post-processing leverages the Transformer architecture's potential [26] to exhibit natural language understanding and processing. In contrast to most of the available therapy chatbots focusing on CBT, Serena is designed to provide PCT focusing on a user's self-reflection and self-actualization [24]. In addition, as a generative deep learning model, Serena may produce biased, incoherent, or distasteful responses, despite the heuristics and post-processing steps designed to mitigate them. Nonetheless, our generative model has the potential to address the accessibility problem of mental health counseling.
Serena addresses the issue of cost because it is inherently less expensive to use than a human counselor [9].
As far as time is concerned, Serena also presents advantages compared to traditional counseling. Serena can counsel its users wherever they choose, so it negates the need to
Fig. 2: Example dialogue from Serena.
commute or to make changes to the user's daily routine and responsibilities.
Finally, Serena removes most of the fear and stigma associated with traditional mental health therapy methods. While human users are apt to anthropomorphize virtual dialogue interfaces [28], there is scant evidence that this brings along with it any of the fear or shyness that can arise from interaction with a human interlocutor.
In future works, we further hope to improve the model and the UX/UI through focus groups of psychologists and clients. With our built-in survey, we plan to substantiate our internal testing of the model's behavior with large-scale results from our survey. Our internal testing is based on our growing database of prompts and responses, which allows us not only to fine-tune our model, but to understand the needs and behaviors of our users. We also recognize the utmost importance of respecting our users' privacy, and to this end we have encrypted all the user data generated on our platform, and we empower users to select exactly which data they choose to share with us. By protecting the privacy of our users, we give them the peace of mind to use Serena just as we intended: a trusted confidante always at their fingertips in a time of need.
## Acknowledgments
This research was carried out with the support of the Interdisciplinary Centre for Mathematical and Computational Modelling University of Warsaw (ICM UW) under computational allocation no GDM-3540; the NVIDIA Corporation's GPU grant; and the Google Cloud Research Innovators program. LB and NCC are partially supported by the National Science Centre (NCN) of Poland [2020/02/Y/ST6/00071] and the CHIST-ERA grant [CHIST-ERA-19-XAI-007].
|
2302.10413
|
CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with
Clustered Aggregation and Knowledge DIStilled Regularization
|
Federated learning enables edge devices to train a global model
collaboratively without exposing their data. Despite achieving outstanding
advantages in computing efficiency and privacy protection, federated learning
faces a significant challenge when dealing with non-IID data, i.e., data
generated by clients that are typically not independent and identically
distributed. In this paper, we tackle a new type of Non-IID data, called
cluster-skewed non-IID, discovered in actual data sets. The cluster-skewed
non-IID is a phenomenon in which clients can be grouped into clusters with
similar data distributions. By performing an in-depth analysis of the behavior
of a classification model's penultimate layer, we introduce a metric that
quantifies the similarity between two clients' data distributions without
violating their privacy. We then propose an aggregation scheme that guarantees
equality between clusters. In addition, we offer a novel local training
regularization based on the knowledge-distillation technique that reduces the
overfitting problem at clients and dramatically boosts the training scheme's
performance. We theoretically prove the superiority of the proposed aggregation
over the benchmark FedAvg. Extensive experimental results on both standard
public datasets and our in-house real-world dataset demonstrate that the
proposed approach improves accuracy by up to 16% compared to the FedAvg
algorithm.
|
Nang Hung Nguyen, Duc Long Nguyen, Trong Bang Nguyen, Thanh-Hung Nguyen, Huy Hieu Pham, Truong Thao Nguyen, Phi Le Nguyen
|
2023-02-21T02:53:37Z
|
http://arxiv.org/abs/2302.10413v3
|
# CADIS: Handling Cluster-skewed Non-IID Data in Federated Learning with Clustered Aggregation and Knowledge DIStilled Regularization
###### Abstract
Federated learning enables edge devices to train a global model collaboratively without exposing their data. Despite achieving outstanding advantages in computing efficiency and privacy protection, federated learning faces a significant challenge when dealing with non-IID data, i.e., data generated by clients that are typically not independent and identically distributed. In this paper, we tackle a new type of Non-IID data, called cluster-skewed non-IID, discovered in actual data sets. The cluster-skewed non-IID is a phenomenon in which clients can be grouped into clusters with similar data distributions. By performing an in-depth analysis of the behavior of a classification model's penultimate layer, we introduce a metric that quantifies the similarity between two clients' data distributions without violating their privacy. We then propose an aggregation scheme that guarantees equality between clusters. In addition, we offer a novel local training regularization based on the knowledge-distillation technique that reduces the overfitting problem at clients and dramatically boosts the training scheme's performance. We theoretically prove the superiority of the proposed aggregation over the benchmark FedAvg. Extensive experimental results on both standard public datasets and our in-house real-world dataset demonstrate that the proposed approach improves accuracy by up to 16% compared to the FedAvg algorithm.
Federated learning, non-IID data, clustering, knowledge distillation, regularization, aggregation.
+
Footnote †: Corresponding authors
## I Introduction
With the rise in popularity of mobile phones, wearable devices, and autonomous vehicles, the amount of data generated by edge devices is exploding [1]. With the emergence of Deep Learning (DL), edge devices bring endless possibilities for various tasks in modern society, such as traffic congestion prediction and environmental monitoring [2, 3]. In the conventional cloud-centric approach, the data from edge devices is gathered and processed at a centralized server [4]. This strategy, however, encounters several computational, communication, and storage-related constraints. Critically, the centralization strategy reveals unprecedented challenges in guaranteeing privacy, security, and regulatory compliance [5, 6, 7]. In such a context, Federated Learning (FL), a novel distributed learning paradigm, emerged as a viable solution, enabling distributed devices (clients) to train DL models cooperatively without disclosing their raw data [8]. FL prevents user data leakage and decreases server-side computation load. Each communication round in a standard FL starts with the server transmitting a global model to the clients. Each client then utilizes its own data to train the model locally and uploads the model parameters (or changes), rather than the raw data, to the FL server for aggregation. The server then combines local models to generate an updated version, which is subsequently transmitted to all clients for the next round. This training process terminates once the server receives a desirable model. Despite having notable benefits in terms of computing performance and privacy conservation, FL suffers from a significant challenge in dealing with heterogeneous data. In FL, data is generated independently at every device, resulting in highly skewed, non-independent-and-identically-distributed (non-IID) data across clients [11, 12]. Additionally, the data distribution of each client might not represent the global data distribution. In 2017, McMahan proposed a pioneer FL model named FedAvg [8], which employs SGD for training local models and averaging aggregation at the server-side. In this work, the authors also mentioned the non-IID issue and
Fig. 1: **Distribution of pill images collected from 100 real patients.** Patients with the same disease usually take similar pills. Data can be classified into three groups: Diabetes (red), Disorder (blue), and others (green).
argued that FedAvg is compatible with non-IID data. However, later on, other studies [13, 14] showed that non-IID data substantially impact the performance of FL models (including FedAvg). In particular, non-IID data may slow down the model convergence, destabilize local training at clients, and degrade model accuracy in consequence [15, 16, 17, 18, 19]. Numerous efforts have been devoted to overcoming the non-IID issue, which may be classified into two primary categories: (i) reducing the impact of non-IID data on the server side by optimizing the aggregation [10, 20, 21] or by optimizing the method to select the clients in each round [22, 23, 24], and (ii) enhancing training on the client side [9, 25, 26, 27, 28, 29]. However, current research on non-IID data faces the two critical issues described below.
First, most previous studies have focused only on the _non-identical distribution_ aspect, ignoring the _non-independent_ feature of the clients' data. In reality, the data collected from clients exhibit a substantial degree of clustering, with many clients having similar labels. For example, consider a pill-recognition FL system in which clients use images of the pills they take to train the model (as illustrated in Figure 1). Users with the same disease usually have data belonging to identical classes. In other words, clients will be separated into disease-specific categories. In addition, common disease clusters will be considerably larger than other clusters. For example, the users are classified into three groups in Figure 1: diabetic patients, disorder patients, and others, with the cardinality of the diabetic group being much greater than that of the disorder group. To fill this gap, besides considering the common types of non-IID data treated in existing studies, we focus in this work on a new type of non-IID data that exhibits the _non-independent_ property across clients. Specifically, we tackle non-IID data having _inter-client correlation_, i.e., clients sharing a common feature can have correlated data. We consider that the global distribution of data labels is not uniform and that data labels are frequently partitioned into clusters. The number of clients per group varies (a setting identified as _cluster-skewed non-IID_[30, 31]). For cluster-skewed data, utilizing the conventional aggregation strategies, which consider the roles of clients by assigning each client \(i\)'s local model a weight \(p_{i}\) depending on intra-client properties1, will lead to clusters with large numbers of clients dominating the others.
Footnote 1: FedAvg [8] and its variant methods assign the weights based on the number of samples \(n_{i}\) of each client, i.e., \(p_{i}=\frac{n_{i}}{\sum_{j}n_{j}}\). Methods that enhance training on the client side, such as FedProx [9], give all the clients the same role, i.e., \(p_{i}=\frac{1}{n}\). The approaches optimizing the aggregation assign the weights adaptively, e.g., based on the client training accuracy and training frequency as in FedFA [10].
To confirm this hypothesis, we have performed a case-study experiment using cluster-skewed data and discovered that, in the aggregation process, if we weight clients inversely with the cardinality of the cluster containing them, we can dramatically increase performance compared to vanilla FedAvg (as shown in Figure 2). However, it is crucial to determine how to cluster clients whose datasets are not publicly available. In light of this, we make the important observation that the penultimate layer might provide considerable insights into the training data distribution. Motivated by this fact, we design a novel mechanism to cluster clients based on the data extracted from the penultimate layer of their local models.
Second, the majority of existing approaches either optimize
Fig. 3: Overview of the proposed CADIS architecture.
Fig. 2: **A case-study on the effect of cluster-skew non-IID. (a): illustration of cluster-skew non-IID on the MNIST dataset, where clients \(0-5^{th}\) have the same local data distribution and can be grouped into a cluster. (b)-(c) confusion matrices when testing the global model obtained after \(100\) training rounds, of FedAvg [8], FedProx [9], FedFA [10], and CADIS. The value at row \(i^{th}\), column \(j^{th}\) shows the rate at which samples of class \(j\) are predicted as class \(i\). Previous works introduce worse performance on the rare classes, which belong to a small number of clients, e.g., classes \(6-9^{th}\). By considering the cardinality of the cluster containing a given client when assigning the aggregation weights at the server, CADIS improves the prediction performance on rare classes.**
server-side aggregation [10, 20, 21] or enhance client-side training efficiency [9, 25], which results in sub-optimal performance. Therefore, it is crucial to investigate a complete solution that simultaneously solves the problem at both the client and server sides. We observe that, in reality, the quantity of data possessed by each client is rather small. In addition, due to the non-IID nature, the data distribution of each client does not correspond to the overall data distribution. Therefore, one of the critical dilemmas is that the local model trained on the client side quickly overfits after several epochs [32, 33]. To tackle this issue, we leverage the Knowledge Distillation paradigm and design a regularization term that aims to narrow the gap between the local and global models, thereby preventing the local model from falling into a poor local minimum.
Figure 3 depicts the overview of our proposed approach named CADIS (Clustered Aggregation and Knowledge Distilled Regularization), which consists of four steps: (1) Local training with the aid of KD-based regularization term; (2) Calculating the similarity of clients by utilizing the penultimate layer; (3) Clustering clients into groups; and (4) Aggregating local models using weighted averaging, with the weights determined based on clients' data size and clusters' cardinality. Our main contributions are as follows.
1. We perform a theoretical analysis of the penultimate layer to identify its relationship with the training data. Based on the insights retrieved from the penultimate layer, we offer an approach to quantify the similarity between clients, thereby grouping them into clusters.
2. We propose a server-side aggregation approach that adequately handles the cluster-skewed non-IID data. The proposed method is applicable to a wide range of non-IID data problems.
3. We provide a knowledge distillation-based regularization term that overcomes the overfitting in the local training process on the client-side.
4. To demonstrate the superiority of the proposed approach over the state-of-the-art, we conduct comprehensive experiments on common public datasets and on our collected real-world dataset. The results show that our proposal improves the accuracy by up to 16% compared to FedAvg.
## II CADIS - Federated Learning with Clustered Aggregation and Knowledge **DI**Stilled Regularization
The proposed CADIS framework consists of two main components: Cluster-based aggregation on the server and knowledge distillation-based regularization on the client side. Figure 3 shows the overview of our proposed approach named CADIS. In CADIS, the clients utilize SGD to train the model locally using a loss function composed of the cross-entropy loss and a knowledge distillation-based regularization term. Upon receiving the trained models from the clients, the server leverages information collected from the penultimate layer to assess the similarity between the clients. Specifically, the server maintains a so-called Q-matrix that records the clients' similarities, which are cumulatively updated over communication rounds. Given the similarity of the clients, the server groups them into clusters. Finally, it combines clients' local models using weighted averaging, where each client's weight is determined depending on the quantity of its data and the cardinality of its cluster.
In the following, we first give the details of the aggregation process in Section III. We then present the regularization term in Section IV. Section V evaluates the performance of CADIS and compares it to the state-of-the-art, while Section VI presents related work on dealing with different types of non-IID distributions and different approaches to cluster-based federated learning. Finally, Section VII concludes the paper.
## III Clustered Aggregation
In the following, we first present our proposed cluster-based aggregation formula in Section III-A and then go into the details of our clustering algorithm in Section III-B. Specifically, we introduce an analysis of what the penultimate layer may tell us about the training data distribution in III-B1. Motivated by this finding, we then propose a clustering algorithm based on the improvement of the penultimate layer, as shown in Fig. 4. The main idea is to estimate the clients' data distribution similarity using the improvement of the penultimate layer (III-B2) and then partition them according to their similarities (III-B3). Moreover, to speed up the convergence of the similarity matrix, we propose a transitive learning mechanism in Section III-C. Finally, we provide an analysis of the rate of our clustered aggregation compared to that of FedAvg in Section III-D.
### _Aggregation Formula_
Let \(C_{1},...,C_{n}\) be the \(n\) clients. Suppose that \(C_{\tau_{1}},...,C_{\tau_{k}}\) (\(\tau_{1},...,\tau_{k}\in\{1,...,n\}\)) are the clients participating in the training process at the communication round \(t\). Upon completion of the local training process, these \(k\) clients transmit to the server the information of \(k\) trained local models, denoted by \(\omega_{\tau_{1}}^{t},...,\omega_{\tau_{k}}^{t}\). The server will partition \(k\) clients into \(m_{\tau}\) clusters using the algorithm provided in Section III-B. For each client \(\tau_{i}\), let \(M_{\tau_{i}}^{t}\) be the number of elements of the cluster containing \(\tau_{i}\) at round \(t\). The server then performs weighted aggregation, where client \(\tau_{i}\)'s weight, denoted by \(\alpha_{\tau_{i}}^{t}\), is defined as
\[\alpha_{\tau_{i}}^{t}=\frac{1}{M_{\tau_{i}}^{t}}\times\frac{n_{\tau_{i}}}{N}, \tag{1}\]
Fig. 4: **An illustration of our proposed clustering algorithm.** In each communication round, the server calculates every client pair’s similarity and updates the \(Q\)-matrix. After that, it partitions the clients into clusters based on their similarities.
where \(n_{\tau_{i}}\) is the number of samples owned by client \(\tau_{i}\), and \(N\) is the total number of samples over all clients. The intuition behind this aggregation weight is as follows.
Clients in the same cluster are supposed to have similar training datasets, resulting in similar locally trained models. Let's consider the following scenario. Suppose cluster \(A\) has a large number of clients, say fifty, whereas cluster \(B\) has a small number of clients, say five. Due to the data similarity, the local training at clients in cluster \(A\) produces fifty similar models, and so do the clients in cluster \(B\). To facilitate the understanding, we refer to \(W_{A}\) and \(W_{B}\) as the ones representing the models of clients in cluster \(A\) and cluster \(B\), respectively. If we simply treat all clients equally and aggregate them, then model \(W_{A}\) will have a tenfold greater impact on the global model than model \(W_{B}\). To equalize the contribution across the clusters, we employ the first term in (1), which is inversely proportional to cluster cardinality. The second term in (1), inherited from FedAvg, is proportional to the number of samples of each client. This term enables clients with more data to contribute more to the global model since clients with more data will, in general, possess more knowledge. Finally, \(\alpha_{\tau_{i}}^{t}\) is normalized and applied to the client models' weights as follows
\[\omega_{g}^{t+1}=\sum_{i=1}^{k}\frac{\alpha_{\tau_{i}}^{t}}{\sum_{j=1}^{k}\alpha_{\tau_{j}}^{t}}\times\omega_{\tau_{i}}^{t}. \tag{2}\]
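In code, the aggregation step of Eqs. (1)-(2) amounts to a weighted average of the clients' parameters. The minimal sketch below assumes that the local models are available as PyTorch state dictionaries and that the cluster cardinalities have already been computed; variable names are illustrative.

```python
import torch

def cadis_aggregate(local_states, n_samples, cluster_sizes):
    # local_states  : list of state_dicts of the participating clients' models
    # n_samples     : n_{tau_i}, number of training samples of each client
    # cluster_sizes : M_{tau_i}, size of the cluster containing each client
    N = float(sum(n_samples))
    alphas = [n / (m * N) for n, m in zip(n_samples, cluster_sizes)]   # Eq. (1)
    total = sum(alphas)
    weights = [a / total for a in alphas]                              # normalization in Eq. (2)

    global_state = {}
    for key in local_states[0]:
        global_state[key] = sum(w * state[key].float()
                                for w, state in zip(weights, local_states))
    return global_state
```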
### _Penultimate Layer-assisted Clustering Algorithm_
#### III-B1 Insights of the Penultimate Layer
Let us consider a typical deep neural network \(\mathcal{M}\) for a classification task, consisting of a feature extractor and a classifier, trained with the cross-entropy loss using SGD. We assume that the classifier comprises a dense layer, represented by \(W\), followed by a softmax layer. Here, we give the mathematical support for the bias-free case, because the analysis can easily be extended by appending a constant \(1\) to the sample vector \(x\). Suppose there are \(v\) classes, denoted by \(1,...,v\). We have the following observations.
**Proposition III.1**.: _Suppose \(W=[\textbf{w}_{1},...,\textbf{w}_{v}]\), where \(\textbf{w}_{i}\) is the \(i\)-th row of \(W\). Let \(x\) be a sample with the groundtruth label of \(j\), \(j\in\{1,...,v\}\), and \(y\in\mathbb{R}^{v}\) be the one-hot vector representing \(j\). After training the model \(\mathcal{M}\) with sample \((x,y)\), the values of all items in \(\textbf{w}_{j}\) increase while those of the other rows decrease._
Figure 5(a) illustrates an intuition for Proposition III.1. In the upper sub-figure, we trained the model with a sample belonging to class 8 and measured the change of the penultimate layer. It can be observed that the 8-th row exhibits positive growth, whereas the remaining rows have negative values.
**Sketch Proof.** Let \(R=R(x)\in\mathbb{R}^{u}\) denote the representation of \(x\). For the sake of the argument, we assume that all items \(R_{i}\) of \(R\) are non-negative (attainable with the most popular Sigmoid or ReLU activation functions). Let us denote by \(L(x)\in\mathbb{R}^{v}\) the logits of \(x\); then \(L(x)\) is defined as follows
\[L(x)=W\cdot R=\begin{bmatrix}R_{1}w_{11}+R_{2}w_{12}+\ldots+R_{u}w_{1u}\\ R_{1}w_{21}+R_{2}w_{22}+\ldots+R_{u}w_{2u}\\ \vdots\\ R_{1}w_{v1}+R_{2}w_{v2}+\ldots+R_{u}w_{vu}\end{bmatrix}. \tag{3}\]
Let \(p(x)\) be the prediction result which is the output of the softmax layer, then the probability of sample \(x\) being classified into class \(j\), i.e., \(p_{j}(x)\), is determined by the following formula
\[p_{j}(x)=\frac{e^{L_{j}}}{\sum_{i=1}^{v}e^{L_{i}}}. \tag{4}\]
The cross entropy loss concerning sample \((x,y)\) is given by
\[\mathcal{L}(p(x),y)=\sum_{i=1}^{v}y_{i}\log\bigg{(}\frac{1}{p_{i}(x)}\bigg{)}. \tag{5}\]
Let \(w_{rc}\) be the item at row \(r\) and column \(c\) of \(W\), then the gradient of the loss \(\mathcal{L}(p(x),y)\) with respect to \(w_{rc}\in W\) is
\[\frac{\partial\mathcal{L}}{\partial w_{rc}}=\sum_{i=1}^{v}\bigg{(}\frac{ \partial\mathcal{L}}{\partial p_{i}(x)}\cdot\bigg{(}\sum_{k=1}^{v}\frac{ \partial p_{i}(x)}{\partial L_{k}}\cdot\frac{\partial L_{k}}{\partial w_{rc}} \bigg{)}\bigg{)}. \tag{6}\]
We have
\[\frac{\partial\mathcal{L}}{\partial p_{i}(x)}=-\frac{1}{\ln 10}\frac{y_{i}}{p_{i} (x)}=\begin{cases}-\frac{1}{\ln 10}\frac{y_{j}}{p_{j}(x)}&\text{if }i=j\\ 0&\text{if }i\neq j\end{cases}, \tag{7}\]
\[\frac{\partial p_{j}(x)}{\partial L_{k}}=\begin{cases}p_{j}(x)(1-p_{k}(x))& \text{if }k=j\\ -p_{j}(x)p_{k}(x)&\text{otherwise}\end{cases}; \tag{8}\]
\[\frac{\partial L_{k}}{\partial w_{rc}}=\begin{cases}R_{c}&\text{if }k=r\\ 0&\text{otherwise}\end{cases}. \tag{9}\]
From (7, 8) and (9), we deduce that
\[\frac{\partial\mathcal{L}}{\partial w_{rc}}=\begin{cases}\frac{-1}{\ln 10}y_{j}(1-p_{j} (x))R_{c}&\text{if }r=j,\\ \frac{1}{\ln 10}y_{j}p_{r}(x)R_{c}&\text{otherwise}.\end{cases} \tag{10}\]
As \(y_{j}=1\), \(p_{i}(x)>0\) and \(R_{c}>0\) (\(\forall i,c\)), when applying the gradient descent, the values of the \(j\)-th row of \(W\) increase while those on all other rows decrease.
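The sign pattern in Proposition III.1 can be checked numerically with a single gradient step; in the sketch below, the layer sizes, the learning rate, and the random non-negative representation are arbitrary illustrative choices (natural-log cross entropy is used, which leaves the signs unchanged).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
u, v = 16, 5                          # representation size and number of classes
W = torch.randn(v, u, requires_grad=True)
R = torch.rand(u) + 0.01              # strictly positive representation of a sample x
j = 3                                 # ground-truth class of x

loss = F.cross_entropy((W @ R).unsqueeze(0), torch.tensor([j]))
loss.backward()
delta = (-0.1 * W.grad).detach()      # change of W after one SGD step

print((delta[j] > 0).all().item())                       # row j increases  -> True
print((delta[torch.arange(v) != j] < 0).all().item())    # other rows drop  -> True
```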
Proposition III.1 can be generalized (with slightly more work) to the case where multiple labels are trained during the training process. Figure 5(a) depicts an illustration for the general case. In the lower sub-figure, we trained the model with samples belonging to classes \(5\) and \(7\). As seen, only rows \(5\) and \(7\) may contain positive values, while the values of the remaining rows are strictly negative. From this proposition we arrive at the following observation.

Fig. 5: **The behavior of the penultimate layer. The rows corresponding to the untrained classes decrease.**
**Observation III.2**.: _By analyzing the improvement of the penultimate layer, we may identify whether the training data comprises samples from a particular class. Specifically, the training data consists of class \(j\)'s samples if and only if the improvement of the \(j\)-th row of the penultimate layer's matrix is not negative (i.e., at least one item in the \(j\)-th row gets higher after training)._
Figure 5(b) depicts Observation III.2 in a real-world scenario. Specifically, we train three local models using three pill datasets, two of which contain images of pills taken by diabetic patients and the other images taken by a normal user. The figure demonstrates that the improvements of the penultimate layers of the two diabetic patients are comparable, whereas that of the normal user is clearly different.
#### Iii-B2 Similarity Estimation
Let \(\mathcal{M}_{i}\), and \(\mathcal{M}_{j}\) be two models locally trained by client \(C_{i}\) and \(C_{j}\) using their respective datasets \(D_{i}\) and \(D_{j}\). We seek to estimate the similarities of the distributions of \(D_{i}\) and \(D_{j}\) by using the information obtained from the penultimate layers of \(\mathcal{M}_{i}\) and \(\mathcal{M}_{j}\). To ease the presentation, in the following, we use the term _similarity of client \(C_{i}\) and \(C_{j}\)_ to indicate the similarity between \(C_{i}\) and \(C_{j}\)'s data distributions. We encounter the following two significant challenges. First, in the FL training methodology, only a portion of clients engage in the training process during each communication round. Consequently, it is impossible to gather information on the penultimate layers of all clients concurrently. Second, we observe that the change in the penultimate layer throughout each communication round is negligible. It is thus impossible to determine similarity using the raw improvement of the penultimate layer.
To address the first issue, the server maintains a so-called similarity matrix, in which each item \(s_{ij}\) represents the estimated similarity between clients \(C_{i}\) and \(C_{j}\). In each communication round \(t\), for each pair of clients \((C_{i},C_{j})\) participating in that round, the server estimates the instance similarity \(s^{t}_{ij}\) of \(C_{i}\) and \(C_{j}\), which captures the similarity of the training data of \(C_{i}\) and \(C_{j}\) at round \(t\). \(s^{t}_{ij}\) is defined by the following formula
\[s^{t}_{ij}=\frac{(W^{t}_{i}-W^{t}_{g})^{T}\cdot(W^{t}_{j}-W^{t}_{g})}{\|W^{t}_ {i}-W^{t}_{g}\|\|W^{t}_{j}-W^{t}_{g}\|}, \tag{11}\]
where \(W^{t}_{i}\) and \(W^{t}_{j}\) are the penultimate layers of \(\mathcal{M}_{i}\) and \(\mathcal{M}_{j}\) at round \(t\), while \(W^{t}_{g}\) is the penultimate layer of the global model that the server delivered to the clients at the beginning of round \(t\). Note that, \(W^{t}_{i}-W^{t}_{g}\) and \(W^{t}_{j}-W^{t}_{g}\) are the improvements of \(C_{i}\) and \(C_{j}\)'s local models' penultimate layers after training at round \(t\), respectively. Therefore, \(s^{t}_{ij}\) indicates the cosine similarity between the penultimate layers' improvements.
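A minimal sketch of this computation (with our own variable names; the weight matrices are flattened into vectors) follows:

```python
import numpy as np

# Instance similarity of Eq. (11): cosine similarity between the penultimate-layer
# improvements of two clients relative to the global model of the current round.
def instance_similarity(W_i, W_j, W_g):
    di = (W_i - W_g).ravel()
    dj = (W_j - W_g).ravel()
    return float(di @ dj / (np.linalg.norm(di) * np.linalg.norm(dj) + 1e-12))
```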
As the instance similarity \(s^{t}_{ij}\) may not accurately reflect the actual similarity between clients, we utilize \(s^{t}_{ij}\) to update the cumulative similarity \(s_{ij}\) in the similarity matrix to achieve accurate estimates. Specifically, \(s_{ij}\) is updated as
\[s_{ij}\leftarrow\frac{f^{t}_{ij}}{f^{t}_{ij}+1}s_{ij}+\frac{1}{f^{t}_{ij}+1}s^ {t}_{ij}, \tag{12}\]
where \(f^{t}_{ij}\) represents the total times \(C_{i}\) and \(C_{j}\) have participated in the same communication round up to round \(t\).
The second issue, namely the incremental improvement of the penultimate layer, results in the similarity values of all client pairs rapidly converging to \(1\). To address this, we apply min-max rescaling to the similarity matrix to obtain a so-called \(Q\)-matrix.
#### Iii-B3 Client Clustering
Given the \(Q\)-matrix at a communication round \(t\), the server uses a binary indicator \(u_{ij}\) to determine whether clients \(C_{i}\) and \(C_{j}\) belong to the same cluster, as in Equation (13), where \(\varepsilon\) is updated upward after every communication round. Note that as \(q_{ij}\) is adjusted every round, \(u_{ij}\) is also updated over communication rounds, but it converges after a number of rounds:
\[u_{ij}=\begin{cases}1,&\text{if }q_{ij}\geq\varepsilon;\\ 0,&\text{otherwise.}\end{cases} \tag{13}\]
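The corresponding server-side bookkeeping can be sketched as follows (illustrative code with our own names; \(S\) is the cumulative similarity matrix and \(F\) counts pairwise co-occurrences):

```python
import numpy as np

def update_similarity(S, F, i, j, s_inst):
    # Running average of instance similarities, Eq. (12).
    S[i, j] = F[i, j] / (F[i, j] + 1) * S[i, j] + 1 / (F[i, j] + 1) * s_inst
    S[j, i] = S[i, j]
    F[i, j] += 1
    F[j, i] += 1

def q_matrix(S):
    # Min-max rescaling of the similarity matrix into the Q-matrix.
    lo, hi = S.min(), S.max()
    return (S - lo) / (hi - lo + 1e-12)

def cluster_indicator(Q, eps):
    # Binary indicator u_ij of Eq. (13).
    return (Q >= eps).astype(int)
```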
### _Enhancing the Similarity Matrix with Transitive Learning_
In the FL training methodology, only a portion of the clients participate in the training process in each round. Therefore, the similarity matrix requires significant time to converge. To speed up the convergence, we propose an algorithm to estimate the similarity of two arbitrary clients \(C_{i}\) and \(C_{j}\) via their similarities with other clients. We notice that cosine similarity possesses a transitive property, which is reflected by the following theorem [34].
**Theorem III.3**.: _Let \(s_{x,y}\) denote the cosine similarity of two vectors \(x\) and \(y\). Given three arbitrary vectors \(a\), \(b\), and \(c\), their cosine similarities satisfy the following inequality_
\[s_{a,b}s_{b,c}-\sqrt{\left(1-s_{a,b}^{2}\right)\left(1-s_{b,c}^{2} \right)}\leq s_{a,c}\] \[\leq s_{a,b}s_{b,c}+\sqrt{\left(1-s_{a,b}^{2}\right)\left(1-s_{b,c }^{2}\right)}.\]
Motivated by this theorem, we utilize the Gaussian distribution with mean \(s_{ip}s_{jp}\) and deviation \(\frac{\sqrt{(1-s_{ip}^{2})(1-s_{jp}^{2})}}{3}\), denoted as \(\mathcal{N}_{(s_{ip},s_{jp})}\), to estimate the value of \(s_{ij}\). Accordingly, for every client pair \((C_{i},C_{j})\) that does not co-occur in a communication round \(t\), the server finds all clients \(C_{p}\) such that the deviation of \(\mathcal{N}_{(s_{ip},s_{jp})}\) is less than a threshold \(\gamma\), i.e., \(\frac{\sqrt{(1-s_{ip}^{2})(1-s_{jp}^{2})}}{3}<\gamma\) (*). For each such client \(C_{p}\), we denote by \(s_{ij,p}\) a random number following the distribution \(\mathcal{N}_{(s_{ip},s_{jp})}\). The final estimated value for \(s_{ij}^{t}\) is the average of \(s_{ij,p}\) over all \(p\) satisfying condition (*).
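A sketch of this estimate (our own function names; \(S\) is the cumulative similarity matrix) is given below:

```python
import numpy as np

def transitive_estimate(S, i, j, candidates, gamma=0.2, rng=None):
    # For a pair (i, j) that did not co-occur in round t, draw one sample per
    # intermediary p whose Gaussian deviation is below gamma (condition (*)),
    # and average the draws to estimate s_ij.
    rng = rng or np.random.default_rng(0)
    draws = []
    for p in candidates:
        mean = S[i, p] * S[j, p]
        std = np.sqrt((1 - S[i, p] ** 2) * (1 - S[j, p] ** 2)) / 3.0
        if std < gamma:
            draws.append(rng.normal(mean, std))
    return float(np.mean(draws)) if draws else None
```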
**Theorem III.4**.: _Let \(n\) be the total number of clients and \(k\) be the number of clients participating in each communication round. Let \(\delta\) be the expected number of communication rounds needed to estimate the similarity of all client pairs. Then, \(\delta\leq 1+\sum_{i=k}^{n-1}\frac{\binom{n}{k}}{\binom{n}{k}-\binom{i}{k}}\)._
**Sketch Proof.** We denote by \(S_{i}\)\((\forall i\in[0,n])\) a random variable representing the number of communication rounds needed for all clients to participate in the training at least once, given that \(i\) clients have already participated so far. Then, \(\delta\) equals the expected value of \(S_{0}\), i.e., \(E(S_{0})\), which can be determined recursively as follows
\[\begin{cases}&E(S_{0})=1+E(S_{k}),\\ &E(S_{i})=\sum_{j=0}^{i}a_{ij}(1+E(S_{k+j})),\forall i\\ &E(S_{n})=0,\end{cases} \tag{14}\]
where \(a_{ij}\) is the transition probability from state \(S_{i}\) to \(S_{k+j}\), defined by \(a_{ij}=\frac{\binom{n-i}{k+j-i}\times\binom{i}{i-j}}{\binom{n}{k}}\). We have \(\sum_{j=0}^{i}a_{ij}=1\), and \(E(S_{i})\geq E(S_{i+1})\) (\(\forall i\)). Moreover, when \(i\geq k\), we have \(a_{ij}=0\) (\(\forall j<i-k\)). Therefore, \(\sum_{j=0}^{i}a_{ij}=\sum_{j=i-k}^{i}a_{ij}=1\). Accordingly,
\[E(S_{i}) =1+\sum_{j=i-k+1}^{i}a_{ij}\times E(S_{k+j})+a_{i(i-k)}\times E(S_ {i})\] \[\leq 1+\sum_{j=i-k+1}^{i}a_{ij}\times E(S_{i+1})+a_{i(i-k)} \times E(S_{i})\] \[=1+(1-a_{i(i-k)})\times E(S_{i+1})+a_{i(i-k)}\times E(S_{i}).\]
It can be deduced that
\[E(S_{i}) \leq E(S_{i+1})+\frac{1}{1-a_{i(i-k)}}=E(S_{i+1})+\frac{\binom{n} {k}}{\binom{n}{k}-\binom{i}{k}}\] \[\Rightarrow E(S_{k}) \leq E(S_{n})+\sum_{i=k}^{n-1}\frac{\binom{n}{k}}{\binom{n}{k}- \binom{i}{k}}=0+\sum_{i=k}^{n-1}\frac{\binom{n}{k}}{\binom{n}{k}-\binom{i}{k}}\] \[\Rightarrow E(S_{0})=1+E(S_{k})\leq 1+\sum_{i=k}^{n-1}\frac{ \binom{n}{k}}{\binom{n}{k}-\binom{i}{k}}.\]
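As a quick sanity check, the bound can be evaluated numerically (the values of \(n\) and \(k\) below are illustrative):

```python
from math import comb

def round_bound(n, k):
    # Right-hand side of the bound in Theorem III.4.
    c = comb(n, k)
    return 1 + sum(c / (c - comb(i, k)) for i in range(k, n))

print(round(round_bound(100, 10), 1))   # e.g., n = 100 clients, k = 10 per round
```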
### _Convergence Analysis_
Finally, we present the following result comparing the converged global models obtained by the proposed clustered aggregation and by FedAvg.
**Proposition III.5**.: _Once converged, the inference loss of the global model achieved by CADIS's aggregation process is smaller than that generated by FedAvg_
\[\mathcal{L}_{FedAvg}-\mathcal{L}_{CADIS}\geq 0, \tag{15}\]
where \(\mathcal{L}_{FedAvg}\) and \(\mathcal{L}_{CADIS}\) indicate, respectively, the losses of the converged global models derived by FedAvg and CADIS. Due to space constraints, in the following, we provide a sketch proof when there are clusters among the clients.
**Sketch Proof.** To simplify, we consider an FL system with three clients \(C_{1},C_{2},C_{3}\), in which \(C_{1}\) and \(C_{2}\) belong to a cluster and \(C_{3}\) does not. As loss functions are typically convex with at least one minimum, we consider simple quadratic loss functions for \(C_{1},C_{2},C_{3}\), namely \(f_{i}=a_{i}z^{2}+b_{i}z\) (\(a_{i}>0,i=1,2,3\)). Let \(D_{i}\) be the dataset owned by \(C_{i}\). As \(C_{1}\) and \(C_{2}\) belong to the same cluster, we assume that \(C_{1}\) and \(C_{2}\) are similar. Let \(\mathcal{D}\) be the dataset whose distribution is identical to that of our target data; then \(\mathcal{D}\) can be formed by taking half of \(D_{1}\), half of \(D_{2}\), and all of \(D_{3}\). Accordingly, we can prove that the loss function when training a global model on the single dataset \(\mathcal{D}\) is given by
\[f^{*}=\frac{1}{4}(f_{1}+f_{2})+\frac{1}{2}f_{3}. \tag{16}\]
Let \(z_{i}^{t,E}\) (\(E\) is the number of training epochs) be the trained model that \(C_{i}\) sends to the server at the end of communication round \(t\), given the initial model \(z_{i}^{t,0}=z_{g}^{t}\). Then we have
\[z_{i}^{t,E}=z_{i}^{t,E-m}(1-2a_{i}\eta_{t})^{m}-\eta_{t}b_{i}\sum_{j=0}^{m-1} (1-2a_{i}\eta_{t})^{j}, \tag{17}\]
where \(\eta_{t}\) is the learning rate at the client-side in round \(t\).
**Aggregation by FedAvg.** By aggregating local models using FedAvg, we obtain the global model as
\[Z_{FedAvg}^{t} =\Bigg{(}\frac{2\phi_{1}+\phi_{3}}{3}\Bigg{)}^{t}Z_{FedAvg}^{0}\] \[-\Bigg{(}1-\bigg{(}\frac{2\phi_{1}+\phi_{3}}{3}\bigg{)}^{t} \Bigg{)}\frac{\frac{b_{1}}{a_{1}}(1-\phi_{1})+\frac{b_{3}}{a_{3}}\frac{1-\phi_{ 3}}{2}}{3-(2\phi_{1}+\phi_{3})}.\]
**Aggregation by CADIS.** When using CADIS to aggregate the local models, we obtain the following global model
\[Z_{CADIS}^{t} =\Bigg{(}\frac{\phi_{1}+\phi_{3}}{2}\Bigg{)}^{t}Z_{CADIS}^{0}\] \[-\Bigg{(}1-\bigg{(}\frac{\phi_{1}+\phi_{3}}{2}\bigg{)}^{t} \Bigg{)}\frac{\frac{b_{1}(1-\phi_{1})}{2a_{1}}+\frac{b_{3}(1-\phi_{3})}{2a_{3} }}{2-(\phi_{1}+\phi_{3})}.\]
where \(\phi_{i}=(1-2a_{i}\eta_{t})^{E}\). When the models converge, we have
\[Z_{FedAvg} =\lim_{t\to\infty}Z_{FedAvg}^{t}=\frac{\frac{b_{1}}{a_{1}}(1-\phi _{1})+\frac{b_{3}}{a_{3}}\frac{1-\phi_{3}}{2}}{3-(2\phi_{1}+\phi_{3})},\] \[Z_{CADIS} =\lim_{t\to\infty}Z_{CADIS}^{t}=\frac{\frac{b_{1}}{a_{1}}\frac{1- \phi_{1}}{2}+\frac{b_{3}(1-\phi_{3})}{2a_{3}}}{2-(\phi_{1}+\phi_{3})}.\]
By substituting the results above into (16) we obtain
\[\mathcal{L}_{FedAvg}-\mathcal{L}_{CADIS} =f^{*}(Z_{FedAvg})-f^{*}(Z_{CADIS})\] \[=\frac{1}{8}\frac{b_{1}^{2}}{a_{1}a_{3}}(a_{1}+a_{3})^{2}v_{1}v_{2},\]
where \(v_{1}=Q-P\) and \(v_{2}=\bigg{(}\frac{1}{a_{1}}+\frac{1}{a_{3}}\bigg{)}(Q+P)-\frac{2}{a_{3}}\); \(P=\frac{\phi_{1}-1}{\phi_{1}+\phi_{3}-2},Q=\frac{2\phi_{1}-2}{2\phi_{1}+\phi_{3}-3}\). By proving \(v_{1}>0\) and \(v_{2}>0\), we deduce that \(\mathcal{L}_{FedAvg}>\mathcal{L}_{CADIS}\).
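The toy setting of this sketch proof can also be checked numerically. The following is our own illustrative script (hand-picked coefficients \(a_i\), \(b_i\); it is not the paper's experiment): it runs local gradient descent plus aggregation with FedAvg-style and CADIS-style weights, and compares \(f^{*}\) at the converged models.

```python
import numpy as np

a = np.array([1.0, 1.0, 3.0])      # f_i = a_i z^2 + b_i z; clients 1 and 2 identical
b = np.array([2.0, 2.0, -4.0])

def f_star(z):                     # target loss f* in (16)
    return 0.25 * ((a[0] + a[1]) * z**2 + (b[0] + b[1]) * z) + 0.5 * (a[2] * z**2 + b[2] * z)

def run(weights, rounds=200, epochs=5, lr=0.05):
    z = 0.0
    for _ in range(rounds):
        local = []
        for i in range(3):
            zi = z
            for _ in range(epochs):
                zi -= lr * (2 * a[i] * zi + b[i])   # local gradient step on f_i
            local.append(zi)
        z = float(np.dot(weights, local))           # server aggregation
    return z

z_fedavg = run([1/3, 1/3, 1/3])    # uniform (sample-count) weights
z_cadis = run([1/4, 1/4, 1/2])     # cluster-balanced weights
print(f_star(z_fedavg) - f_star(z_cadis) >= 0)      # True, consistent with Prop. III.5
```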
## IV Knowledge Distillation-based Regularization
We have proposed a clustering strategy to balance the bias on the inter-client level. However, when there are no clusters amongst the clients, the performance of CADIS returns to that of FedAvg, which is sensitive toward intra-client heterogeneity. Therefore, as an extension to our proposal, we integrate a subtle regularization into the local training process to diminish the effect of data heterogeneity. To this end, we design a
regularization term inspired by the feature-based knowledge distillation technique [35]. This regularization term intuitively helps the clients to gain new knowledge from their local data without overwriting the previously learnt knowledge in the global model. As a result, the knowledge of the global model is accumulated throughout the federated training process.
We observe that the global model is an aggregation of multiple local models; as a result, it possesses more information and higher generalizability. Therefore, we use the global model delivered by the server at the beginning of each round as the teacher, while the clients' local models serve as students. A client's regularization term is then defined by the Kullback-Leibler (KL) divergence between the representations generated by the client's local model and those obtained from the global model. Figure 6 illustrates the flow by which a client calculates the regularization term. The details are as follows. Consider a client with training dataset \(X\); let \(R_{S}(X)\) be the representations generated by the locally trained model, and \(R_{T}(X)\) be the representations produced by the global model delivered by the server. Instead of modeling the distributions of \(R_{S}(X)\) and \(R_{T}(X)\) directly, we model the pairwise interactions between their data samples, as this describes the geometry of their respective feature spaces [36]. To accomplish this, we employ the joint probability density, which represents the likelihood that two data points are close together. These joint probability density functions can be easily estimated using Kernel Density Estimation (KDE) [37]. Let \(\mathcal{P}\) and \(\mathcal{Q}\) be the joint probability density functions corresponding to \(R_{S}(X)\) and \(R_{T}(X)\), respectively. Let \(p_{ij}\in\mathcal{P}\) denote the joint probability of \(x_{i}\) and \(x_{j}\); then \(p_{ij}\) can be estimated using KDE as \(p_{ij}=p_{i|j}p_{j}=\mathcal{K}_{h}(x_{i},x_{j})\), where \(\mathcal{K}_{h}(x,x_{i})=\mathcal{K}_{G}(x,x_{i},h)\) is a Gaussian kernel with bandwidth \(h\). However, as stated in [38], it is often impossible to learn a model that can accurately reproduce the entire geometry of a complex teacher model. Therefore, the conditional probability distribution of the samples can be used instead of the joint probability density function as follows
\[p_{i|j}=\frac{\mathcal{K}_{h}(x_{i},x_{j})}{\sum_{k=1,k\neq j} \mathcal{K}_{h}(x_{k},x_{j})}\in[0,1]. \tag{18}\]
A similar process is applied to estimate the probability distribution of the global model. Finally, we use the Kullback-Leibler (KL) divergence to calculate the difference between the two distributions \(\mathcal{P}\) and \(\mathcal{Q}\) using the following formula
\[\mathcal{L}_{KD}=KL(\mathcal{Q}\parallel\mathcal{P})\approx\sum_{i=1}^{b}\sum_ {j=1,j\neq i}^{b}q_{j|i}\times\log\left(\frac{q_{j|i}}{p_{j|i}}\right), \tag{19}\]
where \(b\) is the batch size. Consequently, the final loss function for training at client \(C_{i}\) in round \(t\) is defined as
\[\mathcal{L}=\mathcal{L}_{CC}+\lambda\mathcal{L}_{KD}=\frac{1}{n_{i}}\sum_{e=1}^{E}\sum_{u=1}^{n_{i}/b}\left\{\text{CE}\left(X_{u},Y_{u}\right)\big{|}_{\omega_{i}^{t,e}}+\lambda KL\left(\text{KDE}_{\omega_{g}^{t}}\left(X_{u}\right)\|\text{KDE}_{\omega_{i}^{t,e}}\left(X_{u}\right)\right)\right\}. \tag{20}\]
Here \(n_{i}\) is the cardinality of \(C_{i}\)'s dataset, \(E\) is the number of training epochs; \((X_{u},Y_{u})\) is training dataset of the \(u\)-th batch, where \(X_{u}\) depicts the image set, and \(Y_{u}\) denotes the corresponding labels; \(\lambda\geq 0\) is the weighting factor of the regularization term.
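A compact NumPy sketch of this regularizer (our own implementation of Eqs. (18)-(19), not the authors' code; the kernel bandwidth \(h\) is illustrative) is:

```python
import numpy as np

def kde_conditionals(reps, h=1.0):
    # reps: (b, d) batch of representations; returns the matrix of p_{i|j}, Eq. (18).
    d2 = ((reps[:, None, :] - reps[None, :, :]) ** 2).sum(-1)   # squared pairwise distances
    k = np.exp(-d2 / (2 * h ** 2))                              # Gaussian kernel K_h
    np.fill_diagonal(k, 0.0)                                    # exclude the k = j term
    return k / np.maximum(k.sum(axis=0, keepdims=True), 1e-12)

def kd_loss(student_reps, teacher_reps, h=1.0, eps=1e-12):
    p = kde_conditionals(student_reps, h)   # local (student) model distribution P
    q = kde_conditionals(teacher_reps, h)   # global (teacher) model distribution Q
    return float((q * np.log((q + eps) / (p + eps))).sum())     # KL(Q || P), Eq. (19)
```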
## V Experiments and Results
This section evaluates the performance of the proposed FL method, i.e., CADIS, against competing approaches in various FL scenarios. We show that CADIS achieves higher performance and more stable convergence than state-of-the-art FL methods, including FedAvg [8], FedProx [9], FedDyn [25], and FedFA [39], on various datasets and non-IID settings. In the following, we first introduce the four image classification datasets used in our experiments, including both standard benchmark datasets and real-world medical imaging datasets, and describe the experimental setup (Section V-A). We then report and compare the performance of CADIS with state-of-the-art methods using the top-1 accuracy on the test datasets (Section V-B). In all experiments, the SingleSet setting (centralized training at the server, or training in a system with only one client)2 is used as the reference. Finally, in Section V-C, we conduct ablation studies to highlight some key properties of CADIS.
Footnote 2: Because SingleSet is trained on a single client (or the server), it may be considered equivalent to training with an IID dataset.
### _Datasets and Experimental Settings_
To evaluate the robustness of the proposed method in a real-world setting, we collect a large-scale real-world pill image
classification dataset (due to the double-blind policy, we name our dataset PILL). The dataset consists of a total of \(10,042\) images from \(96\) patients, \(276\) diagnoses and \(94\) pills (classes). However, in our experiments, we use a subset of \(10\) clients, comprising \(7\) clients diagnosed with diabetes and \(3\) clients with other diseases. We then annotate the data of the selected clients to form our evaluation sub-dataset 3. The sub-dataset consists of \(15\) classes, i.e., pill names, and \(7,084\) images. The dataset is then divided into two disjoint parts, where \(90\)% of the images are used for training and the remaining \(10\)% for testing.
Footnote 3: We evaluate on a sub-dataset because of the lack of manpower to annotate the whole dataset at the time of submission.
To further evaluate the effectiveness of CADIS on larger datasets, we use three benchmark imaging datasets with the same train/test sets as in previous work [8, 9, 25, 39], namely MNIST [40], CIFAR-10 [41], and CIFAR-100 [41]. We simulate data heterogeneity scenarios, i.e., cluster-skewed non-IID, by partitioning the datasets and distributing the training samples among \(n=100\) clients. In this work, we target the sample-unbalanced multi-cluster (denoted as **MC**) non-IID setting, in which clients belonging to the same cluster have the same label distribution. We choose \(5\) clusters with a client ratio of \(3:3:2:1:1\) across clusters. The number of samples per client is unbalanced and each client has approximately \(20\%\) of the labels (classes), e.g., 2 classes for CIFAR-10. We further consider different data partition methods in Section V-C. Figure 7 illustrates the class distribution of the PILL subset and the CIFAR-10 dataset across clients with the **MC** partition method.
We train a simple convolutional neural network (CNN) on MNIST as in [8], a ResNet-9 [42] on the CIFAR-10 and CIFAR-100 datasets, and a ResNet-18 [42] on the PILL dataset. For all the experiments, we use SGD (stochastic gradient descent) as the local optimizer. We set the number of local epochs to \(E=5\), the learning rate to \(0.001\), and the local batch size to \(b=8\). We evaluate with a system of 100 clients, as in prior FL work [8, 9]. The number of participating clients at each communication round is \(k=3\) for PILL and \(k=10\) to \(50\) for the other datasets. We also use the default hyper-parameters suggested by the original paper of each FL benchmark. Specifically, we set the proximal term \(\mu=0.01\) for FedProx. For FedFA, we set \(\alpha=1.0\) and \(\beta=0\) as suggested in [39]. For FedDyn, we use \(\alpha=0.5\) [25].
### _Experimental Results_
#### Iv-B1 Top-1 accuracy
Table I presents the classification accuracy of our proposed CADIS compared to the baseline methods on all the datasets with cluster-skewed non-IID. We report the best accuracy that each FL method reaches within \(500\) communication rounds. Overall, CADIS achieves better accuracy than all other FL methods. For example, CADIS achieves an accuracy of \(79.71\)% on the PILL dataset, significantly outperforming the best benchmark FL method, e.g., FedAvg, by \(8.7\)%. Compared to the second-best benchmark (marked in bold-blue text), CADIS surpasses it by \(1.35\)%, \(1.60\)%, and \(0.41\)% top-1 accuracy on the CIFAR-10, CIFAR-100, and MNIST datasets, respectively (\(k=10\) clients participating in each round). It is worth noting that the image classification task on MNIST is simple, such that the accuracy of all methods is asymptotic to that of SingleSet, leaving little room for improvement. As a result, CADIS is only slightly better than the baseline methods on MNIST. This result emphasizes that our cluster-based weighted aggregation method can engage the clients from the 'smaller' clusters (i.e., groups with a smaller cardinality) more effectively than the sample-quantity-based aggregation of FedAvg and the training-frequency-based aggregation of FedFA. It demonstrates our theoretical finding in Proposition III.5.
We next conduct a sensitivity study to quantify the impact of the number of participating clients \(k\) per communication round on accuracy. As shown in Table I, we change \(k\) from \(10\) up to \(50\) when training on the CIFAR-100 and MNIST datasets. We observe that varying the number of participating clients slightly affects the top-1 accuracy but does not change the relative ordering of CADIS and the baseline methods. As a result, the improvement in accuracy of CADIS over the other baseline methods is consistently maintained.
#### Iv-B2 Convergence analysis
To demonstrate the effectiveness of CADIS in reducing local computation at clients, we provide the number of communication rounds needed to reach a target top-1 accuracy and the speedup relative to FedAvg (Table II). Overall, the convergence rate of CADIS is the fastest in most of the evaluated cases, except for the CIFAR-100 dataset. Specifically, to reach an accuracy of \(60\)% on the PILL dataset, CADIS requires only \(45\) communication rounds, while FedAvg and FedProx take \(1.6\times\) longer. In addition, CADIS is equivalent to FedFA on the MNIST dataset, while it is slower than FedFA on CIFAR-100. This is because CADIS requires the first several communication rounds for the similarity matrix to converge (Theorem III.4), which leads to incorrect clustering in those rounds and slows down its convergence. However, it is worth noting that CADIS achieves higher top-1
accuracy than FedFA when converged.
### _Ablation studies_
#### V-C1 Robustness to the client datasets
In the previous subsection, we focused on the top-1 test accuracy on an IID test dataset to estimate the quality of the trained model over the global distribution. Because only a small portion of clients participate in the training process at each communication round, the aggregated global model at the server may overfit the sub-datasets of some clients, e.g., the most recently trained clients or clients in the same cluster. To estimate the robustness of an FL method across clients, we consider the trend of the top-1 accuracy by estimating the average accuracy obtained in the last \(10\) communication rounds (named \(10\)_-round averaging accuracy_ for short). As shown in Figure 8(a) (Left), the difference in top-1 accuracy between two communication rounds of FedAvg and FedProx is non-trivial. The top-1 accuracy of CADIS oscillates with a smaller amplitude than those of FedAvg and FedProx. As a result, there is a clear gap in \(10\)-round averaging accuracy between CADIS and FedAvg, as shown in Figure 8(a) (Middle). Another interesting point is that although CADIS is equivalent to the baselines on the MNIST dataset in terms of top-1 accuracy, the \(10\)-round averaging accuracy of CADIS outperforms those of the baselines significantly, e.g., \(95.3\)% for CADIS versus around \(94.2\)% for FedAvg (Figure 8(a) (Right)). We also test the global model on the local sub-datasets of all the participating clients at the beginning of each communication round, i.e., we perform an inference pass at the clients. The result in Figure 8(b) shows that CADIS has consistently higher average inference accuracy across clients with smaller variance than the baselines.
The results imply that the aggregated global model obtained by CADIS is more stable than the others and does not overfit the clients' sub-datasets. This is expected because CADIS is designed with knowledge distillation-based regularization at the clients to avoid the local overfitting issue. Thus, we conclude that CADIS learns a well-balanced model across clients.
#### V-C2 Impact of the non-IID type
We study the robustness of our method to different types of non-IID data by considering both conventional label distribution skew (Pareto) and other patterns of cluster skew [31] (\(n=100\) and \(k=10\)).
* Pareto (denoted as **PA**): The number of images of a class among clients follows a power law [9, 13].
* Sample-balanced single cluster (denoted as **BC**): a simple case of cluster skew with only one cluster, where the number of samples per client is the same for all clients. To measure the bias of the proposed model toward the cluster, we choose the number of clients inside the cluster to be significantly higher than the rest, e.g., 60%.
* Sample-unbalanced single cluster (denoted as **UC**): Similar to the BC but the number of samples per client is unbalanced.
In Section V-B, we showed numerical results for the MC distribution, in which CADIS demonstrated an improvement in accuracy compared to the other methods. A similar observation holds for the PA, BC, and UC data distributions (Table III). CADIS achieves the best top-1 accuracy in most of the experiments (and the second-best in the remaining ones). For example, CADIS improves the top-1 accuracy by \(1.7\times\) and \(1.1\times\) on the CIFAR-10 dataset under BC and UC, respectively. The results imply that CADIS performs well under different types of cluster-skewed non-IID while achieving acceptable performance under label distribution non-IID (equivalent to FedAvg).

Fig. 8: Stability comparison of top-1 test accuracy (%) and inference accuracy among all clients on the PILL and MNIST datasets. We omit the result for CIFAR-10 due to space limitations. The results are smoothed by averaging over every \(10\) communication rounds for better visualization.
### _Discussion_
#### V-D1 Impact of the transitive learning
We introduced transitive learning in Section III-C to speed up the convergence of the similarity matrix. We confirm that CADIS with transitive learning (Transitive) and without it (Standard) reach the same accuracy in our experiments. However, transitive learning clusters the clients into groups faster than Standard. Figure 10 shows the MSE distance between the similarity matrix built by each of the two methods and the correct (ideal) similarity matrix. Transitive converges after \(40\) communication rounds while Standard needs approximately \(100\) rounds.
#### V-D2 Impact of hyper-parameters
In the experiments shown in Section V-B, we tune the similarity threshold \(\varepsilon\) and the weighting factor \(\lambda\) and report the best result obtained. In this section, we discuss how these hyper-parameters impact the top-1 accuracy. Figure 10 shows the results when we change both hyper-parameters of CADIS on the CIFAR-100 dataset with the **MC** distribution. The results show that both hyper-parameters affect the accuracy. However, CADIS is much more sensitive to the similarity threshold \(\varepsilon\). For example, CADIS[1, \(0.975\)] achieves \(26.6\)% while CADIS[1, \(0.9\)] reaches \(32.1\)%. In our experiments, the best similarity threshold also changes with the dataset, e.g., \(0.975\) for MNIST and \(0.9\) for CIFAR-10 and CIFAR-100.
#### V-D3 The generalization of proposition III.1
It is worth noting that the proof of Proposition III.1 can be used regardless of the condition \(R\geq 0\). In the case where the values of \(R\) are not confined to the non-negative domain, one may deduce that the updates among the rows of the penultimate layer exhibit an inverse trend, depending on the label being trained. However, since our objective is to discover the labels underlying the training dataset, we only analyze the scenario in which the representation vector \(R\) is element-wise non-negative.
#### V-D4 Computational Overhead
We now estimate the computational overhead of the server-side aggregation of CADIS in comparison with FedAvg and the other methods. The result in Fig. 11 (Left) shows that the computation overhead of CADIS's clustering module at the server is trivial, i.e., the aggregation time of CADIS is approximately equal to those of FedAvg and FedProx. This is expected because CADIS clusters the clients based on the information of the penultimate layer, whose size is quite small, e.g., \(256\times 100\) in the case of the ResNet-9 model and the CIFAR-100 dataset.
For the computation overhead of local training at the clients, we estimate the average relative performance of CADIS over that of FedAvg using the same device setting, e.g., a GeForce RTX 3090 GPU. The result in Fig. 11 (Right) shows that CADIS requires \(1.37\times\) more local computation than FedAvg (for performing the knowledge distillation regularization).
## VI Related Works
To tackle the statistical heterogeneity, i.e., non-IID, problem, many efforts have focused on designing weighting strategies for aggregation at the server [10, 20, 21]. The authors in [10] developed a weighted aggregation mechanism, in which the weight of a client's local model is the sum of the information entropy calculated from the accuracy of the local model and the number of times the client has participated in training. In [20], Wang et al. focused on the internal and external conflicts between clients. The former indicates the unfairness among clients selected in the same round, whereas the latter represents a conflict between the assumed gradient of a client who has not been chosen and the global update. To address this, they proposed a mechanism to eliminate conflicts before averaging the gradients. Alternatively, many studies improve the training algorithm at the client side [25, 26, 9, 27]. In [9], the authors addressed data heterogeneity by adding a so-called proximal term to the loss function, which restricts local updates to be closer to the initial (global) model. The authors in [25] used an adaptive regularization term leveraging the cumulative gradients when training the local models. In [26], the authors investigated how to remedy the client drift induced by heterogeneous data among clients in their local updates.
However, previous studies specifically consider label-skew non-IID settings in which each client has a fixed number of classes (label size imbalance [8, 43, 44, 45, 28]) or in which the number of samples of a certain class is distributed to clients using a power-law or Dirichlet distribution (label distribution imbalance [13, 27, 39, 46]). Recently, some works have considered non-IID scenarios that are closer to real-world data, such as highly imbalanced numbers of classes [29] or the cluster-skewed non-IID distribution [30, 31]. In particular, cluster skew was first introduced in [30], where there exists a data correlation between clients. The authors in [31] tackle this data distribution by adaptively assigning weights to clients at aggregation using deep reinforcement learning. This work also focuses on **cluster skew**. Unlike [31], we combine the aggregation optimization approach (clustered aggregation) at the server side and the training enhancement approach at the clients (knowledge distillation-based regularization).

Fig. 11: Average aggregation time at the server (Left) and relative performance (samples/s) of local training at the clients normalized to that of FedAvg (Right) over \(50\) communication rounds.
**Cluster-based Federated Learning:** Recent works cluster the clients into groups where different groups of clients have different learning tasks [47, 48] or different computation/network resources [49, 43]. Other methods have been proposed to identify adversarial clients and remove them from the aggregation [48, 50] based on their cosine similarities. Recently, [51, 52] proposed to use clustering to address the non-IID issue. It is worth noting that most previous cluster-based Federated Learning methods assume that all clients participate in the clustering process, or use the whole model for clustering, which is impractical in a real Federated Learning system. Our proposed CADIS can effectively cluster clients using the information of the penultimate layer only.
## VII Conclusion
In this paper, we introduced for the first time a new type of non-IID data called cluster-skewed non-IID, in which clients can be grouped into distinct clusters with similar data distributions. We then provided a metric that quantifies the similarity between two clients' data distributions without violating their privacy, and employed a novel aggregation scheme that guarantees equality between clusters. Moreover, we designed a local training regularization based on the knowledge-distillation technique that reduces the impact of overfitting on the clients' training process and significantly boosts the trained model's performance. We performed a theoretical analysis to justify our proposal and proved its superiority against a benchmark. Extensive experimental results on both standard public datasets and our own collected real pill image dataset demonstrated that our proposed method, CADIS, outperforms the state-of-the-art. Notably, in the cluster-skewed scenario, CADIS improved the top-1 accuracy by \(16\%\) compared to FedAvg and by up to \(8.7\%\) compared to other state-of-the-art approaches.
## VIII Acknowledgments
This work was funded by Vingroup Joint Stock Company (Vingroup JSC) and supported by the Vingroup Innovation Foundation (VINIF) under project code VINIF.2021.DA00128. This work was also supported by JSPS KAKENHI under Grant Number JP21K17751 and is based on results obtained from project JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
|
2301.06721
|
On Delay-Doppler Plane Orthogonal Pulse
|
In this paper, we analyze the recently discovered delay-Doppler plane
orthogonal pulse (DDOP), which is essential for delay-Doppler plane
multi-carrier modulation waveform. In particular, we introduce a local
orthogonality property of pulses corresponding to Weyl-Heisenberg (WH) subset
and justify the DDOP's existence, in contrast to global orthogonality
corresponding to WH set governed by the WH frame theory. Then, sufficient
conditions for locally-orthogonal pulses are presented and discussed. Based on
the analysis, we propose a general DDOP design. We also derive the frequency
domain representation of the DDOP, and compare the DDOP-based orthogonal
delay-Doppler division multiplexing (ODDM) modulation with other modulation
schemes, in terms of TF signal localization. Interestingly, we show perfect
local orthogonality property of the DDOP with respect to delay-Doppler
resolutions using its ambiguity function.
|
Hai Lin, Jinhong Yuan
|
2023-01-17T06:43:10Z
|
http://arxiv.org/abs/2301.06721v1
|
# On Delay-Doppler Plane Orthogonal Pulse
###### Abstract
In this paper, we analyze the recently discovered delay-Doppler plane orthogonal pulse (DDOP), which is essential for delay-Doppler plane multi-carrier modulation waveform. In particular, we introduce a _local orthogonality_ property of pulses corresponding to Weyl-Heisenberg (WH) subset and justify the DDOP's existence, in contrast to _global orthogonality_ corresponding to WH _set_ governed by the WH frame theory. Then, sufficient conditions for locally-orthogonal pulses are presented and discussed. Based on the analysis, we propose a general DDOP design. We also derive the frequency domain representation of the DDOP, and compare the DDOP-based orthogonal delay-Doppler division multiplexing (ODDM) modulation with other modulation schemes, in terms of TF signal localization. Interestingly, we show perfect local orthogonality property of the DDOP with respect to delay-Doppler resolutions using its ambiguity function.
## I Introduction
In digital communications, a modulation scheme usually requires a set of (bi)orthogonal _analog_ pulses or continuous time functions, each of which carries an information-bearing _digital_ symbol, to synthesize the signal waveform [1]. Therefore, the modulation process can be intuitively thought of as placing these pulses in the time-frequency (TF) plane, and the (bi)orthogonality can be achieved by placing them with proper TF distance. Such modulation schemes include single carrier (SC) modulation with temporally spaced pulses, and multi-carrier (MC) modulation whose pulses are spaced both temporally and spectrally. Meanwhile, for a communication system, a transmit signal always consists of a finite number of pulses and occupies a finite TF region in the TF plane, determining the signal's duration and bandwidth.
In the context of MC modulation, the pulses are typically generated by TF shifting a _prototype pulse_ in accordance with a frequency resolution \(\mathcal{F}\) and a time resolution \(\mathcal{T}\). The minimum TF distance among these pulses can be quantified by \(\mathcal{R}=\mathcal{T}\mathcal{F}\), called the joint TF resolution (JTFR) in this paper. The fundamental issue of designing an MC modulation scheme is to find the prototype pulse that can form (bi)orthogonal pulses with respect to \(\mathcal{T}\) and \(\mathcal{F}\). Conventionally, these TF-shifted pulses are considered as a _Weyl-Heisenberg_ (WH) or _Gabor_ function set [2, 3, 4]. According to the WH frame theory, (bi)orthogonal WH function sets only exist for \(\mathcal{R}\geq 1\) [5, 6], and therefore most orthogonal MC modulation schemes are designed with \(\mathcal{R}\geq 1\) [7, 8].
Recently, a delay-Doppler plane MC (DDMC) modulation, named the orthogonal delay-Doppler division multiplexing (ODDM) modulation, was proposed in [9, 10]. Considering that a linear time-varying (LTV) channel in a stationary region can be modelled as a delay-Doppler (DD) channel with a deterministic spreading function, the ODDM modulation employs a newly discovered DD plane orthogonal pulse (DDOP) to couple the modulated MC signal with the DD channel. It achieves superior performance by harvesting both time and frequency diversity, and it is shown in [9, 10] that the DDOP can form an orthogonal function set with respect to the DD plane resolutions. Because the DD plane's TF resolutions result in a JTFR \(\mathcal{R}_{\text{DD}}<1\), the DDOP seems inconsistent with current (bi)orthogonal pulse design principles. Although its orthogonality has been proved, a rational explanation for the DDOP's unique properties is still missing.
In this paper, we take an in-depth look into the DDOP and justify its existence. We introduce a _local orthogonality_ property and clarify that the DDOP only needs to satisfy local orthogonality, in contrast to _global orthogonality_ governed by the WH frame theory. Then, sufficient conditions for pulse to achieve local orthogonality are analyzed. Based on the analysis, we propose a general DDOP design. Our contributions can be summarized as follows:
* We point out that only local (bi)orthogonality in the finite TF region rather than global (bi)orthogonality in the whole TF plane is required by a modulation scheme. Accordingly, we show that a WH _subset_ rather than a WH set is required in the pulse design.
* We reformulate the (bi)orthogonal pulse design problem, based on the local (bi)orthogonality. We show that the DDOP forms a WH subset that satisfies the local orthogonality.
* We analyze the local orthogonality with respect to TF resolutions, and discuss the corresponding sufficient conditions. We reveal that for a limited number of subcarriers, surprisingly, there are _infinite_ pulses orthogonal with respect to \(\mathcal{F}\), as long as they are periodic functions with a specified period related to the number of subcarriers.
* By introducing cyclic prefix (CP) and cyclic suffix (CS) to achieve the specified periodicity, we propose a general DDOP design, which releases the duration constraint of square-root Nyquist (SRN) sub-pulses in our previously designed DDOP.
* We derive the frequency domain representation of the DDOP. Together with the DDOP's time domain representation, we illustrate the DDOP-based ODDM's TF signal localization, and schematically compare it with those of other modulation schemes. The ambiguity function shows perfect local orthogonality property of the DDOP with respect to delay-Doppler resolutions.
Notations: In this paper, \(\Pi_{\mathfrak{T}}(t)\) stands for the rectangular
pulse with unit energy and support \([0,\mathfrak{T}]\). Given the number of subcarriers \(N\), \(a(t)\) denotes the SRN pulse for interval \(\mathcal{T}\), with energy \(\frac{1}{N}\) and support \([-T_{a}/2,T_{a}/2]\). \(\mathcal{A}_{g,\gamma}(\cdot)\) is the (cross)ambiguity function of \(g(t)\) and \(\gamma(t)\).
## II WH set based pulse design principles
Let us first introduce main parameters and their notations for an MC modulation in Table I. The transmit pulses in an MC modulation can be represented by the function set
\[(g,\mathcal{T},\mathcal{F})=\left\{g_{m,n}\right\}_{m,n\in\mathbb{Z}}, \tag{1}\]
where \(g_{m,n}\coloneqq g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})}\) and \(g(t)\) is the prototype pulse. Similarly, we can form the receive pulses \((\gamma,\mathcal{T},\mathcal{F})\) using another prototype pulse \(\gamma(t)\) with the same TF resolutions. Note that because a time-limited signal cannot be strictly band-limited, the bandwidth of \(g(t)\), \(B_{g}\), is defined in an essential sense [11].
Given \(\mathcal{T}\) and \(\mathcal{F}\), the fundamental issue of an MC modulation is to find \(g(t)\) and \(\gamma(t)\) satisfying the orthogonal condition of
\[\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n}), \tag{2}\]
or the biorthogonal condition of
\[\langle g_{m,n},\gamma_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot {n}). \tag{3}\]
By considering the TF plane as a 2D phase space, the function set in (1) forms a discrete lattice "sampling" the phase space [2, 12], where the "sampling" resolution is the JTFR \(\mathcal{R}\). Then, the function set in (1) can be treated as a WH set. According to the WH frame theory, the existence of a (bi)orthogonal WH set depends on the "sampling" resolution and can be summarized as follows [2, 5, 12, 13, 14, 15]:
* Critical sampling (\(\mathcal{R}=1\)) : Orthogonal WH sets exist. However, they have either infinite time or frequency energy spread according to the Balian-Low theory [16], and therefore are not TF well-localized.
* Undercritical sampling (\(\mathcal{R}>1\)) : TF well-localized orthogonal or biorthogonal WH sets exist, if \(\mathcal{R}\) is sufficiently larger than \(1\).
* Overcritical sampling (\(\mathcal{R}<1\)) : Neither orthogonal nor biorthogonal WH sets exist.
With the transmit pulses in (1), the transmit waveform of an MC modulation can be represented as
\[x(t)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}X_{m,n}g(t-m\mathcal{T})e^{j2\pi n \mathcal{F}(t-m\mathcal{T})}, \tag{4}\]
where \(X_{m,n}\)'s are the information-bearing digital symbols.
## III ODDM modulation
In the design of modulation schemes, the primary concern is the dispersive effect of the channel. A doubly-selective wireless channel with both time and frequency dispersion is usually considered as a LTV system, and represented by its time-varying channel impulse response (TV-CIR) or DD spread function [17].
### _DD channel model_
Since the transmit signal is band- and time-limited, we always apply an appropriate bandpass filtering and a subsequent sampling at the receiver. As a result, we observe an equivalent channel that is the band- and time-limited version of the physical channel. Let the sampling rate and duration be \(W_{0}\) and \(T_{0}\), respectively. The equivalent DD channel can be written as [17]
\[h(\tau,\nu)=\sum_{p=1}^{P}h_{p}\delta(\tau-\tau_{p})\delta(\nu-\nu_{p}), \tag{5}\]
with \(\tau_{p}=\frac{l_{p}}{W_{0}}\), \(\nu_{p}=\frac{k_{p}}{T_{0}}\), \(l_{p},k_{p}\in\mathbb{Z}\), where \(\frac{1}{W_{0}}\) and \(\frac{1}{T_{0}}\) are the delay and Doppler resolutions, respectively.
### _ODDM modulation and DDOP_
To couple the MC signal with the DD channel in (5), the ODDM matches its signal resolutions to the delay and Doppler resolutions, namely it sets \(\mathcal{T}=\frac{1}{W_{0}}\) and \(\mathcal{F}=\frac{1}{T_{0}}\), respectively. Note that for an ODDM signal, we have \(W_{0}=\frac{M}{T}\) and \(T_{0}=NT\). Then, an ODDM frame without the frame-wise CP can be written as [10]
\[x(t)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}X_{m,n}u\left(t-m\frac{T}{M}\right)e^{j2 \pi n\frac{1}{NT}(t-m\frac{T}{M})}, \tag{6}\]
where \(u(t)\) is the DDOP given by
\[u(t)=\sum_{\dot{n}=0}^{N-1}a(t-\dot{n}T). \tag{7}\]
As shown in Fig. 1, the duration of \(a(t)\) in \(u(t)\) is \(T_{a}=2Q\frac{T}{M}\). When \(2Q\ll M\) and therefore \(T_{a}\ll T\), it has been proved in [10] that \(u(t)\) satisfies the orthogonal property
\[\mathcal{A}_{u,u}\left(\bar{m}\frac{T}{M},\bar{n}\frac{1}{NT}\right)=\delta( \bar{m})\delta(\bar{n}), \tag{8}\]
for \(|\bar{m}|\leq M-1\) and \(|\bar{n}|\leq N-1\). Because the corresponding JTFR of \(\mathcal{R}_{\text{DD}}=\frac{T}{M}\times\frac{1}{NT}=\frac{1}{MN}\ll 1\) does not allow the existence of a (bi)orthogonal WH set, a natural question arises: how can the existing DDOP in [10] be explained, and is there any general DDOP design principle?

Fig. 1: \(u(t)\), the transmit pulse of ODDM modulation.
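As a quick numerical illustration of the property in (8) (our own sketch, not from [10]), one can construct \(u(t)\) with rectangular sub-pulses of width \(T/M\), which are square-root Nyquist for that interval, and evaluate the discrete ambiguity function on the DD grid:

```python
import numpy as np

M, N, L = 16, 4, 8               # symbols, subcarriers, samples per T/M (oversampling)
T = 1.0
dt = T / (M * L)
t = np.arange(0, N * T, dt)

u = np.zeros_like(t)
for nn in range(N):              # N sub-pulses of width T/M, spaced by T, cf. (7)
    u[(t >= nn * T) & (t < nn * T + T / M)] = 1.0
u /= np.sqrt(np.sum(u ** 2) * dt)                    # unit energy

def ambiguity(m, n):
    shift = m * L                                    # delay m*T/M in samples
    u_shift = np.roll(u, shift)
    if shift > 0:
        u_shift[:shift] = 0.0                        # undo the cyclic wrap-around
    elif shift < 0:
        u_shift[shift:] = 0.0
    phase = np.exp(-1j * 2 * np.pi * n * (t - m * T / M) / (N * T))
    return np.sum(u * u_shift * phase) * dt          # A_{u,u}(m T/M, n/(NT))

A = np.array([[abs(ambiguity(m, n)) for n in range(-(N - 1), N)]
              for m in range(-(M - 1), M)])
print(np.isclose(A.max(), 1.0), A[A < 0.5].max() < 1e-10)   # single peak at (0, 0)
```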
## IV Global and Local (Bi)Orthogonality
From (8), one can see that this orthogonality concerns \(M\) symbols with \(N\) subcarriers, and therefore it only applies to a part of the TF plane. Since an MC modulation has a limited number of symbols and subcarriers, orthogonality within the signal bandwidth and duration is sufficient for an MC modulation. As a result, we can reformulate its pulse design problem and introduce the concept of local orthogonality.
### _Global and local (bi)orthogonality_
Analogous to (2) and (3), the (bi)orthogonal pulse design problem taking the limited number of symbols and subcarriers into account is to find WH subsets \((g,\mathcal{T},\mathcal{F},M,N)\) and \((\gamma,\mathcal{T},\mathcal{F},M,N)\) that satisfy the orthogonal condition of
\[\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot{n}), \ m,\dot{m}\in\mathbb{Z}_{M},n,\dot{n}\in\mathbb{Z}_{N}, \tag{9}\]
or the biorthogonal condition of
\[\langle g_{m,n},\gamma_{\dot{m},\dot{n}}\rangle=\delta(m-\dot{m})\delta(n-\dot {n}),\ m,\dot{m}\in\mathbb{Z}_{M},n,\dot{n}\in\mathbb{Z}_{N}, \tag{10}\]
where
\[\mathbb{Z}_{M}=\{0,1,\cdots,M-1\},\ \mathbb{Z}_{N}=\{0,1,\cdots,N-1\}. \tag{11}\]
We call (9) and (10) the local orthogonal condition and local biorthogonal condition, respectively. Because of
\[\langle g_{m,n},g_{\dot{m},\dot{n}}\rangle=\mathcal{A}_{g,g}(\bar{m}\mathcal{T },\bar{n}\mathcal{F})e^{j2\pi n\bar{m}\mathcal{F}\mathcal{T}}, \tag{12}\]
where \(\bar{m}=\dot{m}-m\) and \(\bar{n}=\dot{n}-n\), the local orthogonal condition in (9) is equivalent to
\[\mathcal{A}_{g,g}(\bar{m}\mathcal{T},\bar{n}\mathcal{F})=\delta(\bar{m}) \delta(\bar{n}), \tag{13}\]
for \(|\bar{m}|\leq M-1,|\bar{n}|\leq N-1\). Similar result can be obtained for the local biorthogonal condition in (10).
It is noteworthy that the WH frame theory based results regarding (bi)orthogonal WH sets are rigorously correct. Since the WH set is a time-frequency analysis tool for functions in \(L^{2}(\mathbb{R})\), it considers the whole TF plane where \(m,n\in\mathbb{Z}\), and corresponds to the signal without the limitation of bandwidth and duration. To make this possible, given \(\mathcal{T}\) and \(\mathcal{F}\), \(g(t)\) is independent of the number of symbols \(M\) and the number of subcarriers \(N\), to be shifted over the whole TF plane. In other words, to achieve the global (bi)orthogonality in (2) and (3), \(g(t)\) is parameterized only by \(\mathcal{T}\) and/or \(\mathcal{F}\).
On the other hand, for MC modulation, a WH subset that satisfies the local (bi)orthogonality in (9) and (10) is sufficient. Obviously, \(g(t)\) that achieves the global (bi)orthogonality can form a such WH subset. However, what we really need is just a WH subset, and it is not necessarily bounded by the WH frame theory for the WH set. In fact, the pulses parameterized by not only \(\mathcal{T}\) and \(\mathcal{F}\) but also \(M\) and \(N\), can achieve the local orthogonality. An example is the DDOP in (7).
### _Orthogonality with respect to \(\mathcal{F}\)_
Let us consider a fixed \(m\) in \(g_{m,n}\), and investigate the orthogonality with respect to the frequency resolution \(\mathcal{F}\). We want to find \(g(t)\) that can achieve the orthogonality among \(g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})}\) with a given \(m\) but variable \(n\), where \(0\leq t\leq T_{g}\) and \(T_{g}=\mathfrak{T}=1/\mathcal{F}\). Without loss of generality, let \(m=0\). We can obtain the following results:
1. (F1) Unbounded \(n\) (\(n\in\mathbb{Z}\)): \(g(t)\) is the rectangular pulse \(\Pi_{\mathfrak{T}}(t)\), which is independent of \(N\).
2. (F2) Bounded \(n\) (\(|n|\leq N-1\)): We have the following lemma:
**Lemma 1**.: _When \(g(t)\) is a periodic function with period \(\frac{\mathfrak{T}}{N}\) for \(0\leq t\leq T_{g}\) and \(T_{g}=\mathfrak{T}\), it satisfies the orthogonal property that_
\[\mathcal{A}_{g,g}\left(0,n\mathcal{F}\right)=\delta(n), \tag{14}\]
_for \(|n|\leq N-1\)._
Proof:: Since the period of \(g(t)\) is \(\frac{\mathfrak{T}}{N}\), \(g(t)\) can be written as
\[g(t)=g\left(t+\dot{n}\frac{\mathfrak{T}}{N}\right),\ \ 0\leq\dot{n}\leq N-1. \tag{15}\]
for \(0\leq t<\frac{\mathfrak{T}}{N}\). Then, bearing in mind that \(\mathfrak{T}=1/\mathcal{F}\), we have
\[\mathcal{A}_{g,g}(0,n\mathcal{F})\] \[=\int_{0}^{T_{g}}g(t)g^{*}(t)e^{-j2\pi n\mathcal{F}t}dt,\] \[=\sum_{\dot{n}=0}^{N-1}\int_{\dot{n}\frac{\mathfrak{T}}{N}}^{( \dot{n}+1)\frac{\mathfrak{T}}{N}}g(t)g^{*}(t)e^{-j2\pi n\mathcal{F}t}dt,\] \[=\sum_{\dot{n}=0}^{N-1}e^{-j2\pi\frac{n\dot{n}}{N}}\int_{0}^{ \frac{\mathfrak{T}}{N}}g(t)g^{*}(t)e^{-j2\pi n\mathcal{F}t}dt,\] \[=\delta(n), \tag{16}\]
for \(|n|\leq N-1\), which completes the proof.
Lemma 1 indicates that once a constraint is imposed on the number of subcarriers, there are _infinitely many_ pulses that can satisfy the orthogonality with respect to \(\mathcal{F}\). In particular, _regardless of \(B_{g}\)_, \(g(t)\) can achieve the orthogonality among \(N\) subcarriers with subcarrier spacing \(\mathcal{F}\), as long as it is a periodic function as described above. An example of such a \(g(t)\) for \(N=4\) is shown in Fig. 2.
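Lemma 1 can also be verified numerically with an arbitrary waveform tiled with period \(\mathfrak{T}/N\) (a small self-contained sketch, not tied to any particular pulse shape):

```python
import numpy as np

N, S = 4, 64                         # subcarriers, samples per period T0/N
rng = np.random.default_rng(1)
g = np.tile(rng.normal(size=S), N)   # arbitrary shape, periodic with period T0/N over [0, T0]
T0 = 1.0                             # T0 plays the role of 1/F
dt = T0 / g.size
t = np.arange(g.size) * dt
g /= np.sqrt(np.sum(g ** 2) * dt)    # unit energy

A = [abs(np.sum(np.abs(g) ** 2 * np.exp(-1j * 2 * np.pi * n * t / T0)) * dt)
     for n in range(-(N - 1), N)]
print(np.round(A, 12))               # ~[0, 0, 0, 1, 0, 0, 0]: delta(n) for |n| <= N-1
```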
It is noteworthy that in contrast to (F1) where \(B_{g}\) is proportional to \(\mathcal{F}\), (F2) _decouples \(B_{g}\)_ and \(\mathcal{F}\), and consequently allows pulses with much wider bandwidth to achieve orthogonality among \(N\) subcarriers. On the other hand, to avoid the intersymbol interference (ISI) and achieve the orthogonality
among MC symbols time-multiplexed by \(\mathcal{T}\), we need \(B_{g}\) to be comparable to \(\frac{1}{\mathcal{T}}\). The decoupling of \(\mathcal{F}\) and \(B_{g}\) in (F2) actually paves the way to designing orthogonal pulses with respect to _independent TF resolutions_.
### _Orthogonality with respect to \(\mathcal{T}\)_
Similarly, we can consider a fixed \(n\) in \(g_{m,n}\), and investigate the orthogonality with respect to the time resolution \(\mathcal{T}\). Our target now is to find \(g(t)\) that can achieve the orthogonality among \(g(t-m\mathcal{T})e^{j2\pi n\mathcal{F}(t-m\mathcal{T})}\) with a fixed \(n\) but different \(m\). When \(n\neq 0\), we have the following straightforward answer with _isolated_ pulses/sub-pulses:
1. (T1) Unbounded \(m\) (\(m\in\mathbb{Z}\)): \(g(t)\) can be any function with duration \(T_{g}\leq\mathcal{T}\), which is independent of \(M\).
2. (T2) Bounded \(m\) (\(|m|\leq M-1\)): \(g(t)\) consists of \(\dot{N}>1\) sub-pulses \(b_{\dot{n}}(t),0\leq\dot{n}\leq\dot{N}-1\), where these sub-pulses are temporally spaced by \(M\mathcal{T}\) and each sub-pulse has a duration of \(T_{b_{\dot{n}}}\leq\mathcal{T}\).
Meanwhile, when \(n=0\), we have another answer with _overlapped_ pulse/sub-pulses:
1. (T3) Unbounded \(m\) (\(m\in\mathbb{Z}\)): an SRN pulse for symbol interval \(\mathcal{T}\), which is also independent of \(M\).
2. (T4) Bounded \(m\) (\(|m|\leq M-1\)): \(g(t)\) consists of \(\dot{N}>1\) SRN sub-pulses for symbol interval \(\mathcal{T}\), where these sub-pulses are temporally spaced by \(M\mathcal{T}\). The SRN sub-pulses can have any duration.
It is interesting to note that \(g(t)\) in (T4) actually can form a periodic function that satisfies (F2), when \(\dot{N}\) is large enough.
## V General DDOP design
Recall that the orthogonal property of the DDOP in (8) is subject to a duration constraint on the SRN sub-pulse, given by \(T_{a}\ll T\). In practice, it is desirable to relax such a constraint to enable flexible designs. In this section, we propose a general DDOP design, where the SRN sub-pulse's duration constraint is removed.
Letting \(\dot{N}=N\) and \(\mathcal{T}=\frac{T}{M}\), \(g(t)\) in (T4) becomes the DDOP \(u(t)\) in (7), except for the unbounded \(T_{a}\). From (F2), we know that for the frequency resolution \(\mathcal{F}=\frac{1}{NT}\), the key to achieving the orthogonality among \(N\) subcarriers is to form a periodic function with period \(\frac{1}{N\mathcal{F}}=T\). This observation inspires us to use \(u_{c}(t)\), a cyclically extended version of \(u(t)\), as the transmit pulse, while the receive pulse is still \(u(t)\). Furthermore, because \(\mathcal{A}_{u_{c},u}(m\frac{T}{M},n\frac{1}{NT})\) is calculated between \(u_{c}(t)\) and \(u(t-m\frac{T}{M})e^{j2\pi\frac{n}{NT}(t-m\frac{T}{M})}\), the problem becomes how to let \(u_{c}(t)\) have the specified periodicity within the range of \(u(t-m\frac{T}{M})e^{j2\pi\frac{n}{NT}(t-m\frac{T}{M})}\) for \(|m|\leq M-1\). We have the following lemma:
**Lemma 2**.: _Let \(u(t)\) consist of \(N\) SRN pulses \(a_{T/M,N}(t)\) temporally spaced by \(T\), it satisfies the orthogonal property that_
\[\mathcal{A}_{u_{c},u}\left(m\frac{T}{M},n\frac{1}{NT}\right)=\delta(m)\delta (n), \tag{17}\]
_for \(|m|\leq M-1\) and \(|n|\leq N-1\), where \(u_{c}(t)\) is a cyclically extended version of \(u(t)\) that is a periodic function with period \(T\) during \(-(M-1)\frac{T}{M}\leq t\leq(MN-1)\frac{T}{M}+T_{a}\)._
Proof.: Let us first check the periodicity of \(u_{c}(t)\) within the range of \(-(M-1)\frac{T}{M}\leq t\leq(MN-1)\frac{T}{M}+T_{a}\), whose endpoints correspond to the start of the first sub-pulse of \(u(t+(M-1)\frac{T}{M})\) and the end of the last sub-pulse of \(u(t-(M-1)\frac{T}{M})\), respectively. From (7), we can divide \(u(t)\) into \(N\) segments, where \(u(t)=\sum_{n=0}^{N-1}u_{n}(t)\) and the \(n\)th segment is given by \(u_{n}(t)=u(t)\) for \(nT\leq t<(n+1)T\).
Let \(D=\lceil T_{a}/T\rceil\). If \(D=1\), we have \(u_{n}(t)=a(t-nT)\), which implies that the periodicity within \(-(M-1)\frac{T}{M}\leq t\leq(MN-1)\frac{T}{M}+T_{a}\) can be obtained by cyclically extending \(u(t)\) to \(u_{c}(t)=\sum_{n=-1}^{N}a(t-nT)\). Similarly, when \(D>1\), the periodicity can be obtained by further extending to
\[u_{c}(t)=\sum_{n=-D}^{N-1+D}a(t-nT). \tag{18}\]
Two examples of \(u_{c}(t)\) with \(D=1,2\) are shown in Fig. 3 and Fig. 4, respectively, where the first sub-pulse of \(u(t+(M-1)\frac{T}{M})\) and the last sub-pulse of \(u(t-(M-1)\frac{T}{M})\) are also plotted with dashed lines.
Next, let us verify the ambiguity functions. Due to the aforementioned periodicity of \(u_{c}(t)\), we have
\[u_{c}(t)=u_{c}(t+\dot{n}T),\ \ 0\leq\dot{n}\leq N-1, \tag{19}\]
for \(m\frac{T}{M}\leq t\leq m\frac{T}{M}+T_{u}\), where \(|m|\leq M-1\) and \(T_{u}=(N-1)T+T_{a}\). Then, using (19), the ambiguity function between \(u_{c}(t)\) and \(u(t)\) for \(|n|\leq N-1\) and \(|m|\leq M-1\) can be calculated similarly to (16), and given by
\[\mathcal{A}_{u_{c},u}(m\frac{T}{M},n\frac{1}{NT})\] \[=\int_{m\frac{T}{M}}^{m\frac{T}{M}+T_{u}}u_{c}(t)u^{*}(t-m\frac{T} {M})e^{-j2\pi n\frac{1}{NT}(t-m\frac{T}{M})}dt,\] \[=\delta(n)\delta(m). \tag{20}\]
(20) completes the proof.
Lemma 2 indicates that the constraint of \(T_{a}\) in \(u(t)\) can be removed. Once the appropriate CP and CS are added in accordance with (18), the desired local orthogonality can be achieved as well. As a result, generally the transmit pulse of ODDM modulation is \(u_{c}(t)\), where the extension parameter \(D=\lceil T_{a}/T\rceil=\lceil 2Q/M\rceil\). When \(M\gg 2Q\), we have \(2Q/M\approx 0\). Then, as proved in [10], the ODDM can just employ the DDOP \(u(t)\) without cyclic extension (\(D=0\)).
## VI TF signal localization and numerical results
### _Frequency domain representation of DDOP_
The frequency domain representation plays an important role in the analysis of pulses. In the following, we will derive \(U(f)\), the frequency domain representation of \(u(t)\).
It is well-known that the impulse train \(\dot{u}(t)=\sum_{n=-\infty}^{\infty}\delta(t-nT)\) can be expanded as a Fourier series, so that its frequency domain representation is also an impulse train, \(\dot{U}(f)=\frac{1}{T}\sum_{n=-\infty}^{\infty}\delta(f-\frac{n}{T}).\) It is interesting to observe that the DDOP can be obtained from \(\dot{u}(t)\) by applying a rectangular window \(\Pi_{NT}\left(t+\frac{T}{2}\right)\) followed by filtering with \(a(t)\). Then, we have
\[u\left(t+\frac{T_{a}}{2}\right)=\left(\dot{u}(t)\times\Pi_{NT}\left(t+\frac{T }{2}\right)\right)\star a(t), \tag{21}\]
where \(\star\) denotes the convolution. Since the multiplication and convolution in time domain correspond to the convolution and multiplication in frequency domain, respectively, we have
\[U(f) =e^{-j2\pi f\frac{T_{a}}{2}}A(f)\left(\dot{U}(f)\star e^{-j2\pi f \frac{(N-1)T}{2}}\operatorname{Sinc}(fNT)\right),\] \[=\frac{e^{-j2\pi f\tilde{T}}}{T}A(f)\sum_{n=-\infty}^{\infty}e^{ j2\pi\frac{n(N-1)}{2}}\operatorname{Sinc}(fNT-nN), \tag{22}\]
where \(\tilde{T}=(T_{a}+(N-1)T)/2\) and \(A(f)\) is the Fourier transform of \(a(t)\). Without loss of generality, let \(M\) be an even number. Then, the shape of \(|U(f)|\) is plotted in Fig. 5, where the shape of \(|\operatorname{Sinc}(fNT-nN)|\) is truncated for the purpose of display. Now, it becomes clear that \(\operatorname{Sinc}(fNT-nN)\) and \(A(f)\) correspond to the orthogonality with respect to \(\mathcal{F}=\frac{1}{NT}\) and \(\mathcal{T}=\frac{T}{M}\), respectively.
### _TF signal localization comparison_
For the TF region bounded by the sampling rate and duration of \(W_{0}=\frac{M}{T}\) and \(T_{0}=NT\), the corresponding degrees of freedom (DoF) of the signal is around \(W_{0}T_{0}=MN\). Then, an MC modulation scheme employs \(MN\) orthogonal pulses corresponding to its TF resolutions to transmit \(MN\) digital symbols, resulting in its own TF localization structure.
With \(u(t)\) in (7) and \(U(f)\) in (22), like that of OFDM in [7], the TF signal localization structure of ODDM modulation can be schematically illustrated in Fig. 6, where those of other modulation waveforms are also given for comparison. It can be observed that:
1. For SC modulation, which is a time-division multiplexing (TDM) scheme, the \(MN\) digital symbols are conveyed by \(MN\) SRN pulses for symbol interval \(\frac{T}{M}\). The pulses are overlapped only in time domain.
2. For a frequency-division multiplexing (FDM) scheme, an example is the OFDM modulation with frequency resolution \(\frac{1}{NT}\), where \(MN\) digital symbols are conveyed by \(MN\) rectangular pulses \(\Pi_{NT}(t)\) modulated by \(MN\) subcarriers, respectively. The pulses are overlapped only in the frequency domain.
3. For the conventional OFDM modulation with frequency resolution \(\frac{1}{T}\) and time resolution \(T\), \(MN\) digital symbols are conveyed by \(N\) OFDM symbols, where each OFDM symbol has \(M\) rectangular pulses \(\Pi_{T}(t)\) modulated by \(M\) subcarriers, respectively. Since the \(N\) OFDM symbols are isolated in the time domain, these pulses are also overlapped only in the frequency domain.
4. For ODDM modulation with frequency resolution \(\frac{1}{NT}\) and time resolution \(\frac{T}{M}\), \(MN\) digital symbols are conveyed by \(M\) pulse trains \(u(t)\) modulated by \(N\) subcarriers, respectively. These pulses are overlapped in both time and frequency domains to achieve the local orthogonality with respect to \(\frac{T}{M}\) and \(\frac{1}{NT}\).
### _Numerical results_
Now, we present the numerical results for the ambiguity function of the DDOP. A three-dimensional plot of the ambiguity function in (17) is shown in Fig. 7, where \(\mathcal{F}=\frac{1}{NT}\), \(\mathcal{T}=\frac{T}{M}\) with \(M=32\), \(N=8\). \(a(t)\) is a root raised cosine pulse with roll-off factor \(\rho=0.1\) and \(Q=20\). Because \(D=2\) for this parameter setting, we adopt the general DDOP design. The corresponding 2D plot of \(\left|\mathcal{A}_{u_{c},u}\left(m\frac{T}{M},n\frac{1}{NT}\right)\right|\) with \(n=0\) is also given in Fig. 8. One can see that with appropriate CP and CS, the DDOP can achieve the local orthogonality within \(|m|\leq M-1\) and \(|n|\leq N-1\). For \(|m|\geq M\) or \(|n|\geq N\), the ambiguity function repeats with time period \(T\) and frequency period \(\frac{1}{T}\), if we further extend the CP and CS.
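As a numerical illustration of how the local orthogonality in (17) can be checked in practice, the following sketch (not part of the original work) assembles \(u(t)\) from \(N\) root-raised-cosine sub-pulses spaced by \(T\), forms the cyclic extension \(u_{c}(t)\) as in (18), and evaluates \(\mathcal{A}_{u_{c},u}(m\frac{T}{M},n\frac{1}{NT})\) by a Riemann sum on a sampled grid. The oversampling factor, the unit-energy normalization of \(u(t)\), and the `rrc` helper are assumptions made only for illustration; small deviations from exact zeros are expected because the RRC tails are implicitly truncated by the grid.

```python
import numpy as np

def rrc(t, Ts, rho):
    """Root-raised-cosine pulse for symbol interval Ts and roll-off rho (illustrative helper)."""
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    for i, ti in enumerate(t):
        x = ti / Ts
        if np.isclose(ti, 0.0):
            out[i] = (1.0 + rho * (4.0 / np.pi - 1.0)) / np.sqrt(Ts)
        elif np.isclose(abs(x), 1.0 / (4.0 * rho)):
            out[i] = (rho / np.sqrt(2.0 * Ts)) * ((1.0 + 2.0 / np.pi) * np.sin(np.pi / (4.0 * rho))
                                                  + (1.0 - 2.0 / np.pi) * np.cos(np.pi / (4.0 * rho)))
        else:
            num = np.sin(np.pi * x * (1.0 - rho)) + 4.0 * rho * x * np.cos(np.pi * x * (1.0 + rho))
            out[i] = num / (np.pi * x * (1.0 - (4.0 * rho * x) ** 2) * np.sqrt(Ts))
    return out

# Parameters of the numerical example; T and the oversampling factor P are assumptions.
M, N, rho, Q = 32, 8, 0.1, 20
T, P = 1.0, 16
Ts = T / M                     # delay resolution T/M (sub-pulse symbol interval)
dt = Ts / P
Ta = 2 * Q * Ts                # sub-pulse duration 2Q*T/M
D = int(np.ceil(Ta / T))       # extension parameter, here ceil(2Q/M) = 2

t = np.arange(-D * T - Ta / 2 - (M - 1) * Ts,
              (N - 1 + D) * T + Ta / 2 + (M - 1) * Ts + dt, dt)
a = lambda tau: rrc(tau, Ts, rho) / np.sqrt(N)   # scaled so that u(t) has (roughly) unit energy

u = sum(a(t - n * T) for n in range(N))               # DDOP u(t), Eq. (7)
uc = sum(a(t - n * T) for n in range((-D), N + D))    # cyclic extension u_c(t), Eq. (18)

def ambiguity(m, n):
    """Numerical A_{u_c,u}(m*T/M, n/(N*T)) via a Riemann sum."""
    shift = m * P                                     # m*T/M is an integer number of samples
    u_shift = np.roll(u, shift)
    if shift > 0:
        u_shift[:shift] = 0.0
    elif shift < 0:
        u_shift[shift:] = 0.0
    phase = np.exp(-2j * np.pi * n / (N * T) * (t - m * Ts))
    return np.sum(uc * u_shift * phase) * dt

A = np.array([[ambiguity(m, n) for n in range(-(N - 1), N)] for m in range(-(M - 1), M)])
print(np.round(abs(A[M - 1, N - 1]), 3))   # value at (m, n) = (0, 0), close to 1
print(np.round(np.abs(A).max(), 3))        # largest magnitude over the grid, also at (0, 0)
```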
The elegant TF localization of ODDM schemes shown in Fig. 6 demonstrates that every information symbol is evenly distributed over its TF region. Thus, it is flexible for allocating TF resources in multi-user communications system design. In addition, the perfect local orthogonality of the DDOP's ambiguity function with respect to DD resolutions, shown in Figs. 7 and 8, can be exploited for designing integrated sensing and communication (ISAC) systems. We will investigate these topics in our future work.
## VII Conclusion
In this paper, we studied the DDOP, including its structure and ambiguity function. We clarified the DDOP's local orthogonality and justified its existence as a WH subset, without violating the WH frame theory which governs the global orthogonality corresponding to the WH set. Several sufficient conditions for locally-orthogonal pulses were presented, and a general DDOP design was proposed by introducing CP and CS to the DDOP. We derived the DDOP's frequency domain representation, and compared the DDOP-based ODDM modulation with other modulation schemes in terms of TF signal localization. We demonstrated the perfect local orthogonality of the DDOP with respect to DD resolutions by its ambiguity function.
|
2305.05907
|
Evidence of Inter-state Coordination amongst State-backed Information
Operations
|
Since 2018, Twitter has steadily released into the public domain content
discovered on the platform and believed to be associated with information
operations originating from more than a dozen state-backed organizations.
Leveraging this dataset, we explore inter-state coordination amongst
state-backed information operations and find evidence of intentional, strategic
interaction amongst thirteen different states, separate and distinct from
within-state operations. We find that coordinated, inter-state information
operations attract greater engagement than baseline information operations and
appear to come online in service to specific aims. We explore these ideas in
depth through two case studies on the coordination between Cuba and Venezuela,
and between Russia and Iran.
|
Xinyu Wang, Jiayi Li, Eesha Srivatsavaya, Sarah Rajtmajer
|
2023-05-10T05:22:40Z
|
http://arxiv.org/abs/2305.05907v1
|
# Evidence of Inter-state Coordination amongst State-backed Information Operations
###### Abstract
Since 2018, Twitter has steadily released into the public domain content discovered on the platform and believed to be associated with information operations originating from more than a dozen state-backed organizations. Leveraging this dataset, we explore inter-state coordination amongst state-backed information operations and find evidence of intentional, strategic interaction amongst thirteen different states, separate and distinct from within-state operations. We find that coordinated, inter-state information operations attract greater engagement than baseline information operations and appear to come online in service to specific aims. We explore these ideas in depth through two case studies on the coordination between Cuba and Venezuela, and between Russia and Iran.
## Introduction
The current reach of social media platforms and their affordances for cheap and easy content dissemination, profiling, and targeting, have established social media as a primary avenue for information operations - efforts to manipulate public opinion by intentionally altering the information environment [1]. A substantial literature has emerged studying the tactics and strategies of information operations, particularly on Twitter, where data has been widely available [2, 3, 4]. These studies have focused on campaigns attributed to individual states or state-backed organizations. To the best of our knowledge, no prior work has looked at collaboration amongst states in these efforts. Yet, examples of international collaboration for the dissemination of propaganda date back to the first and second World Wars [5, 6, 7, 8]. Our research explores evidence of inter-state coordination amongst state-backed information campaigns, operationalized through the following two research questions.
**RQ1**: Do state-backed information campaigns operating on Twitter collaborate across states? If so, what distinguishes these efforts from internal information operations in terms of design, deployment, and impact?
**RQ2**: Can we categorize strategic and tactical mechanisms underlying inter-state information operations? E.g., specific roles of individual accounts in support of collusion?
We extract interaction networks amongst thirteen state-backed campaigns operating on Twitter between 2011 and 2021, perform static and dynamic pairwise analyses of observed activity, and highlight the varied structure of inter-state information operations through two case studies. Our findings indicate that inter-state operations attract greater engagement than intra-state operations. We suggest that the strategies employed by state-backed information operations serve to create and maintain a desirable information habitat, e.g., by engaging in ambient affiliation through common hashtags [9], initiating network expansion for increased exposure [10], and referencing controversial topics to gain attention [11]. Our findings represent the first insights into tactics and strategies underlying global cooperation and collusion amongst states in strategic information operations deployed through social media.
## Related Work
### Strategic information operations
Starbird et al. [1] use the term _strategic information operations_ (IO) to refer to efforts by individuals and groups, including state and non-state actors, to manipulate public opinion and change how people perceive events in the world by intentionally altering the information environment. Tracing their roots to TV and radio propaganda in the 20th century, and in some variation even much earlier [12], the modern digital age and in particular today's social media landscape have enabled these efforts with unparalleled efficiency and have raised a unique set of questions around response [13].
A primary subset of information operations aims to disseminate inaccurate, incorrect, or misleading information, so-called _disinformation_. Disinformation operations have been strongly associated with political campaigns [14], focused primarily on social media manipulation of public opinion through bot accounts and paid workers [15, 16, 17, 18]. In an attempt to combat disinformation, scholars have focused efforts on developing technical solutions for automated detection of multimodal disinformation, e.g.,
fact-checking claims in text, identifying falsified images, and detecting deception through speech and facial expression in video [19, 20, 21, 22, 23].
While the vast majority of studies reporting on information operations focus on individual state efforts, see e.g., [24, 25, 26], or events, e.g., presidential elections [27, 28, 29], a few have presented a global overview of information operations and therefore are most similar to our work here. Bradshaw and Howard's 2018 report provides an inventory of organized social media manipulation campaigns in 70 countries [16]. Niblock et al. have compiled comprehensive summary statistics and visualizations of all publicly-shared state-backed information operations on Twitter [14]. Our study likewise provides global analyses, but is unique in its focus on inter-state coordination.
### Information operations as collaborative work
Recent work in the Human-Computer Interaction (HCI) literature has highlighted the participatory nature of online information operations and interpreted the dissemination of manipulated information via the lens of online communities [1]. Psychological theories, e.g., distributed cognition [30], offer theoretical bases for examining how social environments, such as mainstream social media platforms, impact collective behaviors among social ties and facilitate disinformation [31]. In particular, sociotechnical systems allow information operations to target, integrate with, and leverage the activities of online crowds, resulting in a combination of orchestrated and organic elements and behaviors. Schoch et al. [32] find that online political astroturfing consistently leaves similar patterns of coordination across distinct geographical settings and historical periods. The collaboration amongst state-backed accounts we describe in this work appears to be orchestrated. However, the audiences with whom these accounts engage, and their acts of implicit coordination with native users, e.g., via hashtagging and embedded URLs, are critical to our work.
### Idea habitats
Many mechanisms have been proposed to model the diffusion of misinformation and to expose environmental factors that drive false beliefs [33, 34]. Current studies capture a number of psychological, demographic, and environmental variables contributing to the acquisition and spread of misinformation [34, 35, 36, 37]. Work in cognitive science has suggested that the ways and extent to which individuals recall and distribute information depends on collections of environmental cues, so-called idea habitats [38]. These cues may include social and political context, linguistic characteristics, and topics of conversation. Common practices such as audience segmentation and micro-targeting represent efforts to nurture habitats receptive to particular narratives. Suitable habitats support self-reinforcing information flows, independent of content validity [39]. In fact, studies have shown that false news spreads further, faster, and more broadly than legitimate news, a phenomenon which has magnified challenges to combat mis- and disinformation in recent years [40]. Our work dovetails with fundamental notions of idea habitat. We examine how accounts involved in information operations cooperate to establish context conducive to the distribution of misleading information.
### Role analysis in social networks
Prior work has analysed the different roles of actors/nodes in social network graphs using both graph structure-based and content-based approaches [41]. Structural role analysis is often used for influence maximization tasks [42, 43] while content-based role analyses have found use in modeling the growth of online communities [41, 44, 45]. Our work uses a hybrid approach to define roles in inter-state information operations integrating network metrics and content-based analyses.
### Dataset
Since October 2018, Twitter has made public the tweets, media, and account-related information of users presumed to be involved in state-linked information operations, provided through the Twitter Moderation Research Consortium (TMRC). The TMRC suggests these users are engaged in _manipulation that can be reliably attributed to a government or state-linked actor_[46]. We aggregate all account activity shared by the TMRC between 2007 and 2021. In total, this represents 23 state-backed information operations consisting of the full activity of 84,262 distinct accounts and approximately 120 million archived tweets.
During preprocessing, we made the following modifications to the complete dataset. We combined accounts designated by Twitter as linked to Egypt and UAE. Twitter's documentation subsequent to the release of these accounts indicated that much of their activity was attributed to an operation managed out of both countries targeting Qatar and Iran with messaging supportive of the Saudi government [47]. We did not include data from one release in March 2020 which Twitter attributed to "Egypt, UAE and Saudi Arabia" because attribution to a single country was not possible. This omitted subset of the data was relatively small (5,350 accounts, 6.3% of the total dataset). We did not find evidence of inter-state activity in content originating from Armenia, Bangladesh, Thailand, Tanzania, Mexico, Catalonia, Ghana, Nigeria, or Spain. These nine countries are therefore included in our dataset and analyses but not represented in the results, which focus on inter-state coordination. In sum, the number of tweets represented by these nine countries accounts for less than 0.2% of the data. Dataset statistics are further detailed in the analyses below.
### RQ1: Inter-state activity
We use the terms "coordination" and "coordinated operations" to characterize purposeful collaboration in service to shared objectives. Informed by explanations of the dataset provided by the TMRC upon each data release, our analyses make the following assumptions: 1. All activities associated with accounts tagged by Twitter as participating in information operations are part of those operations; 2. All pairwise interactions between state-backed influence operation actors are coordinated information operations/platform manipulation. Evidence of inter-state coordination is informed by static and dynamic network and content analyses across state-linked accounts.
### Inter-state interaction network
We build a global inter-state interaction network amongst state-linked accounts (Figure 1(a)). Nodes represent accounts and directed edges represent retweets, replies, mentions, and quotes, between 2011 and 2021. Node color corresponds to country and edge color matches source node. We observe two predominant substructures within the network. The first is a radiating pattern, consisting of one or a few central nodes with high out-degree centrality (e.g., Figure 1(b)(i)). This motif appears for countries with either dominating in-degree centralization or out-degree centralization. Central nodes function as either content creators or self-promoters surrounded by a substantial number of followers and amplifiers to disseminate content and establish new social ties. The contrasting motif is balanced with similar in- and out-degree (e.g., Figure 1(b)(ii)). Figure 1(c) distills the inter-state interaction network via aggregation by country. Node size is proportional to log-scaled number of associated accounts. Edge width is proportional to log-scaled number of interactions in each pair-wise coordination and edge color matches source node. We note that Cuba, Serbia, and Ecuador appear to use retweets and replies to connect with other states for network expansion and content promotion. Venezuela, Russia, Turkey, and Iran, in contrast, are predominantly the targets of interactions. Accounts linked to these countries disseminate relatively more original content. Indonesia, Egypt & UAE, China, and Saudi Arabia exhibit more balanced structures. We calculate the reciprocity of each state in the weighted network, decomposing dyadic fluxes into a fully reciprocated component and a fully non-reciprocated component [48]. Reciprocity levels are 0.944, 0.924, 0.844, and 0.777 for Indonesia, Egypt & UAE, China, and Saudi Arabia, respectively. With the exception of Honduras, where we observe a comparable number of interactions as source and target, 94.9% of global outgoing interactions are directed to Russia, and 88.3% of global incoming interactions originate from Ecuador (reciprocity: 0.0079).
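As a rough illustration of how such a country-level network and its reciprocity can be computed, the sketch below (not the authors' code) aggregates a list of (source country, target country) interaction pairs into a weighted directed graph with networkx and reports, for each country, the fully reciprocated share of its dyadic flux, in the spirit of the decomposition cited above. The toy edge list and all names are placeholders; on the real data, the same routine would be applied to the country labels attached to each retweet, reply, mention, or quote.

```python
from collections import Counter
import networkx as nx

# Hypothetical aggregated edge list: one (source_country, target_country) pair per interaction.
interactions = [
    ("CU", "VE"), ("CU", "VE"), ("VE", "CU"),
    ("EC", "RU"), ("ID", "EU"), ("EU", "ID"),
]

weights = Counter(interactions)
G = nx.DiGraph()
for (src, dst), w in weights.items():
    G.add_edge(src, dst, weight=w)

def weighted_reciprocity(G, country):
    """Share of a country's dyadic flux that is fully reciprocated (illustrative definition)."""
    total, recip = 0.0, 0.0
    for nbr in set(G.successors(country)) | set(G.predecessors(country)):
        w_out = G[country][nbr]["weight"] if G.has_edge(country, nbr) else 0.0
        w_in = G[nbr][country]["weight"] if G.has_edge(nbr, country) else 0.0
        total += w_out + w_in
        recip += 2.0 * min(w_out, w_in)
    return recip / total if total else 0.0

for c in G.nodes:
    print(c, round(weighted_reciprocity(G, c), 3))
```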
### Temporal analysis of inter- and intra-state activity
Figure 2 shows the cumulative inter- and intra-state interaction counts over time, along with labeled interaction peaks for each of the 13 states. We observe that initial inter-state activity lags behind intra-state activity (avg. lag approximately 2 years). Notably, a majority of inter-state interactions reach peak activity synchronously in late 2017 and 2018. Comparing inter- and intra-state interactions for each state individually, we find that 10 out of the 13 countries in our dataset have substantially different temporal patterns. That is, in most cases, inter-state operations do not occur concurrently with intra-state operations.
| Country | Inter-state actors | Inter-state interactions (source) | Inter-state interactions (target) | Intra-state interactions | External interactions & isolates | Total actors | Total interactions |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Cuba(CU) | 181 | 6,649 | 1,027 | 742,654 | 4,055,338 | 526 | 4,805,668 |
| Venezuela(VE) | 114 | 1,521 | 6,131 | 219,973 | 10,368,229 | 2,261 | 10,595,854 |
| Iran(IR) | 569 | 1,528 | 3,275 | 905,384 | 9,524,551 | 7,025 | 10,434,738 |
| Russia(RU) | 143 | 356 | 5,227 | 352,943 | 4,124,305 | 1,741 | 4,482,831 |
| Serbia(RS) | 254 | 3,267 | 1 | 5,750,528 | 7,968,305 | 8,558 | 13,722,101 |
| Turkey(TR) | 72 | 5 | 359 | 2,600,704 | 12,750,040 | 7,340 | 15,351,108 |
| Indonesia(ID) | 40 | 2,683 | 2,539 | 28,439 | 2,669,174 | 795 | 2,702,835 |
| Honduras(HN) | 23 | 1,064 | 1,059 | 37,529 | 1,126,426 | 3,104 | 1,166,078 |
| Ecuador(EC) | 87 | 973 | 0 | 23,456 | 675,811 | 1,019 | 700,240 |
| China(CN) | 176 | 4,251 | 4,691 | 294,616 | 13,964,665 | 31,119 | 14,268,223 |
| Saudi Arabia(SA) | 649 | 8,650 | 6,813 | 225,499 | 32,047,515 | 5,968 | 32,288,477 |
| Egypt & UAE(EU) | 570 | 5,187 | 5,017 | 706,047 | 8,764,523 | 7,060 | 9,480,774 |
| Uganda(UG) | 3 | 5 | 0 | 62,857 | 461,219 | 418 | 524,081 |

Table 1: Count of inter- and intra-state interactions by country. Inter-state: source and target nodes associated with _different_ state-linked operations; Intra-state: source and target nodes associated with the same state-linked operation; External & isolates: target node is not identified by Twitter as a state-linked actor, or content does not retweet/mention other actors. _Note: External interactions and isolates do not inform the analyses provided in this work; the count is provided for context._
Rather, they represent what appears to be a separate strategic operation. Seven peaks in inter-state and intra-state activities occur more than one year apart, three occur three to twelve months apart, two peaks occur within three months of one another, and one occurs within the same month.
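The peak-timing comparison can be outlined with a short sketch like the one below (not from the paper); it assumes each interaction record carries a timestamp and an inter/intra label, and the monthly binning and column names are assumptions.

```python
import pandas as pd

# Hypothetical records: one row per interaction, with a timestamp and an inter/intra label.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-03-01", "2016-07-12", "2017-11-05",
                                 "2017-11-20", "2018-02-02"]),
    "scope": ["intra", "intra", "inter", "inter", "intra"],
})

# Monthly interaction counts per scope, plus cumulative counts as in Figure 2.
monthly = (df.groupby([pd.Grouper(key="timestamp", freq="MS"), "scope"])
             .size()
             .unstack(fill_value=0))
cumulative = monthly.cumsum()

peaks = monthly.idxmax()                  # month of peak activity for each scope
lag = abs(peaks["inter"] - peaks["intra"])
print(peaks.to_dict(), lag)
```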
### Measuring engagement with inter-state activity
Observing that one aim of inter-state coordination appears to be increased visibility, we measure differences in engagement statistics between inter- and intra-state activity using a two-sample T-test. We perform an _a priori_ power analysis to determine the minimum sample size, setting the significance level \(\alpha\) to 0.05, the power to 0.8, and Cohen's d effect size to 0.2. We obtain a minimum effective sample size of 394 for the target statistical testing. Using this threshold value, we filter out two states (Turkey and Uganda) with fewer than 394 inter-state interactions. Given the significantly smaller fraction of quotes in the dataset (the number of quote tweets is less than the effective sample size), we select likes, retweets, and replies as three indices for comparative study. We perform Welch's T-test to determine whether observed differences are significant. Table 2 gives these results. In 7 of 11 states, inter-state interactions receive more likes on average than intra-state interactions, and in 5 of these the difference is significant. Similar trends hold for retweets and replies. We observe that countries such as Venezuela, China, Indonesia, and Egypt & UAE, which are more active in inter-state coordination, also apply the tactics discussed later (RQ2).
We additionally perform engagement comparisons between inter-state interactions and state-backed accounts' interactions with external accounts (i.e., accounts not tagged by Twitter as state-backed actors and presumed to be "normal" accounts). As the data is imbalanced (see Table 1), we randomly sample external interactions matching the observed number of inter-state interactions. With a similar level of variance, we perform a regular two-sample T-test. Results are provided in Table S3 of the Supplementary Material. We observe a general pattern of more likes and retweets associated with external accounts. This is expected, as external accounts have greater visibility, e.g., news outlets, and like and retweet counts are derived from the original post. However, the number of replies associated with inter-state interactions is substantially _greater_ than the number associated with external interactions, indicating the success of inter-state coordination in prompting meaningful engagement (e.g., Cuba, China, and Indonesia).
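The statistical procedure described above can be sketched as follows (illustrative only, not the authors' code): an a priori power analysis with alpha = 0.05, power = 0.8, and Cohen's d = 0.2, followed by Welch's T-test and an effect-size estimate on two engagement samples. The placeholder samples are synthetic.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# A priori power analysis: alpha = 0.05, power = 0.8, Cohen's d = 0.2 (two-sided).
n_min = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(int(np.ceil(n_min)))               # roughly 394 per group

# Placeholder engagement counts (e.g., likes) for inter- and intra-state interactions.
rng = np.random.default_rng(0)
inter = rng.poisson(0.4, size=500).astype(float)
intra = rng.poisson(0.05, size=5000).astype(float)

# Welch's T-test (unequal variances).
t_stat, p_val = stats.ttest_ind(inter, intra, equal_var=False)

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt(((len(inter) - 1) * inter.var(ddof=1) +
                     (len(intra) - 1) * intra.var(ddof=1)) /
                    (len(inter) + len(intra) - 2))
cohens_d = (inter.mean() - intra.mean()) / pooled_sd
print(round(t_stat, 3), round(p_val, 4), round(cohens_d, 3))
```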
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
**Country** & \begin{tabular}{c} **Avg** \\ **like** \\ \end{tabular} & **p** & **T** & **d** & \begin{tabular}{c} **Avg** \\ **retweet** \\ \end{tabular} & **p** & **T** & **d** &
\begin{tabular}{c} **Avg** \\ **reply** \\ \end{tabular} & **p** & **T** & **d** \\ \hline VE(Inter) & 0.401 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{9.218} & \multirow{2}{*}{0.566**} & \multirow{2}{*}{**0.440**} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{4.737} & \multirow{2}{*}{0.096} & 0.128 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{6.038} & \multirow{2}{*}{0.478*} \\ \cline{1-1} \cline{5-12} VE(Intra) & 0.039 & & & & & & & & & & & & \\ \hline \hline CU(Inter) & 0.539 & \multirow{2}{*}{0.169} & \multirow{2}{*}{1.375} & \multirow{2}{*}{0.057} & \multirow{2}{*}{0.366} & \multirow{2}{*}{0.794} & \multirow{2}{*}{0.261} & \multirow{2}{*}{0.011} & 0.125 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{8.420} & \multirow{2}{*}{0.314*} \\ \cline{1-1} \cline{5-12} CU(Intra) & 0.351 & & & & & & & & & & & & \\ \hline \hline CN(Inter) & 0.035 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{4.438} & \multirow{2}{*}{0.063} & \multirow{2}{*}{0.058} & \multirow{2}{*}{**0.001**} & \multirow{2}{*}{6.608} & \multirow{2}{*}{0.027} & 0.351 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{31.720} & \multirow{2}{*}{0.902***} \\ \cline{1-1} \cline{5-12} CN(Intra) & 0.016 & & & & & & & & & & & & \\ \hline \hline IR(Inter) & 0.056 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-6.142} & \multirow{2}{*}{-0.034} & \multirow{2}{*}{0.052} & \multirow{2}{*}{0.044} & \multirow{2}{*}{0.668} & \multirow{2}{*}{0.429} & \multirow{2}{*}{0.007} & 0.101 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{6.937} & \multirow{2}{*}{0.304*} \\ \cline{1-1} \cline{5-12} IR(Intra) & 0.159 & & & & & & & & & & & & & \\ \hline \hline RS(Inter) & 0.000 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-125.492} & \multirow{2}{*}{-0.053} & \multirow{2}{*}{0.000} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-97.538} & \multirow{2}{*}{-0.041} & \multirow{2}{*}{0.000} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-192.692} & \multirow{2}{*}{-0.082} \\ \cline{1-1} \cline{5-12} RS(Intra) & 0.012 & & & & & & & & & & & & \\ \hline \hline RU(Inter) & 0.007 & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-4.570} & \multirow{2}{*}{-0.010} & \multirow{2}{*}{0.000} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-10.009} & \multirow{2}{*}{-0.017} & \multirow{2}{*}{0.018} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-1.938} & \multirow{2}{*}{-0.084} \\ \cline{1-1} \cline{5-12} RU(Intra) & 0.034 & & & & & & & & & & & & & \\ \hline \hline ID(Inter) & 0.034 & \multirow{2}{*}{**0.003**} & \multirow{2}{*}{2.981} & \multirow{2}{*}{0.040} & \multirow{2}{*}{0.072} & \multirow{2}{*}{**0.019} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{6.476} & \multirow{2}{*}{0.111} & \multirow{2}{*}{0.559} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{33.782} & \multirow{2}{*}{1.425***} \\ \cline{1-1} \cline{5-12} ID(Intra) & 0.020 & & & & & & & & & & & & & \\ \hline \hline SA(Inter) & 0.037 & \multirow{2}{*}{**0.039**} & \multirow{2}{*}{2.064} & \multirow{2}{*}{0.012} & \multirow{2}{*}{0.068} & \multirow{2}{*}{0.039} & \multirow{2}{*}{0.138} & \multirow{2}{*}{1.482} & \multirow{2}{*}{0.007} & \multirow{2}{*}{0.034} & \multirow{2}{*}{**0.000**} & \multirow{2}{*}{-4.935} & \multirow{2}{*}{-0.027} \\ \cline{1-1} \cline{5-12} SA(Intra) & 0.026 & & & & & & & & & & & & & & \\ \hline \hline EC(Inter) & 0.020 & \multirow{2}{*}{0.995} & \multirow{2}{*}{-0.007} & \multirow{2}{*}{0.000} & \multirow{2}{*}{0.000} & \multirow{2}{*}{0.007} & \multirow{2}{*}{0.040} & \multirow{2}{*}{-2.059} & \multirow{2}{*}{-0.035} & \multirow{2}{*}{0.000} & \multirow{2}{*}{-6.803} & 
\multirow{2}{*}{-0.046} \\ \cline{1-1} \cline{5-12} EC(Intra) & 0.020 & & & & & & & & & & & & & & \\ \hline \hline HN(Inter) & 0.450 & \multirow{2}{*}{0.611} & \multirow{2}{*}{0.509} & \multirow{2}{*}{0.004} & \multirow{2}{*}{0.619} & \multirow{2}{*}{0.000} & \multirow{2}{*}{0
### RQ2: Strategic and tactical mechanisms
We study the strategic use of network structure and shared content in service to inter-state coordination. These are explored in detail through two case studies - coordination between Cuba and Venezuela and between Russia and Iran. These examples are selected to highlight the diversity of structural and functional activity we observe across the dataset. In the case of Cuba and Venezuela, we observe relatively bi-directional interaction; both countries serve as source and target of coordinated activity. We also observe administrators playing distinct roles in the campaign. Russia and Iran's coordinated operations, on the other hand, are at a larger scale and structurally very different.
### Ambient Affiliation
Implicit association among social network actors is facilitated through hashtagging, a phenomenon which has been studied in the sociolinguistics literature as _ambient affiliation_[9]. These indirect interactions enhance visibility of users' discourse through search [49]. The social role of hashtagging is to facilitate the establishment of ad hoc social interaction groupings or subcommunities, which constitute a temporal habitat for information operations. Hashtagging has been employed and proven effective across platforms, from "influencers" and organizations to disinformation operations, for acquiring followers and increasing exposure [1, 50, 51, 52]. We suggest that inter-state ambient affiliation is used by information operations to create idea habitats conducive to information spread.
We construct the inter-state hashtag network (Figure 3(a)). Nodes in the network represent accounts that both engage in hashtagging and are involved in inter-state coordination; undirected edges indicate use of common hashtags by these accounts. There are \(1,014\) nodes and \(61,010\) edges in the network (density = \(0.594\); diameter = \(11\)). The largest connected component contains \(861\) nodes and \(60,510\) edges (density = \(0.0817\)). Use of ambient affiliation is variable across operations (Figure 3(b)). While it is pervasive within Cuban operations, Russia, Serbia, and Turkey use this tactic only negligibly despite having large-scale operations. Notably, use of ambient affiliation appears correlated with greater engagement (see Table 2; e.g., we see greater engagement with content from Cuba and Honduras than from Russia and Serbia).
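One plausible way to assemble such an account-level ambient-affiliation network is sketched below (not the authors' code); two accounts are linked whenever they used at least one hashtag in common, and the mapping from accounts to hashtags is a placeholder.

```python
from itertools import combinations
import networkx as nx

# Hypothetical mapping from account id to the set of hashtags it used.
account_hashtags = {
    "acct_cu_1": {"#syts", "#openfollow"},
    "acct_ve_1": {"#openfollow", "#maduro"},
    "acct_ir_1": {"#maga"},
    "acct_ru_1": {"#maga", "#election2016"},
}

H = nx.Graph()
H.add_nodes_from(account_hashtags)
for a, b in combinations(account_hashtags, 2):
    shared = account_hashtags[a] & account_hashtags[b]
    if shared:
        H.add_edge(a, b, shared=len(shared))   # edge weight: number of common hashtags

components = sorted(nx.connected_components(H), key=len, reverse=True)
print(H.number_of_nodes(), H.number_of_edges(),
      round(nx.density(H), 3), len(components[0]))
```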
Globally, we identify \(1,148\) unique hashtags that occur within inter-state activity a total of \(33,119\) times. Figure 4 lists and categorizes the hashtags which appear in at least \(200\) inter-state interactions. We observe pervasive, intentional exploitation of political controversy within inter-state hashtagging behavior. In the case of several prominent collaborative operations, a majority of inter-state interactions target specific political events (e.g., Honduras and Iran: the 2017 Honduran election crisis; Iran and Russia: the 2016 U.S. election; Iran and Venezuela: the 2017 Venezuelan protests). Other coordination activities incorporate media outlets associated mostly with unsubstantiated news.
In addition, we observe the use of hashtagging for network expansion, e.g., #syts, #openfollow, #siguemeytesigo (follow me and I'll follow you). This tactic appears to take one of two forms: (1) explicitly requesting followers; and (2) using mentions and tags for penetration into new communities.
### Taxonomizing roles within inter-state operations
Within inter-state operations, we observe that different users/accounts appear to have different patterns of behavior. We contextualize these differences through the lens of role analysis, defining primary roles as follows:
* **Administrator.** Manages operations of the information campaign. Administrators self-identify as group leaders through profile information and shared content.
* **Influencer.** A hub of the operation with high in-degree centrality. Typically, an influencer is the source of the information who exploits fake news sites and may have multiple similar accounts in the network to avoid takedown. We define users with in-degree greater than \(10\) as the influencers in the network.
* **Promoter.** Primarily promotes content for enhanced visibility and engagement. We define users with out-degree greater than \(10\) as promoters in the network.
* **Broker.** A gatekeeper, connecting multiple communities/organizations with relatively high in-degree and out-degree centrality. We identify users who meet criteria for both promoter and influencer as brokers.
* **Follower.** An actor with minor (observable) impact within the operation. Users that are not identified within aforementioned roles are categorized as followers.
We leverage this taxonomy in the case studies which follow. The two case studies are selected for their diversity with respect to structure and content.
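A degree-threshold reading of this taxonomy could be implemented as in the sketch below (illustrative only); the threshold of 10 follows the definitions above, while the toy graph and the administrator list are placeholders, since administrators are identified from profile content rather than from network structure.

```python
import networkx as nx

def assign_roles(G, administrators=frozenset(), threshold=10):
    """Assign administrator/influencer/promoter/broker/follower roles from in-/out-degree (sketch)."""
    roles = {}
    for node in G.nodes:
        if node in administrators:          # identified from profiles/content, not degrees
            roles[node] = "administrator"
            continue
        indeg, outdeg = G.in_degree(node), G.out_degree(node)
        if indeg > threshold and outdeg > threshold:
            roles[node] = "broker"
        elif indeg > threshold:
            roles[node] = "influencer"
        elif outdeg > threshold:
            roles[node] = "promoter"
        else:
            roles[node] = "follower"
    return roles

# Toy example: one hub retweeted by many satellite accounts (the radiating motif).
G = nx.DiGraph()
G.add_edges_from(("satellite_%d" % i, "hub") for i in range(15))
print(assign_roles(G, administrators={"satellite_0"}))
```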
### Case Study 1: Coordination between state-linked accounts from Cuba and Venezuela
#### Network structure
We construct the inter-state interaction network between Cuban and Venezuelan state-linked accounts. We observe \(6,469\) interactions between \(56\) Cuban and \(62\) Venezuelan accounts. Notably, the network has two well-connected clusters connected by a single edge (see Figure 5(a)). We observe a relatively balanced network structure between Cuba and Venezuela where the interactive pattern is bidirectional (\(5,464/6,649\) of Cuba's out-degree interactions point to Venezuela, and \(1,005/1,027\) of Venezuela's out-degree interactions point to Cuba). In each cluster, a small subset of Cuban and Venezuelan nodes dominates activity while the remaining nodes connect with them through retweets and mentions. This network structure is a trademark of Cuba's inter-state operations. We observe that Cuba's intra-state interaction network, by contrast, has greater connectivity and more uniform in-/out-degree sequences.
#### Content analysis
Notably, Cuban and Venezuelan information operations appear to center around structured _teams_, and each team has a self-identified administrator.
#### Cluster 1
In Figure 5(b), we dive deeply into the inter-state operation structure. Within Cluster 1, we identify five representative nodes, two attributed to the Venezuelan campaign and three from Cuba. Representative nodes are selected as those with the most within-cluster interactions. The two team leaders are also identified through their user profiles and account descriptions. V1 is a Venezuelan node with high out-degree centrality, and the two Cuban nodes with which V1 frequently interacts are designated C1 and C2.
Actors C1 and C2 are administrators in TeamGoal and TeamPussicats, respectively. The majority of unilateral interactions from V1 to C1 and C2 take the form of direct retweets and retweets from others that mention C1 or C2. Manual content analyses suggest that the primary objective of these two Cuban actors is to promote their team and its members through establishing unique sets of emojis and hashtags that symbolize their team identities and consistent mentioning of the team leaders. By retweeting C1 and C2 as well as other members of their teams, V1 serves as a promoter of their content. Venezuelan node V2 has bilateral coordination with both C1 and C2. The majority of V2's activities are replies to users or tweets with direct mentions. These two sets of tweets focus mostly on promoting members of teams including C1 and C2 and others, e.g., GhostBand Team, Orgasmas Team, Incognitos Team, CodiceRasta Team, ElBunker, and Team Dioses_Miticos. V2 acts as a broker between the aforementioned groups, facilitating communication and collaboration between the teams and the team members.
#### Cluster 2
Within Cluster 2, we identify node V3 as an influencer. V3 primarily distributes fake news via the use of URLs and hashtags that receive considerable engagement (\(65.26\%\) of total tweets from Cuba are connected to V3). We note that numerous accounts that closely resemble this suspended account still exist in the current social network to avoid being taken down by Twitter, as shown in Cluster 2 of Figure 5(b). V3 is linked to C4, C5, and several other Cuban promoter accounts, through both explicit (retweeting) and implicit interactions (common hashtags).
Role analysis of accounts within the inter-state Cuban-Venezuelan operations (see Supplementary Material) suggests that, broadly speaking, Venezuelan accounts act as influencers, while Cuban accounts primarily promote content shared by Venezuelan accounts.
#### URL analysis
We collect account profile information and tweets of all accounts engaged in inter-state operations between Cuba and Venezuela. In total, there are 16 accounts (\(13.56\%\) of actors from both countries) whose profiles contain URLs. Further, we identify 1,913 unique URLs within their tweets, occurring over 2,553 interactions (\(39.47\%\) of total interactions) (see Supplementary Material). Invalid URLs that include broken links and cannot be manually identified by name are removed. Then, we manually verify the status of each link, categorizing each as active if the link is still functioning and inactive if the content has been removed or the account has been set to private. A substantial number of URLs are invalid (\(70.10\%\)), indicating that most were temporary. We observe that the majority of valid URLs in profiles redirect to accounts on other social networking sites like Facebook, Instagram, and YouTube (\(9.27\%\) of total valid URLs, \(37.74\%\) of which are still active), and blogs with little or no regulations (\(28.85\%\) of total valid URLs, \(100\%\) of which are still active). URLs within tweets often direct to politically-oriented news (\(57.00\%\) of total valid URLs), e.g., Telesurtv. Some also point to social media content management applications such as Twitter to bypass the character limit and Twitter for picture archiving (still accessible after the account has been taken down), as well as to manage followers from social media, likely for open follow practices, e.g., Tuitil.
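The URL bookkeeping described above can be approximated along the following lines (not the authors' pipeline): URLs are pulled from tweet text with a simple pattern, reduced to domains, and tallied. The tweets shown are placeholders, and the manual liveness check performed in the paper is not reproduced here.

```python
import re
from collections import Counter
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

# Placeholder tweet texts from accounts engaged in inter-state coordination.
tweets = [
    "Sigueme y te sigo! https://example-follow.net/abc #siguemeytesigo",
    "Breaking: https://www.telesurtv.net/some-story via @someaccount",
    "https://twitter.com/status/123 more context here",
]

urls = [u.rstrip(".,)") for text in tweets for u in URL_RE.findall(text)]
domains = Counter(urlparse(u).netloc.lower().removeprefix("www.") for u in urls)
print(len(set(urls)), domains.most_common(5))
```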
### Case Study 2: Coordination between state-linked accounts from Russia and Iran
#### Dynamic network structure
We construct a dynamic view of inter-state operations between Russia and Iran through four interaction networks, see Figure 6(a). We aggregate all interactions prior to 2016 and after 2018, since the majority of interactions occur between 2016 and 2017;
2016 and 2017 are represented as snapshots. A total of 54 Russian accounts and 329 Iranian accounts are represented in the network. We observe several clusters that contain one Russian node and multiple Iranian nodes, in a radiating pattern. The radiating network topology is well-suited for executing strategic aims such as news media distribution and social network expansion.
Russian and Iranian accounts are dynamically involved in inter-state coordination globally (see Supplementary Material). These statistics are calculated over year-long network snapshots, beginning in 2011. We observe temporal uniformity for Iranian inter-state information operations; specifically, accounts reach their out-degree interaction peaks at around the same time (see Supplementary Material). This may suggest that inter-state operations were/are conducted concurrently. Although Iran has more incoming than outgoing interactions (see Table 1), Iran also directs a particularly large number of outgoing interactions at Russia. Upon closer inspection, we observe that the reason behind Iran's high in-degree centrality is that several central news outlets, e.g., the Iranian state-controlled Hispantv, are frequently and consistently retweeted by accounts linked to Cuba, Venezuela, and Honduras (99.01%, 99.58%, and 99.7% of the total tweets, respectively). Thus, Iran may be passively engaged in those interactions. However, in its coordination with Russia, Iranian actors actively disseminate content created by Russian actors, e.g., targeting the U.S. 2016 election.
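The year-by-year snapshots can be assembled from timestamped edges roughly as in the sketch below (not the authors' code); the edge records and column names are placeholders.

```python
import pandas as pd
import networkx as nx

# Hypothetical timestamped inter-state edges (source account, target account).
edges = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-06-01", "2016-03-11", "2016-09-30", "2017-05-02"]),
    "src": ["ir_acct_1", "ir_acct_2", "ir_acct_3", "ir_acct_1"],
    "dst": ["ru_acct_1", "ru_acct_1", "ru_acct_2", "ru_acct_1"],
})
edges["year"] = edges["timestamp"].dt.year

snapshots = {}
for year, group in edges.groupby("year"):
    snapshots[year] = nx.from_pandas_edgelist(group, source="src", target="dst",
                                              create_using=nx.MultiDiGraph)

# Out-degree of one account across the yearly snapshots (to locate its interaction peak).
for year, G in sorted(snapshots.items()):
    print(year, G.out_degree("ir_acct_1") if "ir_acct_1" in G else 0)
```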
#### Content analysis
Figure 6(b) explores the primary structure of inter-state operations between Russia and Iran. We observe that Iranian accounts actively initiate interaction with Russian actors; approximately 96.81% of activity in the inter-state interaction network are retweets of Russian accounts, the majority of which are affiliated with the Internet Research Agency (IRA). Manual analyses of tweets and profile descriptions indicate that content is primarily focused on political and politicized topics. This is in line with prior work examining Russian information operations targeting the 2016 U.S. election [53, 54]. For instance, the profile description of the central node R2 is:
Unofficial Twitter of Tennessee Republicans. Covering breaking news, national politics, foreign policy and more.
#MAGA #2A
In addition to U.S. election politics, inter-state activity also focuses on racial issues (e.g., R1 and R3). Iranian actors participate in these processes by disseminating material previously posted by Russian actors with high centrality, and connecting all three satellite communities.
As in Case Study 1, we identify account roles in the Russia-Iran inter-state campaign (see Supplementary Material). Russia and Iran differ significantly in terms of number of influencers and followers; Russian actors serve primarily as source content for the Iranian community.
#### URL analysis
Also as in Case Study 1, we collect profile information and tweets from all accounts engaged in inter-state operations between Russia and Iran. In total, there are just 7 accounts (1.83% of actors from both countries) that contain URLs within their profile descriptions. We identify 35 unique URLs within shared tweets over 77 total interactions (15.28% of total interactions). We categorize these URLs by type/platform and active status (see Supplementary Material). Among valid URLs, we see extensive reference to international political news from the United States, the United Kingdom, and Iraq (31.43%). While the majority of linked social media accounts are now defunct (75% for profile URLs and 100% for content URLs), URLs to most news outlets remain active and complement tweet content (68.75% for content URLs).
## Discussion and Conclusions
Rapid and accurate detection of information operations remains a hard problem for social media platforms globally. Evidence that state-linked operations may collaborate or collude to improve the efficacy of their campaigns adds complexity to that challenge. Our work provides such evidence, and therefore informs ongoing efforts to detect and mitigate the impact of information operations deployed through social media. We have highlighted some recurring strategies and tactics employed by inter-state information operations on Twitter. We have observed that a substantial subset of coordinated inter-state activity can be identified as supportive of explicit aims, e.g., targeting high-stakes political events or seeking additional visibility. Regardless of motivation, it appears that inter-state activities are carried out separately from intra-state operations, resulting in a distinctive information ecosystem, or idea habitat. Relatedly, we discover that a majority of inter-state operations exploit ambient affiliation through hashtagging, and that individual accounts in the network may be tasked with distinct roles in some operations architectures. Overarchingly, our findings suggest that information operations represent collaborative work, not only at the individual level but also at the state level. Notably, our analyses also reveal that country size is not necessarily a determinant of the scale of observed inter-state activity. Smaller countries demonstrate the ability to engage in systematic coordination, strategically expanding their internal operations.
The scope of the current work is constrained in several ways. Our analyses make assumptions about the accuracy of identified accounts and about their activity, e.g., that all account activity serves ongoing operations. We assume that substantial observed interaction indicates strategic coordination. Ultimately, the emergence of inter-state coordination and the ways in which observed activities are moderated through planning remains an open question. Furthermore, the extent to which current insights can be leveraged for the advancement of automated approaches for detection of inter-state information operations will likely depend on the uniqueness of inter-state interaction patterns vs., e.g., standard content promotion strategies employed by traditional organizational accounts seeking visibility and influence. Future work would benefit from designing studies to compare these phenomena.
The inter-state activity we uncover here is likely a very small segment of much larger-scale, dynamic, cross-platform, multi-media, partially-observable coordinated operations. Our findings raise many more questions regarding the offline interactions which underlie observed activity, the ways in which they are facilitated, and the broader political agenda which they serve. The answers to these questions will be distinct across countries and over time. Our work therefore highlights the critical role of policy and international relations in this space. It also suggests that whether and how states cooperate to respond to information operations on social media will require a transdisciplinary research and policy agenda bringing together computational and social scientists, policy makers, and stakeholders.
|
2306.03826
|
Thick fluid disks around binary black holes
|
A model of a thick fluid disk around a binary black hole is considered. A
binary black hole is described by the Majumdar-Papapetrou solution. The
hydrodynamic equations in this metric are written out. Exact analytical
solutions are presented. Generalization to the case of a toroidal magnetic
field is carried out.
|
S. V. Chernov
|
2023-06-06T16:13:31Z
|
http://arxiv.org/abs/2306.03826v1
|
# Thick fluid disks around binary black holes
###### Abstract
A model of a thick fluid disk around a binary black hole is considered. A binary black hole is described by the Majumdar-Papapetrou solution. The hydrodynamic equations in this metric are written out. Exact analytical solutions are presented. Generalization to the case of a toroidal magnetic field is carried out.
## I Introduction
Accretion disks around single black holes have important astrophysical significance. On the other hand, on September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory detected a gravitational-wave signal from the source GW150914, thereby confirming the existence of binary black holes [1]. The masses of such black holes are small, only a few tens of solar masses. In addition, binary supermassive black holes can exist in the centers of active galactic nuclei [2]. One of the main examples is the OJ 287 system. It is believed that this system is a binary black hole with masses of 18 billion and 125 million solar masses [3]. The orbital period of such a system is 12 years, and it is assumed that an accretion disk rotates around the more massive black hole and is penetrated by the less massive black hole. Such systems can be formed due to the merger of two or more galaxies with massive black holes [2]. Thus, accretion disks may also exist around binary black holes, the study of which is of astrophysical interest.
In this paper, thick accretion disks around binary black holes are investigated. For simplicity, it is assumed that the binary black hole is described by the Majumdar-Papapetrou metric. Chapter 2 describes the metric and its main properties, which are used in this work. In Chapter 3, the hydrodynamic equations in this metric are written out and exact analytical solutions are constructed. In Chapter 4, the solutions are generalized to the case of a toroidal magnetic field. Chapter 5 provides a conclusion.
In the paper, we use the geometrical units, G=c=1.
## II Basic equations
In this paper we will consider the simplified model of a binary black hole, which is described by the Majumdar-Papapetrou solution [4; 5]. The solution has the form [4; 5]
\[ds^{2}=-\Omega^{-2}dt^{2}+\Omega^{2}(dx^{2}+dy^{2}+dz^{2}), \tag{1}\]
where \(\Omega=1+\sum\limits_{i}\frac{m_{i}}{r_{i}}\), the index \(i\) labels the black holes,
\[r_{i}=\sqrt{(x-x_{i})^{2}+(y-y_{i})^{2}+(z-z_{i})^{2}},\]
\(x_{i},y_{i},z_{i}\) are the coordinates of the black holes and \(m_{i}\) are their masses. In the case of a single black hole, this metric reduces to the metric of the extreme Reissner-Nordstrom black hole. Here we consider the case of two black holes, \(i=2\), and without loss of generality we assume that the black holes are located on the z axis at coordinates (0,0,1) and (0,0,-1), respectively.
Let's rewrite the metric (1) in a cylindrical coordinate system. To do this, we introduce cylindrical coordinates
\[x = r\cos\phi,\] \[y = r\sin\phi,\] \[z = z, \tag{2}\]
in which the metric (1) will be written as
\[ds^{2}=-\Omega^{-2}dt^{2}+\Omega^{2}(dr^{2}+r^{2}d\phi^{2}+dz^{2}), \tag{3}\]
where \(\Omega=1+\sum\limits_{i=1}^{2}\frac{m_{i}}{r_{i}}\), \(r_{i}=\sqrt{r^{2}+(z-z_{i})^{2}}\). This metric has two Killing vectors, \(\xi_{(t)}=\partial/\partial t\) and \(\xi_{(\phi)}=\partial/\partial\phi\) which generate time shifts and rotations around the axis of symmetry \(z\).
We will also need the metric determinant, which is equal to \(\sqrt{-g}=r\Omega^{2}\).
## III Solutions of hydrodynamic equations
In this section we consider a perfect fluid around stationary, axisymmetric binary black holes, where the self-gravity of the fluid is ignored. This means that the following conditions are satisfied: \(\partial/\partial t=0\) and \(\partial/\partial\phi=0\), and all quantities depend on only two variables, r and z. It is also assumed that the fluid rotates only around the axis of symmetry z. This means that the radial and z components of the four-velocity are equal to zero, \(u^{z}=u^{r}=0\), and only the temporal and azimuthal components are different from zero, \(u^{t}\neq 0\), \(u^{\phi}\neq 0\). It is not difficult to see that under such conditions the continuity equation is satisfied identically. The four-velocity of the fluid satisfies
the condition, \(g_{\alpha\beta}u^{\alpha}u^{\beta}=-1\), which in our case will be rewritten as
\[u^{t2}-r^{2}\Omega^{4}u^{\phi 2}=\Omega^{2}. \tag{4}\]
From the conservation laws \(T^{\alpha\beta}_{\ \ ;\beta}=0\) for the energy-momentum tensor of an ideal fluid
\[T^{\alpha\beta}=(P+\rho)u^{\alpha}u^{\beta}+Pg^{\alpha\beta} \tag{5}\]
only two components (r,z) will remain nonzero, which can be written as
\[\frac{1}{P+\rho}\frac{\partial P}{\partial a}=\frac{1}{\Omega}\frac{\partial \Omega}{\partial a}+\frac{u^{\phi 2}}{2\Omega^{2}}\frac{\partial}{\partial a}(r^{2} \Omega^{4}), \tag{6}\]
where the notation \(a\) takes two values, \(a=r\) or \(z\), and \(P\) is the pressure and \(\rho\) is the energy density.
To integrate equation (6), we need to express the pressure and energy density in terms of the enthalpy, \(h\). Assuming that the entropy, \(s\), is constant along the fluid flow lines, it is easy to obtain that
\[\frac{dP}{P+\rho}=\frac{dh}{h}. \tag{7}\]
Thus, we have a set of equations (4) and (6), for the solution of which we need to impose an additional constraint (see [6; 7]).
### Fishbone-Moncrief solution
One such constraint was proposed in [6], where it is assumed that the quantity \(l=u_{\phi}u^{t}\) remains constant. Then, using (4), one can obtain
\[u^{\phi 2}=\frac{-1+\sqrt{1+\frac{4l^{2}}{r^{2}\Omega^{4}}}}{2r^{2}\Omega^{2}} \tag{8}\]
and rewrite the equation (6) in the form
\[\frac{1}{h}\frac{\partial h}{\partial a}=\frac{1}{\Omega}\frac{\partial \Omega}{\partial a}+\frac{-1+\sqrt{1+\frac{4l^{2}}{r^{2}\Omega^{4}}}}{4r^{2} \Omega^{4}}\frac{\partial}{\partial a}(r^{2}\Omega^{4}). \tag{9}\]
It is easy to integrate the above equation (9). As a result, we get
\[\ln(h)=\frac{1}{4}\ln\left(1+\frac{2l^{2}}{r^{2}\Omega^{4}}+\sqrt {1+\frac{4l^{2}}{r^{2}\Omega^{4}}}\right)+\] \[+\ln\Omega-\frac{1}{2}\sqrt{1+\frac{4l^{2}}{r^{2}\Omega^{4}}}-\ln (h_{c}), \tag{10}\]
where it is convenient to determine the integration constant in the plane (z=0) perpendicular to the axis connecting the black holes, at the point (\(z=0\), \(r=r_{b}\))
\[\ln(h_{c})=\ln\Omega_{z=0,r=r_{b}}-\frac{1}{2}\sqrt{1+\frac{4l^{ 2}}{r_{b}^{2}\Omega_{z=0,r=r_{b}}^{4}}}+\] \[+\frac{1}{4}\ln\left(1+\frac{2l^{2}}{r_{b}^{2}\Omega_{z=0,r=r_{b }}^{4}}+\sqrt{1+\frac{4l^{2}}{r_{b}^{2}\Omega_{z=0,r=r_{b}}^{4}}}\right). \tag{11}\]
This solution (10) is a general solution of equations (4) and (6). Figures 1 and 2 show examples of the contours of the logarithm of the enthalpy \(\ln(h)\) for a disk around two black holes. In these figures, negative radii correspond to azimuthal angles \(\phi+\pi\). The boundary conditions were determined at the point (\(z=0\), \(r_{b}=0.1\)). The constant \(l\) was chosen such that \(l^{2}=0.1\).
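Figures 1 and 2 can be reproduced qualitatively from (10)-(11) with a short numerical sketch such as the one below (not part of the paper); the grid resolution and plotting choices are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

m1, m2 = 1.0, 1.0          # black-hole masses
z1, z2 = 1.0, -1.0         # positions on the z axis
l2 = 0.1                   # constant l^2 = (u_phi u^t)^2
rb, zb = 0.1, 0.0          # reference point fixing the integration constant

def Omega(r, z):
    return 1.0 + m1 / np.sqrt(r**2 + (z - z1)**2) + m2 / np.sqrt(r**2 + (z - z2)**2)

def ln_h(r, z):
    """ln(h) from Eq. (10), up to the additive constant ln(h_c) of Eq. (11)."""
    w = Omega(r, z)
    s = np.sqrt(1.0 + 4.0 * l2 / (r**2 * w**4))
    return 0.25 * np.log(1.0 + 2.0 * l2 / (r**2 * w**4) + s) + np.log(w) - 0.5 * s

r = np.linspace(0.02, 4.0, 400)
z = np.linspace(-4.0, 4.0, 400)
R, Z = np.meshgrid(r, z)
lnh = ln_h(R, Z) - ln_h(rb, zb)          # subtract ln(h_c)

plt.contour(R, Z, lnh, levels=[0.0, 0.05, 1.0])
plt.xlabel("r")
plt.ylabel("z")
plt.show()
```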
### Kozlowski et al. solution
Another constraint was suggested in [7], where it is assumed that the quantity \(l=-u_{\phi}/u_{t}\) remains constant. Then it follows from equation (4) that the \(\phi\)-component of the four-velocity is equal to
\[u^{\phi 2}=\frac{l^{2}}{r^{2}\Omega^{2}(r^{2}\Omega^{4}-l^{2})}. \tag{12}\]
Figure 2: Contours \(\ln(h)=0;0.05;1\) are shown by solid curves. Black holes are located at (0,-1) and (0,1). The masses of the black holes are \(m_{1}=1\) and \(m_{2}=0.1\), respectively.
Figure 1: Contours \(\ln(h)=0;0.05;1\) are shown by solid curves. Black holes are located at (0,-1) and (0,1). The masses of the black holes are equal, \(m_{1}=m_{2}=1\).
It follows from expression (12) that the condition \(r^{2}\Omega^{4}>l^{2}\) must be satisfied, which is not possible near the points \(r\approx 0\), \(z\neq\pm 1\). Therefore, in this case, the solution will not completely cover the space-time around the binary black hole. Formally, in this case, we can also fully integrate equation (6). Using equations (7) and (12), we can rewrite equation (6) as
\[\frac{1}{h}\frac{\partial h}{\partial a}=\frac{1}{\Omega}\frac{\partial\Omega }{\partial a}+\frac{l^{2}}{2r^{2}\Omega^{4}(r^{2}\Omega^{4}-l^{2})}\frac{ \partial}{\partial a}(r^{2}\Omega^{4}). \tag{13}\]
After integration we get
\[\ln(h)=\ln\Omega+\frac{1}{2}\ln\left(1-\frac{l^{2}}{r^{2}\Omega^{4}}\right)- \ln(h_{c}). \tag{14}\]
where the integration constant is defined in the same way as in the previous case
\[\ln(h_{c})=\ln\Omega_{z=0,r=r_{b}}+\frac{1}{2}\ln\left(1-\frac{l^{2}}{r^{2}_{ b}\Omega^{4}_{z=0,r=r_{b}}}\right). \tag{15}\]
### Another exact solution
By making various assumptions, other exact analytical solutions can be obtained. For example, assuming that the quantity \(l=u_{\phi}u^{\phi}\) remains constant and using expressions (6) and (7), we obtain the desired equation
\[\frac{1}{h}\frac{\partial h}{\partial a}=\frac{1}{\Omega}\frac{\partial \Omega}{\partial a}+\frac{l}{2r^{2}\Omega^{4}}\frac{\partial}{\partial a}(r^ {2}\Omega^{4}). \tag{16}\]
After integration, we get an exact analytical solution
\[\ln(h)=\ln\Omega+\frac{l}{2}\ln(r^{2}\Omega^{4})-\ln(h_{c}) \tag{17}\]
with the integration constant equal to
\[\ln(h_{c})=\ln\Omega_{z=0,r=r_{b}}+\frac{l}{2}\ln(r^{2}_{b}\Omega^{4}_{z=0,r= r_{b}}). \tag{18}\]
Figures 3 and 4 show examples of the contours of constant enthalpy, \(\ln(h)=-0.1;0;0.1;0.5\), for various parameters of the problem. In figure 3 the black-hole masses are \(m_{1}=m_{2}=1\), while in figure 4 they are \(m_{1}=1\), \(m_{2}=0.1\). The value of \(l\) was chosen as \(l=0.1\).
## IV Solutions with a toroidal magnetic field
Following [8], we can generalize the above solutions to the case of a toroidal magnetic field. The set of ideal relativistic MHD equations is
\[g_{\alpha\beta}u^{\alpha}u^{\beta}=-1,\] \[T^{\alpha\beta}_{\ \ ;\beta}=0,\] \[(nu^{a})_{;\alpha}=0,\] \[F_{\mu\nu,\lambda}+F_{\nu\lambda,\mu}+F_{\lambda\mu,\nu}=0, \tag{19}\]
where \(n\) is the baryon number density, \(F_{\mu\nu}\) is the electromagnetic field tensor and
\[T^{\alpha\beta}=(P+\rho+b^{2})u^{\alpha}u^{\beta}+(P+\frac{b^{2}}{2})g^{ \alpha\beta}-b^{\alpha}b^{\beta}, \tag{20}\]
where \(b^{\alpha}\) is the four-vector of the magnetic field [8]. We also suppose that the flow is stationary, \(\partial/\partial t=0\), axisymmetric, \(\partial/\partial\phi=0\), and that the fluid velocity and magnetic field have only toroidal components, \(u^{r}=u^{\theta}=0\), \(b^{r}=b^{\theta}=0\). Then the third and fourth equations in (19) are satisfied identically. The \(r\) and \(z\) components of the second equation in (19) can be written in the form
\[\frac{1}{\sqrt{-g}}\frac{\partial}{\partial a}\left(\sqrt{-g}\left(p+\frac{1 }{2}b^{2}\right)\right)=\frac{1}{2}g_{\gamma\delta,a}T^{\gamma\delta}. \tag{21}\]
Figure 3: Contours of constant enthalpy logarithm, \(\ln(h)=-0.1;0;0.1;0.5\), are shown by solid curves. The black holes are located at (0,-1) and (0,1), and their masses are equal, \(m_{1}=m_{2}=1\).
Figure 4: Contours of constant enthalpy logarithm, \(\ln(h)=-0.1;0;0.1;0.5\), are shown by solid curves. The black holes are located at (0,-1) and (0,1), and their masses are \(m_{1}=1\) and \(m_{2}=0.1\).
Substituting the metric coefficients (3) and the energy-momentum tensor (20) into equation (21), after lengthy but straightforward transformations we obtain an expression of the form
\[\frac{\partial}{\partial a}(p+\frac{b^{2}}{2})=\frac{(p+\rho+b^{2} )u^{t2}-b^{t2}}{\Omega^{3}}\frac{\partial\Omega}{\partial a}+\] \[+\frac{(p+\rho+b^{2})u^{\phi 2}-b^{\phi 2}}{2}\frac{\partial}{ \partial a}(r^{2}\Omega^{2}). \tag{22}\]
In order to integrate this equation, we need an equation of state, \(p=k\rho^{\gamma}\), and a relation between the enthalpy \(P+\rho\) and the magnetic pressure \(b^{2}\). Following [8], we assume that these quantities are related linearly,
\[\beta=\frac{P+\rho}{b^{2}}. \tag{23}\]
For simplicity, we assume that the parameter \(\beta\) is constant. Using \(\beta\), we obtain the final equation describing the distribution of a fluid with a toroidal magnetic field around binary black holes:
\[\frac{\beta}{1+\beta}\frac{1}{p+\rho}\frac{\partial p}{\partial a }+\frac{1}{2(1+\beta)b^{2}}\frac{\partial b^{2}}{\partial a}=\frac{1}{\Omega} \frac{\partial\Omega}{\partial a}+\] \[+\frac{u^{\phi 2}}{2\Omega^{2}}\frac{\beta}{1+\beta}\frac{ \partial}{\partial a}(r^{2}\Omega^{4})-\frac{1}{2}\frac{1}{1+\beta}\frac{1}{r ^{2}\Omega^{2}}\frac{\partial}{\partial a}(r^{2}\Omega^{2}). \tag{24}\]
We explicitly integrate this equation below by applying various constraints to the four-velocity of the fluid.
### Fishbone-Moncrief solution
Assuming that the constraint \(l=u_{\phi}u^{t}\) of [6] holds and using equation (8), we obtain from equation (24) an exact analytical solution for the distribution of the fluid with a toroidal magnetic field around the binary black hole:
\[\frac{\beta}{1+\beta}\frac{\gamma}{\gamma-1}\ln(1+k\rho^{\gamma- 1})+\frac{1}{2(1+\beta)}\ln(b^{2})=\] \[=\ln(\Omega)+\frac{\beta}{4(1+\beta)}\ln(1+\frac{2l^{2}}{r^{2} \Omega^{4}}+\sqrt{1+\frac{4l^{2}}{r^{2}\Omega^{4}}})-\] \[-\frac{\beta}{2(1+\beta)}\sqrt{1+\frac{4l^{2}}{r^{2}\Omega^{4}}} -\frac{\ln(r^{2}\Omega^{2})}{2(1+\beta)}-\ln(h_{c}). \tag{25}\]
The integration constant is defined at the point (\(z=0\), \(r=r_{b}\)) in the same way as in section III.1. In the limiting case \(\beta\rightarrow\infty\), we recover solution (10). Figure 5 shows examples of contours of the left-hand side of equation (25) for the values \(-0.2;-0.1;0.25\) and the following parameters: \(\beta=2\), \(m_{1}=m_{2}=1\), \(l^{2}=0.1\), \(r_{b}=0.1\).
If instead the mass of the second black hole is \(m_{2}=0.1\), the contours of the left-hand side of the equation for the same parameters, \(m_{1}=1\), \(l^{2}=0.1\), \(r_{b}=0.1\), change and take the form shown in figure 6.
Comparing figures 1 and 5, or 2 and 6, it can be seen that the toroidal magnetic field can greatly change the behavior of the contours and the qualitative picture of the fluid distribution near binary black holes.
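As an illustration of how the magnetized solution can be used in practice (for example as initial data for MHD codes, cf. the Conclusions), the sketch below numerically inverts equation (25) for the rest-mass density \(\rho\) at a given point, using the linear relation (23) to eliminate \(b^{2}\). This is an assumed workflow written for this edit, not code from the paper; the function \(\Omega(r,z)\), the equation-of-state constants and the treatment of the integration constant are inputs and assumptions of the snippet.

```python
import numpy as np
from scipy.optimize import brentq

def rho_from_eq25(r, z, omega, l2, beta, k, gamma, r_b, rho_max=10.0):
    """Solve eq. (25) for rho at (r, z), with b^2 = (p + rho)/beta from (23)
    and p = k rho^gamma. Returns 0 outside the disk (no positive root)."""
    def rhs(rr, zz):
        Om = omega(rr, zz)
        q = 4.0 * l2 / (rr**2 * Om**4)
        return (np.log(Om)
                + beta / (4 * (1 + beta)) * np.log(1 + 0.5 * q + np.sqrt(1 + q))
                - beta / (2 * (1 + beta)) * np.sqrt(1 + q)
                - np.log(rr**2 * Om**2) / (2 * (1 + beta)))
    def lhs(rho):
        p = k * rho**gamma
        b2 = (p + rho) / beta
        return (beta / (1 + beta) * gamma / (gamma - 1) * np.log(1 + k * rho**(gamma - 1))
                + np.log(b2) / (2 * (1 + beta)))
    target = rhs(r, z) - rhs(r_b, 0.0)          # right-hand side with ln(h_c) fixed at (z=0, r=r_b)
    f = lambda rho: lhs(rho) - target
    try:
        return brentq(f, 1e-12, rho_max)
    except ValueError:                          # no sign change: point lies outside the disk
        return 0.0
```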
### Another exact solution
Finally, consider the case when the constraint is given by the relation \(l=u_{\phi}u^{\phi}\), see section III.3. The solution of equation (24) is then straightforward to obtain.
Figure 5: Contours of the left-hand side of equation (25) for the values \(0;0.25;1.0\) are shown by solid curves. The black holes are located at (0,-1) and (0,1), and their masses are equal, \(m_{1}=m_{2}=1.0\).
Figure 6: Contours of the left-hand side of equation (25) for the values \(-0.2;-0.1;0.25\) are shown by solid curves. The black holes are located at (0,-1) and (0,1), and their masses are \(m_{1}=1.0\) and \(m_{2}=0.1\).
As a result, we get
\[\frac{\beta}{1+\beta}\frac{\gamma}{\gamma-1}\ln(1+k\rho^{\gamma-1})+ \frac{1}{2(1+\beta)}\ln(b^{2})=\\ =\ln(\Omega)+\frac{\beta l}{1+\beta}\ln(r\Omega^{2})-\frac{\ln(r \Omega)}{1+\beta}-\ln(h_{c}), \tag{26}\]
where the integration constant, \(\ln(h_{c})\), is defined similarly to the previous cases.
Figures 7 and 8 show the contours of the left-hand side of equation (26) for the following parameters: \(m_{1}=m_{2}=1\), \(l=0.1\), \(\beta=3\) for figure 7, and \(m_{1}=1\), \(m_{2}=0.1\), \(l=0.1\), \(\beta=2\) for figure 8. Comparing figures 3 and 4 with figures 7 and 8, one can see a strong difference: in the presence of a strong toroidal magnetic field, the thick disk between the black holes is absent.
## V Conclusions
In this paper, we generalized the relativistic theory of thick accretion disks to the case of binary black holes. The black holes were described by the Majumdar-Papapetrou solution, in which the mass of a black hole is compensated by an electric charge. A method for constructing a thick accretion disk was described both in the purely hydrodynamic case and in the presence of a toroidal magnetic field, and exact analytical solutions were written out. These solutions can be used as initial conditions in numerical MHD calculations of the evolution of thick disks around binary black holes.
|
2304.01848
|
Beam Alignment with an Intelligent Reflecting Surface for Integrated
Sensing and Communication
|
In a typical communication system, in order to maintain a desired SNR level,
initial beam alignment (BA) must be established prior to data transmission. In
a setup where a Base Station (BS) Tx sends data via a digitally modulated
waveform, we propose a User Equipment (UE) enhanced with a Hybrid Intelligent
Reflecting Surface (HIRS) to aid beam alignment. A novel multi-slot estimation
scheme is developed that alleviates the restrictions imposed by the Hybrid
Digital-Analog (HDA) architecture of the HIRS and the BS. To demonstrate the
effectiveness of the proposed BA scheme, we derive the CRLB of the parameter
estimation scheme and provide numerical results.
|
Florian Muhr, Lorenzo Zaniboni, Saeid K. Dehkordi, Fernando Pedraza Nieto, Giuseppe Caire
|
2023-04-04T14:57:45Z
|
http://arxiv.org/abs/2304.01848v1
|
# Beam Alignment with an Intelligent Reflecting Surface for Integrated Sensing and Communication
###### Abstract
In a typical communication system, in order to maintain a desired signal-to-noise ratio (SNR) level, initial beam alignment (BA) must be established prior to data transmission. In a setup where a base station (BS) transmitter (Tx) sends data via a digitally modulated waveform, we propose a user equipment (UE) enhanced with a hybrid intelligent reflecting surface (HIRS) to aid beam alignment. A novel multi-slot estimation scheme is developed that alleviates the restrictions imposed by the hybrid digital-analog (HDA) architecture of the HIRS and the BS. To demonstrate the effectiveness of the proposed BA scheme, we derive the Cramer-Rao lower bound (CRLB) of the parameter estimation scheme and provide numerical results.
Beam Alignment, Intelligent Reflecting Surfaces, Integrated Sensing and Communication, Wireless Systems
## I Introduction
Integrated sensing and communication is emerging as a key component of beyond-5G and 6G wireless systems [1]. The increasing demand for higher data rates has led to considering millimeter wave (mmWave) communications with its large frequency bandwidths. These frequencies exhibit high isotropic path loss so that a large beamforming gain is required, which can be achieved by using large antenna arrays and aligning the directional beams of the UE and BS. However, sampling broadband signals of many antennas is in general expensive, which motivates the use of HDA architectures [2] at the BS and UE to reduce hardware cost. We propose to equip a UE with a hybrid-intelligent reflective surface (HIRS) to aid beam alignment (BA). In such a setup, the intelligent reflecting surface (IRS) array is physically mounted on the UE and enables integrated sensing and communication (ISAC). The majority of recent studies have focused on positioning intelligent surfaces between the BS and UE in a fixed manner, where they serve as configurable reflectors to modify the propagation environment. The main objective is to have the IRS either extend the range, increase the rank of the channel matrix [3], or enhance the (radar-) sensing capability [4]. UEs equipped with IRS have recently been studied in [5], where the authors suggest to install large IRS arrays on vehicles to improve the sensing of automotive users. The work of [6] has investigated the use of _Simultaneously Transmitting and Reflecting_ IRS, where the incident wireless signal is divided into transmitted and reflected signals passing into both sides of the space surrounding the surface. The authors of [7] introduce the concept of HIRS, which enables metasurfaces to reflect the impinging signal in a controllable manner, while simultaneously sensing a portion of it. In this work we also adopt such an IRS architecture. Note that this architecture differs from that in [6] in that the former re-transmits a portion of the impinging wave via another set of antenna elements.
The main contribution of this work is to present a scheme where the BS and a _mobile_ UE equipped with such an HIRS perform parameter estimation at each end. The HDA architecture has a limited number of RF chains at both entities, which prohibits conventional multiple-input multiple-output (MIMO) processing and calls for the design of RF-domain reduction matrices. Taking this into account, we develop a multi-slot scheme where the reduction matrices achieve a trade-off between exploring the beam-space and high beamforming gain. The contributions of this work are summarized below:
* We propose to use an HIRS equipped UE to assist the initial BA procedure for highly directional beamforming applications.
* To meet the constraints of the HDA architecture of the arrays at both ends of the system, we propose a novel multi-slot sensing strategy for UE parameter estimation.
* We provide numerical results to demonstrate the effective gain resulting from increasing the physical size of the HIRS array.
#### Notation
We adopt the following standard notation. \((\cdot)^{*}\) and \((\cdot)^{\mathsf{T}}\) denote the complex conjugate and transpose operations, respectively. \((\cdot)^{\mathsf{H}}\) denotes the Hermitian (conjugate and transpose) operation. \(|x|\) denotes the absolute value of \(x\) if \(x\in\mathbb{R}\), while \(|\mathcal{X}|\) denotes the cardinality of a set \(\mathcal{X}\). \(\|\mathbf{x}\|_{2}\) denotes the \(\ell_{2}\)-norm of a complex or real vector \(\mathbf{x}\). \(\mathbf{I}_{m}\) denotes the \(m\times m\) identity matrix and \([n]=\{1,\ldots,n\}\) the set of positive integers.
## II System Model
Consider a BS and a UE that is equipped with an HIRS [7], as depicted in Fig. 1. The BS has \(N_{\mathrm{a}}\) antennas and \(N_{\mathrm{rf}}\) radio frequency (RF) chains, while the HIRS at the UE side has \(L_{\mathrm{a}}\) antennas, namely the \(L_{\mathrm{a}}\) surface elements of the HIRS, and \(L_{\mathrm{rf}}\) RF chains. The UE is connected to the IRS controller that performs BA. An HIRS can sense a portion of the incoming signal and reflect the remaining part in a controllable direction [8]. For an incident signal \(\mathbf{x}\in\mathbb{C}^{L_{\mathrm{a}}}\), the reflection and sensing
signals are \(\mathbf{\Phi}^{\mathsf{H}}\mathbf{x}\) and \(\mathbf{D}^{\mathsf{H}}\mathbf{x}\), respectively, where
\[\mathbf{\Phi} =\text{diag}\left(\beta_{1}e^{j\psi_{1}},\ldots,\beta_{l}e^{j\psi_{l}},\ldots,\beta_{L_{\text{a}}}e^{j\psi_{L_{\text{a}}}}\right) \tag{1}\] \[\mathbf{D} =\text{diag}\left(\bar{\beta}_{1}e^{j\rho_{1}},\ldots,\bar{\beta}_{l}e^{j\rho_{l}},\ldots,\bar{\beta}_{L_{\text{a}}}e^{j\rho_{L_{\text{a}}}}\right) \tag{2}\]
are \(L_{\text{a}}\times L_{\text{a}}\) complex reflection and sensing matrices, and where for \(l\in[L_{\text{a}}]\) the parameter \(\beta_{l}\in[0,1]\) is the amplitude of the reflection coefficients, \(\psi_{l}\in[-\pi,\pi]\) is the tunable phase shift of the reflected signal, \(\rho_{l}\in[-\pi,\pi]\) is the tunable phase shift for the sensed signal and \(\bar{\beta}_{l}=1-\beta_{l}\).
We assume that the phase shifts can be compensated at the combining stage of the UE and thus we set \(\rho_{l}=0\ \forall\ l\) in (2). For simplicity, we choose \(\beta=\beta_{1}=\beta_{2}=\cdots=\beta_{L_{\text{a}}}\) so that
\[\mathbf{\Phi}(\beta,\psi) =\beta\ \text{diag}\left(e^{j\psi_{1}},\ldots,e^{j\psi_{L_{\text{a}} }}\right)\in\mathbb{C}^{L_{\text{a}}\times L_{\text{a}}} \tag{3}\] \[\mathbf{D}(\beta) =(1-\beta)\ \mathbf{I}_{L_{\text{a}}}\in\mathbb{C}^{L_{\text{a}} \times L_{\text{a}}} \tag{4}\]
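As a concrete illustration (a minimal Python sketch added for this edit, not part of the paper), the reflection and sensing matrices of (3) and (4) can be built as follows.

```python
import numpy as np

def irs_matrices(beta, psi):
    """Reflection matrix Phi(beta, psi) of eq. (3) and sensing matrix D(beta)
    of eq. (4); psi is the length-L_a vector of tunable phase shifts."""
    psi = np.asarray(psi, dtype=float)
    Phi = beta * np.diag(np.exp(1j * psi))
    D = (1.0 - beta) * np.eye(len(psi), dtype=complex)
    return Phi, D
```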
### _Channel Model_
Suppose the BS and the UE are equipped with uniform linear arrays (ULAs) with half-wavelength spacings (i.e. \(\lambda_{c}/2\)) between the antenna elements. The array response vectors at the BS and UE are denoted by
\[[\mathbf{a}(\theta)]_{m} =e^{j\pi(m-1)\sin(\theta)}\quad m\in[N_{\text{a}}] \tag{5}\] \[[\mathbf{b}(\phi)]_{l} =e^{j\pi(l-1)\sin(\phi)}\quad l\in[L_{\text{a}}], \tag{6}\]
where \(\theta\in[-\frac{\pi}{2},\frac{\pi}{2}]\) is the angle of arrival (AoA) or angle of departure (AoD) at the BS, and \(\phi\in[-\frac{\pi}{2},\frac{\pi}{2}]\) is the AoA or AoD at the UE.
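The half-wavelength ULA response vectors (5) and (6) have a direct implementation; the following short sketch (illustrative, not from the paper) is used in the later snippets as well.

```python
import numpy as np

def ula_response(angle, n_elements):
    """Array response of an n_elements half-wavelength ULA, eqs. (5)-(6)."""
    return np.exp(1j * np.pi * np.arange(n_elements) * np.sin(angle))
```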
For the downlink (DL) and uplink (UL) transmission, a linear time-varying line-of-sight (LOS) channel is considered. In the delay-Doppler domain, the DL and UL channels are
\[\mathbf{H}^{\text{dl}}(\tau,\nu) =h^{\text{dl}}\mathbf{b}(\phi)\mathbf{a}^{\mathsf{T}}(\theta) \delta(\tau-\tau_{0}/2)\delta(\nu-\nu_{0}/2)\in\mathbb{C}^{L_{\text{a}}\times N _{\text{a}}} \tag{7}\] \[\mathbf{H}^{\text{ul}}(\tau,\nu) =h^{\text{ul}}\mathbf{a}(\theta)\mathbf{b}^{\mathsf{T}}(\phi) \delta(\tau-\tau_{0}/2)\delta(\nu-\nu_{0}/2)\in\mathbb{C}^{N_{\text{a}}\times L _{\text{a}}} \tag{8}\]
where \(h^{\text{dl}}\) and \(h^{\text{ul}}\) are the attenuation coefficients, \(\tau_{0}\) is the two-way delay and \(\nu_{0}\) is the two-way Doppler shift.
The overall two-way channel \(\mathbf{H}_{i}(\tau,\nu)\in\mathbb{C}^{N_{\text{a}}\times N_{\text{a}}}\) in the \(i\)-th slot can be written as a two-dimensional convolution as
\[\mathbf{H}_{i}(\tau,\nu) =\mathbf{H}^{\text{ul}}(\tau,\nu)\ast\mathbf{\Phi}_{i}^{\mathsf{H} }\mathbf{H}^{\text{dl}}(\tau,\nu)\] \[=h^{\text{dl}}h^{\text{ul}}\mathbf{a}(\theta)\mathbf{b}^{\mathsf{ T}}(\phi)\mathbf{\Phi}_{i}^{\mathsf{H}}\mathbf{b}(\phi)\mathbf{a}^{\mathsf{T}}( \theta)\delta(\tau-\tau_{0})\delta(\nu-\nu_{0})\] \[=h(\mathbf{\Phi}_{i})\mathbf{a}(\theta)\mathbf{a}^{\mathsf{T}}( \theta)\delta(\tau-\tau_{0})\delta(\nu-\nu_{0}), \tag{9}\]
where \(\mathbf{\Phi}_{i}\) is the reflection matrix of the IRS configured by the UE in the \(i\)-th slot and
\[h(\mathbf{\Phi}_{i})\coloneqq h^{\text{dl}}h^{\text{ul}}\mathbf{b}^{\mathsf{T}}( \phi)\mathbf{\Phi}_{i}^{\mathsf{H}}\mathbf{b}(\phi). \tag{10}\]
is the two-way channel coefficient.
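Putting the pieces together, the two-way channel coefficient (10) can be evaluated numerically as in the sketch below (illustrative code written for this edit; `irs_matrices` refers to the helper sketched in Section II).

```python
import numpy as np

def two_way_coefficient(h_dl, h_ul, phi, Phi):
    """Two-way channel coefficient h(Phi_i) of eq. (10) for UE angle phi
    and reflection matrix Phi (e.g. built with irs_matrices above)."""
    L_a = Phi.shape[0]
    b = np.exp(1j * np.pi * np.arange(L_a) * np.sin(phi))    # b(phi), eq. (6)
    return h_dl * h_ul * (b @ Phi.conj().T @ b)              # h_dl * h_ul * b^T Phi^H b(phi)
```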
### _Orthogonal Frequency-Division Multiplexing Signaling_
We consider multi carrier modulation with orthogonal frequency-division multiplexing (OFDM). To avoid inter-symbol interference (ISI) between OFDM symbols, each symbol is preceded by a cyclic prefix (CP) of duration \(T_{\text{cp}}\), resulting in an overall symbol duration of \(T_{o}=T+T_{\text{cp}}\). The OFDM modulated signal in the \(i\)-th slot is thus
\[s_{i}(t)=\sum_{n,m}x_{i}[n,m]\text{rect}\left(\tfrac{t-nT_{\text{c}}}{T_{o}} \right)e^{j2\pi m\Delta f(t-T_{\text{cp}}-nT_{o})} \tag{11}\]
with average power constraint
\[\mathbb{E}[|x_{i}[n,m]|^{2}]=P_{\text{t}},\quad\forall(i,n,m)\]
We assume that pilot symbols are transmitted from the BS for the entire BA duration. For simplicity, we consider a single stream DL transmission such that we can express the beamformed transmitted signal as
\[\mathbf{s}_{i}(t)=\mathbf{f}\sum_{n,m}x_{i}[n,m]\text{rect}\left(\tfrac{t-nT_{ \text{c}}}{T_{o}}\right)e^{j2\pi m\Delta f(t-T_{\text{cp}}-nT_{o})} \tag{12}\]
where \(\mathbf{f}\in\mathbb{C}^{N_{\text{a}}}\) is a generic beamforming (BF) vector of unit norm. We design \(\mathbf{f}\) so that it covers a section of the beamspace with a constant gain in the main beam, and very low gain elsewhere (see [9] for details).
### _Received Signal Model_
The received signal at the UE after channel (7) is processed by the sensing matrix \(\mathbf{D}_{i}\) and a combining matrix \(\mathbf{U}_{i}\in\mathbb{C}^{L_{\text{a}}\times L_{\text{rf}}}\) resulting in the analog linear processing matrix \(\mathbf{V}_{i}=\mathbf{D}_{i}\mathbf{U}_{i}\). After removing the CP and applying standard OFDM processing, the sampled signal is (see e.g. [10])
\[\mathbf{y}_{i}[n,m]=\\ g^{\text{dl}}\mathbf{V}_{i}^{\mathsf{H}}\mathbf{b}(\phi)x_{i}[n,m]e^{j2 \pi(nT_{o}\frac{\nu_{0}}{2}-m\Delta f\frac{\nu_{0}}{2})}+\mathbf{w}_{i}[n,m], \tag{13}\]
where we have defined \(g^{\text{dl}}\coloneqq h^{\text{dl}}\mathbf{a}^{\mathsf{T}}(\theta)\mathbf{f}\). Similarly, the received (back-scattered) signal at the BS in the \(i\)-th slot is
\[\mathbf{r}_{i}[n,m]=\\ g_{i}^{\text{ul}}\mathbf{U}_{\text{BS},i}^{\mathsf{H}}\mathbf{a}(\theta)x_{i}[n,m]e^{j2\pi(nT_{o}\nu_{0}-m\Delta f\tau_{0})}+\mathbf{n}_{i}[n,m], \tag{14}\]
where \(\mathbf{n}_{i}[n,m]=\frac{1}{M}\sum_{k=0}^{M-1}\mathbf{n}_{i}[n,k]e^{-j2\pi\frac{km}{M}}\) is the noise after the discrete Fourier transform (DFT) with \(\mathbf{n}_{i}[n,m]\sim\mathcal{N}_{\mathcal{C}}(\mathbf{0},\sigma^{2}\mathbf{I}_{N_{\text{rf}}})\), and \(g_{i}^{\text{ul}}\coloneqq h(\mathbf{\Phi}_{i})\mathbf{a}^{\mathsf{T}}(\theta)\mathbf{f}\) is the overall complex UL channel coefficient.
Fig. 1: Schematic of the system model. The HIRS architecture adopts the model in [7].
### _Design of Receive Beamformers_
As discussed in section II-C, the UE and BS apply a combining matrix to the signal received at their respective ULAs, due to the implemented hybrid BF architecture. To meet the page limit in this article, we provide only a brief overview of the design strategy for the sequence of combining matrices. The main concept here is to design these matrices such that they probe different narrow angular sectors of the beam space across different slots. To this end, using a method based on solving a magnitude least-squares problem for designing BF vectors in [9, 11], we obtain a codebook of beamforming vectors \(\mathcal{U}_{\text{UE}}=\{\mathbf{u}_{1},...,\mathbf{u}_{K}\}\), where each of the \(k\in[K]\) codewords is a _flat-top_ beam designed to cover a specific section of the desired field of view such that the codewords are not overlapping. In every slot \(i\) of BA, the UE randomly samples \(L_{\text{rf}}\) BF vectors \(\{\hat{\mathbf{u}}_{1},...,\hat{\mathbf{u}}_{L_{\text{rf}}}\}\) from \(\mathcal{U}_{\text{UE}}\) and obtains its combining matrix \(\mathbf{U}_{i}=\frac{1}{\sqrt{L_{\text{rf}}}}[\hat{\mathbf{u}}_{1},\ldots,\hat{\mathbf{u}}_{L_{\text{rf}}}]\). A similar procedure takes place at the BS to obtain the BF vectors \(\mathbf{U}_{\text{BS},i}\) indicated in (14).
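A sketch of the slot-wise combiner construction described above (illustrative Python, with the flat-top codebook passed in as a precomputed matrix) is:

```python
import numpy as np

def draw_combiner(codebook, L_rf, rng=np.random.default_rng()):
    """Randomly sample L_rf codewords from the L_a x K codebook U_UE and
    stack them into the slot combiner U_i = (1/sqrt(L_rf)) [u_1, ..., u_Lrf]."""
    cols = rng.choice(codebook.shape[1], size=L_rf, replace=False)
    return codebook[:, cols] / np.sqrt(L_rf)
```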
## III Beam Alignment
### _Multi-Slot Maximum Likelihood Estimation_
To solve the BA problem, both the BS and UE must estimate their AoAs. We derive a maximum likelihood (ML) scheme and, to increase the accuracy, we suppose that in a certain slot of BA all the observations up to the current slot are taken into account for the AoA estimation so that the accuracy improves over time. Since the overall complex UL channel coefficient \(g_{i}^{\text{ul}}\) of the BS might vary in each slot due to the chosen IRS configuration, we additionally derive the multi-slot maximum likelihood estimation (MMLE) at the BS for the case of slot-wise varying complex channel coefficients. We thus first derive the multi-slot ML estimate at UE and we rewrite (13) as
\[\boldsymbol{y}_{i}[n,m]=g^{\text{dl}}\mathbf{V}_{i}^{\text{H}}\mathbf{b}( \phi)x_{i}[n,m]t_{n,m}(\tau_{0},\nu_{0})+\boldsymbol{w}_{i}[n,m], \tag{15}\]
where \(t_{n,m}(\tau_{0},\nu_{0})\coloneqq e^{-j\pi(m\Delta f\tau_{0}-nT_{d}\nu_{0})}\). We can reformulate the expression of (15) by stacking the \(NM\) observations into a column vector \(\boldsymbol{y}_{i}\in\mathbb{C}^{NML_{\text{rf}}}\) and, defining the expression \(\mathbf{G}_{i}(\tau_{0},\nu_{0},\phi)\coloneqq\big{(}\mathbf{T}(\tau_{0},\nu_ {0})\otimes\mathbf{V}_{i}^{\text{H}}\mathbf{b}(\phi)\big{)}\), the column vector can be defined as
\[\boldsymbol{y}_{i}=g^{\text{dl}}\mathbf{G}_{i}(\tau_{0},\nu_{0},\phi) \boldsymbol{x}_{i}+\boldsymbol{w}_{i}. \tag{16}\]
The likelihood-function of \(\boldsymbol{y}_{i}\) is
\[L(\boldsymbol{y}_{i};(g^{\text{dl}},\tau_{0},\nu_{0},\phi))= \frac{1}{\text{det}(2\pi\sigma^{2}\mathbf{I}_{NML_{\text{rf}}})^{1/2}}.\] \[\exp\left(-\frac{1}{2\sigma^{2}}\left((\boldsymbol{y}_{i}-g^{ \text{dl}}\mathbf{G}_{i}\boldsymbol{x}_{i})^{\text{H}}(\boldsymbol{y}_{i}-g^{ \text{dl}}\mathbf{G}_{i}\boldsymbol{x}_{i})\right)\right). \tag{17}\]
After collecting all the previous observations up to the \(i\)-th slot \(\boldsymbol{y}^{(i)}=[\boldsymbol{y}_{1},\ldots,\boldsymbol{y}_{i}]\), the log-likelihood function is
\[\ell(\boldsymbol{y}^{(i)}; (g^{\text{dl}},\tau_{0},\nu_{0},\phi))=\log\left(L(\boldsymbol{y }^{(i)};(g^{\text{dl}},\tau_{0},\nu_{0},\phi))\right)\] \[=\sum_{s=1}^{i}\log\left(L(\boldsymbol{y}_{i};(g^{\text{dl}}, \tau_{0},\nu_{0},\phi))\right). \tag{18}\]
Using the ML estimates for unknown parameters in [12] and
\[\mathbf{V}_{(i)} =\sum_{s=1}^{i}\lVert\boldsymbol{x}_{s}\rVert_{2}^{2}\mathbf{V}_ {s}\mathbf{V}_{s}^{\text{H}}\] \[\mathbf{c}_{(i)}(\tau_{0},\nu_{0}) =\left[\sum_{s=1}^{i}\boldsymbol{x}_{s}^{\text{T}}\mathbf{T}( \tau_{0},\nu_{0})\mathbf{Y}_{s}^{\text{H}}\mathbf{V}_{s}^{\text{H}}\right],\]
we can write the ML estimate as
\[(\hat{g}_{i}^{\text{dl}},\hat{\tau}_{i},\hat{\nu}_{i},\hat{\phi}_ {i}) =\operatorname*{arg\,max}_{g^{\text{dl}},\tau_{0},\nu_{0},\phi} \operatorname{Re}\left\{2g^{\text{dl}}\mathbf{c}_{(i)}^{\text{H}}(\tau_{0}, \nu_{0})\mathbf{b}(\phi)\right. \tag{19}\] \[\left.-\big{|}g^{\text{dl}}\big{|}^{2}\mathbf{b}^{\text{H}}( \phi)\mathbf{V}_{(i)}\mathbf{b}(\phi)\right\}\]
Optimizing (19) with respect to \(\operatorname{Re}(g^{\text{dl}})\) and \(\operatorname{Im}(g^{\text{dl}})\), we obtain
\[g^{\text{dl}}_{\text{opt}}=\frac{\mathbf{b}^{\text{H}}(\phi)\mathbf{c}_{(i)}( \tau_{0},\nu_{0})}{\mathbf{b}^{\text{H}}(\phi)\mathbf{V}_{(i)}\mathbf{b}(\phi)}, \tag{20}\]
and the ML estimates
\[(\hat{\tau}_{i},\hat{\nu}_{i},\hat{\phi}_{i})=\operatorname*{arg\,max}_{\tau_{ 0},\nu_{0},\phi}\frac{\left|\mathbf{b}^{\text{H}}(\phi)\mathbf{c}_{(i)}(\tau_{0}, \nu_{0})\right|^{2}}{\mathbf{b}^{\text{H}}(\phi)\mathbf{V}_{(i)}\mathbf{b}(\phi)}, \tag{21}\]
which are approximately found by evaluating the objective function in a finite set of points.
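For concreteness, the concentrated objective in (21) can be evaluated sample-by-sample without explicitly forming the Kronecker products; the sketch below was written for this edit (with simplified bookkeeping, and with no claim to match the authors' implementation). It accumulates the numerator and denominator over all slots and can then be maximized by a brute-force grid search over \((\tau_{0},\nu_{0},\phi)\).

```python
import numpy as np

def ue_ml_objective(phi, tau, nu, Ys, Vs, Xs, T_o, delta_f):
    """Concentrated likelihood of eq. (21), accumulated over slots.

    Ys[s] : L_rf x N x M received samples y_s[n, m]
    Vs[s] : L_a x L_rf analog processing matrix V_s = D_s U_s
    Xs[s] : N x M pilot symbols x_s[n, m]"""
    L_a = Vs[0].shape[0]
    b = np.exp(1j * np.pi * np.arange(L_a) * np.sin(phi))            # b(phi), eq. (6)
    num, den = 0.0 + 0.0j, 0.0
    for Y, V, X in zip(Ys, Vs, Xs):
        N, M = X.shape
        n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
        t = np.exp(-1j * np.pi * (m * delta_f * tau - n * T_o * nu))  # t_{n,m}(tau, nu)
        z = np.einsum("a,ar,rnm->nm", b.conj(), V, Y)                 # b^H V_s y_s[n, m]
        num += np.sum(np.conj(X * t) * z)
        den += np.sum(np.abs(X) ** 2) * np.real(b.conj() @ V @ V.conj().T @ b)
    return np.abs(num) ** 2 / den
```

The estimate \((\hat{\tau}_{i},\hat{\nu}_{i},\hat{\phi}_{i})\) is then the grid point with the largest objective value, as stated above.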
At the BS we use the same steps except that the channel coefficients in (14) depend on the slot index. We can thus rewrite the received signal (14) at the BS as
\[\boldsymbol{r}_{i}=\left(\tilde{\mathbf{T}}(\tau_{0},\nu_{0})\otimes g_{i}^{ \text{ul}}\mathbf{U}_{\text{BS},i}^{\text{H}}\mathbf{a}(\theta)\right) \boldsymbol{x}_{i}+\mathbf{n}_{i} \tag{22}\]
Hence, the ML estimate \((\{\hat{g}_{s}^{\text{ul}}\}_{s=1}^{i},\hat{\tau}_{i},\hat{\nu}_{i},\hat{\theta}_{i})\) is
\[\operatorname*{arg\,min}_{\{g_{s}^{\text{ul}}\}_{s=1}^{i},\tau_{0},\nu_{0},\theta} \operatorname{Re}\left\{\sum_{s=1}^{i}|g_{s}^{\text{ul}}|^{2}\lVert\boldsymbol{x}_{s}\rVert_{2}^{2}\mathbf{a}^{\text{H}}(\theta)\mathbf{U}_{\text{BS},s}\mathbf{U}_{\text{BS},s}^{\text{H}}\mathbf{a}(\theta)\right. \tag{23}\] \[\left.-2g_{s}^{\text{ul}}\boldsymbol{x}_{s}^{\mathsf{T}}\tilde{\mathbf{T}}(\tau_{0},\nu_{0})\mathbf{R}_{s}^{\text{H}}\mathbf{U}_{\text{BS},s}^{\text{H}}\mathbf{a}(\theta)\right\},\]
where \(\mathbf{R}_{s}\in\mathbb{C}^{N_{\text{rf}}\times NM}\) is the matrix of the observation at BS in the \(s\)-th slot. As for the UE, by defining
\[\tilde{\mathbf{U}}_{s} =\lVert\boldsymbol{x}_{s}\rVert_{2}^{2}\mathbf{U}_{\text{BS}, s}\mathbf{U}_{\text{BS},s}^{\text{H}} \tag{24}\] \[\tilde{\mathbf{c}}_{s}(\tau_{0},\nu_{0}) =\left[\boldsymbol{x}_{s}^{\text{T}}\tilde{\mathbf{T}}(\tau_{0}, \nu_{0})\mathbf{R}_{s}^{\text{H}}\mathbf{U}_{\text{BS},s}^{\text{H}}\right]^{ \text{H}},\]
and optimizing (23) with respect to \(\operatorname{Re}(g_{s}^{\text{ul}})\) and \(\operatorname{Im}(g_{s}^{\text{ul}})\), the optimal value of \(g_{s}^{\text{ul}}\) is
\[g_{s,\text{opt}}^{\text{ul}}=\frac{\mathbf{a}^{\text{H}}(\theta)\tilde{ \mathbf{c}}_{s}(\tau_{0},\nu_{0})}{\mathbf{a}^{\text{H}}(\theta)\tilde{ \mathbf{U}}_{s}\mathbf{a}(\theta)}, \tag{25}\]
which yields the ML estimates at the BS in the \(i\)-th slot as
\[(\hat{\tau}_{i},\hat{\nu}_{i},\hat{\theta}_{i})=\operatorname*{arg\,max}_{\tau_{0},\nu_{0},\theta}\sum_{s=1}^{i}\frac{|\mathbf{a}^{\text{H}}(\theta)\tilde{\mathbf{c}}_{s}(\tau_{0},\nu_{0})|^{2}}{\mathbf{a}^{\text{H}}(\theta)\tilde{\mathbf{U}}_{s}\mathbf{a}(\theta)}. \tag{26}\]
### _Cramer Rao Lower Bound_
We derive the CRLB as a benchmark. Let \(g=|g^{\rm dl}|\) and \(\psi_{g}=\angle(g^{\rm dl})\) be the amplitude and phase of \(g^{\rm dl}\), respectively, and define the vector \(\mathbf{\xi}=[g,\psi_{g},\phi,\tau_{0}^{\prime},\nu_{0}]\) with the unknown real parameters. We form the \(5\times 5\) Fisher information matrix whose \((k,l)\)-th element is
\[[{\bf I}(\mathbf{\xi},\mathbf{X})]_{k,l}=\] \[\frac{2}{\sigma^{2}}\sum_{s=1}^{i}\sum_{n,m}{\rm Re}\left\{\frac{\partial\mathbf{s}_{s}^{\sf H}[n,m;\mathbf{\xi}]}{\partial\xi_{k}}\frac{\partial\mathbf{s}_{s}[n,m;\mathbf{\xi}]}{\partial\xi_{l}}\right\}, \tag{27}\]
where \(\mathbf{X}=\{\mathbf{X}_{1},\ldots,\mathbf{X}_{i}\}\) is the set of all pilot symbols sent up to the \(i\)-th slot, and \(\mathbf{X}_{s}=\{x_{s}[n,m]\}\)\(\forall\)\(n,m\) the set of all pilot symbols sent in the \(s\)-th slot. The expression (27) can be manipulated to take the following structure:
\[{\bf I}(\mathbf{\xi},\mathbf{X})=\frac{1}{\sigma^{2}}\begin{bmatrix}I_{gg}&0&I_{g\phi}&0&0\\ 0&I_{\psi_{g}\psi_{g}}&I_{\psi_{g}\phi}&I_{\psi_{g}\tau_{0}^{\prime}}&I_{\psi_{g}\nu_{0}}\\ I_{g\phi}&I_{\psi_{g}\phi}&I_{\phi\phi}&I_{\phi\tau_{0}^{\prime}}&I_{\phi\nu_{0}}\\ 0&I_{\psi_{g}\tau_{0}^{\prime}}&I_{\phi\tau_{0}^{\prime}}&I_{\tau_{0}^{\prime}\tau_{0}^{\prime}}&I_{\tau_{0}^{\prime}\nu_{0}}\\ 0&I_{\psi_{g}\nu_{0}}&I_{\phi\nu_{0}}&I_{\tau_{0}^{\prime}\nu_{0}}&I_{\nu_{0}\nu_{0}}\end{bmatrix}. \tag{28}\]
Let \(\hat{\phi}\) be an unbiased estimator of \(\phi\). Since we consider only AoA estimation, it can be further simplified to yield the approximated CRLB in the \(i\)-th slot as (29), where we defined \(\tilde{\bf b}(\phi)\) as \(\tilde{\bf b}(\phi)=\text{diag}(0,\ldots,L_{\rm a}-1)\,{\bf b}(\phi)\).
### _IRS parameter tuning_
We present here a method to set the IRS parameters, namely \(\beta\) and \(\{\psi_{i}\}_{i=1}^{L_{\rm a}}\), in order to help the BS estimate its AoD. We define the moving standard deviation of the UE local estimate at time slot \(i\) as
\[\sigma(\phi_{i-N_{\rm w}+1}^{i})\coloneqq\sqrt{\frac{1}{N_{\rm w}-1}\sum_{j=0 }^{N_{\rm w}-1}\left(\hat{\phi}_{i-j}-\overline{\phi}_{i-N_{\rm w}+1}^{i} \right)^{2}}, \tag{30}\]
where \(\overline{\phi}_{i-N_{\rm w}+1}^{i}\coloneqq\frac{1}{N_{\rm w}}\sum_{j=0}^{N_ {\rm w}-1}\hat{\phi}_{i-j}\) is the moving average of the estimate. Our method sets \(\beta=0\) until the moving standard deviation drops below a predefined threshold, and \(\beta=1\) thereafter. In particular, we select the threshold as the \(3\,\mathrm{dB}\) beamwidth of an \(L_{\rm a}\)-antenna ULA, given by [13, Ch. 6]
\[\Theta_{3\,\mathrm{dB}}=2\left[\frac{\pi}{2}-\arccos\frac{2\cdot 1.391}{\pi L _{\rm a}}\right]. \tag{31}\]
Regarding the IRS phase shifts, it is straightforward to observe that the magnitude of the two-way coefficient in (10) is maximized when we set \(\psi_{l}=2\pi(l-1)\sin(\phi)\) for \(l\in[L_{\rm a}]\).
The resulting IRS configuration strategy is
\[\begin{cases}\mathbf{\Phi}_{i}(\beta,\psi)=\mathbf{0}_{L_{\rm a}\times L_{\rm a}}& \quad\mathrm{if}\quad\sigma(\phi_{i-N_{\rm w}+1}^{i})>\Theta_{3\,\mathrm{dB}}, \\ \mathbf{\Phi}_{i}(\beta,\psi)=\text{diag}({\bf b}(2\hat{\phi}_{i}))&\quad\mathrm{ if}\quad\sigma(\phi_{i-N_{\rm w}+1}^{i})<\Theta_{3\,\mathrm{dB}},\\ \mathbf{\Omega}_{i}(\beta)=\mathbf{0}_{L_{\rm a}\times L_{\rm a}}&\quad\mathrm{if} \quad\sigma(\phi_{i-N_{\rm w}+1}^{i})<\Theta_{3\,\mathrm{dB}},\end{cases} \tag{32}\]
where \(\hat{\phi}_{i}\) is the ML estimate obtained by the UE as described in section III-A.
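The tuning rule (30)-(32) reduces to a few lines of code; the following sketch (illustrative, written for this edit, using the phase choice \(\psi_{l}=2\pi(l-1)\sin\hat{\phi}_{i}\) stated above) returns the reflection matrix for the current slot.

```python
import numpy as np

def irs_reflection_for_slot(phi_hats, L_a, N_w):
    """Return Phi_i according to eqs. (30)-(32): keep beta = 0 (pure sensing)
    until the moving std of the last N_w AoA estimates falls below the 3 dB
    beamwidth of an L_a-element ULA, then reflect with beta = 1."""
    theta_3db = 2.0 * (np.pi / 2 - np.arccos(2.0 * 1.391 / (np.pi * L_a)))  # eq. (31)
    window = np.asarray(phi_hats[-N_w:], dtype=float)
    if window.size < N_w or np.std(window, ddof=1) >= theta_3db:            # eq. (30)
        return np.zeros((L_a, L_a), dtype=complex)                          # beta = 0
    psi = 2.0 * np.pi * np.arange(L_a) * np.sin(phi_hats[-1])               # psi_l
    return np.diag(np.exp(1j * psi))                                        # beta = 1
```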
### _Radar Cross Section_
To model the two-way channel between BS and UE, it is fundamental to consider the radar cross-section (RCS) of the IRS. In each slot \(i\) of BA, the RCS of the IRS can be computed by
\[\sigma_{\rm RCS,i}\coloneqq\sigma_{\rm RCS,BBF}\cdot\cos(\phi)\cdot G_{\rm IRS} (\mathbf{\Phi}_{i}) \tag{33}\]
where \(\sigma_{\rm RCS,BBF}\) denotes the RCS of the IRS before BF. Given that the IRS is configured for reflection towards a certain direction, its RCS increases towards this direction by the achievable IRS gain which is defined in (10) as
\[G_{\rm IRS}(\mathbf{\Phi}_{i})\coloneqq|{\bf b}^{\sf T}(\phi)\mathbf{\Phi}_{i}^{\sf H} \text{b}(\phi)|. \tag{34}\]
We propose a model for the RCS of the IRS based on [14], obtained by considering a realistic IRS array composed of conventional metallic patches. The model takes into account the physical array dimensions and the operating wavelength; the numerical value is given by
\[\sigma_{\rm RCS,BBF}=\frac{4\pi(\frac{\lambda_{\rm c}}{2}L_{\rm a})^{2}(\frac{ \lambda_{\rm c}}{2})^{2}}{\lambda_{\rm c}^{2}}. \tag{35}\]
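As a quick numeric sanity check of (35) (added here; the carrier frequency \(f_{c}=60\,\mathrm{GHz}\) and \(L_{\rm a}=64\) are assumptions of this snippet, taken from the values quoted in the surrounding text):

```python
import numpy as np

c0, f_c, L_a = 3e8, 60e9, 64            # assumed system parameters
lam = c0 / f_c                          # carrier wavelength
sigma_bbf = 4 * np.pi * (lam / 2 * L_a) ** 2 * (lam / 2) ** 2 / lam ** 2
print(10 * np.log10(sigma_bbf))         # about -11 dBsm, cf. Section IV
```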
For performance comparison purposes, we also consider a hypothetical value of the RCS before BF. For this purpose, we assume the IRS fits within a conventional mobile phone. Measurements of the back of a human hand [15] or other similar-sized objects [16] show that one can obtain an average RCS between -20 dBsm and -15 dBsm. However, since such objects present curved shapes and less radar reflectivity than an IRS, it is reasonable to assume that the monostatic RCS of the IRS should be higher than these values, yielding \(\sigma_{\rm RCS,BBF}>-15\) dBsm. Recent work on drones' RCS [17] found that a metallic object with an area of 128 mm \(\times\) 53 mm (similar size to a mobile phone) results in an RCS value of \(\sigma_{\rm RCS,MP}(\lambda_{\rm c})=13\) dBsm at a carrier frequency \(f_{\rm c}=60\) GHz. We thus assume that \(\sigma_{\rm RCS,ABF}\) after perfect BF is upper bounded as \(\sigma_{\rm RCS,ABF}\leq\sigma_{\rm RCS,MP}(\lambda_{\rm c})=13\) dBsm. To this end, we select the hypothetical value of \(-5\,\mathrm{dBsm}\) for this comparison. This value is justified since the IRS gain is upper bounded by
\[G_{\rm IRS}(\mathbf{\Phi}_{i})=|{\bf b}^{\sf T}(\phi)\mathbf{\Phi}_{i}^{\sf H}\text{b}( \phi)|\leq L_{\rm a}=64\equiv 18\,\mathrm{dB}, \tag{36}\]
which means \(\sigma_{\rm RCS,i}\) can reach its upper bound of \(13\,\mathrm{dBsm}\) in case of \(\phi=0\) and perfect reflection.
## IV Numerical Results
We now provide numerical results to verify the effectiveness of the methods proposed in the previous section. In the remainder, we consider the parameters shown in Table I. The channel parameters in (7) and (8) are assumed to remain constant over \(N_{\rm slot}\) slots, defined as the maximum number of slots expected to be necessary for BA. This is justified for moderate values of \(N_{\rm slot}\) since the frame duration is approximately \(50\,\mu\mathrm{s}\). Some of the results are given as a function of the SNR that would be obtained at the UE if no beamforming were used at either the transmitter or the receiver. We refer to this quantity as the SNR before beamforming (\(\rm SNR_{UE,BBF}\)), which is given by
\[\rm SNR_{UE,BBF}\coloneqq\frac{\lambda_{\rm c}^{2}}{(4\pi d)^{2}}\frac{P_{\rm t }}{\sigma^{2}}. \tag{37}\]
First, the AoA estimation accuracy at the UE side is investigated. Figure 2 shows the estimated AoA root mean square error (RMSE) as a function of \(\text{SNR}_{\text{UE},\text{BBF}}\). For evaluation of the RMSE, we run a large number of simulations over a certain range of distances, where at each run the AoA and AoD are chosen uniformly at random from the set \([-87^{\circ},87^{\circ}]\). Note that the discretization error of the ML estimation, i.e. the lowest achievable RMSE due to the discretized grid for the ML estimation in (21), is shown to evaluate the general quality of the MMLE results. It can be observed that the proposed estimation scheme improves significantly with a larger number of slots for BA. We would like to further remark that, although the above simulation presents the ML estimate of the AoA, the ML estimation metrics in (21) and (26), at the UE and BS side respectively, can be used to obtain an estimate of the delay, Doppler and angle parameters simultaneously, with these parameters defined over a three-dimensional grid.
The following figures indicate performance in terms of the achievable spectral efficiency at the UE after obtaining angular estimates and using them to tune the beamformers. This is numerically computed by averaging
\[\log_{2}\left(1+\mathrm{SNR}_{\text{UE},\text{BBF}}|\mathbf{a}^{\mathsf{T}}( \theta)\mathbf{a}^{*}(\hat{\theta})\mathbf{b}^{\mathsf{H}}(\hat{\phi})\mathbf{ b}(\phi)|^{2}\right) \tag{38}\]
over multiple simulations over a range of distances, where \(\hat{\phi}\) and \(\hat{\theta}\) are ML estimates obtained as derived in Section III-A.
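A literal implementation of the per-realization quantity inside the average (38) is shown below (illustrative code written for this edit, with the half-wavelength ULA steering vectors inlined).

```python
import numpy as np

def spectral_efficiency(snr_bbf, theta, theta_hat, phi, phi_hat, N_a, L_a):
    """Instantaneous spectral efficiency of eq. (38) after beam alignment."""
    steer = lambda ang, n: np.exp(1j * np.pi * np.arange(n) * np.sin(ang))
    a, a_hat = steer(theta, N_a), steer(theta_hat, N_a)
    b, b_hat = steer(phi, L_a), steer(phi_hat, L_a)
    gain = np.abs((a @ a_hat.conj()) * (b_hat.conj() @ b)) ** 2
    return np.log2(1.0 + snr_bbf * gain)
```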
It is easy to verify that, by applying the values in Table I to (35), the RCS evaluates to approximately \(-11\,\mathrm{dBsm}\). The achievable spectral efficiency after beamforming when the number of slots for BA is fixed to 32 is shown in Figure 3. There, we consider the analytic RCS, a hypothetical one, and a case where the IRS is replaced by a metallic plate of the same size. It can be observed that the communication performance after BA is close to optimal for both RCS values when the SNR is as low as \(-5\,\mathrm{dB}\) to \(0\,\mathrm{dB}\).
Inspired by the previous result, we now fix the \(\mathrm{SNR}_{\text{UE},\text{BBF}}\) to \(-4\,\mathrm{dB}\) (corresponding to a distance of \(10\,\mathrm{m}\) for our system configuration, reasonable for indoor scenarios) and study performance in terms of achievable spectral efficiency as a function of the number of slots allocated for BA in Figure 4. The result shows that our IRS-based BA method consistently improves spectral efficiency by at least \(2\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\). Note that in all of the above simulations we have used a relatively small transmit power of \(1\,\mathrm{mW}\). By using larger values, as would be the case with most BSs, the effective operational range of the scheme can be extended to meet requirements for larger cell sizes.
## V Conclusions
Motivated by the requirement for BA in mmWave communications with highly directional beamforming, we have proposed the use of an on-device-mounted HIRS to aid the BA procedure. In the proposed scheme, a multi-slot parameter estimation framework is developed to deal with the restriction imposed by the HDA architecture. Our numerical results demonstrate that, with a sufficiently large number of slots, the user device can reliably estimate the AoA of the incoming communication signal and maintain a significantly higher spectral efficiency.
Fig. 2: RMSE value of the AoA estimated at the user side.
## VI Acknowledgment
The authors would like to thank Gerhard Kramer for his careful reading of the manuscript and his very useful feedback.
S. K. Dehkordi and F. Pedraza would like to acknowledge the financial support by the Federal Ministry of Education and Research of Germany in the program of "Souverän. Digital. Vernetzt." Joint project 6G-RIC, project identification number: 16KISK030.
The research of Lorenzo Zaniboni is funded by Deutsche Forschungsgemeinschaft (DFG) through the grant KR 3517/12-1.
|
2310.14464
|
A Cryptographic Perspective on the Verifiability of Quantum Advantage
|
In recent years, achieving verifiable quantum advantage on a NISQ device has
emerged as an important open problem in quantum information. The sampling-based
quantum advantages are not known to have efficient verification methods. This
paper investigates the verification of quantum advantage from a cryptographic
perspective. We establish a strong connection between the verifiability of
quantum advantage and cryptographic and complexity primitives, including
efficiently samplable, statistically far but computationally indistinguishable
pairs of (mixed) quantum states ($\mathsf{EFI}$), pseudorandom states
($\mathsf{PRS}$), and variants of minimum circuit size problems
($\mathsf{MCSP}$). Specifically, we prove that a) a sampling-based quantum
advantage is either verifiable or can be used to build $\mathsf{EFI}$ and even
$\mathsf{PRS}$ and b) polynomial-time algorithms for a variant of
$\mathsf{MCSP}$ would imply efficient verification of quantum advantages.
Our work shows that the quest for verifiable quantum advantages may lead to
applications of quantum cryptography, and the construction of quantum
primitives can provide new insights into the verifiability of quantum
advantages.
|
Nai-Hui Chia, Honghao Fu, Fang Song, Penghui Yao
|
2023-10-23T00:31:51Z
|
http://arxiv.org/abs/2310.14464v1
|
# A Cryptographic Perspective on the Verifiability of Quantum Advantage
###### Abstract
In recent years, achieving verifiable quantum advantage on a NISQ device has emerged as an important open problem in quantum information. The sampling-based quantum advantages are not known to have efficient verification methods. This paper investigates the verification of quantum advantage from a cryptographic perspective. We establish a strong connection between the verifiability of quantum advantage and cryptographic and complexity primitives, including efficiently samplable, statistically far but computationally indistinguishable pairs of (mixed) quantum states (EFI), pseudorandom states (PRS), and variants of minimum circuit size problems (MCSP). Specifically, we prove that a) a sampling-based quantum advantage is either verifiable or can be used to build EFI and even PRS and b) polynomial-time algorithms for a variant of MCSP would imply efficient verification of quantum advantages.
Our work shows that the quest for verifiable quantum advantages may lead to applications of quantum cryptography, and the construction of quantum primitives can provide new insights into the verifiability of quantum advantages.
## 1 Introduction
Quantum advantage experiments aim to demonstrate tasks in which quantum computers outperform classical computers. In recent years, random circuit sampling (RCS) [1] and Boson sampling [1] have emerged as promising proposals since they can be implemented on a NISQ (Noisy Intermediate-Scale Quantum) device and admit _provable_ complexity-theoretic evidence of hardness for classical computers [2]. Besides these two desirable criteria for a quantum advantage experiment, another critical criterion is the ability to _verify_ the outcomes of such experiments, preferably with an efficient classical computer. Verification turns out to be challenging for both RCS and Boson sampling. At present, it remains open to demonstrate a quantum advantage experiment that satisfies all three of these criteria (see Fig. 1 for a summary).
Regarding the verifiability of RCS, linear cross-entropy benchmarking (XEB) was first proposed as a verification method [1]. However, XEB is sample efficient but not computationally efficient, and it can be spoofed [1, 2]. More generally, a work by Hangleiter et al. [1] casts a further shadow on the verifiability of these proposals. They show that if the target distribution anticoncentrates, certifying closeness to the target distribution requires exponentially many samples, which covers RCS, Boson sampling and IQP sampling. This result rules out efficient verification for the known quantum advantage experiments based on sampling.
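For context, the linear XEB score referred to here is typically computed as \(F_{\mathrm{XEB}}=2^{n}\,\mathbb{E}_{i}[p_{\mathrm{ideal}}(x_{i})]-1\); the snippet below is a standard illustrative implementation (this formula comes from the RCS literature and is not derived in this paper).

```python
import numpy as np

def linear_xeb(bitstrings, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: 2^n * mean_i p_ideal(x_i) - 1.

    ideal_probs maps a measured bitstring to its ideal output probability
    under the target circuit (obtained from classical simulation)."""
    p = np.array([ideal_probs[x] for x in bitstrings])
    return 2.0 ** n_qubits * p.mean() - 1.0
```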
What about general quantum sampling experiments? How do we determine whether such an experiment has an efficient verification method? In [10], the verification task is modelled as a game between a quantum party and a classical challenger, which we will discuss further later. However, in this model they can only show the limitations of verification methods that compute the empirical average of some scoring function over individual samples.
### Our results
In this paper, we investigate the verifiability of sampling-based quantum advantage experiments from a _cryptographic_ perspective. To this end, we first put forth formal definitions of verifiability. Subsequently, we study the implications of the hardness of a variant of the minimum circuit size problem (MCSP) for verifiability. Furthermore, we establish the connection between verifiability and fundamental quantum cryptographic primitives: EFI (efficiently generated, statistically far, and computationally indistinguishable states) and PRS (pseudorandom states). Lastly, we generalize verifiable quantum advantage to capture the verifiability of interactive proofs of quantumness. We hope that our work will advance the understanding of the verifiability of quantum advantage experiments and provide insights into the development of future quantum advantage experiments.
Figure 1: Scott Aaronson’s categorization of quantum advantage proposals. Random circuit sampling [1] and Boson sampling [1] are NISQable and Classically hard. Cryptographic proof of quantumness (PoQ) [1, 2] and Shor’s algorithm [14] are classically hard and efficiently verifiable. QAOA [10] and VQE [21] are NISQable and efficiently verifiable.
The model of the verification process is depicted in Fig. 2. It consists of three parties: Alice (a quantum advocate and experiment designer), Bob (a quantum skeptic) and a verifier 6. Alice runs the quantum experiment and sends transcripts of her experiment, including the setup of the experiment apparatus and outcomes, to the verifier. Bob, as a challenger, proposes a classically samplable distribution that is indistinguishable from Alice's distribution, and sends the description of his sampling algorithm along with samples of his distribution to the verifier. The verifier's goal is to distinguish Alice and Bob's samples, so in the rest of the paper we also call him the distinguisher. The distinguisher takes all the information from Alice and Bob as input. In the case of RCS, Alice sends out her random circuit \(C\) and her measurement outcome on \(C|0^{n}\rangle\). Bob proposes a spoofing algorithm and sends the description of the algorithm along with his samples to the distinguisher.
Footnote 6: We came up with this model unaware of the two-party game proposed in [12], although the two models share some similarities.
Definition 1 (Verifiable quantum advantage (Informal)): Let \(\mathfrak{C}\) be a set of polynomial-sized quantum circuits on \(n\) qubits. We say the experiment that samples a \(C\in\mathfrak{C}\) and repeatedly measures the output state in the computational basis achieves verifiable quantum advantage if for all classical polynomial-time samplable distribution \(\mathcal{D}\) whose sampler is \(\mathcal{S}_{\mathcal{D}}\), there exists a classical polynomial time distinguisher \(\mathcal{A}\) such that
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C,\mathcal{S}_{ \mathcal{D}},\mathbf{z}_{C})=1]-\Pr[\mathcal{A}_{D}(C,\mathcal{S}_{\mathcal{D}}, \mathbf{z}_{\mathcal{D}})=1]|\geq 1/\poly(n),\]
where \(\mathbf{z}_{C}\) is a polynomial-sized set of samples generated from measuring \(C|0^{n}\rangle\) in the computational basis, and \(\mathbf{z}_{\mathcal{D}}\) is a set of samples drawn from \(\mathcal{D}\).
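Operationally, the quantity in Definition 1 can be estimated by Monte-Carlo sampling; the sketch below is an illustrative harness written for this edit (all names are placeholders introduced here, not objects defined in the paper).

```python
import numpy as np

def estimated_advantage(A, circuits, q_samples, S_D, d_samples):
    """Empirical estimate of E_C |Pr[A(C,S_D,z_C)=1] - Pr[A(C,S_D,z_D)=1]|.

    A(C, S_D, z) -> {0, 1} is the classical distinguisher; q_samples[C] and
    d_samples[C] are lists of sample sets z_C (quantum) and z_D (spoof)."""
    gaps = []
    for C in circuits:
        p_q = np.mean([A(C, S_D, z) for z in q_samples[C]])
        p_d = np.mean([A(C, S_D, z) for z in d_samples[C]])
        gaps.append(abs(p_q - p_d))
    return float(np.mean(gaps))
```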
We give several \(\mathsf{VQA}\) examples to demonstrate the expressiveness of our verifiability definition, such as Fourier sampling (e.g., based on Shor's algorithm and Simon's problem). Note that our distinguisher is more general than the ones used in the experiments [1] and studied in [12]. Their distinguishers are agnostic about how the classical samples are generated, score each sample individually, and make their decisions based on the average of the scores. As pointed out in [12], if the distinguisher knows the spoofing algorithm for XEB proposed in [11], the distinguisher can distinguish the spoofing samples from the quantum samples. Hence, we define verifiable quantum advantage with respect to such a more general distinguisher.
**Minimum circuit size problem (MCSP) vs. VQA.** We aim to identify the computational hardness of verifying quantum advantages. One potential approach is finding a problem for which
Figure 2: Verification process for RCS: The verifier publishes a circuit family \(\mathfrak{C}\). Then Alice sends back \(C\in\mathfrak{C}\) and samples \(\mathbf{z}_{C}\) obtained from measuring \(C|0^{n}\rangle\), and Bob sends back the description of the sampler \(\mathcal{S}_{D}\) for his classically samplable spoofing distribution \(D\), along with samples \(\mathbf{z}_{D}\).
the existence of efficient algorithms would lead to efficient verification, which is similar to the connections between Meta-complexity problems and cryptography.
Meta-complexity problems, which ask to identify specific complexity measures (e.g., circuit complexity) of given Boolean functions, are a fundamental topic in complexity theory. It is worth noting that efficient algorithms for these problems imply that one-way functions do not exist [10, 14]. Chia et al. [10] investigated quantum minimum circuit size problems (MCSP) by considering the hardness of identifying the quantum circuit complexity of functions, states, and unitary matrices. They showed that the existence of efficient algorithms for these problems leads to efficient algorithms for breaking all pseudorandom state schemes and post-quantum one-way functions.
Inspired by the connections between meta-complexity problems and cryptography, we introduce a variant of meta-complexity problems called the _minimum circuit size problem for samples_ (SampMCSP), which asks for the minimum size of a classical sampler that can generate samples indistinguishable from the given samples. This problem is analogous to the state minimum circuit size problem introduced in [10], which asks to identify the quantum circuit complexity of given quantum states. We demonstrate that if SampMCSP can be solved in polynomial time, then a class of quantum advantage experiments can be verified efficiently.
**EFI vs. VQA**. Next, we study the relationships between verifiability and the quantum cryptographic primitive EFI. EFI is a fundamental quantum cryptographic primitive, which is equivalent to quantum commitment schemes, quantum oblivious transfer, quantum multi-party computation and others [1]. Note that, classically, one-way functions are necessary but might not be sufficient to build these applications.
We show that a type of _duality_ exists between EFI and verifiable quantum advantage, when we consider classically-secure EFI pairs, i.e., whose computational indistinguishability holds only against classical algorithms.
Theorem 1 (Informal): _If the quantum advantage of a quantum experiment is verifiable, then the output states do not form an EFI pair with any quantum state that encodes a classical samplable distribution._
Theorem 2 (Informal): _If the average of output states of a quantum experiment is statistically far from any quantum state that encodes a classically samplable distribution and the output states do not form a classically secure EFI with any classical polynomial-time samplable distribution, then the experiment is verifiable._
If we allow verifying quantum advantage by a quantum computer, we obtain a similar duality between quantum-secure EFIs and quantum verifiability. We think that this model with quantum verifiers is also worth exploring and is discussed more in Section 7.
These results provide necessary and sufficient conditions for verifiability based on whether the quantum circuit family can form an EFI pair with a classical polynomial-time samplable distribution, respectively. To the best of our knowledge, all existing EFI pairs satisfy such a property, i.e., one of the EFI generators can be simulated by classical polynomial-time algorithms.
**Pseudorandom states (PRS) vs. VQA**. A set of states is a PRS if a random state in this set is computationally indistinguishable from a Haar random state [11]. PRS is an essential quantum cryptographic primitive that can be used to build other primitives, including one-time digital signature and EFI. Moreover, the existence of PRS implies the existence of EFI, and thus the
aforementioned applications that are equivalent to EFI can also be constructed from PRS. Moreover, there is evidence showing that the existence of PRS is a weaker assumption than the existence of one-way functions [10].
Intuitively, if the output states of a quantum advantage experiment are pseudorandom, the measurement output distribution should be indistinguishable from the measurement output distribution of Haar random states. Moreover, the measurement output distribution of Haar random states can be approximated by a classical distribution, so this quantum advantage experiment doesn't achieve verifiability. However, in the definition of PRS, the distinguisher is unaware of the preparation circuit of the given state, whereas the distinguisher in a quantum advantage experiment is aware of it. Hence, we can only prove this result for a subclass of PRS, called classically unidentifiable PRS, which intuitively says that when distinguishing samples from measuring different states, knowing the circuit doesn't help.
Theorem 3 (Informal): _If the quantum advantage of a quantum sampling algorithm is verifiable, then the output states are not classically unidentifiable PRS._
The motivation behind Theorem 3 is that RCS is proposed as a candidate construction of PRS[13]. If the output states of random circuits are classically unidentifiable, Theorem 3 gives us a proof that RCS experiments are unverifiable. Note that [10] shows the distribution induced by measuring a random circuit is indistinguishable from some classical distribution, which doesn't imply RCS is not VQA according to Definition 1. Conversely, Theorem 3 also tells us that if some construction of PRS fails, it is possible to use this construction for verifiable quantum advantage. This is a win-win situation.
**What about interactive quantum advantage experiments?** So far, we have focused on sampling-based quantum advantage experiments. There are interactive verifiable quantum advantage proposals called proof of quantumness (PoQ) [1, 2, 3, 4]. These PoQs achieve verifiability, but one obstacle in implementing these protocols is maintaining coherence during the interactions.
Hence, we generalize Definition 1 to capture the strength of both Definition 1 and the verifiability of PoQ. In the generalized definition, the trusted party is the _designated verifier_, who generates public parameters and a private verification key. After getting all the samples, the designated verifier uses the verification key to distinguish Alice's quantum samples from Bob's samples. We call this _Designated verifiable quantum advantage_ or DVQA.
Under this definition, the trusted verifier is offline, so Alice doesn't need to interact with the trusted verifier and can generate the samples on her own as in Definition 1. Moreover, it is possible to compile existing PoQ to satisfy the new definition. For example, assuming a random oracle, the interactive protocol of [1] fits this definition. The function keys and trapdoors of their protocol are the public parameters and private verification keys here. Then, the classical or quantum prover can run the operations of the verifier in the original protocol locally by querying the random oracle for the challenges. In the end, the prover sends all the generated transcripts to the distinguisher \(\mathcal{A}\), who uses the verification key to distinguish the transcripts. In the compiled protocol, the verifier is offline as in Definition 1, and the verifiability of the original PoQ is preserved.
**Implications.** We offer a few perspectives.
* For a quantum advocate (experiment designer): The study of quantum cryptography can provide new insights into designing a verifiable quantum advantage experiment. For example, one possible route indicated by Theorem 2 is to start with a classically insecure EFI, and then apply some amplification technique to dilate the statistical distance to obtain the strong quantum advantage while remaining classically insecure.
* For a quantum skeptic: A spoofing strategy can be found through the lens of quantum cryptography. Theorem 1 says that the spoofing distribution can be a distribution that forms an EFI pair with most of the output states. Theorem 3 says that if the output states of an experiment are classically unidentifiable PRS, the uniform distribution suffices.
* For a quantum cryptographer: The quest for verifiability of quantum advantages might lead to quantum cryptographic applications. Theorem 2 implies that if an experiment is not verifiable, then it will form a classical-secure EFI with a classical polynomial-time samplable distribution. Since Theorem 2 can be lifted against quantum adversaries, it is possible to build standard EFI and the primitives based on EFI from a quantumly unverifiable experiment.
In summary, our results show connections between the verifiability of quantum advantages and the quantum cryptographic primitives. It is worth noting that computational tasks demonstrating quantum advantages on near-term quantum devices might not directly result in useful applications; however, our results show that the quest for quantum advantages and their verifiability can provide new insights and methods to build fundamental quantum cryptographic primitives.
### Open problems
As this is only an initial attempt at studying the relationship between the verifiability of quantum advantage experiments and quantum cryptographic primitives, there are many open problems. We list some of them here.
**Random circuits, PRS, and EFI.**: Are the output states of random circuits PRS, or even classically unidentifiable PRS? Similarly, can we use random circuits to construct EFI? There is evidence that the output states of random circuits are PRS. For example, it is known that polynomial-sized random circuits are approximate poly-designs [1], which indicates that output states of random circuits are highly indistinguishable from Haar random states. Also, it is possible to build an EFI by pairing random circuits and other sufficiently random samplers while ensuring the two output states are statistically far.
**Quantum cryptography on NISQ devices.**: If the output states of random circuits are not PRS, can we still construct PRS using less structured NISQ circuits with a fixed architecture? The known constructions of PRS[1, 2, 3, 4] all require structured circuits, although some of them only require shallow circuits. If the NISQ device can construct classically _identifiable_ PRS, our result cannot rule out the possibility of verifiable quantum advantage on NISQ devices. Similarly, it is interesting to know whether one can use NISQ devices to construct EFI.
**The effect of noise.**: It is known that efficient sampling from the output distribution of a noisy random quantum circuit can be done classically [1]. What would be the implication of this result on PRS? If the output states of _noiseless_ random circuits are PRS, will noisy circuits still output PRS? Intuitively, noise will lead to mixed states, which might affect the security of PRS since a Haar random state is a pure state. Along this line, would it be possible to change the definition of PRS to be indistinguishable from "noisy Haar random states" while keeping all the applications of the original PRS? Likewise, we are wondering whether noise would impact the construction of EFI. Note that EFI must be two computationally indistinguishable and statistically far mixed states. Thus, noise could even make the states more indistinguishable. On the other hand, if the noise is too large, the two states might be statistically close.
**DVQA: reduced trusted setup and generic compiler.**: Our current transformation of existing PoQ protocols to a DVQA experiment is proven in the random oracle model. Can we replace the random oracle with a suitable family of hash functions, such as correlation-intractable hash functions [13]? Ideally, can one design a generic compiler that converts any PoQ protocol directly to a DVQA system?
**Complete characterization of verifiability.**: In this work, we identify several basic conditions that give useful characterizations of verifiable quantum advantage. It would be fruitful to find other characterizations of verifiable quantum advantage and investigate their applications in quantum information and cryptography.
Organization. We define the quantum primitives in Section 2. We formally define verifiable quantum advantage and discuss its connection to a variant of MCSP in Section 3. In Section 4, we explore the relationship between verifiability and EFI, and in Section 5 we show the relationship between verifiability and PRS. Then we define and discuss DVQA in Section 6. Finally, in Section 7, we lift verifiability to hold against quantum distinguishers and explore its relation with EFI.
Acknowledgement. We thank Yunchao Liu for the helpful discussions. NHC was supported by NSF award FET-2243659, Google Scholar Award, and DOE award DE-SC0024301. HF was supported by the US National Science Foundation QLCI program (grant OMA-2016245). FS was supported in part by the US National Science Foundation grants CCF-2042414, CCF-2054758 (CAREER) and CCF-2224131. PY was supported in part by the National Natural Science Foundation of China (Grant No. 62332009, 61972191), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302900).
## 2 Quantum cryptographic primitives
Definition 2 (EFI pairs [1]): An EFI pair generator is a quantum algorithm \(G:(b,1^{\lambda})\mapsto\rho_{b}\) that on inputs \(b\in\{0,1\}\) and security parameter \(\lambda\), outputs a quantum state \(\rho_{b}\), such that the following conditions hold.
1. \(G\) runs in quantum polynomial time.
2. \(\rho_{0}\) and \(\rho_{1}\) are statistically distinguishable, i.e., \(\frac{1}{2}\|\rho_{0}-\rho_{1}\|_{1}\geq 1/\poly(\lambda)\).
3. \(\rho_{0}\) and \(\rho_{1}\) are computationally indistinguishable, i.e., for all quantum poly-time algorithm \(\mathcal{A}\), \(|\Pr[\mathcal{A}(\rho_{0})=1]-\Pr[\mathcal{A}(\rho_{1})=1]|\leq\negl(\lambda)\).
There are a few other special cases that are worth noting.
* (EFID) When all objects are specialized to their classical counterparts, we recover the classical primitive of EFID pairs [10]. Namely, \(G\) is a poly-time classical algorithm which produces samples from one of two distributions \(D_{b}\), and the distinguisher \(\mathcal{A}\) is an arbitrary classical poly-time algorithm. We can view \(\rho_{b}:=\sum_{i}D_{b}(i)|i\rangle\langle i|\) as a (mixed) state encoding \(D_{b}\).
* (Quantum-secure EFID) If the indistinguishability of EFID holds against poly-time _quantum_ distinguishers and all other objects remain classical, we call it a quantum-secure EFID.
* (Classical-secure \(\mathsf{EFI}\) or quantum-generated \(\mathsf{EFID}\)) If the indistinguishability of \(\mathsf{EFI}\) is only required to hold against poly-time _classical_ distinguishers, we call it a classical-secure \(\mathsf{EFI}\). This can also be viewed as an \(\mathsf{EFID}\) where the generating algorithm \(G\) is permitted to be a quantum algorithm, and \(D_{b}\) corresponds to measuring \(\rho_{b}\) in the computational basis. Hence we can alternatively call it a quantum-generated \(\mathsf{EFID}\).
* (Quantum-secure quantum-generated \(\mathsf{EFID}\) (\(\mathsf{qq}\)-\(\mathsf{EFID}\))) This is \(\mathsf{EFI}\) where \(\rho_{b}\) is restricted to encoding a classical distribution \(D_{b}\). Clearly any \(\mathsf{qq}\)-\(\mathsf{EFID}\) is immediately an \(\mathsf{EFI}\) by definition; on the other hand, any \(\mathsf{EFI}\) readily implies a \(\mathsf{qq}\)-\(\mathsf{EFID}\) by letting \(D_{b}\) be the distribution induced by measuring \(\rho_{b}\).
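To make these notions concrete, the following is a minimal sketch (illustrative only, not from the paper) of an EFID-style pair: \(D_{0}\) is the output of a length-doubling pseudorandom generator on a uniform seed, and \(D_{1}\) is a uniform string of the same length. The two distributions are statistically far because \(D_{0}\) is supported on a negligible fraction of strings, while distinguishing them efficiently would require breaking the generator; the SHA-256-based "PRG" below is only a heuristic stand-in for such a generator.

```python
# A minimal sketch (illustrative only) of an EFID-style pair: D_0 = PRG(uniform seed),
# D_1 = uniform string of the same length. The "PRG" is a heuristic SHA-256 stand-in;
# its pseudorandomness is an assumption made purely for illustration.
import hashlib
import os

LAMBDA = 16  # seed length in bytes (illustrative security parameter)

def prg(seed: bytes) -> bytes:
    """Heuristic length-doubling generator: 16-byte seed -> 32-byte output."""
    left = hashlib.sha256(b"L" + seed).digest()[:LAMBDA]
    right = hashlib.sha256(b"R" + seed).digest()[:LAMBDA]
    return left + right

def generate(b: int) -> bytes:
    """G(b): b = 0 samples the pseudorandom distribution, b = 1 the uniform one."""
    if b == 0:
        return prg(os.urandom(LAMBDA))   # supported on at most 2^(8*LAMBDA) strings
    return os.urandom(2 * LAMBDA)        # uniform over 2^(16*LAMBDA) strings

# Statistically far (tiny support for b = 0), yet distinguishing the two efficiently
# would require breaking the generator -- the two conditions of the definition.
print(generate(0).hex())
print(generate(1).hex())
```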
Definition 3 (Pseudorandom states (PRS) [15]): Let \(\lambda\) be the security parameter. Let \(\mathcal{H}\) be a Hilbert space and \(\mathcal{K}\) a key space, both parameterized by \(\lambda\). A keyed family of quantum states \(\{\phi_{k}\in\mathcal{S}(\mathcal{H})\}_{k\in\mathcal{K}}\) is _pseudorandom_ if the following hold:
1. (Efficient generation). There is a polynomial-time quantum algorithm \(G\) that generates state \(|\phi_{k}\rangle\) on input \(k\). That is, for all \(k\in\mathcal{K},G(k)=|\phi_{k}\rangle\).
2. (Pseudorandomness). Polynomially many copies of \(|\phi_{k}\rangle\) with the same random \(k\leftarrow\mathcal{K}\) are computationally indistinguishable from the same number of copies of a Haar random state. More precisely, for any efficient quantum algorithm \(\mathcal{A}\) and any \(m\in\mathsf{poly}(\lambda)\), \[\left|\Pr_{k\leftarrow\mathcal{K}}[\mathcal{A}(|\phi_{k}\rangle^{\otimes m})=1]-\Pr_{\phi\leftarrow\mu}[\mathcal{A}(|\phi\rangle^{\otimes m})=1]\right|\leq\mathsf{negl}(\lambda)\,,\] where \(\mu\) is the Haar measure on \(\mathcal{S}(\mathcal{H})\).
Verifiability of \(\mathsf{PRS}\). Let \(|\psi\rangle\) be a state generated by the generation algorithm \(G\) of Definition 3. Given the corresponding key \(k\) and polynomially many copies of \(|\psi\rangle\), one can verify that \(|\psi\rangle\) was generated from \(G(k)\) via the swap test. It is worth noting that this verification procedure requires implementing the swap test on quantum states.
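As a numerical illustration of this verification step (not taken from the paper), the acceptance probability of a single swap test on states \(|\phi\rangle\) and \(|\psi\rangle\) is \((1+|\langle\phi|\psi\rangle|^{2})/2\), so regenerating \(|\phi_{k}\rangle\) from \(k\) and comparing it against a claimed copy accepts identical states with certainty and unrelated states with probability close to \(1/2\):

```python
# A numerical sketch (illustrative only) of the swap-test acceptance probability
# used to verify a claimed PRS copy against a regenerated reference state.
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim: int) -> np.ndarray:
    """A Haar-like random pure state: a normalized complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def swap_test_accept(phi: np.ndarray, psi: np.ndarray) -> float:
    """Probability that one swap test on |phi>, |psi> reports 'same'."""
    return 0.5 * (1.0 + abs(np.vdot(phi, psi)) ** 2)

dim = 16
phi = random_state(dim)
print(swap_test_accept(phi, phi))                # 1.0: a correct copy always passes
print(swap_test_accept(phi, random_state(dim)))  # close to 1/2 for an unrelated random state
```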
Definition 4 (Classically unidentifiable state family): Let \(\lambda\) be the security parameter. Let \(\Phi:=\{|\phi_{k}\rangle\}_{k\in\mathcal{K}}\) be a family of efficiently generatable states, i.e., there exists an efficient circuit \(C_{k}\) such that \(|\phi_{k}\rangle=C_{k}|0^{n}\rangle\). We call \(\Phi\)_classically unidentifiable_ if for any efficient classical algorithm \(A\), any \(i\neq j\), and any polynomial \(m=\mathsf{poly}(\lambda)\),
\[|\Pr[A(C_{i},\mathbf{z}_{i})=1]-\Pr[A(C_{i},\mathbf{z}_{j})=1]|\leq\mathsf{negl}( \lambda)\,,\]
where \(\mathbf{z}_{i}:=(z_{i}^{1},\ldots,z_{i}^{m})\) are the outcomes obtained by measuring \(|\phi_{i}\rangle^{\otimes m}\) in the computational basis.
Remark 1: The random phase state family \(\{|\phi_{k}\rangle\}\) proposed in [15] below is an example of classically unidentifiable \(\mathsf{PRS}\),
\[|\phi_{k}\rangle=\frac{1}{\sqrt{N}}\sum_{x\in[N]}\omega_{N}^{f_{k}(x)}|x \rangle\,,\]
where \(N=2^{n}\) and \(\{f_{k}:[N]\rightarrow[N]\mid k\in\mathcal{K}\}\) is a quantum-secure pseudorandom function family. This is because, when measured in the computational basis, \(|\phi_{k}\rangle\) always induces the uniform distribution. Similarly, the special case of binary phases [1], i.e., \(|\phi_{k}\rangle=\frac{1}{\sqrt{N}}\sum_{x\in\{0,1\}^{n}}(-1)^{f_{k}(x)}|x\rangle\), is also a classically unidentifiable \(\mathsf{PRS}\).
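The following small sketch (illustrative only; the keyed function is a SHA-256 stand-in for a PRF, not the construction of [15]) makes the point explicit: for every key, the binary phase state induces exactly the uniform distribution when measured in the computational basis, so computational-basis samples carry no information about the key.

```python
# A small sketch (illustrative only) of the binary phase states of Remark 1.
# The keyed bit f_k(x) is a SHA-256 stand-in for a PRF; the point is that the
# induced computational-basis distribution is uniform for every key.
import hashlib
import numpy as np

n = 10
N = 2 ** n

def f(key: bytes, x: int) -> int:
    """Heuristic stand-in for a pseudorandom bit f_k(x)."""
    return hashlib.sha256(key + x.to_bytes(4, "big")).digest()[0] & 1

def binary_phase_state(key: bytes) -> np.ndarray:
    """|phi_k> = N^{-1/2} * sum_x (-1)^{f_k(x)} |x>."""
    return np.array([(-1) ** f(key, x) for x in range(N)]) / np.sqrt(N)

phi = binary_phase_state(b"some key")
probs = np.abs(phi) ** 2
print(np.allclose(probs, 1.0 / N))  # True: measurement outcomes reveal nothing about the key
```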
## 3 Verifiable quantum advantages
Definition 5 (Verifiable quantum advantages ((\(s,t,\varepsilon\))-VQA)): Let \(\lambda\) be the security parameter. Let \(\mathfrak{C}\) be a family of polynomial-size quantum circuits on \(n\) qubits, where \(n=\poly(\lambda)\). For any \(C\in\mathfrak{C}\), let \(D_{C}\) be the distribution induced by measuring \(C|0^{n}\rangle\) in the computational basis.
We call \(\mathfrak{C}\) a family of \((s,t,\varepsilon)\) _verifiable quantum advantage_ (\(\mathsf{VQA}\)), if for all distributions \(D\) samplable by a time-\(s\) classical algorithm, it holds that for a uniformly random \(C\) drawn from \(\mathfrak{C}\), there exists a time-\(t\)**classical** algorithm \(\mathcal{A}\) that \(\varepsilon\)-distinguishes \(D_{C}\) from \(D\), namely
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C,\mathcal{S}_{ D},\boldsymbol{z}_{C})=1]-\Pr[\mathcal{A}(C,\mathcal{S}_{D},\boldsymbol{z}_{D})=1] |\geq\varepsilon\,,\]
where \(\mathcal{S}_{D}\) is the description of a time-\(s\) classical sampler for \(D\), \(\boldsymbol{z}_{C}\) and \(\boldsymbol{z}_{D}\) are sets of up to \(t\) samples drawn from \(D_{C}\) and \(D\) respectively.
This work focuses on the case where \(s,t=\poly(\lambda)\) and \(\varepsilon=1/\poly(\lambda)\). That is, we ask whether a classical polynomial-time verifier \(\mathcal{A}\) can distinguish the quantum samples from the classical samples with noticeable probability. However, one can also consider verifiers with different powers by choosing the proper parameters.
The distinguishers considered in the literature on quantum advantage experiments are weaker than ours because they are agnostic of the sampler of the classical distribution. Hence, we give an alternative definition of verifiable quantum advantage below.
Definition 6 (Universally verifiable quantum advantages ((\(s,t,\varepsilon\))-UVQA)): Let \(\lambda\) be the security parameter. Let \(\mathfrak{C}\) be a family of polynomial-size quantum circuits on \(n\) qubits, where \(n=\poly(\lambda)\). For any \(C\in\mathfrak{C}\), let \(D_{C}\) be the distribution induced by measuring \(C|0^{n}\rangle\) in the computational basis.
We call \(\mathfrak{C}\) a family of \((s,t,\varepsilon)\) universally verifiable quantum advantage (\(\mathsf{UVQA}\)), if for all distributions \(D\) samplable by a time-\(s\) classical algorithm, it holds that for a uniformly random \(C\) drawn from \(\mathfrak{C}\), there exists a time-\(t\)**classical** algorithm \(\mathcal{A}\) that \(\varepsilon\)-distinguishes \(D_{C}\) from \(D\), namely
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C,\boldsymbol{z }_{C})=1]-\Pr[\mathcal{A}(C,\boldsymbol{z}_{D})=1]|\geq\varepsilon\,,\]
where \(\boldsymbol{z}_{C}\) and \(\boldsymbol{z}_{D}\) are sets of up to \(t\) samples drawn from \(D_{C}\) and \(D\) respectively.
Comparing the two definitions, it is easy to see that for the same set of parameters, if \(\mathfrak{C}\) is \(\mathsf{UVQA}\), it is also \(\mathsf{VQA}\), and if it is not \(\mathsf{VQA}\), it is not \(\mathsf{UVQA}\) either.
Discussion on quantum advantages.It is worth stressing that the condition in Definition 5 encapsulates _quantum advantage_ and _verifiability_ simultaneously. In particular, it implies that any classically time-\(s\) samplable distribution is statistically \(\varepsilon\)-far from \(D_{C}\) on average, i.e.,
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}\lVert D_{C}-D\rVert_{1}\geq \varepsilon\,. \tag{1}\]
Another type of advantage is sometimes useful too, and it is conjectured to hold in many quantum supremacy proposals. Let \(D_{\mathfrak{C}}\) be the distribution of first sampling \(C\leftarrow\mathfrak{C}\) uniformly at random and then measuring \(C|0^{n}\rangle\) in the computational basis. We call it _strong \((s,\varepsilon)\)-quantum advantage_ if for any classically time-\(s\) samplable distribution \(D\),
\[\lVert D_{\mathfrak{C}}-D\rVert_{1}\geq\varepsilon\,. \tag{2}\]
For example, the hardness result of RCS intuitively says that the distribution obtained from measuring a random circuit is statistically far from any classical polynomial-time samplable distribution under some conditions.
Alternatively let \(\mathfrak{C}=\{C_{k}\}\) and define mixed states
\[\rho_{\mathfrak{C}} :=\operatorname{Tr}_{A}\left(\frac{1}{|\mathfrak{C}|}\sum_{k}|k \rangle\langle k|_{A}\otimes C_{k}|0^{n}\rangle\langle 0^{n}|C_{k}^{\dagger} \right)\,,\] \[\rho_{D} :=\sum_{i}D(i)|i\rangle\langle i|\,.\]
The strong quantum advantage condition (eq. (2)) can then be equivalently expressed as
\[\|\rho_{\mathfrak{C}}-\rho_{D}\|_{1}\geq\varepsilon\,. \tag{3}\]
### Example: Verifiable quantum advantage
Here, we introduce some sampling problems that satisfy the definition of verifiable quantum advantage.
_VQA from quantum Fourier sampling._
Definition 7 (Simon's problem): Let \(f:\{0,1\}^{n}\to\{0,1\}^{m}\) be a 2-to-1 function with the promise that there exists \(s\in\{0,1\}^{n}\) such that \(f(x)=f(x\oplus s)\) for all \(x\in\{0,1\}^{n}\). Given oracle access to \(f\), find \(s\).
It is well-known that there exists a quantum polynomial time algorithm solving Simon's problem, and classical algorithms must use superpolynomially many queries [20].
Theorem 3.1: _Let \(\mathcal{F}\) be the set of all Simon's functions. Then, relative to \(\mathcal{F}\), there exists a quantum circuit family \(\mathfrak{C}^{\mathcal{F}}\) that is \(\mathsf{UVQA}\)._
Proof: First, we use Simon's algorithm to form the quantum circuit family \(\mathfrak{C}^{\mathcal{F}}\) as follows: Given a random Simon's function \(f\) with hidden shift \(s\in\{0,1\}^{n}\), the quantum circuit \(C^{f}\) implements Simon's algorithm to obtain the quantum state \(\rho_{f}\). Note that when measuring \(\rho_{f}\) in the computational basis, one will obtain a random \(x\) for which \(x\cdot s=0\). Hence, our circuit family is defined as \(\mathfrak{C}^{\mathcal{F}}:=\{C^{f}:f\in\mathcal{F}\}\).
Our distinguisher \(\mathcal{A}^{f}\) is as follows: On inputs \(C^{f}\) and sufficiently many samples \(x_{0},\ldots,x_{m}\), \(\mathcal{A}^{f}\) runs Gaussian elimination (the classical post-processing in Simon's algorithm) to identify \(s\) and then checks whether \(s\) is the hidden shift of \(f\).
Obviously, samples generated from measuring \(C^{f}\) in the computational basis will be accepted by \(\mathcal{A}^{f}\) with high probability. On the other hand, no efficient classical algorithms can generate samples accepted by \(\mathcal{A}^{f}\) with noticeable probability; this follows from the fact that no polynomial-time classical algorithm can solve Simon's problem with noticeable probability.
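For concreteness, the classical post-processing used by \(\mathcal{A}^{f}\) can be sketched as follows (a minimal illustration, not code from the paper): given samples that are orthogonal to the hidden shift \(s\) modulo 2, Gaussian elimination over GF(2) recovers the nullspace of the sample matrix, which with high probability is spanned by \(s\) alone.

```python
# A minimal sketch (illustrative only) of the distinguisher's classical post-processing:
# recover the hidden shift s from samples orthogonal to s via Gaussian elimination over GF(2).
import numpy as np

rng = np.random.default_rng(1)
n = 8
s = rng.integers(0, 2, size=n)
s[0] = 1  # make sure the hidden shift is nonzero (illustrative choice)

def orthogonal_sample() -> np.ndarray:
    """One 'quantum' sample: a uniformly random x with x . s = 0 (mod 2)."""
    while True:
        x = rng.integers(0, 2, size=n)
        if int(x @ s) % 2 == 0:
            return x

def gf2_nullspace(rows: np.ndarray) -> list:
    """Basis of the nullspace of `rows` over GF(2), by Gaussian elimination to RREF."""
    A = rows.copy() % 2
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, len(A)) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(len(A)):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivots.append(c)
        r += 1
    basis = []
    for fc in [c for c in range(n) if c not in pivots]:
        v = np.zeros(n, dtype=int)
        v[fc] = 1
        for row, pc in zip(A[:r], pivots):
            if row[fc]:
                v[pc] = 1
        basis.append(v)
    return basis

samples = np.array([orthogonal_sample() for _ in range(4 * n)])
candidates = gf2_nullspace(samples)
print([list(v) for v in candidates], "true s:", list(s))  # w.h.p. the unique candidate equals s
```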
Following a similar idea, we can obtain the following corollary by considering Shor's algorithm for the Factoring problem.
Corollary 1: _Assuming that factoring is hard for classical polynomial-time algorithms, there exists a quantum circuit family that is \(\mathsf{UVQA}\)._
Note that neither \(\mathfrak{C}^{\mathcal{F}}\) nor Shor's algorithm can be implemented on NISQ devices.
Cross-entropy benchmark (XEB). In the cross-entropy benchmark (XEB) [1], given the description of a random quantum circuit \(C\), the quantum machine prepares multiple samples \(x_{1},\ldots,x_{k}\in\{0,1\}^{n}\) accordingly, and the verifier tests whether \(F_{XEB}=\frac{\sum_{i=1}^{k}|\langle x_{i}|C|0^{n}\rangle|^{2}}{k}\) is close to \(2/2^{n}\) or close to \(1/2^{n}\). If \(F_{XEB}\) is close to \(2/2^{n}\), the samples are deemed to have been prepared by a quantum machine; if \(F_{XEB}\) is close to \(1/2^{n}\), the samples are deemed to have been prepared by a classical machine.
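A small numerical sketch of the XEB score (illustrative only; the ideal circuit is modeled by a random pure state rather than an actual RCS instance) shows the two regimes: samples drawn from the circuit's own output distribution give \(F_{XEB}\approx 2/2^{n}\), while uniform classical samples give \(F_{XEB}\approx 1/2^{n}\).

```python
# A numerical sketch (illustrative only) of the XEB score. The ideal circuit is modeled by
# a random pure state; real experiments would compute |<x|C|0^n>|^2 by classical simulation.
import numpy as np

rng = np.random.default_rng(2)
n, k = 12, 2000
N = 2 ** n

# Ideal output probabilities p(x) = |<x|C|0^n>|^2, modeled by a normalized complex Gaussian vector.
amps = rng.normal(size=N) + 1j * rng.normal(size=N)
p = np.abs(amps) ** 2
p /= p.sum()

quantum_samples = rng.choice(N, size=k, p=p)     # honest sampling from the circuit's distribution
classical_samples = rng.integers(0, N, size=k)   # a uniform classical spoofer

def f_xeb(samples: np.ndarray) -> float:
    """Average ideal probability of the submitted samples."""
    return float(p[samples].mean())

print(N * f_xeb(quantum_samples))    # ~2: close to 2/2^n after rescaling by 2^n
print(N * f_xeb(classical_samples))  # ~1: close to 1/2^n after rescaling by 2^n
```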
Suppose that RCS achieves a quantum advantage as described in eq. (1). Then, under the linear cross-entropy quantum threshold assumption (XQUATH) [1], RCS is a \((s,t,\varepsilon)\)-UVQA with \(s=\mathsf{poly}(n)\), \(t=\omega(\mathsf{poly}(n))\), and \(\varepsilon\) a constant. The reason that \(t\) is superpolynomial in \(n\) is that a classical machine requires time superpolynomial in \(n\) to compute \(|\langle x|C|0^{n}\rangle|^{2}\).
Similarly, the Boson sampling [1] and instantaneous quantum polynomial (IQP) sampling [2] experiments are all UVQA with verification time superpolynomial in \(\lambda\), provided the experiments achieve the quantum advantage described in eq. (1). In a recent work [1], the authors give evidence that IQP sampling is UVQA under a new conjecture.
### A universal efficient verifier from meta-complexity problems
Here, we introduce variants of a meta-complexity problem for which the existence of an efficient algorithm would imply a universal polynomial-time verifier for the following class of quantum advantages.
Definition 8 (Sample Efficient Verifiable Quantum Advantage (Se-Vqa)): The definition is the same as (\(s,t,\epsilon\))-VQA except that the number of samples is at most \(\mathsf{poly}(\lambda)\).
Definition 9 (Minimum Circuit Size Problems for Samples (SampMCSP)): Let \(\lambda\) be the security parameter. Let \(D\) be a distribution over \(n\)-bit strings and \(t(\cdot)\) be any function, where \(n=\mathsf{poly}(\lambda)\). Given polynomially many samples \(z_{1},\ldots,z_{\ell}\) from \(D\), \(t(\cdot)\), and \(s(\cdot)\), the problem is to decide whether there exists a classical time-\(s(n)\) sampler \(S_{D^{\prime}}\) for a distribution \(D^{\prime}\) such that \(D\) and \(D^{\prime}\) are indistinguishable to any \(t(n)\)-time classical algorithm \(\mathcal{A}\) given polynomially many samples:
\[|\Pr_{z_{1},\ldots,z_{\ell}\sim D}[\mathcal{A}(z_{1},\ldots,z_{\ell}, \mathcal{S}_{D},\mathcal{S}_{D^{\prime}})=1]-\Pr_{z_{1},\ldots,z_{\ell}\sim D ^{\prime}}[\mathcal{A}(z_{1},\ldots,z_{\ell},\mathcal{S}_{D},\mathcal{S}_{D^ {\prime}})=1]|\leq\mathsf{negl}(n),\]
where \(\mathcal{S}_{D}\) is a sampler of \(D\).
Definition 10 (Oblivious Minimum Circuit Size Problems for Samples (ObSampMCSP)): The definition is the same as above except that the distinguisher doesn't take the description of \(\mathcal{S}_{D^{\prime}}\) as input.
Both ObSampMCSP and SampMCSP are computable. A trivial (inefficient) algorithm is as follows: Given samples \(z_{1},\ldots,z_{\mathsf{poly}(n)}\), \(s(\cdot)\), and \(t(\cdot)\), the algorithm enumerates all \(s(n)\)-time samplers and all \(t(n)\)-time distinguishers and decides accordingly.
The following theorems show that the existence of efficient classical algorithms for ObSampMCSP and SampMCSP will imply that all experiments that are SE-VQA or SE-UVQA, i.e., the advantage can be verified using \(\mathsf{poly}(n)\) samples, can be verified in classical polynomial time. In other words, algorithms for these two problems provide _universal_ procedures to efficiently verify SE-VQA or SE-UVQA.
**Theorem 5**.: _If \(\mathsf{SampMCSP}\) with \((s(\cdot),t(\cdot))\) can be solved in classical polynomial time, then an \((s,t,\epsilon)\)-\(\mathsf{SE}\)-\(\mathsf{VQA}\) experiment is an \((s,\mathrm{poly}(n),\epsilon+\mathsf{negl}(n))\)-\(\mathsf{VQA}\)._
Proof.: Suppose that \(\mathcal{A}\) is a classical polynomial-time algorithm for \(\mathsf{SampMCSP}\). Let \(z_{1},\ldots,z_{\mathsf{poly}(n)}\) be the samples generated from the experiment, \(C\) be the description of the quantum circuit, and \(S_{D}\) be the description of the classical cheating sampler. Then, we can construct a polynomial-time algorithm \(\mathcal{A}^{\prime}\) to identify whether \(z_{1},\ldots,z_{\mathsf{poly}(n)}\) are generated from \(C\) as follows: \(\mathcal{A}^{\prime}\) on inputs \((C,\mathcal{S}_{D},z_{1},\ldots,z_{\mathsf{poly}(n)})\), applies \(\mathcal{A}\) on \((z_{1},\ldots,z_{\mathsf{poly}(n)})\) and \(s=|\mathcal{S}_{D}|\) where \(|\mathcal{S}_{D}|\) is the circuit size of \(\mathcal{S}_{D}\). If \(\mathcal{A}\) outputs \(1\) (i.e., there exists a classical circuit with size at most \(s\)), \(\mathcal{A}^{\prime}\) outputs \(0\) (i.e., the samples are not from \(C\)); otherwise, \(\mathcal{A}^{\prime}\) outputs \(1\).
Obviously, \(\mathcal{A}^{\prime}\) runs in classical polynomial time if \(\mathcal{A}\) is a classical polynomial-time algorithm.
For correctness, since the quantum circuit family \(\mathfrak{C}\) is \(\mathsf{SE}\)-\(\mathsf{VQA}\), by definition no efficient classical sampler can generate polynomially many samples that are \(t(n)\)-indistinguishable from those of a \(C\) chosen randomly from \(\mathfrak{C}\). Therefore, if \((z_{1},\ldots,z_{\mathsf{poly}(n)})\) are generated from \(C\), \(\mathcal{A}\) outputs \(0\) with probability at least \(1-\mathsf{negl}(n)\). On the other hand, if \((z_{1},\ldots,z_{\mathsf{poly}(n)})\) are generated from \(\mathcal{S}_{D}\), there exists a classical sampler of size at most \(|\mathcal{S}_{D}|\) generating samples indistinguishable from \(D\). Therefore, \(\mathcal{A}\) outputs \(1\), and \(\mathcal{A}^{\prime}\) concludes that the samples are not from \(C\). This completes the proof.
The following corollary follows the same argument.
**Corollary 2**.: _If \(\mathsf{ObSampMCSP}\) can be solved in classical polynomial time, then an \((s,t,\epsilon)\)-\(\mathsf{SE}\)-\(\mathsf{UVQA}\) is also \((s,\mathrm{poly}(n),\epsilon+\mathsf{negl}(n))\)-\(\mathsf{UVQA}\)._
## 4 Verifiability and EFI
**Definition 11** (Classically samplable state \(\rho_{D}\)).: _Let \(\mathcal{D}=\{p_{1},\ldots,p_{2^{n}}\}\) be a distribution over \(\{0,1\}^{n}\) for which there exists a PPT algorithm that can efficiently sample from \(\mathcal{D}\). We define \(\rho_{\mathcal{D}}=\sum_{x\in\{0,1\}^{n}}p_{x}|x\rangle\langle x|\)._
**Definition 12** (Extended circuit set \(\mathfrak{C}^{*}\) of \(\mathfrak{C}\)).: _Let \(\mathfrak{C}\) be a set of \(n\)-qubit polynomial-size quantum circuits. We define a set of \(2n\)-qubit polynomial-size quantum circuits \(\mathfrak{C}^{*}\) as follows: Without loss of generality, for each \(C\in\mathfrak{C}\), if \(C|0^{n}\rangle=\sum_{x\in\{0,1\}^{n}}\alpha_{i,x}|x\rangle\), then \(C^{*}|0^{n}\rangle_{A}|0^{n}\rangle_{B}=\sum_{x\in\{0,1\}^{n}}\alpha_{i,x}|x\rangle_{A}|x\rangle_{B}\)._
**Theorem 6**.: _Let \(\lambda\) be the security parameter. Let \(\mathfrak{C}\) be a set of \(n\)-qubit polynomial-size quantum circuits, where \(n=\mathsf{poly}(\lambda)\), and \(\mathfrak{C}^{*}\) be its extended circuit set as defined in Definition 12. Suppose \(\mathfrak{C}\) is \(\mathsf{VQA}\). Set \(\rho_{C}:=\mathrm{Tr}_{B}(C^{*}|0^{2n}\rangle\langle 0^{2n}|_{AB}(C^{*})^{ \dagger})\) for any \(C\in\mathfrak{C}^{*}\) and_
\[G=\{C\in\mathfrak{C}^{*}:\ \exists\text{ classically samplable state }\rho_{D}\text{ s.t }\rho_{C}\text{ and }\rho_{D}\text{ form an EFI pair}\}.\]
_It holds that \(|G|\leq(1-1/\mathsf{poly}(\lambda))|\mathfrak{C}^{*}|\) for some polynomial._
Proof.: Let \((C_{0},C_{1})\) be a pair of EFI generators and \(\rho_{0}\) and \(\rho_{1}\) be the corresponding output states. We first show that if there exist an algorithm \(\mathcal{A}\) and a polynomial \(t(\cdot)\) such that \(|\Pr[\mathcal{A}(\rho_{0}^{\otimes t(n)})=1]-\Pr[\mathcal{A}(\rho_{1}^{\otimes t(n)})=1]|>\mathsf{negl}(n)\), then \(\mathcal{A}\) can be used to break the EFI pair \((C_{0},C_{1})\).
We prove this by a hybrid argument. Let \(H_{i}=\rho_{0}^{\otimes i}\otimes\rho_{1}^{\otimes(t-i)}\), so that \(H_{0}=\rho_{1}^{\otimes t}\) and \(H_{t}=\rho_{0}^{\otimes t}\). Since \(\mathcal{A}\) can distinguish \(H_{0}\) from \(H_{t}\), there must exist an \(i^{*}\) for which \(\mathcal{A}\) can distinguish \(H_{i^{*}}\) from \(H_{i^{*}+1}\). Then, we can construct a distinguisher \(\mathcal{A}^{\prime}\) to distinguish \(\rho_{0}\) from \(\rho_{1}\) as follows: a) \(\mathcal{A}^{\prime}\) first chooses an \(i\) uniformly at random, b) \(\mathcal{A}^{\prime}\) prepares the state \(\rho_{0}^{\otimes i}\otimes\rho\otimes\rho_{1}^{\otimes(t-i-1)}\), where \(\rho\) is the input state of the EFI game, c) and then \(\mathcal{A}^{\prime}\) runs \(\mathcal{A}\) on \(\rho_{0}^{\otimes i}\otimes\rho\otimes\rho_{1}^{\otimes(t-i-1)}\). Note that \(\rho_{0}^{\otimes i}\otimes\rho\otimes\rho_{1}^{\otimes(t-i-1)}\) is \(H_{i+1}\) when \(\rho=\rho_{0}\) and is \(H_{i}\) otherwise. The probability that \(\mathcal{A}^{\prime}\) succeeds is noticeable since \(\mathcal{A}^{\prime}\) chooses \(i=i^{*}\) with probability \(1/t\) in a) and \(\mathcal{A}\) distinguishes \(H_{i}\) from \(H_{i+1}\) with noticeable probability if \(i=i^{*}\). This completes the proof.
Given the above result, we can prove the theorem by contradiction. Suppose that \(\mathfrak{C}\) is \(\mathsf{VQA}\). Notice that the distribution induced by measuring \(C|0^{n}\rangle\) in the computational basis is the same as that induced by measuring \(\rho_{C}\) in the computational basis. Then, for all classically samplable distributions \(\mathcal{D}^{\prime}\), there must exist a PPT algorithm \(\mathcal{A}\) such that
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}\big{|}\Pr_{\boldsymbol{z}_{C} \leftarrow\rho_{C}}[\mathcal{A}(C,\mathcal{S}_{\mathcal{D}^{\prime}}, \boldsymbol{z}_{C})=1]-\Pr_{\boldsymbol{z}_{\mathcal{D}^{\prime}}\leftarrow \mathcal{D}^{\prime}}[\mathcal{A}(C,\mathcal{S}_{\mathcal{D}^{\prime}}, \boldsymbol{z}_{\mathcal{D}^{\prime}})=1]\big{|}\geq\frac{1}{\mathsf{poly}(n)},\]
where \(\mathcal{S}_{\mathcal{D}^{\prime}}\) is the description of the classical sampler and \(\boldsymbol{z}_{C}\) and \(\boldsymbol{z}_{\mathcal{D}^{\prime}}\) are polynomially many samples from measuring \(\rho_{C}\) and from \(\mathcal{D}^{\prime}\) respectively.
Now, one can use the algorithm \(\mathcal{A}\) to build another algorithm \(\mathcal{A}^{\prime}\) distinguishing \(\rho_{C}\) and \(\rho_{\mathcal{D}}\) as follows: On inputs \(C^{*}\), \(\mathcal{S}_{\mathcal{D}}\), and polynomially many copies of \(\rho\) which is either \(\rho_{C}\) or \(\rho_{\mathcal{D}}\), the algorithm first measures all copies of \(\rho\) in computational basis; we denote the measurement outcomes as \(\boldsymbol{z}\). Then, \(\mathcal{A}^{\prime}\) applies \(\mathcal{A}\) on inputs \(C^{*}\), \(\mathcal{S}_{\mathcal{D}}\), and \(\boldsymbol{z}\) and outputs whatever \(\mathcal{A}\) outputs.
For the correctness, since \(\mathcal{A}\) can identify whether the samples \(\boldsymbol{z}\) are measurement outcomes of \(\rho_{C}\) or samples from \(\mathcal{S}_{\mathcal{D}}\) with \(1/\mathsf{poly}(\lambda)\) advantage on average over \(C\) and the measurement outcomes, there must be at least a \(1/\mathsf{poly}(n)\) fraction of \(C\)'s for which \(\mathcal{A}^{\prime}\) can distinguish \(\rho_{C}\) from \(\rho_{\mathcal{D}}\) with \(1/\mathsf{poly}(n)\) advantage. It is obvious that \(\mathcal{A}^{\prime}\) is efficient since it only needs to measure \(\mathsf{poly}(n)\) copies of \(\rho\) in the computational basis and apply \(\mathcal{A}\) to the outcomes.
Next, we define another mixed state obtained from \(\mathfrak{C}\).
Definition 13: Let \(\mathfrak{C}=\{C_{1},\ldots,C_{N}\}\) be a set of \(n\)-qubit polynomial-size quantum circuits. We define \(\rho_{\mathfrak{C}}\) as follows: Let \(C^{*}\) be the (\(n+\log N\))-qubit circuit such that \(C^{*}|k\rangle_{A}|0^{n}\rangle_{B}=|k\rangle C_{k}|0^{n}\rangle\). \(\rho_{\mathfrak{C}}=\operatorname{Tr}_{A}(C^{*}(\frac{1}{N}\sum_{k}|k\rangle \langle k|_{A}\otimes|0^{n}\rangle\langle 0^{n}|_{B})(C^{*})^{\dagger})\). We define \(D_{\mathfrak{C}}\) as the distribution obtained by measuring \(\rho_{\mathfrak{C}}\) in the computational basis.
Theorem 4.1: _Let \(\mathfrak{C}=\{C_{1},\ldots,C_{N}\}\) be a set of \(n\)-qubit polynomial-size quantum circuits. If \(\mathfrak{C}\) achieves strong quantum advantage (eq. (2)) and there is no classical polynomial-time samplable distribution \(D\) such that \(\rho_{\mathfrak{C}}\) and \(\rho_{D}\) form a classical-secure EFI pair, then \(\mathfrak{C}\) is \(\mathsf{VQA}\)._
Proof: Both \(\rho_{\mathfrak{C}}\) and \(\rho_{D}\) are efficiently preparable under the conditions of the theorem. Suppose, for contradiction, that \(\mathfrak{C}\) is not \(\mathsf{VQA}\). Then there exists a classical polynomial-time samplable distribution \(D\) such that \(D_{C_{i}}\) cannot be distinguished from \(D\) for most \(C_{i}\in\mathfrak{C}\), while the strong quantum advantage condition gives \(\|\rho_{\mathfrak{C}}-\rho_{D}\|\geq 1/\mathsf{poly}(\lambda)\).

Since \(\rho_{\mathfrak{C}}\) and \(\rho_{D}\) do not form a classical-secure EFI pair, there exists a classical polynomial-time algorithm \(\mathcal{A}\) that can distinguish between one sample from measuring \(\rho_{\mathfrak{C}}\) and one sample of \(D\). We can construct a distinguisher \(\mathcal{A}^{\prime}\) for one sample from the circuit sampling experiment with \(\mathfrak{C}\) and one sample from \(D\). \(\mathcal{A}^{\prime}\) simply ignores the inputs \(C_{i}\) and \(\mathcal{S}_{D}\), runs \(\mathcal{A}\) on the samples, and outputs what \(\mathcal{A}\) outputs. The advantage
of the algorithm \(\mathcal{A}^{\prime}\) in distinguishing the sample \(z_{C_{i}}\) of the circuit \(C_{i}\) from the sample \(z_{D}\) of \(D\) is
\[\mathop{\mathbb{E}}_{C_{i}\in\mathfrak{C}}\lvert\Pr[\mathcal{A}^{ \prime}(C_{i},\mathcal{S}_{D},z_{C_{i}})=0]-\Pr[\mathcal{A}^{\prime}(C_{i}, \mathcal{S}_{D},z_{D})=0]\rvert\] \[\geq \lvert\mathop{\mathbb{E}}_{C_{i}\in\mathfrak{C}}\left(\Pr[ \mathcal{A}^{\prime}(C_{i},\mathcal{S}_{D},z_{C_{i}})=0]-\Pr[\mathcal{A}^{ \prime}(C_{i},\mathcal{S}_{D},z_{D})=0]\right)\rvert\] \[= \lvert\mathop{\mathbb{E}}_{C_{i}\in\mathfrak{C}}\left(\Pr[ \mathcal{A}(z_{C_{i}})=0]-\Pr[\mathcal{A}(z_{D})=0]\right)\rvert\] \[= \lvert\mathop{\mathbb{E}}_{C_{i}\in\mathfrak{C}}\left(\Pr[ \mathcal{A}(z_{C_{i}})=0]\right)-\Pr[\mathcal{A}(z_{D})=0]\rvert\] \[= \lvert\Pr[\mathcal{A}(z_{\mathfrak{C}})=0]-\Pr[\mathcal{A}(z_{D} )=0]\rvert\] \[\geq 1/\poly(\lambda).\]
The sample \(z_{\mathfrak{C}}\) is obtained by measuring the output of a uniformly random circuit \(C_{i}\). Notice that
\[\langle x|\rho_{\mathfrak{C}}|x\rangle=\frac{1}{N}\sum_{k=1}^{N}\lvert\langle x |C_{k}|0^{n}\rangle\rvert^{2}.\]
Hence \(z_{\mathfrak{C}}\) follows the same distribution as measuring \(\rho_{\mathfrak{C}}\), and the last inequality follows from the distinguishability of \(\mathcal{A}\). The second-to-last equality follows from the observation that, when averaged over \(C_{i}\in\mathfrak{C}\), the distribution of \(z_{C_{i}}\) is the same as that of \(z_{\mathfrak{C}}\). Because \(\mathcal{A}^{\prime}\) has a noticeable distinguishing advantage, this contradicts the assumption that \(\mathfrak{C}\) is not \(\mathsf{VQA}\).
## 5 Verifiability and \(\mathsf{PRS}\)
Theorem 5.1: _Let \(\lambda\) be the security parameter and \(n=poly(\lambda)\). Let \(\mathfrak{C}=\{C_{1},\ldots,C_{N}\}\) be a set of \(n\)-qubit polynomial-sized quantum circuits. If RCS with \(\mathfrak{C}\) is \(\mathsf{VQA}\), then the set of states \(\{C_{1}|0^{n}\rangle,\ldots,C_{N}|0^{n}\rangle\}\) is not a classically unidentifiable \(\mathsf{PRS}\)._
Proof: We prove the contrapositive: assuming that \(\{C_{1}|0^{n}\rangle,\ldots,C_{N}|0^{n}\rangle\}\) is a classically unidentifiable \(\mathsf{PRS}\), we exhibit a classically samplable distribution whose samples no polynomial-time classical distinguisher can tell apart from the quantum samples. For an arbitrary distribution \(D\), we let \(\boldsymbol{z}_{D}:=\{\boldsymbol{z}_{D}^{1},\ldots,\boldsymbol{z}_{D}^{m}\}\) be \(m\) i.i.d. samples from \(D\), where \(m=\mathsf{poly}(\lambda)\). For any \(k\in[N]\), let \(\boldsymbol{z}_{k}=\{\boldsymbol{z}_{k}^{1},\ldots,\boldsymbol{z}_{k}^{m}\}\) be samples generated from measuring \(|\psi_{k}\rangle^{\otimes m}\), with \(|\psi_{k}\rangle=C_{k}|0^{n}\rangle\), in the computational basis. We will describe a distribution \(D\), efficiently samplable by a classical algorithm denoted by \(\mathcal{S}_{D}\), such that for every polynomial-time distinguisher \(\mathcal{A}\) and a uniformly random \(k\leftarrow[N]\),
\[\lvert\Pr[\mathcal{A}(C_{k},\mathcal{S}_{D},\boldsymbol{z}_{k})=1]-\Pr[ \mathcal{A}(C_{k},\mathcal{S}_{D},\boldsymbol{z}_{D})=1]\rvert\leq\negl( \lambda)\,. \tag{4}\]
1. Since the set of states \(\{C_{1}|0^{n}\rangle,\ldots,C_{N}|0^{n}\rangle\}\) is a classically unidentifiable \(\mathsf{PRS}\) and \(\mathcal{S}_{D}\) is independent of the circuits and samples, for any \(j\neq k\) and any PPT algorithm \(\mathcal{A}\), we can apply Definition 4 to \(\mathcal{A}(\cdot,\mathcal{S}_{D},\cdot)\) to get \[\lvert\Pr[\mathcal{A}(C_{k},\mathcal{S}_{D},\boldsymbol{z}_{k})=1]-\Pr[ \mathcal{A}(C_{k},\mathcal{S}_{D},\boldsymbol{z}_{j})=1]\rvert<\negl( \lambda)\,.\]
2. Let \(\boldsymbol{z}_{\mu}\) be samples obtained by measuring \(|\phi\rangle^{\otimes m}\) in the computational basis, where \(|\phi\rangle\leftarrow\mu\) is a Haar random state. Since \(\{|\psi_{k}\rangle=C_{k}|0^{n}\rangle\}_{k\in[N]}\) is a \(\mathsf{PRS}\) family, it holds that for a uniformly random \(k\leftarrow[N]\) and any \(j\neq k\) \[\lvert\Pr[\mathcal{A}(C_{k},\mathcal{S}_{D},\boldsymbol{z}_{j})=1]-\Pr[\mathcal{A}(C_{k},\mathcal{S}_{D},\boldsymbol{z}_{\mu})=1]\rvert<\mathsf{negl}(\lambda)\,.\] Otherwise, since \(C_{k}\) and \(\mathcal{S}_{D}\) are independent of the samples, \(\mathcal{A}(C_{k},\mathcal{S}_{D},\cdot)\) can be used to build a distinguisher between \(\mathsf{PRS}\) and Haar random states.
3. The distribution \(D\) is the uniform distribution on \(\{0,1\}^{n}\), and the sampler \(\mathcal{S}_{D}\) simply uniformly samples \(m\) distinct samples \(\mathbf{z}_{D}^{i}\)'s from \(\{0,1\}^{n}\). By Lemma 2, the samples \(\{\mathbf{z}_{\mu}^{i}\}_{1\leq i\leq m}\) are all distinct with probability \(1-\mathsf{negl}(\lambda)\). By the unitary invariance of the Haar measure, the distribution of \(\{\mathbf{z}_{\mu}^{i}\}_{1\leq i\leq m}\) conditioned on no collision is uniform over distinct tuples. Thus, the output of \(\mathcal{S}_{D}\) is \(\mathsf{negl}(\lambda)\)-close in distribution to \(\mathbf{z}_{\mu}\). Thus, for all \(k\in[N]\) and any PPT algorithm \(\mathcal{A}\) \[|\Pr[\mathcal{A}(C_{k},\mathcal{S}_{D},\mathbf{z}_{D})=1]-\Pr[\mathcal{A}(C_{k},\mathcal{S}_{D},\mathbf{z}_{\mu})=1]|\leq\mathsf{negl}(\lambda)\,.\]
The theorem follows by chaining steps 1-3 via the triangle inequality.
Let \(\Delta=\{v\in\mathbb{R}^{2^{n}}:v_{i}\geq 0,\sum_{i}v_{i}=1\}\) be the set of all probability distributions on \(\{0,1\}^{n}\). Let \((\mathbf{g}_{x})_{x\in\{0,1\}^{n}}\) and \((\mathbf{h}_{x})_{x\in\{0,1\}^{n}}\) be two sequences of i.i.d. random variables drawn from \(N(0,1)\). Set \(\mathbf{G}=\sum_{x}\mathbf{g}_{x}^{2}\) and \(\mathbf{H}=\sum_{x}\mathbf{h}_{x}^{2}\). Notice that \(\mathbf{G}\) and \(\mathbf{H}\) follow the chi-squared distribution with \(2^{n}\) degrees of freedom, denoted by \(\chi_{2^{n}}\). Define the random variable \(\mathbf{p}\) such that for all \(x\in\{0,1\}^{n}\)
\[\mathbf{p}(x)=\frac{\mathbf{g}_{x}^{2}+\mathbf{h}_{x}^{2}}{\mathbf{G}+\mathbf{ H}}, \tag{5}\]
which induces a probability distribution over \(\Delta\). It is well known that \(\mathbf{p}\) is the output distribution when measuring \(n\)-qubit Haar random states on the computational basis [1].
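The following sketch (illustrative only) checks eq. (5) numerically: the weights built from pairs of standard Gaussians have the same profile as the squared amplitudes of a normalized complex Gaussian vector, which models a Haar random state; both exhibit the exponential (Porter-Thomas) second moment \(\mathbb{E}[(2^{n}\mathbf{p}(x))^{2}]\approx 2\).

```python
# A sketch (illustrative only) comparing eq. (5) with the squared amplitudes of a
# Haar-like random state (normalized complex Gaussian vector).
import numpy as np

rng = np.random.default_rng(3)
n = 12
N = 2 ** n

g = rng.normal(size=N)
h = rng.normal(size=N)
p_gauss = (g**2 + h**2) / (g**2 + h**2).sum()    # the weights of eq. (5)

amps = rng.normal(size=N) + 1j * rng.normal(size=N)
p_haar = np.abs(amps) ** 2
p_haar /= p_haar.sum()                           # squared amplitudes of a Haar-like state

# Both families of weights behave like Exp(1)/N ("Porter-Thomas"): after rescaling by N,
# the mean is 1 and the second moment is about 2.
print((N * p_gauss).mean(), ((N * p_gauss) ** 2).mean())
print((N * p_haar).mean(), ((N * p_haar) ** 2).mean())
```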
Lemma 1 ([16, comment below Lemma 1]): _For \(n\geq 1\), let \(\mathbf{r}\) be a random variable distributed according to the chi-squared distribution \(\chi_{n}\). Then for every \(x>0\), we have_
\[\Pr[n-2\sqrt{nx}\leq\mathbf{r}\leq n+2\sqrt{nx}+2x]\geq 1-2e^{-x}\;.\]
Lemma 2: _Given the security parameter \(\lambda,n=poly(\lambda),m=poly(n)\), let \(\nu\) be a distribution drawn from \(\mathbf{p}\). With probability \(1-\mathsf{negl}(\lambda)\), the following holds._
_Let \(\mathbf{z}=(\mathbf{z}_{1},\ldots,\mathbf{z}_{m})\) be \(m\) i.i.d. samples drawn from \(\nu\). Then_
\[\Pr[\exists\ i\neq j:\mathbf{z}_{i}=\mathbf{z}_{j}]\leq 50m^{2}2^{-n/2}\]
Proof: Let \(\nu\) be a distribution drawn from \(\mathbf{p}\) defined in (5). Notice that \(\nu_{x}\sim(\mathbf{g}_{x}^{2}+\mathbf{h}_{x}^{2})/(\mathbf{G}+\mathbf{H})\). Then, by a union bound over pairs, the probability that samples drawn according to \(\nu\) have a collision satisfies
\[\Pr_{\nu}[\exists i\neq j\in[m]\text{ s.t. }\mathbf{z}_{i}=\mathbf{z}_{j}]\leq m^{2} \sum_{x}\nu(x)^{2}.\]
Let \(\mathcal{E}\) be the event that \(\mathbf{G}\geq 2^{n}-4\sqrt{2^{n}n}\) and \(\mathbf{H}\geq 2^{n}-4\sqrt{2^{n}n}\). By Lemma 1,
\[\Pr[\mathcal{E}]\geq 1-4e^{-n}. \tag{6}\]
Then we have
\[\mathbb{E}_{\nu}[\Pr[\exists i\neq j\text{ s.t. }\mathbf{z}_{i}=\mathbf{z}_{j}]]\] \[\leq m^{2}\mathbb{E}_{\nu}\left[\sum_{x}\frac{(\mathbf{g}_{x}^{2}+ \mathbf{h}_{x}^{2})^{2}}{(\mathbf{G}+\mathbf{H})^{2}}\right]\] \[=m^{2}\mathbb{E}_{\nu}\left[\sum_{x}\frac{(\mathbf{g}_{x}^{2}+ \mathbf{h}_{x}^{2})^{2}}{(\mathbf{G}+\mathbf{H})^{2}}\mid\mathcal{E}\right] \cdot\Pr[\mathcal{E}]+m^{2}\mathbb{E}_{\nu}\left[\sum_{x}\frac{(\mathbf{g}_{x}^ {2}+\mathbf{h}_{x}^{2})^{2}}{(\mathbf{G}+\mathbf{H})^{2}}\mid\neg\mathcal{E} \right]\cdot\Pr[\neg\mathcal{E}]\] \[\leq m^{2}\mathbb{E}_{\nu}\left[\frac{\sum_{x}(\mathbf{g}_{x}^{2 }+\mathbf{h}_{x}^{2})^{2}}{(2^{n}-4\sqrt{2^{n}n})^{2}}\mid\mathcal{E}\right] \cdot\Pr[\mathcal{E}]+m^{2}\Pr[\neg\mathcal{E}]\] \[\leq\frac{m^{2}}{2^{2n-2}}\mathbb{E}_{\nu}\left[\sum_{x}(\mathbf{ g}_{x}^{2}+\mathbf{h}_{x}^{2})^{2}\right]+4m^{2}2^{-n}\] \[=\frac{8m^{2}2^{n}}{2^{2n-2}}+4m^{2}2^{-n}\leq 50m^{2}2^{-n},\]
where in the second inequality we use
\[\sum_{x}(\mathbf{g}_{x}^{2}+\mathbf{h}_{x}^{2})^{2}\leq(\mathbf{G}+\mathbf{H} )^{2}\]
to bound \(\mathbb{E}_{\nu}\left[\sum_{x}\frac{(\mathbf{g}_{x}^{2}+\mathbf{h}_{x}^{2})^{2 }}{(\mathbf{G}+\mathbf{H})^{2}}\mid\neg\mathcal{E}\right]\leq 1\). By the Markov inequality,
\[\Pr_{\nu}\left[\Pr[\exists\ i\neq j\text{ s.t. }\mathbf{z}_{i}=\mathbf{z}_{j}]\leq 50m^{2 }2^{-n/2}\right]\geq 1-2^{-n/2}.\]
Then the lemma follows since \(n=\mathsf{poly}(\lambda)\) and \(m=\mathsf{poly}(n)\).
## 6 Designated verifiability
Another type of verifiable quantum advantage experiment involves interaction between a trusted verifier and a computationally bounded quantum or classical prover, which is not covered by Definition 5. Such experiments are called _proofs of quantumness_ (PoQ).
Definition 14 (Proof of quantumness (\((s,t,\varepsilon)\)-PoQ)): Let \(\lambda\) be the security parameter. We say a protocol between a classical verifier \(V\) and a prover \(P\) is an \((s,t,\varepsilon)\)-PoQ, if there exists a quantum time-\(s\) prover \(P_{Q}\) such that for all time-\(t\) classical prover \(P_{C}\), it holds that
\[\Pr[\langle V,P_{Q}\rangle=1]-\Pr[\langle V,P_{C}\rangle=1]\geq\varepsilon\,,\]
where \(\langle V,P_{Q}\rangle\) and \(\langle V,P_{C}\rangle\) denote the decision of \(V\) after interacting with \(P_{Q}\) and \(P_{C}\) respectively.
Some PoQ protocols have been proposed in [1, 2, 3]. It would be an intriguing feature if the trusted party in PoQ could be offline just as in Definition 5. This motivates our definition of VQA with a setup stage, where a trusted party initializes a VQA experiment with some public parameter as well as a verification key that is issued to a designated verifier. Quantum provers can then work offline.
Definition 15 (Designated verifiable quantum advantages (\((s,t,\varepsilon\))-Dvqa)): Let \(\lambda\) be the security parameter. Consider an experiment \(E\) specified by \((\mathsf{Setup},P)\) where
* \((\mathsf{pp},\mathsf{vk})\leftarrow\mathsf{Setup}(1^{\lambda})\): a classical time-\(\mathsf{poly}(\lambda)\) algorithm that outputs a public parameter \(\mathsf{pp}\) and a verification key \(\mathsf{vk}\),
* \(z\gets P(\mathsf{pp})\): a quantum time-\(\mathsf{poly}(\lambda)\) algorithm that outputs a transcript \(z\) on input \(\mathsf{pp}\).
We denote a classical simulation algorithm of \(E\) by \(\mathsf{Sim}\).
We say \(E\) is \((s,t,\varepsilon)\)-designated verifiable quantum advantage (DVQA), if there exists some polynomial \(q\) of \(\lambda\) such that for all time-\(t\) classical simulators \(\mathsf{Sim}\), there exists a classical time-\(s\) algorithm \(\mathcal{A}(\mathsf{pp},\mathsf{vk},P,\mathsf{Sim},\boldsymbol{z})\in\{0,1\}\) that on input \(\mathsf{pp}\), \(\mathsf{vk}\), the description of \(P\), the description of \(\mathsf{Sim}\), and \(q(\lambda)\) transcripts generated by either \(P\) or \(\mathsf{Sim}\), outputs a bit, such that
\[\operatorname*{\mathbb{E}}_{(\mathsf{pp},\mathsf{vk})\leftarrow\mathsf{Setup}( 1^{\lambda})}|\Pr[\mathcal{A}(\mathsf{pp},\mathsf{vk},P,\mathsf{Sim}, \boldsymbol{z}_{P})=1]-\Pr[\mathcal{A}(\mathsf{pp},\mathsf{vk},P,\mathsf{Sim},\boldsymbol{z}_{\mathsf{Sim}})=1]|\geq\varepsilon\,,\]
where \(\boldsymbol{z}_{P}\) is generated by running \(P(\mathsf{pp})\)\(q(\lambda)\) times independently, and \(\boldsymbol{z}_{\mathsf{Sim}}\) is generated by \(\mathsf{Sim}\).
It is called designated VQA because only the designated distinguisher \(\mathcal{A}\) can get the verification key \(\mathsf{vk}\). When not explicitly mentioned, \(s=t=\mathsf{poly}(\lambda)\) and \(\varepsilon=1/\,\mathsf{poly}(\lambda)\).
Assuming a random oracle, the PoQ of [3] can be made non-interactive and satisfies Definition 15 [1]. More specifically, the trusted party first generates multiple function keys along with their trapdoors. The function keys are published as \(\mathsf{pp}\) and the trapdoors are kept as \(\mathsf{vk}\). When the prover gets \(\mathsf{pp}\), the prover follows the steps of the original protocol of [3] on each function key, except that the challenge is generated by querying the random oracle. In the end, the trusted party collects all the transcripts, runs the verifier's check of [3] on each transcript, and accepts if all of them are correct. It is easy to see that in the new protocol, the trusted party doesn't need to stay online when the prover is generating the transcripts.
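The data flow of this compilation can be summarized by the schematic sketch below. It is not the actual protocol of [3]: the helpers `keygen`, `commit`, `respond`, and `check` are hypothetical placeholders for the corresponding steps of the underlying interactive PoQ, and the random oracle is modeled by a hash function. The only point is that the trusted party is offline, the prover derives its own challenges by hashing, and the designated verifier later checks all transcripts using the trapdoors \(\mathsf{vk}\).

```python
# A schematic sketch of the compilation described above (not the actual protocol of [3]).
# keygen, commit, respond, and check are hypothetical placeholders for the corresponding
# steps of the underlying interactive PoQ; the random oracle is modeled by SHA-256.
import hashlib

def random_oracle(data: bytes) -> int:
    """A 1-bit challenge derived from the transcript so far (random-oracle heuristic)."""
    return hashlib.sha256(data).digest()[0] & 1

def setup(security: int, keygen):
    """Trusted, offline setup: publish function keys pp, keep trapdoors vk."""
    pairs = [keygen(security) for _ in range(security)]
    pp = [fk for fk, _ in pairs]
    vk = [td for _, td in pairs]
    return pp, vk

def prove(pp, commit, respond):
    """The (quantum) prover works alone: challenges come from the random oracle."""
    transcripts = []
    for fk in pp:
        y, state = commit(fk)                        # prover's first message for key fk
        c = random_oracle(repr((fk, y)).encode())    # challenge without any interaction
        transcripts.append((y, c, respond(state, c)))
    return transcripts

def verify(vk, transcripts, check):
    """The designated verifier checks every transcript using the private trapdoors."""
    return all(check(td, y, c, r) for td, (y, c, r) in zip(vk, transcripts))
```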
Theorem 7.1: _Assuming a (classical) random oracle and LWE, there exists a DVQA experiment._
Moreover, Definition 5 can be viewed as a special case of Definition 15: The public parameter is the circuit family \(\mathfrak{C}\). There is no \(\mathsf{vk}\). The prover \(P\) runs a random \(C\in\mathfrak{C}\) on \(|0^{n}\rangle\) and measures the qubits in the computational basis. The simulator runs \(\mathcal{S}_{D}\) to generate samples.
## 7 Verifying quantum advantage by a quantum verifier
### Defining quantum verifiable quantum advantages
One lesson in recent developments of quantum advantage experiments is that classically verifying the results can be challenging. Can we employ quantum computers to help with the verification? This might sound circular, but we think that it is a viable route worth exploring. When we advance beyond the NISQ era, quantum advantage experiments may be repurposed as benchmarking for quantum computers, and checking the benchmarking metrics will be done by other quantum computers.
In fact, we argue that it is already relevant in the NISQ era. A classical verifier could already benefit dramatically when equipped with limited quantum computing capacity, especially if we mindfully tailor our experiment design to this setting. For example, interactive protocols for proving quantumness were known relatively early as long as a verifier can prepare some simple single-qubit
states [1, 10]; whereas constructing a protocol with a purely classical verifier had been notoriously challenging and was only resolved recently in Mahadev's breakthrough result [14]. To put it in a real-world context, people are investigating whether RCS results can be verified quantumly [1, Section V.C]. Hence, two non-colluding parties (e.g., Google vs. IBM) could verify the other party's results, with the help of their respective NISQ device.
Hence, we extend our definitions and formalize verifiable quantum advantage in the presence of quantum verifiers (QVQA). On the technical side, quantum verification enables a smoother duality between EFI and QVQA, as shown in Section 7.2.
Definition 16 (Quantum-verifiable quantum advantages (\((s,t,\varepsilon)\)-Qvqa)): Let \(\lambda\) be the security parameter. Let \(\mathfrak{C}\) be a family of polynomial-size quantum circuits on \(n\) qubits. For any \(C\in\mathfrak{C}\) let \(D_{C}\) be the distribution induced by measuring \(C|0^{n}\rangle\) in the computational basis.
We call \(\mathfrak{C}\) a family of \((s,t,\varepsilon)\) quantum-verifiable quantum advantage (QVQA), if for all distributions \(D\) samplable by a time-\(s\) classical algorithm, it holds that for a uniformly random \(C\) drawn from \(\mathfrak{C}\), there exists a time-\(t\)**quantum** algorithm \(\mathcal{A}\) that \(\varepsilon\)-distinguishes \(D_{C}\) from \(D\), namely
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}\left|\Pr[\mathcal{A}(C, \mathcal{S}_{D},\mathbf{z}_{C})=1]-\Pr[\mathcal{A}(C,\mathcal{S}_{D},\mathbf{z}_{D})= 1]\right|\geq\varepsilon\,,\]
where \(\mathcal{S}_{D}\) is the description of a time-\(s\) classical sampler for \(D\), \(\mathbf{z}_{C}\) and \(\mathbf{z}_{D}\) are sets of up to \(t\) samples drawn from \(D_{C}\) and \(D\) respectively.
Definition 17 (Universally quantum-verifiable quantum advantages (\((s,t,\varepsilon)\)-Uqvqa)): Let \(\lambda\) be the security parameter. Let \(\mathfrak{C}\) be a family of polynomial-size quantum circuits on \(n\) qubits. For any \(C\in\mathfrak{C}\) let \(D_{C}\) be the distribution induced by measuring \(C|0^{n}\rangle\) in the computational basis.
We call \(\mathfrak{C}\) a family of \((s,t,\varepsilon)\)**universally** quantum-verifiable quantum advantage (UQVQA), if for all distributions \(D\) samplable by a time-\(s\) classical algorithm, there exists a time-\(t\) quantum algorithm \(\mathcal{A}\), such that for a uniformly random \(C\) drawn from \(\mathfrak{C}\), \(\mathcal{A}\)\(\varepsilon\)-distinguishes \(D_{C}\) from \(D\), namely
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}\left|\Pr[\mathcal{A}(C,\mathbf{z}_{ C})=1]-\Pr[\mathcal{A}(C,\mathbf{z}_{D})=1]\right|\geq\varepsilon\,,\]
where \(\mathbf{z}_{C}\) and \(\mathbf{z}_{D}\) are sets of up to \(t\) samples drawn from \(D_{C}\) and \(D\) respectively.
We are typically concerned with the efficient regime where we consider all poly-time samplable classical distributions, poly-time distinguisher \(\mathcal{A}\) and inverse-poly noticeable advantage, i.e., \(s=\mathsf{poly}(\lambda)\), \(t=\mathsf{poly}(\lambda)\), and \(\varepsilon=\frac{1}{\mathsf{poly}(\lambda)}\). We will simply call \(\mathfrak{C}\) a QVQA (resp. UQVQA) if the conditions in Definition 16 (resp. Definition 17) are satisfied in this setting.
We define \((s,t,\epsilon)\)-SE-QVQA and \((s,t,\epsilon)\)-SE-UQVQA following the definitions of SE-VQA and SE-UVQA. Briefly, they are the same as QVQA and UQVQA except that the number of samples is restricted to \(\mathsf{poly}(n)\). Then, following proofs similar to that of Theorem 5, efficient quantum algorithms for SampMCSP and ObSampMCSP lead to polynomial-time quantum verification of SE-QVQA and SE-UQVQA, as stated in the following corollaries.
Corollary 3: _If SampMCSP with \((s(\cdot),t(\cdot))\) can be solved in quantum polynomial time, then an \((s,t,\epsilon)\)-SE-QVQA experiment is an \((s,\mathrm{poly}(n),\epsilon+\mathsf{negl}(n))\)-QVQA._
Corollary 4: _If ObSampMCSP can be solved in quantum polynomial time, then an \((s,t,\epsilon)\)-SE-UQVQA is also \((s,\mathrm{poly}(n),\epsilon+\mathsf{negl}(n))\)-UQVQA._
Finally, we also describe an analogue of Definition 15 with a designated quantum verifier.
Definition 18 (Designated quantum verifiable quantum advantages (\((s,t,\varepsilon)\)-Dqva)): Let \(\lambda\) be the security parameter. Consider an experiment \(E\) specified by \((\mathsf{Setup},P)\) where
* \((\mathsf{pp},\mathsf{vk})\leftarrow\mathsf{Setup}(1^{\lambda})\): a classical \(\mathsf{poly}(\lambda)\)-time algorithm that outputs a public parameter \(\mathsf{pp}\) and a verification key \(\mathsf{vk}\),
* \(z\gets P(\mathsf{pp})\): a quantum \(\mathsf{poly}(\lambda)\)-time algorithm that outputs a transcript \(z\) on input \(\mathsf{pp}\).
We denote a classical simulation algorithm of \(E\) by \(\mathsf{Sim}\).
We say \(E\) is \((s,t,\varepsilon)\)- _designated quantum verifiable quantum advantage (\(\mathsf{DQVQA}\))_, if there exists some polynomial \(q\) such that for _all_ time-\(t\) classical simulators \(\mathsf{Sim}\), there exists a _quantum_ time-\(s\) algorithm \(\mathcal{A}(\mathsf{pp},\mathsf{vk},P,\mathsf{Sim},\boldsymbol{z})\in\{0,1\}\) such that
\[\operatorname*{\mathbb{E}}_{(\mathsf{pp},\mathsf{vk})\leftarrow\mathsf{Setup}( 1^{\lambda})}|\Pr[\mathcal{A}(\mathsf{pp},\mathsf{vk},P,\mathsf{Sim}, \boldsymbol{z}_{P})=1]-\Pr[\mathcal{A}(\mathsf{pp},\mathsf{vk},P,\mathsf{Sim}, \boldsymbol{z}_{\mathsf{Sim}})=1]|\geq\varepsilon\,,\]
where \(\boldsymbol{z}_{P}\) is generated by running \(P(\mathsf{pp})\)\(q(\lambda)\) times independently, and \(\boldsymbol{z}_{\mathsf{Sim}}\) is generated by \(\mathsf{Sim}\).
### Duality between EFI and QVQA
For any \(n\)-qubit unitary circuit \(C\), define another \(2n\)-qubit unitary circuit \(\hat{C}:=\mathbf{CNOT}(C\otimes\mathds{1})\), where \(\mathbf{CNOT}:|x\rangle|y\rangle\mapsto|x\rangle|x\oplus y\rangle\) is the generalized CNOT gate between two \(n\)-qubit registers. We define
\[\rho_{C}:=\operatorname{Tr}_{B}(\hat{C}|0^{2n}\rangle\langle 0^{2n}|_{AB}\hat{C }^{\dagger})\,,\]
which is equivalent to a quantum state encoding the distribution induced by measuring \(C|0^{n}\rangle\) under the computational basis.
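A minimal numpy sketch (illustrative only, with a small random unitary standing in for \(C\)) confirms this: tracing out the \(B\) register of \(\hat{C}|0^{2n}\rangle\) leaves exactly the diagonal state whose entries are the output probabilities \(|\langle x|C|0^{n}\rangle|^{2}\).

```python
# A minimal numpy sketch (illustrative only): for a small random unitary C, the reduced state
# rho_C = Tr_B( Chat |0^{2n}><0^{2n}| Chat^dagger ) with Chat = CNOT (C (x) I) equals the
# diagonal state encoding the distribution of measuring C|0^n> in the computational basis.
import numpy as np

rng = np.random.default_rng(4)
n = 3
N = 2 ** n

# Random n-qubit unitary C from the QR decomposition of a complex Gaussian matrix.
Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
C, _ = np.linalg.qr(Z)

psi = C[:, 0]                                   # the state C|0^n>
# Chat |0^n>_A |0^n>_B = sum_x <x|C|0^n> |x>_A |x>_B: the generalized CNOT copies the basis index.
coeff = np.zeros((N, N), dtype=complex)         # coefficient tensor indexed by (x_A, y_B)
np.fill_diagonal(coeff, psi)

rho_AB = np.einsum("ab,cd->abcd", coeff, coeff.conj())  # the projector with split A/B indices
rho_C = np.einsum("abcb->ac", rho_AB)                   # partial trace over register B

print(np.allclose(rho_C, np.diag(np.abs(psi) ** 2)))    # True: rho_C encodes the output distribution
```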
Theorem 7.1: _Let \(\mathfrak{C}=\{C_{k}\}\) be a family of \(n\)-qubit poly-size quantum circuits. If, for a \((1-\mathsf{negl}(\lambda))\) fraction of \(C\in\mathfrak{C}\), there exists a classically poly-time samplable distribution \(D\) such that \(\rho_{C}\) and \(\rho_{D}\) form an EFI pair, then \(\mathfrak{C}\) is not a QVQA family._
Proof: The proof is similar to that of the classical case (Theorem 6). Suppose for contradiction that \(\mathfrak{C}\) is a QVQA family. Then for any classical \(D\), there is a quantum poly-time \(\mathcal{A}\), such that
\[\operatorname*{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C, \mathcal{S}_{D},\boldsymbol{z}_{C})=1]-\Pr[\mathcal{A}(C,\mathcal{S}_{D}, \boldsymbol{z}_{D})=1]|\geq\varepsilon\,,\]
for some \(\varepsilon\geq\frac{1}{\mathsf{poly}(\lambda)}\) using \(m=\mathsf{poly}(\lambda)\) samples. For any \(i\in[m]\), we define
\[\boldsymbol{z}(i):=(\boldsymbol{z}_{C}^{1},\ldots,\boldsymbol{z}_{C}^{i-1},\boldsymbol{z}_{D}^{i},\boldsymbol{z}_{D}^{i+1},\ldots,\boldsymbol{z}_{D}^{m})\,,\] \[\boldsymbol{z}(i+1):=(\boldsymbol{z}_{C}^{1},\ldots,\boldsymbol{z}_{C}^{i},\boldsymbol{z}_{D}^{i+1},\ldots,\boldsymbol{z}_{D}^{m})\,.\]
Then by a hybrid argument, there must exist an \(i^{*}\) such that
\[\operatorname*{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C, \mathcal{S}_{D},\boldsymbol{z}(i^{*}))=1]-\Pr[\mathcal{A}(C,\mathcal{S}_{D}, \boldsymbol{z}(i^{*}+1))=1]|\geq\varepsilon/m\,.\]
We then construct \(\mathcal{A}^{\prime}\) to distinguish \(\rho_{C}\) and \(\rho_{D}\) efficiently. On an input state \(\rho\in\{\rho_{C},\rho_{D}\}\) and index \(i^{*}\), \(\mathcal{A}^{\prime}\) constructs a state \(\sigma\),
\[\sigma:=(\rho_{C}^{1},\ldots,\rho_{C}^{i^{*}-1},\rho,\rho_{D}^{i^{*}+1},\ldots,\rho_{D}^{m})\,.\]
Observe that \(\sigma\) encodes \(\boldsymbol{z}(i^{*}+1)\) if \(\rho=\rho_{C}\) and \(\boldsymbol{z}(i^{*})\) if \(\rho=\rho_{D}\). \(\mathcal{A}^{\prime}\) then runs \(\mathcal{A}\) on \(\sigma\) together with \(C,\mathcal{S}_{D}\) (described by the \(\mathsf{EFI}\) generator \(G\)) and outputs what \(\mathcal{A}\) outputs. We can see that
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}^{\prime}(\rho_{C})=1]-\Pr[\mathcal{A}^{\prime}(\rho_{D})=1]|\] \[= \mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C,\mathcal{S}_{D},\boldsymbol{z}(i^{*}+1))=1]-\Pr[\mathcal{A}(C,\mathcal{S}_{D},\boldsymbol{z}(i^{*}))=1]|\] \[\geq \varepsilon/m\,.\]
This implies that there must exist a \(1/\mathsf{poly}(\lambda)\) fraction of \(C\in\mathfrak{C}\) for which \(\mathcal{A}^{\prime}\) successfully distinguishes \(\rho_{C}\) from \(\rho_{D}\) with inverse-polynomial advantage. This contradicts the premise that, for a \((1-\mathsf{negl}(\lambda))\) fraction of \(C\), \(\rho_{C}\) and \(\rho_{D}\) form an EFI pair.
We remark that since \(\rho_{C}\) and \(\rho_{D}\) are both mixed states encoding classical distributions, the premise in the statement can be weakened to \(\mathsf{qq}\)-\(\mathsf{EFID}\).
**Theorem 11**.: _Let \(\mathfrak{C}=\{C_{k}\}\) be a family of \(n\)-qubit poly-size quantum circuits. If \(\mathfrak{C}\) admits strong quantum advantage (eq. (2)) but \(\mathfrak{C}\) is not a \(\mathsf{QVQA}\) family (Definition 16), then \(\mathsf{EFI}\) exists._
Proof.: This proof is similar to that of Theorem 7. Since \(\mathfrak{C}\) admits a strong quantum advantage, it means that for any classical distribution \(D\), \(\|\rho_{\mathfrak{C}}-\rho_{D}\|_{1}\geq\varepsilon\), where
\[\rho_{\mathfrak{C}}:= \operatorname{Tr}_{A}\left(\frac{1}{|\mathfrak{C}|}\sum_{k}|k \rangle\langle k|_{A}\otimes C_{k}|0^{n}\rangle\langle 0^{n}|C_{k}^{\dagger} \right)\,,\] \[\rho_{D}:= \sum_{i}D(i)|i\rangle\langle i|\,.\]
On the other hand, because \(\mathfrak{C}\) is not a \(\mathsf{QVQA}\) family, there must exist a classical polynomial-time samplable distribution \(D\) such that for all quantum poly-time algorithms \(\mathcal{A}\),
\[\mathop{\mathbb{E}}_{C\leftarrow\mathfrak{C}}|\Pr[\mathcal{A}(C,\mathcal{S}_{ D},\boldsymbol{z}_{C})=1]-\Pr[\mathcal{A}(C,\mathcal{S}_{D},\boldsymbol{z}_{D})=1 ]|\leq\mathsf{negl}(\lambda)\,,\]
where \(\mathcal{S}_{D}\) is an efficient sampler for \(D\). Observe that \(\boldsymbol{z}_{C}\) (for a uniformly random \(C\)) is distributed as the outcomes of measuring copies of \(\rho_{\mathfrak{C}}\), and \(\boldsymbol{z}_{D}\) as the outcomes of measuring copies of \(\rho_{D}\). Therefore we construct a generator \(G\) such that:
\[G(0):=\rho_{\mathfrak{C}}\,,\quad G(1):=\rho_{D}=\sum_{i}D(i)|i\rangle\langle i |\,.\]
We can show that \(G\) gives an \(\mathsf{EFI}\) pair. First of all, \(G\) is efficiently computable because \(G(0)\) amounts to sampling a random \(C_{k}\) and preparing \(C_{k}|0^{n}\rangle\), and \(G(1)\) can simply run the efficient sampler \(\mathcal{S}_{D}\). Then by the strong quantum advantage premise, \(\|\rho_{\mathfrak{C}}-\rho_{D}\|\geq\varepsilon\). Finally, \(\rho_{\mathfrak{C}}\) and \(\rho_{D}\) are quantum computationally
indistinguishable because for any quantum poly-time \(\mathcal{A}\), it holds that
\[|\Pr[\mathcal{A}(G,\rho_{\mathfrak{C}})=1]-\Pr[\mathcal{A}(G,\rho_{D} )=1]|\] \[= \left|\underset{C_{k}\leftarrow\mathfrak{C}}{\mathbb{E}}\Pr[ \mathcal{A}(G,\rho_{\mathfrak{C}})=1]-\Pr[\mathcal{A}(G,\rho_{D})=1]\right|\] \[\leq \underset{C_{k}\leftarrow\mathfrak{C}}{\mathbb{E}}\left|\Pr[ \mathcal{A}(G,\rho_{\mathfrak{C}})=1]-\Pr[\mathcal{A}(G,\rho_{D})=1]\right|\] \[= \underset{C_{k}\leftarrow\mathfrak{C}}{\mathbb{E}}\left|\Pr[ \mathcal{A}(C_{k},\mathcal{S}_{D},\mathbf{z}_{C_{k}})=1]-\Pr[\mathcal{A}(C_{k}, \mathcal{S}_{D},\mathbf{z}_{D})=1]\right|\] \[\leq \mathsf{negl}(\lambda)\,,\]
which completes the proof.
|
2308.16039
|
A Continuous Non-ergodic Theory for the Wave Set-up
|
Inhomogeneities in the wave field due to wave groups, currents, and shoaling
among other ocean processes can affect the mean water level. In this work, the
classical and unsolved problem of continuously computing the set-down and the
following set-up induced by wave breaking on a shoal of constant finite slope
is tackled. This is possible by using available theoretical knowledge on how to
approximate the distribution of wave random phases in finite depth. Then, the
non-homogeneous spectral analysis of the wave field allows the computation of
the ensemble average by means of the phase distribution and the inversion of
the integral of the second moment for the special case of a shoaling process
with uniform phase distribution. In doing so, I am able to obtain a direct
effect of the slope magnitude on the phases distribution. Therefore, an
analytical and slope-dependent mean water level with continuity over the entire
range of water depth is provided.
|
Saulo Mendes
|
2023-08-30T13:56:32Z
|
http://arxiv.org/abs/2308.16039v1
|
# A Continuous Non-ergodic Theory for the Wave Set-up
###### Abstract
Inhomogeneities in the wave field due to wave groups, currents, and shoaling among other ocean processes can affect the mean water level. In this work, the classical and unsolved problem of continuously computing the set-down and the following set-up induced by wave breaking on a shoal of constant finite slope is tackled. This is possible by using available theoretical knowledge on how to approximate the distribution of wave random phases in finite depth. Then, the non-homogeneous spectral analysis of the wave field allows the computation of the ensemble average by means of the phase distribution and the inversion of the integral of the second moment for the special case of a shoaling process with uniform phase distribution. In doing so, I am able to obtain a direct effect of the slope magnitude on the phases distribution. Therefore, an analytical and slope-dependent mean water level with continuity over the entire range of water depth is provided.
## I Introduction
Conservation principles in the physical sciences often have significant consequences for the solutions of governing equations [1; 2]. However, discussions on the energy partition in wave hydrodynamics have risen very tardily [3; 4; 5; 6; 7; 8]. Not surprisingly, a strong debate surfaced over the proper formulation for the momentum of water waves [9; 10; 11; 12] to understand the phenomenon of longshore currents [13; 14; 15; 16]. Other nearshore phenomena were concurrently discovered [17; 18], most prominently the change in mean water level outside the surf zone (set-down) followed by a set-up within this zone [19; 20]. Experimental evidence supported the radiation stress theory [21; 22] as it could seemingly explain all these phenomena. Computation of the set-down/set-up has important consequences for adjacent water wave processes. For instance, while studying the design of breakwaters Hunt Jr. [23] showed that the run-up of waves over a beach depends on the set-up. Furthermore, rip [24; 25; 26] and longshore currents [27; 28] are also influenced by the alongshore variations in set-up. Additionally, rogue wave occurrence in the transition between deep, intermediate and shallow waters has recently been linked with the analytical effect of slope on non-homogeneous evolution of irregular waves [29]. Remarkably, the change in mean water level can be achieved by any ocean process in which the energetic and momentum balances are disturbed, such that the classical view of the wave group-induced set-down over a flat bottom [22] is complemented by shoaling-induced [30; 31] and current-induced counterparts [32].
The set-down is usually computed from the Bernoulli equation [33], thus the sub-harmonic is an _a posteriori_ term due to the conservation principles applied to a second-order irregular wave field. However, as discussed in Mei _et al._[33], the typical calculation neglects the depth gradient, and thus overlooks the effect of slope. In addition, there is no analytical way to obtain a transition between set-down and set-up, as the latter occurs following wave dissipation and breaking which can not be applied to the Bernoulli equation. Experiments assessing the continuous evolution from wave group-induced set-down transitioning to shoaling-induced set-down until dissipation is strong, and a wave set-up appears have been conducted since the 1960s [34]. Despite the elapse of half a century, no continuous theoretical model has been developed to reproduce such an evolution other than the piecewise formulation based on the radiation stress [21; 22]. Indeed, Battjes [35] provided the best empirically-driven closed-form model for the set-up, but does not overcome the piecewise formulation between set-down and set-up. Likewise, although Hsu _et al._[36] generalized the set-down and set-up calculations of McDougal and Hudseth [28] for oblique waves under the effect of refraction and Massel and Gourlay [37] generalized the problem to include bottom friction, these two examples show that theoretical and numerical advances have still not been able to overcome the piecewise decomposition.
The field of wave statistics is often treated as a mere consequence of the solutions to the governing equations of hydrodynamics, and fundamental ocean processes are not believed to be affected by extreme wave occurrence or joint probability densities of surface elevation and random phases. In this work, I follow in the footsteps of Mendes [38] and demonstrate that wave statistics can indeed affect the fundamental physical process leading to the set-down and set-up, whose dynamical evolution can not be treated by a single approach due to wave breaking.
## II Governing Equations and Statement of the Problem
Given the exact solution of the generalized velocity potential \(\Phi(x,z,t)\) and surface elevation \(\zeta(x,t)\) one can
compute integral quantities and their conservation such as mass, momentum or energy flux [5; 7]. Solving the Bernoulli equation leads to a clear dependence of the set-down on the mathematical form of the solution for the velocity potential [33]:
\[\langle\zeta\rangle=-\frac{1}{2g}\left[\left\langle\left(\frac{\partial\Phi}{ \partial x}\right)^{2}\right\rangle-\left\langle\left(\frac{\partial\Phi}{ \partial z}\right)^{2}\right\rangle\right]\equiv-\frac{\langle u^{2}\rangle- \langle w^{2}\rangle}{2g}\,. \tag{1}\]
The velocity potential up to second-order in steepness can be written as [39; 40]:
\[\Phi=\frac{a\omega}{k}\frac{\cosh\theta}{\sinh\Lambda}\sin\phi+\left(\frac{3 ka}{8}\right)\frac{a\omega}{k}\frac{\cosh\left(2\theta\right)}{\sinh^{4} \Lambda}\sin\left(2\phi\right)\quad, \tag{2}\]
with notation \(\theta=k(z+h)\), \(\Lambda=kh\) and \(\phi=kx-\omega t\). If \(\nabla h\equiv\partial h/\partial x=0\) is assumed, the horizontal component of the velocity vector reads \((\partial\phi/\partial x=k\), \(\sin_{x}\phi\equiv\partial[\sin\phi]/\partial x)\):
\[u = \frac{a\omega}{k}\Bigg{\{}\frac{\cosh\theta}{\sinh\Lambda}\text{ sin}_{x}\,\phi+\left(\frac{3ka}{8}\right)\frac{\cosh\left(2\theta\right)}{\sinh^{4} \Lambda}\text{sin}_{x}\left(2\phi\right)\Bigg{\}}\,, \tag{3}\] \[= a\omega\left\{\frac{\cosh\theta}{\sinh\Lambda}\cos\phi+\left( \frac{3ka}{4}\right)\frac{\cosh\left(2\theta\right)}{\sinh^{4}\Lambda}\cos \left(2\phi\right)\right\}.\]
Likewise, noting that \(\partial h/\partial z=0\) by definition and \(\partial\theta/\partial z=k\), the vertical component of the velocity reads:
\[w = \frac{a\omega}{k}\Bigg{\{}\frac{\sin\phi}{\sinh\Lambda}\text{ cosh}_{z}\,\theta+\left(\frac{3ka}{8}\right)\frac{\sin\left(2\phi\right)}{\sinh^{4} \Lambda}\text{cosh}_{z}\left(2\theta\right)\Bigg{\}}\,, \tag{4}\] \[= a\omega\left\{\frac{\sinh\theta}{\sinh\Lambda}\sin\phi+\left( \frac{3ka}{4}\right)\frac{\sinh\left(2\theta\right)}{\sinh^{4}\Lambda}\sin \left(2\phi\right)\right\}\,.\]
For the next step of taking the time average of the square of the velocity components, the reader is reminded of periodic averaging of trigonometric functions (see eq. (12)):
\[\lim_{T\rightarrow+\infty}\int_{0}^{T}\sin^{2n+1}\phi\,\frac{dt}{T}=\lim_{T \rightarrow+\infty}\int_{0}^{T}\cos^{2n+1}\phi\,\frac{dt}{T}=0\,, \tag{5}\]
for all \(n\in\mathbb{N}^{*}\). Through integration by parts, one has as a corollary for all \((m,n)\in\mathbb{N}^{*}\):
\[\langle\sin^{2n+1}\phi\,\cos^{2m+1}\phi\rangle\ =\ \langle\sin^{2n}\phi\,\cos^{2m+1} \phi\rangle=0\,, \tag{6}\]
where the operator \(\langle\cdot\rangle\) denotes time averaging. The square of the velocity components will have only two non-vanishing terms \(\langle\sin^{2}\phi\rangle=\langle\cos^{2}\phi\rangle=1/2\). Therefore, by means of eqs. (1, 3-4) the set-down may be computed over a flat bottom:
\[\langle\zeta\rangle = -\frac{(a\omega)^{2}}{2g}\Bigg{\{}\frac{\left[\cosh^{2}\theta \langle\cos^{2}\phi\rangle-\sinh^{2}\theta\langle\sin^{2}\phi\rangle\right]}{ \sinh^{2}\Lambda}+\left(\frac{3ka}{4}\right)^{2}\frac{\left[\cosh^{2}\left(2 \theta\right)\langle\cos^{2}\left(2\phi\right)\rangle-\sinh^{2}\left(2\theta \right)\langle\sin^{2}\left(2\phi\right)\rangle\right]}{\sinh^{8}\Lambda} \Bigg{\}}\quad, \tag{7}\] \[= -\frac{a^{2}}{2g}\cdot\frac{\omega^{2}}{2\sinh^{2}\Lambda} \Bigg{\{}\cosh^{2}\theta-\sinh^{2}\theta+\left(\frac{3ka}{4}\right)^{2}\frac{ \left[\cosh^{2}\left(2\theta\right)-\sinh^{2}\left(2\theta\right)\right]}{ \sinh^{6}\Lambda}\Bigg{\}}\quad,\]
which taking into account the hyperbolic identity \(\cosh^{2}\theta-\sinh^{2}\theta=1\) and the leading order in the dispersion (\(\omega^{2}=gk\tanh\Lambda\)) leads to the formula:
\[\langle\zeta\rangle\approx-\frac{ka^{2}}{2\sinh\left(2kh\right)}\left[1+\frac{ 9(ka)^{2}}{16\sinh^{6}kh}\right]\quad. \tag{8}\]
As far as the author is aware, the term of second order in steepness inside the brackets does not appear in the literature, likely being neglected without further discussion. In the limit of second order theory through the Ursell number \(\text{Ur}\leqslant 8\pi^{2}/3\)[25], the term inside brackets increases the overall set-down by not more than 15%. However, beyond the limit of the second-order theory, the additional term will cause the set-down to diverge. Thus, this is a symptom of the unsuitability of this approach for computing the set-down/set-up over a continuous range of relative water depth.
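As a minimal numerical sketch of eq. (8) (the amplitude, wavenumber and depths below are illustrative assumptions, not values used elsewhere in this work), the linear part of the set-down approaches \(-a^{2}/4h\) in shallow water, while the bracketed second-order term grows rapidly as \(kh\) decreases, illustrating the divergence noted above:

```python
import numpy as np

def set_down_flat(a, k, h):
    """Flat-bottom set-down of eq. (8): linear term times the second-order bracket."""
    kh = k * h
    linear = -k * a**2 / (2.0 * np.sinh(2.0 * kh))
    bracket = 1.0 + 9.0 * (k * a)**2 / (16.0 * np.sinh(kh)**6)
    return linear * bracket

a, k = 1.0, 0.05                      # illustrative amplitude [m] and wavenumber [rad/m]
for h in (40.0, 20.0, 10.0, 5.0):     # decreasing depth [m]
    print(f"h = {h:5.1f} m   <zeta> = {set_down_flat(a, k, h):+.4f} m   "
          f"shallow-water limit -a^2/(4h) = {-a**2 / (4.0 * h):+.4f} m")
```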
Alternatively, one may compute the gradient of the set-down from the horizontal gradient of the radiation stress, as the momentum balance equations lead to the radiation stress formula [21; 22]:
\[\nabla S_{xx}=-\rho g(\langle\zeta\rangle+h)\nabla\langle\zeta\rangle\quad, \tag{9}\]
where the cross-shore component of the radiation stress is a function of the ratio between phase and group velocities:
\[S_{xx}=\left\langle\int_{-h}^{\zeta}(p+\rho u^{2})\,dz\right\rangle-\int_{-h}^{ \zeta}p_{0}\,dz\, \tag{10}\]
where \(p=p_{0}+\rho g\zeta=\rho g(\zeta-z)\) is the pressure field underneath the waves. With a little algebra, limited to the first order in steepness this integration leads to:
\[S_{xx} = \rho\left\langle\,\int_{-h}^{\zeta}(u^{2}-w^{2})dz\right\rangle+ \frac{1}{2}\rho g\langle\zeta^{2}\rangle\,\] \[= \frac{1}{2}\rho g\langle\zeta^{2}\rangle-2\rho gh\langle\zeta \rangle=\frac{1}{2}\rho ga^{2}\left[\frac{1}{2}+\frac{2kh}{\sinh\left(2kh \right)}\right]\,.\]
This procedure verifies that eq. (8) up to the first order is the solution of eq. (9). Hence, there is no advantage in integrating the latter equation as compared to performing derivatives in eq. (1). Furthermore, the set-down computed above was derived without any need for shoaling formulae, which suggests there is little difference between the wave-group set-down driven by differences in wave heights and shoaling over a very mild slope \(\nabla h\lesssim 1/100\). Notably, waves may break in the region of validity of second-order waves, and as soon as the waves reach the surf zone and are subject to strong dissipation the solution to eq. (9) is no longer equivalent to solving Bernoulli's equation. Nonetheless, the set-down caused by the cross-shore component of the radiation stress can not be properly formulated in the surf zone even in eq. (9), except if one assumes that the wave height stays constant within this region [22; 35; 34]. The major issue tackled by this work is to find a continuous analytical way to express the mean water level in these two physically different zones. This can be done neither through eq. (9) nor eq. (1). Hence, waves of second order in steepness follow an approximated piecewise formula for the set-down and set-up, see for instance Chen _et al._[31]. In the next sections, I attempt to describe both set-down and set-up continuously over a plane beach within a single physical principle applicable to both zones.
## III Random phase distribution and wave statistics
Although wave dissipation typically splits physical theories of ocean wave mechanics into those valid prior to breaking and those valid after breaking, statistical measures of irregular waves do not suffer from this dynamical problem, at least empirically. For instance, Glukhovskii [41] showed that it is possible to connect the distributions of wave heights in deep and shallow water over a continuous range of relative water depth (\(k_{p}h\)) in terms of the ratio \(H_{s}/h\), distributions otherwise known to be restricted to piecewise solutions (see Wu _et al._[42], Mendes and Scotti [43] and Karmpadakis _et al._[44] for distributions dependent on this ratio). The latter ratio is often found to be a good proxy for wave breaking [35; 45]. As such, in this section I invoke the practical knowledge of wave statistics and reveal how the non-ergodicity of irregular water waves plays a role in shifting the mean water level continuously from its initial condition (without waves) as the waves approach a plane beach. Let the ensemble average of a random variable \(X(t)\) be defined as:
\[\mathbb{E}\left[X(t)\right]=\int_{0}^{+\infty}X(t)\,f(X)\,dX\quad, \tag{12}\]
where \(f(X)\) is the probability density of the random variable \(X(t)\). Then, a stochastic process is said to be _ergodic_ if \(\mathbb{E}\left[X(t)\right]=\langle X(t)\rangle\) holds. Now, it is of interest to compute ensemble averages of powers of the surface elevation \(\zeta^{n}\) for \(n\in\mathbb{N}^{*}\). The mean water level can be easily computed in the case of linear waves [46], and it will always lead to an ergodic process over a flat bottom with \(\mathbb{E}[\zeta]=\langle\zeta\rangle=0\). Through a change of variables [47] and applying the law of the unconscious statistician [48] to eq. (12), I compute the ensemble average as follows:
\[\mathbb{E}\left[\zeta\right]=\int_{-\infty}^{+\infty}\zeta\,f(\zeta)\,d\zeta =\int_{0}^{2\pi}\zeta(\phi)f(\phi)\,d\phi\quad. \tag{13}\]
Eq. (13) delineates how the probability density of the surface elevation is transformed into the distribution of random phases in computing the very same ensemble average. In the case of linear waves, ergodicity is a corollary of a uniform distribution of phases (or alternatively, a Gaussian distribution of the surface elevation):
\[\mathbb{E}\left[\zeta\right] = \sum_{i}\frac{a_{i}}{2\pi}\int_{0}^{2\pi}\cos\phi\,d\phi=\langle \zeta(t)\rangle\,, \tag{14}\] \[= \lim_{T\rightarrow+\infty}\sum_{i}\frac{a_{i}}{T}\int_{0}^{T} \cos\phi\,dt=0\.\]
Likewise, the second moment (the spectral energy density) also features ergodicity:
\[\mathbb{E}\left[\zeta^{2}\right] = \sum_{i}\frac{a_{i}^{2}}{2\pi}\int_{0}^{2\pi}\cos^{2}\phi\,d\phi =\langle\zeta^{2}(t)\rangle\,,\] \[= \lim_{T\rightarrow+\infty}\sum_{i}\frac{a_{i}^{2}}{T}\int_{0}^{T} \cos^{2}\left(\omega_{i}t\right)dt=\sum_{i}\frac{a_{i}^{2}}{2}\,\]
which is well known to be related to the Khintchine [49] theorem. Consequently, it can be demonstrated as in appendix A that any signal which can be described as a two-dimensional random walk with zero mean, finite variance and independent components will have a Rayleigh-distributed envelope [50; 51]. For a review, see section 7.7-2 of Middleton [52] and 6.2.2 of Mendes [53]. However, if the distribution of random phases is not uniform the envelope will deviate from the Rayleigh law. Indeed, when waves travel over a shoal the distributions of surface elevation, crests or crest-to-trough heights deviate strongly from the normal law [54; 55; 56]. The latter can be understood as random phases no longer being uniformly distributed, as demonstrated by Bitner [57]. The above statistical formulation computes the mean water level from the time average as if the shoaling process were ergodic. Although the latter is not strictly true, for a variety of purposes the ergodic approximation is useful and greatly simplifies calculations. Compared to the deterministic approach of section II, the possible generalization of the statistical approach in eq. (14) has the advantage of continuously assessing the mean water level in the surf zone.
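As a minimal Monte Carlo sketch of this ergodic behaviour for a linear random-phase sea (the amplitudes and frequencies below are arbitrary illustrative choices), the time average of one long record and the ensemble average over many uniform phase draws both reproduce \(\mathbb{E}[\zeta]=0\) and \(\mathbb{E}[\zeta^{2}]=\sum_{i}a_{i}^{2}/2\):

```python
import numpy as np

rng = np.random.default_rng(1)
amps   = np.array([1.0, 0.6, 0.3])      # illustrative component amplitudes
omegas = np.array([0.7, 1.3, 2.1])      # incommensurate angular frequencies

# Time average over one long record with a single draw of uniform phases.
phases = rng.uniform(0.0, 2.0 * np.pi, amps.size)
t = np.linspace(0.0, 2.0e4, 1_000_000)
zeta_t = (amps[:, None] * np.cos(omegas[:, None] * t + phases[:, None])).sum(axis=0)
print("time     <zeta> =", zeta_t.mean(), "  <zeta^2> =", (zeta_t**2).mean())

# Ensemble average at a fixed instant over many independent uniform-phase draws.
many = rng.uniform(0.0, 2.0 * np.pi, (200_000, amps.size))
zeta_e = (amps * np.cos(omegas * 12.3 + many)).sum(axis=1)
print("ensemble E[zeta] =", zeta_e.mean(), "  E[zeta^2] =", (zeta_e**2).mean())

print("sum a_i^2 / 2   =", (amps**2).sum() / 2.0)
```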
## IV Non-ergodic continuous set-up
In this section I attempt to compute the ensemble average from the random phase approach. Consider the case of spatial inhomogeneity due to a shoal. In this case,
the time series is approximately weakly stationary. The evolution of the non-homogeneous spectrum is better formulated as a homogeneous spectrum corrected by a term \(\Gamma=\langle\zeta^{2}\rangle/\mathscr{E}\) that absorbs such inhomogeneity, where \(\mathscr{E}\) is the spectral energy density factoring out the linear energy \((1/2)\rho ga^{2}\). Up to second order in steepness, one would find a surface elevation in its simplest form [40]:
\[\zeta=\sum_{i}a_{i}\cos\phi+\sum_{i}a_{i}(ka)_{i}\left[\frac{3-\tanh^{2}\left( k_{i}h\right)}{4\tanh^{3}\left(k_{i}h\right)}\right]\cos\left(2\phi\right). \tag{16}\]
In the notation of Mendes and Kasparian [29] I can rewrite it as \((\varepsilon_{i}=2a_{i}/\lambda_{i})\),
\[\zeta(x,t)=\sum_{i}a_{i}\cos\phi+\sum_{i}a_{i}\cos\left(2\phi\right)\left( \frac{\pi\varepsilon_{i}\sqrt{\tilde{\chi}_{1}}}{4}\right)\,. \tag{17}\]
The \(\Gamma\) correction simply reads [58]:
\[\Gamma = \frac{\mu_{2}}{\mathscr{E}}=\frac{1+\left(\frac{\pi\varepsilon}{4 }\right)^{2}\tilde{\chi}_{1}}{1+\left(\frac{\pi\varepsilon}{4}\right)^{2} \left(\frac{\tilde{\chi}_{1}+\chi_{1}}{2}\right)}\quad, \tag{18}\] \[\tilde{\chi}_{1} = \left[\frac{3-\tanh^{2}\left(k_{p}h\right)}{\tanh^{3}\left(k_{p}h \right)}\right]^{2}\,\ \chi_{1}=\frac{9\cosh(2k_{p}h)}{\sinh^{6}(k_{p}h)}\,,\]
where \(\varepsilon=H_{s}/\lambda\) denotes the irregular wave measure of significant steepness, with \(H_{s}\) being the significant wave height and \(\lambda\) the zero-crossing wavelength. The effect of the \(\Gamma\) correction on the moments of \(\zeta\) in a uniform distribution of phases up to second order follows (note that the Airy case has \(\mathbb{E}[\zeta^{2}]=1\)):
\[\mu_{1} = \frac{\mathbb{E}[\zeta]}{\sqrt{\mu_{2}}}=\frac{\sqrt{2}}{a}\frac {1}{\sqrt{1+\left(\frac{\pi\varepsilon}{4}\right)^{2}\tilde{\chi}_{1}}}\times \tag{19}\] \[\int_{0}^{2\pi}\frac{1}{2\pi}\left[a\cos\phi+\frac{\pi\varepsilon }{4}\sqrt{\tilde{\chi}_{1}}\cdot a\cos\left(2\phi\right)\right]\,d\phi=0\quad,\] \[\mu_{2} = \frac{2}{a^{2}}\int_{0}^{2\pi}\frac{1}{2\pi}\left[a\cos\phi+\frac {\pi\varepsilon}{4}\sqrt{\tilde{\chi}_{1}}\cdot a\cos\left(2\phi\right)\right]^ {2}\,d\phi\] \[= 1+\left(\frac{\pi\varepsilon}{4}\right)^{2}\tilde{\chi}_{1}\quad.\]
If linear wave theory is assumed (\(\zeta=a\cos\phi\)) but with slightly non-uniform phase distribution affected by \(2\pi f(\phi)=1-\frac{\pi\varepsilon}{4}\sqrt{\tilde{\chi}_{1}}\,\cos\phi\), \(\mu_{2}\) is recovered to second order. This suggests one can estimate the magnitude of the perturbation over the otherwise uniform distribution \(2\pi f(\phi)-1\sim-\frac{\pi\varepsilon}{4}\sqrt{\tilde{\chi}_{1}}\,\cos\phi\). Therefore, a generalized distribution of the form is sought:
\[2\pi f(\phi)=\left[1-\left(\frac{\pi\varepsilon}{4}\sqrt{\tilde{\chi}_{1}} \right)\frac{\cos\phi}{\Xi_{1}}+\left(\frac{\pi\varepsilon}{4}\sqrt{\tilde{ \chi}_{1}}\right)^{2}\frac{\cos\left(2\phi\right)}{\Xi_{2}}\right]\,, \tag{20}\]
where \((\Xi_{1},\Xi_{2})\) are coefficients to be uniquely determined by wave theories. As a remark, from the point of view of the probability distribution [58] the second moment has been normalized by the energy, now modified by a second-order correction. In this case, the true second normalized moment is \(\mu_{2}/\mathscr{E}=\Gamma\). Likewise, the first moment reads \(\mu_{1}=\mathbb{E}[\zeta]/\sqrt{\mathscr{E}\mu_{2}}\).
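The moments in eq. (19) are easy to verify by direct numerical integration over a uniform phase distribution (a quick sketch with assumed illustrative values of steepness and relative depth; the positive normalization prefactor of \(\mu_{1}\) is omitted because the phase integral already vanishes):

```python
import numpy as np

eps, kph = 0.05, 0.8                                         # assumed steepness and relative depth
chi1_t = ((3.0 - np.tanh(kph)**2) / np.tanh(kph)**3) ** 2    # tilde{chi}_1 of eq. (18)
b = (np.pi * eps / 4.0) * np.sqrt(chi1_t)                    # second-order coefficient of eq. (17)

phi = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
zeta = np.cos(phi) + b * np.cos(2.0 * phi)                   # surface elevation normalized by a

print("phase integral of zeta:", zeta.mean(), "   (expected 0)")
print("mu_2 numerical:", 2.0 * (zeta**2).mean(), "   expected:", 1.0 + b**2)
```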
### Computation through Gram-Charlier Series
To study non-uniform distributions of random phases one must obtain the joint probability density of both phases and envelope (the analytical equivalent of the wave height). The non-uniformity of random phases is therefore closely related to the non-Gaussianity of the surface elevation probability density. Historically, deviations from Gaussian law are dealt through asymptotic expansions of the _central limit theorem_, having been rigorously defined since Laplace [59] with further refinements introduced by Chebyshev [60], Berry [61] and Esseen [62]. The approximation through Gram-Charlier or Edgeworth series of the Gaussian distribution is therefore not a refinement of the Gaussian distribution, but rather an approximated distribution when this limit has not been reached. Because the joint probability density of the envelope of the surface elevation \(\zeta\) and its Hilbert transform \(\hat{\zeta}\) can be well approximated by the expansion of third-order cumulants in terms of Hermite polynomials, Tayfun [63] showed that it can lead to the skewness \(\mu_{3}\) correction in deep water:
\[f(\zeta,\hat{\zeta})=\frac{e^{-\frac{1}{2}(\zeta^{2}+\hat{\zeta}^{2})}}{2\pi} \left[1+\frac{\mu_{3}}{6}\zeta(\zeta^{2}+\hat{\zeta}^{2}-4)\right]\quad. \tag{21}\]
Integrating the joint density over the envelope \(R=(\zeta^{2}+\hat{\zeta}^{2})^{1/2}\) with respective pair \((\zeta,\hat{\zeta})=(R\cos\phi,R\sin\phi)\), one finds the phase distribution in deep water [63]:
\[f(\phi)=\frac{1}{2\pi}\left[1-\frac{\mu_{3}}{6}\sqrt{\frac{\pi}{2}}\cos\phi \right]\quad. \tag{22}\]
In deep water, we typically have \(\mu_{3}\leqslant 0.6\)[64; 65], such that the peak of the phase distribution does not exceed the uniform distribution \(2\pi f(\phi)\) by more than \(10\%\). By comparing eq. (20) with eq. (22) and the inequality of eq. 14 of Tayfun [64], we find the bound:
\[\Xi_{1}\geqslant\frac{5}{6}(1+\nu^{2})\left[1+\mathcal{O}(\varepsilon)\right] \gtrsim 1\quad. \tag{23}\]
In intermediate and shallow waters, especially in unsteady conditions similar to the experiments of Trulsen _et al._[54], this departure from the standard uniform distribution is expected to be much larger, see for instance figure 13 of Bitner [57]. However, approximations of Gram-Charlier series are reduced to either skewness or kurtosis effects, see Mori and Janssen [66] for the latter. Nevertheless, the full computation of a Gram-Charlier series is dependent on both moments of the surface elevation [67]. Together with the explicit effect of bandwidth \(\nu\)[7] from eq. 20 of Tayfun [64], the impact of both skewness and kurtosis as computed in eq. 50 of Tayfun and Lo [68] reads:
\[2\pi f(\phi) = 1-\frac{\sqrt{\pi}}{4}ka\big{(}1-\nu\sqrt{2}+\nu^{2}\big{)}\cos\phi \tag{24}\] \[+\left(\nu\sqrt{2}-\nu^{2}\right)(ka)^{2}\cos\left(2\phi\right).\]
For a broad-banded sea state with typical JONSWAP spectrum of peakedness parameter \(\gamma=3.3\) the bandwidth is of the order of \(\nu\sim 1/2\). By means of the relation between skewness and steepness [68], I find:
\[2\pi f(\phi)\approx 1-\frac{\mu_{3}}{6}\sqrt{\frac{\pi}{2}}\cos\phi+\frac{9( ka)^{2}}{20}\cos\left(2\phi\right)\quad. \tag{25}\]
Furthermore, since eqs. 19-22 of Mori and Kobayashi [69] show that the kurtosis is computed as \(\mu_{4}=48(ka)^{2}\) in deep water (see appendix A of Mendes and Kasparian [70]), I further simplify the phase distribution as follows:
\[2\pi f(\phi)\approx 1-\frac{\mu_{3}}{6}\sqrt{\frac{\pi}{2}}\cos\phi+\frac{\mu_{ 3}^{2}}{60}\cos\left(2\phi\right)\quad, \tag{26}\]
which adds another \(0.4\%\) deviation to the maximum of \(10\%\) of the first term containing \(\cos\phi\). Hence, the approximation of eq. (22) is justified in deep water. In intermediate water, the importance of the second term can increase by fivefold due to \(\mu_{3}\sim 1\). In order to express the phase distribution in terms of relative water depth and wave steepness, I adopt the finite-depth formula of eq. 22 of Mori and Kobayashi [69]. Taking into account that the ratio between the coefficient in eq. (16) and those of Mori and Kobayashi [69] amounts to \(\sqrt{\chi_{1}}/(D_{1}+D_{2})\sim 2\), the phase distribution fully reads:
\[2\pi f(\phi) \approx 1-\left(\frac{\pi\varepsilon\mathfrak{S}}{6}\sqrt{\tilde{\chi}_ {1}}\right)\cos\phi \tag{27}\] \[+\frac{6}{5\pi}\left(\frac{\pi\varepsilon\mathfrak{S}}{6}\sqrt{ \tilde{\chi}_{1}}\right)^{2}\cos\left(2\phi\right),\]
where the vertical asymmetry between crests and troughs \(1\leqslant\mathfrak{S}=2a/H\leqslant 2\) is added to correct an otherwise symmetrical approach from the beginning. In comparison with eq. (20), the above solution is equivalent to finding \(\Xi_{1}=3/2\mathfrak{S}\approx 1.25\), upholding the lower bound in eq. (23). With the exact distribution for random phases in hand, I compute the change in mean water level:
\[\mu_{1} = \int_{0}^{2\pi}\frac{d\phi}{2\pi\sigma\sqrt{\mathscr{E}}}\left[ \cos\phi+\frac{\pi\varepsilon\,\mathfrak{S}}{4}\sqrt{\tilde{\chi}_{1}}\cos \left(2\phi\right)\right]\left[1-\frac{\pi\varepsilon\,\mathfrak{S}}{6}\sqrt{ \tilde{\chi}_{1}}\cos\phi+\frac{\pi\varepsilon^{2}\,\mathfrak{S}^{2}}{30} \tilde{\chi}_{1}\cos\left(2\phi\right)\right]\, \tag{28}\] \[= \frac{\pi\varepsilon\mathfrak{S}\sqrt{2\tilde{\chi}_{1}}(\pi^{2} \varepsilon^{2}\mathfrak{S}^{2}\tilde{\chi}_{1}-20)}{240\sqrt{1+\frac{\pi^{2} \varepsilon^{2}\mathfrak{S}^{2}}{32}\left(\tilde{\chi}_{1}+\chi_{1}\right)} \sqrt{1+\frac{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{81}\tilde{\chi}_{1}+ \frac{\pi^{4}\varepsilon^{4}\mathfrak{S}^{4}}{2300}\tilde{\chi}_{1}^{2}-\frac {\pi^{6}\varepsilon^{6}\mathfrak{S}^{6}}{284,000}\tilde{\chi}_{1}^{3}}}\equiv \frac{4\mathbb{E}[\zeta]}{H_{s}}\,.\]
In figure 1 the theoretical evolution of the normalized mean water level is displayed, decreasing evermore towards shallow water until it reaches its global minimum at the surf zone. Then, the mean water level starts to increase and reaches the plunging point at \(\mu_{1}=0\) and quickly increases to a normalized set-up, as expected from observation [71]. Figure 1 also points to the fact that although increasing the pre-shoal steepness leads to an earlier peak in set-down, the magnitude of this peak seems to vary weakly with the steepness. This is
Figure 1: Contour of the normalized set-down/up as a function of pre-shoal steepness \(\varepsilon_{0}\).
in qualitative agreement with the linear term of the set-down formula of eq. (8), because the latter converges to \(-a^{2}/4h\) in the surf zone. Therefore, the new formula recovers both sides of the piecewise theory of Longuet-Higgins and Stewart [21], and, being continuous, explains theoretically when and how fast the transition between set-down and set-up occurs.
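A direct transcription of eq. (28) (a sketch only, with the representative asymmetry \(\mathfrak{S}\approx 1.2\) adopted later in the text and an assumed steepness) reproduces this behaviour, with negative values (set-down) in deeper water turning positive (set-up) as \(k_{p}h\) decreases:

```python
import numpy as np

def mu1(eps, kph, S=1.2):
    """Normalized mean water level 4 E[zeta]/H_s of eq. (28), transcribed directly."""
    t = np.tanh(kph)
    chi1_t = ((3.0 - t**2) / t**3) ** 2                      # tilde{chi}_1 of eq. (18)
    chi1 = 9.0 * np.cosh(2.0 * kph) / np.sinh(kph) ** 6      # chi_1 of eq. (18)
    A = (np.pi * eps * S) ** 2
    num = np.pi * eps * S * np.sqrt(2.0 * chi1_t) * (A * chi1_t - 20.0)
    den = 240.0 * np.sqrt(1.0 + (A / 32.0) * (chi1_t + chi1)) * np.sqrt(
        1.0 + (A / 81.0) * chi1_t + (A**2 / 2300.0) * chi1_t**2 - (A**3 / 284_000.0) * chi1_t**3)
    return num / den

for kph in (2.0, 1.0, 0.7, 0.55, 0.5, 0.45):
    print(f"k_p h = {kph:4.2f}   mu_1 = {mu1(0.05, kph):+.4f}")
```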
In order to compare the set-down computed from the random phase distribution with the radiation stress theory, eq. (8) is rewritten for irregular waves by using the same transformation from eq. (16):
\[\frac{\langle\zeta\rangle}{\sigma}\approx\frac{4\langle\zeta\rangle}{H_{s}}=- \frac{\pi\varepsilon\mathfrak{S}}{\sqrt{2}\sinh\left(2.2k_{p}h\right)}\left[1 +\frac{9\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{16\sinh^{6}\left(1.1k_{p}h \right)}\right]. \tag{29}\]
Thus, figure 2 compares the non-ergodic set-down from the ensemble average with the time average from eq. (29). The set-down computation based on the radiation stress of waves over a flat bottom underpredicts the result of the present model, and this is expected because the latter is based on the \(\Gamma\) model that is valid for relatively steep slopes. This is in agreement with the fact that Longuet-Higgins and Stewart [21] underpredicts the set-down in the experiments performed by Saville [20] for steep slopes [73; 31; 74].
In addition, it seems that the point at which wave breaking occurs, which in the first-moment plot of the present model corresponds to the transition between set-down and set-up, is well predicted by the mean. To find the breaking point one uses \(\varepsilon\leqslant\tanh\left(kh\right)/7\) and thus finds \(kh\geqslant\tanh^{-1}(7\varepsilon)\) according to Miche [72]. Factoring out \(\sqrt{\mathscr{E}}\), which is of the order of \(\mathcal{O}(1)\), the breaking point can be obtained by setting \(d\mu_{1}/d\tilde{\chi}_{1}=0\), finding to leading order:
\[\tilde{\chi}_{1}\approx\frac{12}{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}} \left[\sqrt{1+\frac{10\pi}{9}}-1\right]\approx\frac{40}{3\pi^{2}\varepsilon^ {2}\mathfrak{S}^{2}}\quad. \tag{30}\]
Following the definition of the trigonometric coefficient \(\tilde{\chi}_{1}\) in eqs. (16-19), I solve the cubic equation in \(\tanh k_{p}h\) and find the critical relative depth for \(\mathfrak{S}\approx 1.2\):
\[k_{p}h \approx -\tanh^{-1}\left[\frac{\varepsilon}{3}-\frac{(1+i\sqrt{3}) \varepsilon^{2}}{6\psi^{1/3}}-\frac{(1-i\sqrt{3})\psi^{1/3}}{6}\right]\,,\] \[\psi = \varepsilon^{3}+\frac{81\varepsilon}{2}\left(\sqrt{1-\frac{4 \varepsilon^{2}}{81}}-1\right)\,. \tag{31}\]
Figure 3: Critical values (breaking, plunging) for the relative water depth as a function of curves of fixed steepness according to the present theory and steepness-limited wave breaking [72] with and without vertical wave asymmetry.
Figure 2: (a) Set-down/up normalized by significant wave height as a function of water depth as computed from the non-ergodic formula eq. (28) and the classical approach of eq. (8) adapted to irregular waves in eq. (29) for a fixed steepness \(\varepsilon=1/20\). (b) Excess set-down in percentage of \(H_{s}\) by the non-ergodic computation as compared to the classical counterpart with varying pre-shoal steepness \(\varepsilon_{0}\) and relative water depth.
I can also obtain the location of the plunging/spilling point by solving \(\mu_{1}(\varepsilon,(k_{p}h)_{c})=0\) which implies \(\tilde{\chi}_{1}=20/\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}\):
\[k_{p}h = -\tanh^{-1}\left[\frac{5\varepsilon}{18}-\frac{25(1+i\sqrt{3}) \varepsilon^{2}}{36\tilde{\psi}^{1/3}}-\frac{(1-i\sqrt{3})\tilde{\psi}^{1/3}} {36}\right],\] \[\tilde{\psi} = 125\,\varepsilon^{3}+7290\,\varepsilon\left(\sqrt{1-\frac{25 \varepsilon^{2}}{729}}-1\right)\,. \tag{32}\]
Figure 3 compares steepness-limited wave breaking critical points with the breaking and plunging points as estimated by the present theory, showing qualitative agreement. However, the model shows that depth-limited wave breaking shifts the transition between set-down and set-up to occur earlier than inferred from Miche [72] at low wave steepness.
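Rather than evaluating the closed forms of eqs. (31)-(32), the same critical depths can be obtained numerically by bisection on \(t=\tanh(k_{p}h)\) (a sketch under the same \(\mathfrak{S}\approx 1.2\) assumption), which also allows a quick comparison with the steepness-limited criterion of Miche [72]:

```python
import numpy as np

def critical_depth(eps, S=1.2, factor=40.0 / 3.0):
    """Relative depth k_p h at which tilde{chi}_1 = factor/(pi eps S)^2,
    found by bisection on t = tanh(k_p h); (3 - t^2)/t^3 is decreasing in t."""
    rhs = np.sqrt(factor) / (np.pi * eps * S)        # required value of (3 - t^2)/t^3
    lo, hi = 1e-6, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (3.0 - mid**2) / mid**3 > rhs:
            lo = mid
        else:
            hi = mid
    return np.arctanh(0.5 * (lo + hi))

for eps in (0.02, 0.04, 0.06):
    kh_break = critical_depth(eps)                   # breaking point, eq. (30)
    kh_plunge = critical_depth(eps, factor=20.0)     # plunging/spilling point, mu_1 = 0
    print(f"eps = {eps:.2f}   breaking k_p h ~ {kh_break:.3f}   "
          f"plunging k_p h ~ {kh_plunge:.3f}   Miche threshold: {np.arctanh(7.0 * eps):.3f}")
```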
## V Slope effect
To fully compute the effect of steep slopes on the mean water level, one must start with its effect on the derivative of the velocity potential. The algebra can be split into slope-dependent and slope-independent terms:
\[u=\frac{\partial\Phi}{\partial x}=u_{0}+\Delta u(\nabla h)\quad;\quad w=\frac {\partial\Phi}{\partial z}\quad. \tag{33}\]
Hence, the slope-dependent term of the horizontal velocity component will lead to a modification in eq. (1):
\[\left<\zeta\right>_{\nabla_{h}} = -\frac{\left<(u_{0}+\Delta u)^{2}-w^{2}\right>}{2g} \tag{34}\] \[= \left<\zeta\right>-\frac{\left<2u_{0}\Delta u+(\Delta u)^{2} \right>}{2g},\]
with the term \(\left<\zeta\right>\) having been already obtained in eq. (8). The complementary part of the horizontal velocity is computed by taking derivatives \(\partial/\partial x\) on the hyperbolic functions in eq. (3) since \(\partial\theta/\partial x=\partial\Lambda/\partial x=k\nabla h\):
\[\frac{k\Delta u}{a\omega}=\sin\phi\,\frac{\partial}{\partial x}\left[\frac{ \cosh\theta}{\sinh\Lambda}\right]+\frac{3ka}{8}\sin\left(2\phi\right)\frac{ \partial}{\partial x}\left[\frac{\cosh\left(2\theta\right)}{\sinh^{4}\Lambda}\right] \tag{35}\]
and thus one can obtain:
\[\Delta u = \frac{a\omega\nabla h}{\sinh^{2}\Lambda}\left\{\mathscr{H}_{1} \sin\phi+\left(\frac{3ka}{4}\right)\frac{\mathscr{H}_{2}\sin\left(2\phi \right)}{\sinh^{4}\Lambda}\right\},\] \[\mathscr{H}_{1} = \sinh\theta\sinh\Lambda-\cosh\theta\cosh\Lambda\,\] \[\mathscr{H}_{2} = \sinh\left(2\theta\right)\sinh^{2}\Lambda-\cosh\left(2\theta \right)\sinh\left(2\Lambda\right)\,. \tag{36}\]
Because of eq. (6), the time average of \(u_{0}\Delta u\) vanishes. Although the hyperbolic coefficients appearing in \((\Delta u)^{2}\) are bulky and can not be simplified, I may take the limit at \(z=0\) for these coefficients due to \(\langle u_{0}^{2}-w^{2}\rangle/2g\sim\langle(\Delta u)^{2}\rangle/2g<0.1\). In that case, it is straightforward to show that \(\lim_{z\to 0}\mathscr{H}_{1}=-1\) and \(\lim_{z\to 0}\mathscr{H}_{2}=-2\sinh^{4}\Lambda/\tanh^{3}\Lambda\). Therefore, I may approximate:
\[\left<\frac{(\Delta u)^{2}}{2g}\right> = \frac{a^{2}\omega^{2}(\nabla h)^{2}}{2g\sinh^{4}\Lambda}\left< \mathscr{H}_{1}^{2}\sin^{2}\phi+\left(\frac{3ka}{4}\right)^{2}\frac{\mathscr{ H}_{2}^{2}\sin^{2}\left(2\phi\right)}{\sinh^{8}\Lambda}\right>=\frac{a^{2}}{2g} \cdot\frac{gk\tanh\Lambda\left(\nabla h\right)^{2}}{2\sinh^{4}\Lambda}\left[ \mathscr{H}_{1}^{2}+\left(\frac{3ka}{4}\right)^{2}\frac{\mathscr{H}_{2}^{2}} {\sinh^{8}\Lambda}\right]\,, \tag{37}\] \[= \frac{ka^{2}}{2\sinh\left(2\Lambda\right)}\cdot\frac{(\nabla h)^{2 }}{\sinh^{2}\Lambda}\left[\mathscr{H}_{1}^{2}+\left(\frac{3ka}{4}\right)^{2} \frac{\mathscr{H}_{2}^{2}}{\sinh^{8}\Lambda}\right]\approx-\frac{ka^{2}}{2 \sinh\left(2\Lambda\right)}\cdot\frac{(\nabla h)^{2}}{\sinh^{2}\Lambda}\left[1+ \frac{9(ka)^{2}}{4\tanh^{6}\Lambda}\right]\quad.\]
Accordingly, up to second order in steepness \(ka\) and under the effect of an arbitrary slope \(\nabla h\) without curvature (\(\nabla^{2}h=0\)), the wave-driven set-down is fully computed at last:
\[\left<\zeta\right>_{\nabla_{h}}\approx-\frac{ka^{2}}{2\sinh\left(2kh\right)} \left\{1+\frac{9(ka)^{2}}{16\sinh^{6}kh}+\frac{(\nabla h)^{2}}{\sinh^{2}kh} \left[1+\frac{9(ka)^{2}}{4\tanh^{6}kh}\right]\right\}\quad. \tag{38}\]
Figure 4: Amplification in the classical set-down formula due to the slope effect of eq. (38).
In figure 4 the disparity between mild slope and steep slope mean water levels is displayed. Taking into account the magnitude of the maximal correction due to the terms proportional to \((ka)^{2}/(kh)^{6}\) approaching the Ursell limit in shallow water, I can further simplify the slope-dependent set-down:
\[\left\langle\zeta\right\rangle_{\nabla h} \approx -\frac{5ka^{2}}{9\sinh{(2kh)}}\left[1+\frac{4}{3}\frac{(\nabla h)^ {2}}{\sinh^{2}kh}\right] \tag{39}\] \[\approx \left\langle\zeta\right\rangle\cdot\frac{10}{9}\left[1+\frac{4}{ 3}\frac{(\nabla h)^{2}}{\sinh^{2}kh}\right]\quad.\]
The above slope correction is the first of its kind to be found analytically through the classical textbook methodology, although the dependence on the slope has been found previously through numerical simulations [74] and experiments [30]. Although Chen _et al._[31] have also found a dependence on \((\nabla h)^{2}\) for the shoaling-induced set-down, this comes from an _a priori_ expansion of the velocity potential in powers of \(ka\cdot\nabla h\) and the formulae for the computation contain dozens of hyperbolic and trigonometric coefficients that make such a result computationally cumbersome. Notably, the present derivation does not apply to realistic reflective beaches, so the above formula is limited to cases with a negligible reflection coefficient \(K_{R}\approx 0.1\hat{\xi}^{2}\), hence with small surf similarity \(\hat{\xi}\sim\nabla h/\sqrt{\varepsilon}\lesssim 1\)[75]. This limitation prevents an otherwise possible divergence of the set-down in the limit of a step (\(\nabla h\to\infty\)). On the other hand, one may examine the maximum set-down beyond second-order theory in the limit when \(\sinh kh\approx\tanh kh\approx kh\):
\[\frac{\left\langle\zeta\right\rangle_{\nabla h}}{H_{s}}\geqslant-\frac{H_{s} }{32h}\left[1+10\,(\nabla h)^{2}\left\{1+\frac{9}{2}\left(\frac{H_{s}}{h} \right)^{2}\right\}\right]\,. \tag{40}\]
Therefore, the maximum set-down using linear theory is amplified by a percentage of \((50\nabla h)^{2}\,\%\) due to steep slopes, which for negligible shoal reflection lies in the range \(25\%-250\%\) for \(1/10<|\nabla h|<1/3\). This amplification magnitude is on par with the two- to threefold larger set-down observed in the experiments of Saville [20] as compared to the prediction of Longuet-Higgins and Stewart [21]. However, the set-up can not be computed from eq. (39) and no continuous formulation can be extended from it. In the next subsection, I show that the same shape of the set-down zone can be obtained through the non-ergodic approach; although algebraically more involved, it has the advantage of computing the mean water level continuously from set-down up to set-up.
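The amplification of the classical set-down by the slope terms of eq. (38), i.e. the quantity displayed in figure 4, can be sketched numerically as follows (the steepness, depths and slopes are illustrative assumptions):

```python
import numpy as np

def amplification(ka, kh, grad_h):
    """Ratio of the slope-dependent set-down of eq. (38) to the flat-bottom eq. (8)."""
    flat = 1.0 + 9.0 * ka**2 / (16.0 * np.sinh(kh)**6)
    slope = (grad_h**2 / np.sinh(kh)**2) * (1.0 + 9.0 * ka**2 / (4.0 * np.tanh(kh)**6))
    return (flat + slope) / flat

for kh in (0.8, 0.5):
    for grad_h in (1.0 / 100.0, 1.0 / 10.0, 1.0 / 3.0):
        print(f"kh = {kh:.1f}   slope = {grad_h:5.3f}   "
              f"amplification = {amplification(0.05, kh, grad_h):.3f}")
```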
### Non-ergodic Approach
In order to extract the slope dependence from the set-down, first one must generalize the random phase approach for eq. (19). As discussed in Mendes and Kasparian [70], the shoaling slope effect induces a correction \(\sqrt{\nabla h}\) to the existing excess kurtosis of steep slopes, thereby decreasing the non-Gaussianity as the slope tends to zero. Combination of the "uniform" slope-dependent phase distribution with the inhomogeneous slope-independent phase distribution of eq. (28) leads to (see appendix B):
\[2\pi f_{\nabla h}(\phi) \approx 1-\left(\frac{\pi\varepsilon\,\mathfrak{S}\sqrt{\chi_{1}}}{6} \right)|\nabla h|^{1/4}\cos\phi \tag{41}\] \[+\frac{6}{5\pi}\left(\frac{\pi\varepsilon\,\mathfrak{S}\sqrt{ \chi_{1}}}{6}\right)^{2}|\nabla h|^{1/2}\cos\left(2\phi\right).\]
Naturally, the generalized slope-dependent distribution recovers both inhomogeneous and uniform distributions over mild slopes. Such distribution allows us to estimate the effect of arbitrarily steep slopes on the set-down, as an alternative to the computation of the Bernoulli equation (or radiation stress) leading to eq. (39). As such, the set-down driven by the equivalent inhomogeneous random phase distribution of eq. (28) now reads:
\[\mu_{1}=\frac{\pi\varepsilon\mathfrak{S}\sqrt{2\chi_{1}}\left(\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}\tilde{\chi}_{1}\sqrt[4]{\nabla h}-20\right)\sqrt[4]{\nabla h}\left[1+\frac{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{32}\left(\tilde{\chi}_{1}+\chi_{1}\right)\right]^{-1/2}}{240\sqrt{1+\frac{\pi^{2}\varepsilon^{2}\mathfrak{S}^{2}}{720}\tilde{\chi}_{1}\left(45-30\sqrt[4]{\nabla h}-6\sqrt{\nabla h}\right)+\frac{\pi^{4}\varepsilon^{4}\mathfrak{S}^{4}}{2300}\tilde{\chi}_{1}^{2}(\nabla h)^{3/4}-\frac{\pi^{6}\varepsilon^{6}\mathfrak{S}^{6}}{142,000}\tilde{\chi}_{1}^{3}\,\nabla h}}\,, \tag{42}\]
As observed in figure 5, as the slope becomes milder (\(|\nabla h|\leqslant 1/100\)) the non-ergodic set-down converges to
Figure 5: Slope dependence of the non-ergodic computation for the set-down/up.
the adiabatic classical set-down of Longuet-Higgins and Stewart [21]. Comparing it with the slope-dependent classical set-down computation introduced in eq. (38), the magnitude of the global minimum is similar, but the shape of the set-down evolution is different. Moreover, the slope-dependent classical set-down only deviates from the adiabatic one near wave breaking, which is a shortcoming. Naturally, the present formula is superior as it is also capable of computing the set-up, in particular describing its dependence on the slope. Lastly, figure 6 describes how a steep slope amplifies the set-down, but this growth is significant only in a narrow range of relative water depth \(0.6\leqslant k_{p}h\leqslant 0.9\).
## VI Conclusion
In this work the radiation stress-led computation of the set-down has been extended to include the effect of slope. Furthermore, I have shown that statistical reasoning removes the difficulties in dealing with wave dissipation for the computation of the mean water level when it is calculated with a non-uniform distribution of random phases, and that this approach is general for steep slopes as well. By comparing the two formulations, it can be explained why the classical calculation underpredicted the set-down over steep slopes, the classical result being recovered by the new slope-dependent model for slopes milder than \(1/100\). However, the radiation stress still can not lead to a continuous formulation between set-down and set-up, whereas the stochastic approach succeeds in this matter. Despite the usage of a simple model for the distribution of random phases, this work demonstrates that wave statistics are useful for computing primary variables of the water wave solutions, paving the way for the realization that physically visible effects can be handled by the intangible randomness measures of a sea state.
**Declaration of Interests**. The author reports no conflict of interest.
## Appendix A Rayleigh Distribution
A straightforward proof that a Rayleigh distribution will emerge from the magnitude of a random vector highlights the role of the uniform distribution of random phases: given two random variables of unknown form \(X=\sum_{i}x_{i}\) and \(Y=\sum_{i}y_{i}\), mutually independent and identically distributed by a Gaussian probability density due to the _central limit theorem_, their joint probability density is:
\[f_{XY}=\frac{1}{2\pi\sigma^{2}}e^{-(X^{2}+Y^{2})/2\sigma^{2}}\quad. \tag{10}\]
Choosing auxiliary random variables \(X=R\,\cos\Omega\) and \(Y=R\,\sin\Omega\), equivalent of the surface elevation and its Hilbert transform [76], the Jacobian reads:
\[\left|\frac{\partial(X,Y)}{\partial(R,\Omega)}\right|=\begin{vmatrix}\cos \Omega&\sin\Omega\\ -R\,\sin\Omega&R\,\cos\Omega\end{vmatrix}=R\quad. \tag{11}\]
The marginal exceedance distribution of the envelope \(R\) if phases are uniformly distributed (\(f_{\Omega}=1/2\pi\)) becomes:
\[\mathbb{P}_{R}=\int_{R}^{+\infty}\int_{0}^{2\pi}\frac{e^{-R^{*}2/2\sigma^{2}} }{2\pi\sigma^{2}}R^{*}\,dR^{*}\,d\Omega=e^{-R^{2}/2\sigma^{2}}\, \tag{12}\]
whose derivative is the Rayleigh probability density:
\[f_{R}=\frac{R}{\sigma^{2}}e^{-R^{2}/2\sigma^{2}}\quad. \tag{13}\]
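A short Monte Carlo sketch (the sample size and \(\sigma\) are arbitrary choices) confirms the exceedance probability and the uniformity of the phase derived above:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.5
X = rng.normal(0.0, sigma, 1_000_000)
Y = rng.normal(0.0, sigma, 1_000_000)
R = np.hypot(X, Y)                                   # envelope of the random vector (X, Y)
Omega = np.arctan2(Y, X)                             # its phase

for r in (1.0, 2.0, 4.0):
    print(f"P(R > {r}) empirical = {(R > r).mean():.4f}   "
          f"Rayleigh exp(-r^2/2 sigma^2) = {np.exp(-r**2 / (2.0 * sigma**2)):.4f}")
print("phase is uniform; e.g. P(Omega > 0) =", (Omega > 0).mean())
```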
## Appendix B Slope Effect on Kurtosis: Taylor Expansion
Here I demonstrate why the effect of slope is approximately factored out as \(\sqrt{\nabla h}\) in the excess kurtosis. From section 3 of Mendes and Kasparian [70] an effective approximation for the excess kurtosis has been experimentally validated for waves traveling at steep slopes:
\[\mu_{4}(\Gamma)\approx\frac{1}{9}\left[e^{8\left(1-\frac{1}{\mathfrak{S}^{2}\Gamma}\right)}-1\right]\quad. \tag{14}\]
Without loss of generality, I simplify the process of performing a Taylor expansion by setting a representative value of the vertical asymmetry that varies slowly [70]. For waves with high steepness (in the range of second-order theory) similar to the conditions in figure 2 (\(\varepsilon\lesssim 1/20\)) or the experiments reviewed in Li and Chabchoub [77], I can approximate \(\mathfrak{S}\Gamma\approx\Gamma^{6}\) because \(\Gamma\lesssim 1.05\), see
Figure 6: Amplification in the mean water level formula due to the slope effect of eq. (42) as compared to a fixed mild slope of \(|\nabla h|=1/1000\).
eq. 3.26 of Mendes _et al._[58]. Furthermore, including the slope effect of eq. 29 of Mendes and Kasparian [29], the non-homogeneous spectral correction formulated in section IV can also be approximated as:
\[\Gamma\approx 1+\left(\frac{\pi\varepsilon}{4}\right)^{2}\left(\frac{\tilde{ \chi}_{1}-\chi_{1}}{2}\right)-\tilde{\mathscr{E}}_{p2}\quad, \tag{30}\]
which at the relative water depth leading to the highest amplification (\(k_{p}h=0.5\)) leads to:
\[\Gamma\approx 1+2\pi^{2}\varepsilon^{2}-\tilde{\mathscr{E}}_{p2}\quad. \tag{31}\]
Here, \(\tilde{\mathscr{E}}_{p2}\) is the net change in spectral potential energy due to the slope effect on the mean water level adjusted by a boundary term proportional to \((\nabla h)^{-1}\), being expressed as a function of the slope magnitude \(\nabla h\) in the case of a slope as (also at \(k_{p}h=0.5\) and with pre-shoal water depth \(k_{p0}h_{0}=\pi\), see figure 7):
\[\tilde{\mathscr{E}}_{p2}\approx 20\varepsilon^{2}\left[-\nabla h\left(1- \nabla h\right)+\frac{1}{125\nabla h}\right]. \tag{32}\]
As shown in figure 8, I can rewrite the net potential energy by numerically simplifying its closed-form:
\[-\tilde{\mathscr{E}}_{p2}\approx\varepsilon^{2}\left(20\sqrt{\nabla h}-7\right). \tag{33}\]
Note that this approximation clearly shows that the slope effect starts to saturate at slopes about twice the critical point \((\nabla h)_{c}=(7/20)^{2}\sim 1/8\). Then, the exponent can be rewritten as,
\[1-\frac{1}{\mathfrak{S}^{2}\Gamma} \approx 1-\left[1+\varepsilon^{2}\big{(}2\pi^{2}-7+20\sqrt{\nabla h} \big{)}\right]^{-6}\,, \tag{34}\] \[\approx 1-\left[1-6\varepsilon^{2}\big{(}2\pi^{2}-7+20\sqrt{\nabla h} \big{)}\right]\,,\] \[\approx 12\pi^{2}\varepsilon^{2}+6\varepsilon^{2}\big{(}20\sqrt{\nabla h }-7\big{)}\,.\]
Comparing to eq. (32), the leading term of the excess kurtosis is of the order,
\[\mu_{4\,,\,0}=\frac{1}{9}\left[e^{96\pi^{2}\varepsilon^{2}}-1\right]\xrightarrow{ \varepsilon\to 1/20}1\,, \tag{35}\]
as observed in well-known experiments [54]. Note however that the above formula works for small amplitude waves, and a much larger value of steepness will require corrections up to 3rd and 4th orders. Moreover, because \(e^{96\pi^{2}\varepsilon^{2}}\gg 1\) I can use:
\[\mu_{4} = \frac{e^{-48\tilde{\mathscr{E}}_{p2}}}{9}\left[e^{96\pi^{2}\varepsilon^{2}}-1\right]+\frac{1}{9}\left(e^{-48\tilde{\mathscr{E}}_{p2}}-1\right)\,, \tag{36}\] \[= \mu_{4\,,\,0}\cdot e^{-48\tilde{\mathscr{E}}_{p2}}+\mathcal{O}(\tilde{\mathscr{E}}_{p2})\quad. \tag{37}\]
Figure 8: Approximations for the boundary-adjusted slope function appearing in eq. (32).
Figure 7: Net change in spectral potential energy corrected by boundary terms as described in eq. (32).
The second term is at least one order of magnitude smaller than the first, and I can neglect it. I compute the exponential of the potential energy variation, obtaining an expansion for the assumed sea conditions (\(400\varepsilon^{2}\sim 1\)):
\[e^{-48\hat{\mathcal{E}}_{p2}} \approx 1-48\hat{\mathcal{E}}_{p2}\approx 1+48\varepsilon^{2}\big{(}20\sqrt{ \nabla h}-7\big{)}, \tag{19}\] \[\approx (1-336\varepsilon^{2})+960\varepsilon^{2}\sqrt{\nabla h}\,\] \[\sim 1000\,\varepsilon^{2}\sqrt{\nabla h}\lesssim 2\sqrt{\nabla h}\quad.\]
Expanding the exponential \(20\leqslant\mu_{4\,,\,0}/\pi^{2}\varepsilon^{2}\leqslant 40\) (see figure 9), I conclude the proof for the second-order small amplitude wave theory in steepness (\(\varepsilon\leqslant 1/20\)):
\[\mu_{4}\lesssim 80\pi^{2}\varepsilon^{2}\sqrt{\nabla h}\quad. \tag{20}\]
It can be seen in figure 9 that for low steepness one should probably use the coefficient lower bound of \(40\) to compensate for the gap between solid and dashed curves, whereas for the bulk of the range in steepness the upper bound is the best approximation.
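These estimates are easy to check numerically (a sketch assuming the same representative sea state \(\varepsilon=1/20\)): the leading-order excess kurtosis is indeed close to unity, and the linearization of the exponential energy correction tracks its exact value for mild to moderately steep slopes:

```python
import numpy as np

eps = 1.0 / 20.0
mu4_0 = (np.exp(96.0 * np.pi**2 * eps**2) - 1.0) / 9.0
print("leading-order excess kurtosis at eps = 1/20:", mu4_0)      # close to 1, as stated above

for grad_h in (1.0 / 50.0, 1.0 / 10.0, 1.0 / 3.0):
    E_p2 = -eps**2 * (20.0 * np.sqrt(grad_h) - 7.0)               # approximated net potential-energy change
    exact = np.exp(-48.0 * E_p2)
    linear = 1.0 + 48.0 * eps**2 * (20.0 * np.sqrt(grad_h) - 7.0)
    bound = 80.0 * np.pi**2 * eps**2 * np.sqrt(grad_h)            # final slope-dependent bound on mu_4
    print(f"slope {grad_h:5.3f}   exp(-48 E) = {exact:.3f}   linearized = {linear:.3f}   bound = {bound:.3f}")
```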
|
2302.04466
|
A noncommutative weak type maximal inequality for modulated ergodic
averages with general weights
|
In this article, we prove a weak type $(p,p)$ maximal inequality,
$1<p<\infty$, for weighted averages of a positive Dunford-Schwartz operator $T$
acting on a noncommutative $L_p$-space associated to a semifinite von Neumann
algebra $\mathcal{M}$, with weights in $W_q$, where
$\frac{1}{p}+\frac{1}{q}=1$. This result is then utilized to obtain modulated
individual ergodic theorems with $q$-Besicovitch and $q$-Hartman sequences as
weights. Multiparameter versions of these results are also investigated.
|
Morgan O'Brien
|
2023-02-09T07:14:01Z
|
http://arxiv.org/abs/2302.04466v4
|
# Noncommutative modulated individual ergodic theorems for general weights
###### Abstract.
In this article, we prove that the averages of a positive Dunford-Schwartz operator \(T\) acting on a semifinite von Neumann algebra \(\mathcal{M}\), when weighted by a sequence in \(W_{q}\), are of weak type \((p,p)\), where \(1<p,q<\infty\) satisfy \(\frac{1}{p}+\frac{1}{q}=1\). Afterwards, we use this to prove some weighted individual ergodic theorems for semifinite von Neumann algebras with various types of weights coming from analysis and number theory.
Key words and phrases:Semifinite von Neumann algebra, noncommutative weighted individual ergodic theorems, \(q\)-Besicovich sequences, Hartman sequences, arithmetic functions 2020 Mathematics Subject Classification: 47A35, 46L51
## 1. Introduction
Since Ryll-Nardzewski proved that bounded Besicovich sequences are good weights for the individual ergodic theorem for Dunford-Schwartz operators in [20], the study of modulated ergodic theorems has been an active area of research in ergodic theory. For example, such results have been studied by Bellow and Losert for certain bounded Hartman sequences with correlation in [3] (this class contains all bounded Besicovich sequences), \(q\)-Besicovich sequences by Lin, Olsen, and Tempelman in [16], and certain sequences arising from arithmetic functions in number theory by El Abdalaoui, Kulaga-Przymus, Lemanczyk, and de la Rue in [1] and Cuny and Weber in [7], just to name a few.
Generalizing results of this nature from the classical measure space setting to the von Neumann algebra setting is also an active area of research in noncommutative ergodic theory. For example, Chilin, Litvinov, and Skalski showed in [6] that bounded Besicovich sequences are good weights for the noncommutative individual ergodic theorem (see also [4, 5]). Related to these, Litvinov proved in [15] a noncommutative version of the Wiener-Wintner ergodic theorem for ergodic \(\tau\)-preserving \(*\)-homomorphisms of a finite von Neumann algebra with weights being trigonometric polynomials. Wiener-Wintner type ergodic theorems are stronger than standard weighted ergodic theorems since they show that only one projection is needed to obtain b.a.u./a.u. convergence for every weight in a fixed set (see Section 2 for the definition of b.a.u. and a.u. convergence). In [10], Hong and Sun showed that the weights considered by Bellow and Losert satisfy a Wiener-Wintner type result in a multiparameter form for certain \(\tau\)-preserving \(*\)-automorphisms of finite von Neumann algebras. In [19], under a strong assumption on the positive Dunford-Schwartz operator under consideration (frequently satisfied for operators satisfying certain spectral theoretic properties, like being self-adjoint on \(L_{2}\)), the
author was able to prove that all bounded Hartman sequences satisfy a Wiener-Wintner type result.
For the results mentioned above in the noncommutative setting, the corresponding weighted maximal inequalities, which were used to prove convergence, required that the weight sequence be bounded. However, in [19], with an assumption additional to the one used for the other result of that paper mentioned above (and that is often already satisfied as a byproduct of the first assumption), the author showed that the ergodic averages, when weighted by \(W_{1}\) sequences (see Section 2 below for the definition of \(W_{r}\)-spaces), are bilaterally uniformly equicontinuous in measure at zero. From this, it was shown that the set of Hartman sequences in \(W_{1^{+}}\) satisfies a Wiener-Wintner type ergodic theorem for the operator as well. However, this conclusion is restricted to the noncommutative \(L_{p}\)-space the assumption is made on, unlike the version with bounded weights which extends to other spaces. So far as the author is aware, this was the first weighted individual ergodic theorem in the noncommutative setting to allow unbounded weights. However, in the commutative setting, numerous results have already been obtained for unbounded weights; see [16] and [7] for example.
In this article, we remove the boundedness assumption and expand the list of known modulated ergodic theorems in the noncommutative setting by showing that one may allow weights in \(W_{q}\) for every positive Dunford-Schwartz operator on a semifinite von Neumann algebra. In particular, in Proposition 3.1 we will show that, when \(1<p,q<\infty\) satisfy \(\frac{1}{p}+\frac{1}{q}=1\), the ergodic averages of a positive Dunford-Schwartz operator, when weighted by \(W_{q}\) sequences, are of weak type \((p,p)\) with a constant depending on that obtained from Yeadon's weak type \((1,1)\) maximal ergodic inequality and a projection independent of the weight. We also provide a version of this result for uniform equicontinuity when \(2<p<\infty\) (while assuming \(\frac{2}{p}+\frac{1}{q}=1\)).
After this technical result, we improve the results of Litvinov, Chilin, and Skalski to allow \(q\)-Besicovich sequences, which also generalizes the commutative version of the result (which is Theorem 3.5 in [16]). Afterwards, we show that one drawback of the second main result of [19] mentioned above can be mostly fixed to allow certain \(W_{q}\) Hartman sequences on other \(L_{p}\)-spaces (though, unfortunately, not for as many weights as on the space where the assumption holds). Finally, in Section 4 we show that the results can be further extended to include more general weighted averages using the approach of Cuny and Weber in [7], allowing certain number-theoretic weights to be considered.
## 2. Preliminaries and Notation
If \(\mathcal{M}\) is a von Neumann algebra and \(\tau\) is a normal semifinite faithful trace on \(\mathcal{M}\), then we will call the pair \((\mathcal{M},\tau)\) a semifinite von Neumann algebra. We will let \(\mathbb{N}\) denote the set of natural numbers, and \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\).
Let \(\mathbf{1}\) denote the identity operator of \(\mathcal{M}\). Let \(\mathcal{P}(\mathcal{M})\) denote the set of projections in \(\mathcal{M}\). For each \(e\in\mathcal{P}(\mathcal{M})\), write \(e^{\perp}=\mathbf{1}-e\).
Suppose that \(\mathcal{M}\) acts on the Hilbert space \(\mathcal{H}\). Let \(x:\mathcal{D}_{x}\to\mathcal{H}\) be a closed densely defined operator on \(\mathcal{H}\). Then \(x\) is _affiliated_ to \(\mathcal{M}\) if \(yx\subseteq xy\) for every \(y\in\mathcal{M}^{\prime}\) (\(\mathcal{M}^{\prime}\) being the commutant of \(\mathcal{M}\)). If \(x\) is affiliated to \(\mathcal{M}\), then it is called \(\tau\)_-measurable_ if for every \(\epsilon>0\), there exists \(e\in\mathcal{P}(\mathcal{M})\) such that \(\tau(e^{\perp})\leq\epsilon\) and \(xe\in\mathcal{M}\). Let \(L_{0}(\mathcal{M},\tau)\) denote the set of all \(\tau\)-measurable operators affiliated with
\(\mathcal{M}\). For every \(\epsilon,\delta>0\), define the set
\[V(\epsilon,\delta)=\{x\in L_{0}(\mathcal{M},\tau):\|xe\|_{\infty}\leq\delta\ \text{for some}\ e\in\mathcal{P}(\mathcal{M})\ \text{with}\ \tau(e^{\perp})\leq\epsilon\}.\]
The collection \(\{V(\epsilon,\delta)\}_{\epsilon,\delta>0}\) forms a set of neighborhoods of \(0\) in \(L_{0}(\mathcal{M},\tau)\), which will give rise to the _measure topology_ of \(L_{0}(\mathcal{M},\tau)\). Equipped with the closed sum, closed product, and measure topology, \(L_{0}(\mathcal{M},\tau)\) is a complete topological \(*\)-algebra. See [18] for more on this.
If \(x\in L_{0}(\mathcal{M},\tau)\), then \(x\) is _positive_, written \(x\geq 0\), if \(\langle x\xi,\xi\rangle\geq 0\) for every \(\xi\in\mathcal{D}_{x}\), and write \(x\leq y\) if \(y-x\geq 0\), where \(x,y\in L_{0}(\mathcal{M},\tau)\) and \(x,y\geq 0\). If \(E\subseteq L_{0}(\mathcal{M},\tau)\), write \(E^{+}=\{x\in E:x\geq 0\}\). A linear operator \(S:E\to E\) is called _positive_ if \(S(x)\geq 0\) for every \(x\geq 0\).
Given \(x\in L_{0}(\mathcal{M},\tau)^{+}\), by the spectral theorem one may write \(x\) in its spectral decomposition as \(x=\int_{[0,\infty)}\lambda de_{\lambda}\). From this, one may extend the trace from \(\mathcal{M}^{+}\) to \(L_{0}(\mathcal{M},\tau)^{+}\) via
\[\tau(x)=\sup_{n}\tau\left(\int_{[0,n]}\lambda de_{\lambda}\right).\]
If \(x\in L_{0}(\mathcal{M},\tau)\), then there exists \(u\in\mathcal{M}\) and \(|x|\in L_{0}(\mathcal{M},\tau)^{+}\) such that \(x\) has a polar decomposition \(x=u|x|\), where \(|x|^{2}=x^{*}x\). For each \(1\leq p<\infty\), the noncommutative \(L_{p}\)-space associated to \(\mathcal{M}\) is defined by
\[L_{p}(\mathcal{M},\tau)=\{x\in L_{0}(\mathcal{M},\tau):\tau(|x|^{p})<\infty\}.\]
A norm may be defined on \(L_{p}(\mathcal{M},\tau)\) via \(\|x\|_{p}=(\tau(|x|^{p}))^{1/p}\). Write \(L_{\infty}(\mathcal{M},\tau)=\mathcal{M}\), and equip it with the usual operator norm \(\|\cdot\|_{\infty}\). Then \(L_{p}(\mathcal{M},\tau)\) is a Banach space with respect to \(\|\cdot\|_{p}\) for every \(1\leq p\leq\infty\). See [18] for more on this. If \(p=0\) or \(1\leq p\leq\infty\), we may write \(L_{p}=L_{p}(\mathcal{M},\tau)\) when convenient and unambiguous. It is known that \(L_{p}\subset L_{1}+\mathcal{M}\) for every \(1\leq p\leq\infty\).
A linear operator \(T:L_{1}+\mathcal{M}\to L_{1}+\mathcal{M}\) is a _Dunford-Schwartz operator_ if
\[\|T(x)\|_{p}\leq\|x\|_{p}\ \text{for every}\ x\in L_{p}\ \text{and}\ \ 1\leq p\leq\infty.\]
Let \(DS^{+}(\mathcal{M},\tau)\) denote the set of all positive Dunford-Schwartz operators on \((\mathcal{M},\tau)\).
There are a few properties of the ordering on \(\mathcal{M}^{+}\) that are vital to our arguments. If \(f:[0,\infty)\to[0,\infty)\) is an operator convex function (i.e. \(f\) is still convex if we replace \([0,\infty)\) with \(\mathcal{M}^{+}\) using functional calculus), then for every positive operator \(S:\mathcal{M}\to\mathcal{M}\) it follows that \(f(S(x))\leq S(f(x))\) for every \(x\in\mathcal{M}^{+}\). Of importance, \(f(t)=t^{p}\) is operator convex when \(1\leq p\leq 2\); Kadison's inequality is the case \(p=2\). For more on this, see [8, 12]. In a similar vein to this, the function \(f\) is said to be operator monotone if \(x,y\in\mathcal{M}^{+}\) satisfying \(x\leq y\) implies \(f(x)\leq f(y)\). The function \(f(t)=t^{1/p}\) is operator monotone for every \(1\leq p<\infty\).
The operator convexity of the map \(0\leq t\mapsto t^{p}\) cannot in general be extended beyond the range \(1\leq p\leq 2\). For example, the case \(p=3\) fails when considering the matrices
\[x=\left[\begin{array}{cc}1&1\\ 1&1\end{array}\right]\ \text{and}\ \ y=\left[\begin{array}{cc}3&1\\ 1&1\end{array}\right].\]
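A quick numerical check of this failure (a sketch; the midpoint form of convexity suffices to exhibit a counterexample) verifies that \(x\leq y\) while the convexity inequality for \(t^{3}\) fails for these matrices, and that the monotonicity inequality \(x^{3}\leq y^{3}\) fails as well:

```python
import numpy as np

x = np.array([[1.0, 1.0], [1.0, 1.0]])
y = np.array([[3.0, 1.0], [1.0, 1.0]])
cube = lambda m: m @ m @ m

print(np.linalg.eigvalsh(y - x))                          # nonnegative, so x <= y
gap = (cube(x) + cube(y)) / 2.0 - cube((x + y) / 2.0)     # midpoint convexity test for t^3
print(np.linalg.eigvalsh(gap))                            # a negative eigenvalue: t^3 is not operator convex
print(np.linalg.eigvalsh(cube(y) - cube(x)))              # also negative: t^3 is not operator monotone either
```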
Using the notion of operator convexity, the following operator Hölder's inequality was shown in [11, Lemma 2.4]. In that paper, the result was proven for arbitrary measure spaces. However, we will only need it for averages of a finite number of terms, so we state it in that language.
**Lemma 2.1**.: _(Cf. [11, Lemma 2.4]) Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra and assume \(1<p,q<\infty\) satisfy \(\frac{1}{p}+\frac{1}{q}=1\). Fix \(n\in\mathbb{N}\), and let \(\alpha_{0},...,\alpha_{n-1}\in[0,\infty)\) and \(x_{0},...,x_{n-1}\in(L_{1}(\mathcal{M},\tau)+\mathcal{M})^{+}\). Then_
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}x_{k}\leq\left(\frac{1}{n}\sum_{k=0}^{n-1} \alpha_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1}x_{k}^{p}\right)^{ 1/p}.\]
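The following is a numerical spot-check of Lemma 2.1 with random positive semidefinite matrices (a sketch only; the dimension, number of terms, and exponents are arbitrary choices satisfying \(\frac{1}{p}+\frac{1}{q}=1\)): the smallest eigenvalue of the difference between the two sides stays nonnegative up to rounding error.

```python
import numpy as np

rng = np.random.default_rng(0)

def mat_power(m, s):
    """s-th power of a positive semidefinite matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.clip(w, 0.0, None) ** s) @ v.T

n, d, p, q = 6, 4, 3.0, 1.5                                # 1/p + 1/q = 1
alpha = rng.random(n)                                      # nonnegative scalar weights
xs = [a @ a.T for a in rng.standard_normal((n, d, d))]     # random positive semidefinite matrices

lhs = sum(al * x for al, x in zip(alpha, xs)) / n
rhs = np.mean(alpha**q) ** (1.0 / q) * mat_power(sum(mat_power(x, p) for x in xs) / n, 1.0 / p)
print(np.linalg.eigvalsh(rhs - lhs).min())                 # nonnegative (up to rounding), i.e. lhs <= rhs
```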
If \(1\leq q<\infty\) and \(\alpha=(\alpha_{n})_{n=0}^{\infty}\subset\mathbb{C}\), then
\[\|\alpha\|_{W_{q}}:=\left(\limsup_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}| \alpha_{k}|^{q}\right)^{1/q}\]
will denote the \(W_{q}\)-seminorm of \(\alpha\). We will write \(|\alpha|_{W_{q}}\) for the same quantity but with sup replacing \(\limsup\). Observe that \(|\alpha|_{W_{q}}<\infty\) if and only if \(\|\alpha\|_{W_{q}}<\infty\). From this, we define \(W_{q}\) as
\[W_{q}=\{\alpha=(\alpha_{n})_{n=0}^{\infty}\subset\mathbb{C}:\|\alpha\|_{W_{q} }<\infty\}.\]
Let \(W_{\infty}\) denote the space of all bounded sequences in \(\mathbb{C}\), with \(\|\alpha\|_{W_{\infty}}:=\sup_{n}|\alpha_{n}|\) and \(|\cdot|_{W_{\infty}}=\|\cdot\|_{W_{\infty}}\). If \(1\leq r<s\leq\infty\), then \(W_{s}\subset W_{r}\), and \(\|\alpha\|_{W_{r}}\leq\|\alpha\|_{W_{s}}\) and \(|\alpha|_{W_{r}}\leq|\alpha|_{W_{s}}\) for every \(\alpha\). It is known that \(\|\cdot\|_{W_{q}}\) defines a seminorm on \(W_{q}\) under which it is complete.
Given \(\mathcal{W}\subseteq W_{1}\), we will write \(\mathcal{W}^{+}\) for the set of sequences \(\alpha=(\alpha_{k})_{k=0}^{\infty}\in\mathcal{W}\) such that \(\alpha_{k}\geq 0\) for every \(k\in\mathbb{N}_{0}\) and \(|\alpha|_{W_{q}}>0\) (this latter condition is equivalent to there being \(k\) such that \(\alpha_{k}\neq 0\)). Note that any \(\alpha\in W_{q}\) can be written as a linear combination of four elements \(\alpha_{0},...,\alpha_{3}\in W_{q}^{+}\cup\{(0)_{k}\}\) for each \(1\leq q\leq\infty\), and these elements can be chosen so that \(|\alpha_{j}|_{W_{q}}\leq|\alpha|_{W_{q}}\) for each \(j=0,...,3\).
Let \(\mathbb{T}=\{\lambda\in\mathbb{C}:|\lambda|=1\}\) denote the unit circle in \(\mathbb{C}\). A trigonometric polynomial is a function \(P:\mathbb{Z}\to\mathbb{C}\) such that there exists \(\lambda_{1},...,\lambda_{k}\in\mathbb{T}\) and \(r_{1},...,r_{k}\in\mathbb{C}\) such that \(P(n)=\sum_{j=1}^{k}r_{j}\lambda_{j}^{n}\) for every \(n\in\mathbb{Z}\). A trigonometric polynomial \(P\) will also induce a sequence in \(W_{\infty}\) via \(P(\cdot):=(P(n))_{n=0}^{\infty}\); write \(\mathcal{T}\subset W_{\infty}\) for the subspace of all such sequences.
There are a few important subspaces of \(W_{1}\) that we wish to mention in particular. First off, we will write \(W_{1^{+}}\) for the closure of \(\bigcup_{r>1}W_{r}\) with respect to the \(W_{1}\)-seminorm. Next, given \(1\leq q<\infty\), we will write \(B_{q}\) for the closure of the trigonometric polynomials \(\mathcal{T}\) with respect to the \(W_{q}\)-seminorm; the elements of \(B_{q}\) will be called \(q\)_-Besicovich sequences_. In other words, if \(\alpha=(\alpha_{n})_{n=0}^{\infty}\in B_{q}\), then for every \(\epsilon>0\) there exists a trigonometric polynomial \(P\) such that
\[\left(\limsup_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_{k}-P(k)|^{q} \right)^{1/q}<\epsilon.\]
Note that \(\alpha\in B_{q}\) implies that \(\alpha\in W_{1}\) as well. Write \(B_{\infty}=B_{1}\cap W_{\infty}\); then a _bounded Besicovich sequence_ will be an element of \(B_{\infty}\). It is known that \(B_{1}\cap W_{\infty}=B_{r}\cap W_{\infty}\) for every \(1\leq r<\infty\), so the choice of \(r=1\) does not make any difference. Finally, a sequence \(\alpha=(\alpha_{n})_{n=0}^{\infty}\in W_{1}\) is called a _Hartman sequence_ if
\[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}\lambda^{k}\text{ exists for every }\lambda\in\mathbb{T},\]
and let \(H\subset W_{1}\) denote the space of all Hartman sequences. It is known that \(H\) is closed in \(W_{1}\) and that \(B_{q}\subset H\cap W_{q}\) for every \(1\leq q\leq\infty\).
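As a simple worked example of the last definition, a single-term trigonometric polynomial weight \(\alpha_{k}=\lambda_{0}^{k}\) with \(\lambda_{0}\in\mathbb{T}\) is Hartman: for every \(\lambda\in\mathbb{T}\),

\[\frac{1}{n}\sum_{k=0}^{n-1}\lambda_{0}^{k}\lambda^{k}=\begin{cases}1,&\lambda_{0}\lambda=1,\\ \dfrac{1}{n}\cdot\dfrac{1-(\lambda_{0}\lambda)^{n}}{1-\lambda_{0}\lambda}\to 0,&\lambda_{0}\lambda\neq 1,\end{cases}\]

so the limit exists for every \(\lambda\in\mathbb{T}\); by linearity, every sequence in \(\mathcal{T}\) is Hartman.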
Given \(T\in DS^{+}(\mathcal{M},\tau)\), \(x\in L_{p}(\mathcal{M},\tau)\), and \(\alpha=(\alpha_{n})_{n=0}^{\infty}\in W_{1}\), we will write
\[M_{n}^{\alpha}(T)(x):=\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)\]
for the \(n\)-th \(\alpha\)-weighted average of \(x\) with respect to \(T\). We will write \(M_{n}(T)\) to denote the unweighted averages (i.e. the averages with \(\alpha_{n}=1\) for every \(n\in\mathbb{N}_{0}\)).
For notational convenience, if \(\underline{0}=(0)_{k=0}^{\infty}\) we will adopt the convention of
\[\frac{1}{|\underline{0}|_{W_{q}}}M_{n}^{\underline{0}}(T)(x)=\frac{0}{0}=0,\]
where the notation is the same as above. Note that \(|\alpha|_{W_{q}}=0\) is equivalent to \(\alpha_{k}=0\) for every \(k\), so \(\alpha=\underline{0}\) is the only case where this convention would come into consideration, meaning \(M_{n}^{\underline{0}}(T)=0\) anyway. With this convention, we will be able to use operators of the form \(|\alpha|_{W_{q}}^{-1}M_{n}^{\alpha}(T)\) to prove some upper bounds that are trivial when \(\alpha=\underline{0}\) without any hesitation.
The notion of almost everywhere (pointwise) convergence does not carry over directly to the noncommutative setting. However, a noncommutative analogue does exist after a minor modification. As such, we will replace a.e. convergence with the following definitions inspired by Egorov's Theorem.
Let \((x_{n})_{n=0}^{\infty}\subseteq L_{0}\) and \(x\in L_{0}\). Say that \(x_{n}\to x\) bilaterally almost uniformly (b.a.u.) as \(n\to\infty\) if, for every \(\epsilon>0\), there exists \(e\in\mathcal{P}(\mathcal{M})\) such that
\[\tau(e^{\perp})\leq\epsilon\,\text{ and }\,\lim_{n\to\infty}\|e(x_{n}-x)e\|_{ \infty}=0.\]
We will say that \(x_{n}\to x\) almost uniformly (a.u.) if the limit \(\lim_{n\to\infty}\|e(x_{n}-x)e\|_{\infty}\) can be replaced by \(\lim_{n\to\infty}\|(x_{n}-x)e\|_{\infty}\).
**Definition 2.1**.: Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra and \((X,\|\cdot\|)\) be a normed space. A family of additive maps \(A_{n}:X\to L_{0}\), \(n\in\mathbb{N}\), is _bilaterally uniformly equicontinuous in measure (b.u.e.m.) at \(0\) on \((X,\|\cdot\|)\)_ if, for every \(\epsilon,\delta>0\), there exists \(\gamma>0\) such that \(\|x\|<\gamma\) implies the existence of \(e\in\mathcal{P}(\mathcal{M})\) such that
\[\tau(e^{\perp})\leq\epsilon\,\text{ and }\,\sup_{n\in\mathbb{N}}\|eA_{n}(x)e\|_{ \infty}\leq\delta.\]
Similarly, the sequence is _uniformly equicontinuous in measure (u.e.m.) at \(0\) on \((X,\|\cdot\|)\)_ if \(\sup_{n\in\mathbb{N}}\|eA_{n}(x)e\|_{\infty}\) above can be replaced by \(\sup_{n\in\mathbb{N}}\|A_{n}(x)e\|_{\infty}\).
Given \(1\leq p<\infty\) and an index set \(I\) (which may be uncountable), a family \(S=(S_{i})_{i\in I}\) of maps from \(L_{p}\) to \(L_{0}\) is of weak type \((p,p)\) with constant \(C>0\) if, for every \(\lambda>0\) and \(x\in L_{p}(\mathcal{M},\tau)\), there exists \(e\in\mathcal{P}(\mathcal{M})\) such that
\[\tau(e^{\perp})\leq\frac{C^{p}\|x\|_{p}^{p}}{\lambda^{p}}\text{ and }\,\sup_{i\in I }\|eS_{i}(x)e\|_{\infty}\leq\lambda.\]
Note that \(S=(S_{n})_{n=1}^{\infty}\) being weak type \((p,p)\) implies that it is b.u.e.m. at \(0\) on \((L_{p},\|\cdot\|_{p})\) (use \(\lambda=\delta\) and \(\gamma=\epsilon^{1/p}\delta/C\)).
**Proposition 2.1**.: _(Cf. [4, Theorem 2.1]) Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra and \(T\in DS^{+}(\mathcal{M},\tau)\). Then, for any \(\beta\in W_{\infty}\), the weighted averages \((M_{n}^{\beta}(T))_{n=1}^{\infty}\) are b.u.e.m. (u.e.m.) at zero on \((L_{p},\|\cdot\|_{p})\) for every \(1\leq p<\infty\) (respectively, \(2\leq p<\infty\))._
The above result for unweighted averages was either shown directly in, or follows as a consequence of, the results of [21, 13, 14].
**Proposition 2.2**.: _(Cf. [5, Proposition 3.1]) Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra and \((X,\|\cdot\|)\) be a Banach space. Let \(A_{n}:X\to L_{0}(\mathcal{M},\tau)\) be a family of additive maps. If \((A_{n})_{n=1}^{\infty}\) is b.u.e.m. (u.e.m.) at zero on \((X,\|\cdot\|)\), then the set_
\[\{x\in X:(A_{n}(x))_{n=1}^{\infty}\text{ converges b.a.u. (respectively, a.u.)}\}\]
_is a closed subspace of \(X\)._
## 3. Convergence of Standard Weighted Averages
In this section, we will prove that the weighted averages of \(T\in DS^{+}(\mathcal{M},\tau)\) are b.u.e.m. at zero on \((L_{p},\|\cdot\|_{p})\) for weights in \(W_{q}\), where \(1<p,q<\infty\) and \(\frac{1}{p}+\frac{1}{q}=1\). We will then use this result to extend numerous results in the noncommutative setting to allow weights in such classes.
We first prove a minor modification of Lemma 2.1 above that is more suitable for our purposes. Namely, applying the original result directly would result in terms resembling \((\frac{1}{n}\sum_{k=0}^{n-1}S^{k}(x)^{p})^{1/p}\). If \(p>2\), then one can't guarantee that \(S^{k}(x)^{p}\leq S^{k}(x^{p})\), with terms resembling the latter being easier to work with. However, repeatedly applying the argument for the case \(p\in(1,2]\) will fix this problem, as we will now see.
**Lemma 3.1**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(S:\mathcal{M}\to\mathcal{M}\) a positive contraction, and \(x\in\mathcal{M}^{+}\). Let \(1<p,q<\infty\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). If \(\alpha=(\alpha_{k})_{k=0}^{\infty}\) is such that \(\alpha_{k}\geq 0\) for every \(k\in\mathbb{N}_{0}\), then for every \(n\in\mathbb{N}\) we have_
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}S^{k}(x)\leq\left(\frac{1}{n}\sum_{k=0}^ {n-1}\alpha_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1}S^{k}(x^{p}) \right)^{1/p}.\]
Proof.: First we will assume that \(1<p\leq 2\). Observe that, if \(\beta_{k},\gamma_{k}\geq 0\) for every \(k=0,...,n-1\), then Lemma 2.1 implies that
\[\frac{1}{n}\sum_{k=0}^{n-1}\beta_{k}\gamma_{k}S^{k}(x)\leq\left(\frac{1}{n} \sum_{k=0}^{n-1}\beta_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1} \gamma_{k}^{p}S^{k}(x)^{p}\right)^{1/p}.\]
Since \(0\leq t\mapsto t^{p}\) is operator convex for \(1<p\leq 2\), and since \(S\) is a positive contraction of \(\mathcal{M}\), it follows that \(S^{k}(x)^{p}\leq S^{k}(x^{p})\) for every \(k=0,...,n-1\). Since \(\mathcal{M}^{+}\) is a positive cone, and since \(0\leq t\mapsto t^{1/p}\) is operator monotone, it follows that
\[\left(\frac{1}{n}\sum_{k=0}^{n-1}\gamma_{k}^{p}S^{k}(x)^{p}\right)^{1/p}\leq \left(\frac{1}{n}\sum_{k=0}^{n-1}\gamma_{k}^{p}S^{k}(x^{p})\right)^{1/p}.\]
Therefore
\[\frac{1}{n}\sum_{k=0}^{n-1}\beta_{k}\gamma_{k}S^{k}(x)\leq\left(\frac{1}{n} \sum_{k=0}^{n-1}\beta_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1} \gamma_{k}^{p}S^{k}(x^{p})\right)^{1/p}.\]
Since \(\alpha_{k}\geq 0\) for each \(k\), one may use \(\beta_{k}=\alpha_{k}\) and \(\gamma_{k}=1\) to find that
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}S^{k}(x)\leq\left(\frac{1}{n}\sum_{k=0}^ {n-1}\alpha_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1}S^{k}(x^{p}) \right)^{1/p}.\]
Now assume that \(1<p<\infty\). Then there exist \(p_{1},...,p_{m}\in(1,2]\) such that \(p=p_{1}\cdots p_{m}\). For each \(j=1,...,m\), let \(q_{j}\) denote the conjugate exponent of \(p_{j}\), so that \(\frac{1}{p_{j}}+\frac{1}{q_{j}}=1\). Then the conjugate exponent \(q\) of \(p\) satisfies \(q=((q_{1})^{-1}+(q_{2}p_{1})^{-1}+...+(q_{m}p_{m-1}\cdots p_{1})^{-1})^{-1}\). The expression for \(q\) may be rewritten as
\[\frac{q}{q_{1}}+\frac{q}{q_{2}p_{1}}+...+\frac{q}{q_{m}p_{m-1}\cdots p_{2}p_{1 }}=1.\]
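For instance, when \(m=2\) (so that \(p=p_{1}p_{2}\)) this identity follows from the elementary computation

\[\frac{1}{q_{1}}+\frac{1}{q_{2}p_{1}}=\left(1-\frac{1}{p_{1}}\right)+\frac{1}{p_{1}}\left(1-\frac{1}{p_{2}}\right)=1-\frac{1}{p_{1}p_{2}}=\frac{1}{q}\]

after multiplying through by \(q\); the general case follows by iterating the same step.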
Using \(\beta_{k}=\alpha_{k}^{\frac{q}{q_{1}}}\) and \(\gamma_{k}=\alpha_{k}^{\frac{q}{q_{2}p_{1}}+...+\frac{q}{q_{m}p_{m-1}\cdots p_{ 2}p_{1}}}\) in the above argument with \(q_{1}\) and \(p_{1}\), we find that
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}S^{k}(x)=\frac{1}{n}\sum_{k= 0}^{n-1}\alpha_{k}^{\frac{q}{q_{1}}}\alpha_{k}^{\frac{q}{q_{2}p_{1}}+...+ \frac{q}{q_{m}p_{m-1}\cdots p_{2}p_{1}}}S^{k}(x)\] \[\leq \left(\frac{1}{n}\sum_{k=0}^{n-1}\left(\alpha_{k}^{\frac{q}{q_{1} }}\right)^{q_{1}}\right)^{\frac{1}{q_{1}}}\left(\frac{1}{n}\sum_{k=0}^{n-1} \left(\alpha_{k}^{\frac{q}{q_{2}p_{1}}+...+\frac{q}{q_{m}p_{m-1}\cdots p_{2}p_ {1}}}\right)^{p_{1}}S^{k}(x^{p_{1}})\right)^{\frac{1}{p_{1}}}\] \[= \left(\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}^{q}\right)^{\frac{1} {q_{1}}}\left(\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}^{\frac{q}{q_{2}}+...+\frac {q}{q_{m}p_{m-1}\cdots p_{2}}}S^{k}(x^{p_{1}})\right)^{\frac{1}{p_{1}}}\]
Iterating this argument on the second factor of the right side of the inequality using \(q_{2}\) and \(p_{2}\), then \(q_{3}\) and \(p_{3}\), and so on until \(q_{m}\) and \(p_{m}\) (which reduces to the argument for \(p\in(1,2]\)), and using the operator monotonicity of \(0\leq t\mapsto t^{1/p_{j}}\) for each \(j=2,...,m\) after each step, we find that
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}S^{k}(x)\leq\left(\frac{1}{n}\sum_{k=0}^ {n-1}\alpha_{k}^{q}\right)^{\frac{1}{q_{1}}+\frac{1}{q_{2}p_{1}}+...+\frac{1}{ q_{m}p_{m-1}\cdots p_{1}}}\left(\frac{1}{n}\sum_{k=0}^{n-1}S^{k}(x^{p_{1} \cdots p_{m}})\right)^{\frac{1}{p_{1}\cdots p_{m}}}.\]
From the factorizations of \(p\) and \(q\) into \(p_{i}\)'s and \(q_{j}\)'s in our assumption, we find that this expression is equivalent to
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}S^{k}(x)\leq\left(\frac{1}{n}\sum_{k=0}^ {n-1}\alpha_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1}S^{k}(x^{p}) \right)^{1/p},\]
which was to be proven.
The author would like to thank Dr. Leonard Cadilhac for the iteration argument used in this proof. This improved these results to hold for every \(p\in(1,\infty)\) instead of only \(p\in(1,2]\) (with special cases for \(p\in(2,\infty)\)). The factorization of \(p\) can be obtained using the fact that, for any \(1<p<\infty\), there exists \(m\in\mathbb{N}\) such that \(2^{m-1}<p\leq 2^{m}\), so that \(p/2^{m-1}\in(1,2]\).
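The following minimal sketch (our own illustration; the helper name is hypothetical) carries out the factorization described in the previous paragraph: take \(m\) with \(2^{m-1}<p\leq 2^{m}\) and use \(m-1\) factors equal to \(2\) together with \(p/2^{m-1}\in(1,2]\).

```python
import math

def factor_into_one_two(p: float) -> list:
    """Factor p in (1, infinity) as p = p_1 * ... * p_m with every p_j in (1, 2],
    using m - 1 factors of 2 and the remaining factor p / 2**(m - 1)."""
    assert p > 1.0
    m = max(1, math.ceil(math.log2(p)))        # smallest m with p <= 2**m
    factors = [2.0] * (m - 1) + [p / 2 ** (m - 1)]
    assert all(1.0 < f <= 2.0 for f in factors)
    return factors

print(factor_into_one_two(3.0))    # [2.0, 1.5]
print(factor_into_one_two(7.3))    # [2.0, 2.0, 1.825]
```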
**Proposition 3.1**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), and \(1<p,q<\infty\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Then the family of \(W_{q}\)-weighted averages \(\left(|\alpha|_{W_{q}}^{-1}M_{n}^{\alpha}(T)\right)_{(n,\alpha)\in\mathbb{N} \times W_{q}}\) is of weak type \((p,p)\) with constant \(4^{2+\frac{1}{p}}\)._
Proof.: Since \(L_{1}\cap\mathcal{M}\) is dense in the measure topology of \(L_{0}(\mathcal{M},\tau)\) and in the norm topology of \(L_{p}(\mathcal{M},\tau)\), without loss of generality we may assume that \(x\in L_{1}\cap\mathcal{M}\). Since each \(x\in L_{1}\cap\mathcal{M}\) can be written as a linear combination \(x=\sum_{j=0}^{3}i^{j}x_{j}\) of four elements \(x_{0},...,x_{3}\in L_{1}\cap\mathcal{M}^{+}\) whose \(L_{p}\)-norms do not exceed that of \(x\), we may first prove the claim for \(x\in L_{1}\cap\mathcal{M}^{+}\).
Assume \(\lambda>0\), \(x\in L_{1}\cap\mathcal{M}^{+}\), and \(\alpha=(\alpha_{k})_{k=0}^{\infty}\in W_{q}^{+}\). Then, since \(x\in\mathcal{M}^{+}\) and since \(T\in DS^{+}(\mathcal{M},\tau)\), it follows by Lemma 3.1 that, for every \(n\in\mathbb{N}\),
\[\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)\leq\left(\frac{1}{n}\sum_{k=0}^{ n-1}\alpha_{k}^{q}\right)^{1/q}\left(\frac{1}{n}\sum_{k=0}^{n-1}T^{k}(x^{p}) \right)^{1/p}\leq|\alpha|_{W_{q}}(M_{n}(T)(x^{p}))^{1/p}.\]
Since \(x\in L_{1}\cap\mathcal{M}^{+}\), it follows that \(x^{p}\in L_{1}(\mathcal{M},\tau)\). Therefore, by Yeadon's weak type \((1,1)\) maximal ergodic inequality for \((M_{n}(T))_{n=1}^{\infty}\), there exists \(e\in\mathcal{P}(\mathcal{M})\) with
\[\tau(e^{\perp})\leq\frac{16^{p}\|x^{p}\|_{1}}{\lambda^{p}}=\frac{16^{p}\|x\|_ {p}^{p}}{\lambda^{p}}\ \ \text{and}\ \ \sup_{n\in\mathbb{N}}\|eM_{n}(T)(x^{p})e\|_{\infty}\leq\frac{\lambda^{p}}{16^ {p}}.\]
Note that \(M_{n}(T)(x^{p})\in\mathcal{M}^{+}\) for every \(n\) implies that
\[eM_{n}(T)(x^{p})e\leq\frac{\lambda^{p}}{16^{p}}e,\]
and the operator monotonicity of \(0\leq t\mapsto t^{1/p}\) implies that
\[(eM_{n}(T)(x^{p})e)^{1/p}\leq\frac{\lambda}{16}e.\]
Reapplying norms, it follows that
\[\sup_{n\in\mathbb{N}}\|(eM_{n}(T)(x^{p})e)^{1/p}\|_{\infty}\leq\frac{\lambda}{ 16}.\]
Using Equation (3.18) of [17], it follows that
\[e(M_{n}(T)(x^{p}))^{1/p}e\leq(eM_{n}(T)(x^{p})e)^{1/p}.\]
Therefore, we find that
\[\left\|e\left(\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)\right) e\right\|_{\infty}\leq \left\|e(|\alpha|_{W_{q}}M_{n}(T)(x^{p})^{1/p})e\right\|_{\infty}\] \[\leq |\alpha|_{W_{q}}\left\|(eM_{n}(T)(x^{p})e)^{1/p}\right\|_{\infty}\] \[\leq |\alpha|_{W_{q}}\frac{\lambda}{16}.\]
Dividing both sides by \(|\alpha|_{W_{q}}\) (noting this is nonzero by the assumption \(\alpha\in W_{q}^{+}\)), we obtain
\[\left\|e\left(\frac{M_{n}^{\alpha}(T)}{|\alpha|_{W_{q}}}\right)(x)e\right\|_{ \infty}\leq\frac{\lambda}{16}.\]
Finally, since \(\alpha\in W_{q}^{+}\) and \(n\in\mathbb{N}\) were arbitrary, it follows that
\[\sup_{(n,\alpha)\in\mathbb{N}\times W_{q}^{+}}\left\|e\left(\frac{M_{n}^{ \alpha}(T)}{|\alpha|_{W_{q}}}\right)(x)e\right\|_{\infty}\leq\frac{\lambda}{ 16}.\]
Now, assume \(\alpha\in W_{q}\), and let \(\alpha_{0},...,\alpha_{3}\in W_{q}^{+}\cup\{(0)_{k}\}\) be sequences such that \(\alpha=\sum_{j=0}^{3}i^{j}\alpha_{j}\) and \(|\alpha_{j}|_{W_{q}}\leq|\alpha|_{W_{q}}\) for each \(j=0,...,3\). By the triangle inequality (in the case of \(\alpha=(0)_{k=0}^{\infty}\), remembering our convention that \(\frac{0}{0}=0\)), one finds that
\[\left\|e\left(\frac{M_{n}^{\alpha}(T)}{|\alpha|_{W_{q}}}\right)(x)e\right\|_{ \infty}\leq\sum_{j=0}^{3}\frac{1}{|\alpha|_{W_{q}}}\left\|eM_{n}^{\alpha_{j}} (T)(x)e\right\|_{\infty}\leq\sum_{j=0}^{3}\frac{|\alpha_{j}|_{W_{q}}}{|\alpha| _{W_{q}}}\frac{\lambda}{16}\leq\frac{\lambda}{4}.\]
Assume now that \(x\in L_{1}\cap\mathcal{M}\), and write \(x=\sum_{j=0}^{3}i^{j}x_{j}\), where \(x_{j}\in L_{1}\cap\mathcal{M}^{+}\) and \(\|x_{j}\|_{p}\leq\|x\|_{p}\) for each \(j=0,...,3\). Then there exists \(e_{0},...,e_{3}\in\mathcal{P}(\mathcal{M})\) such that
\[\tau(e_{j}^{\perp})\leq\frac{16^{p}\|x_{j}\|_{p}^{p}}{\lambda^{p}}\ \ \text{and}\ \ \sup_{(n,\alpha)\in\mathbb{N}\times W_{q}}\left\|e_{j}\left(\frac{M_{n}^{ \alpha}(T)}{|\alpha|_{W_{q}}}(x_{j})\right)e_{j}\right\|_{\infty}\leq\frac{ \lambda}{4}.\]
Let \(e=\bigwedge_{j=0}^{3}e_{j}\). Then \(\tau(e^{\perp})\leq\sum_{j=0}^{3}\tau(e_{j}^{\perp})\leq\frac{4\cdot 16^{p}\|x\|_{p}^{p}}{\lambda^{p}}=\frac{(4^{2+\frac{1}{p}})^{p}\|x\|_{p}^{p}}{\lambda^{p}}\) and
\[\sup_{(n,\alpha)\in\mathbb{N}\times W_{q}}\left\|e\left(\frac{M_{n }^{\alpha}(T)}{|\alpha|_{W_{q}}}(x)\right)e\right\|_{\infty}\leq \sum_{j=0}^{3}\sup_{(n,\alpha)\in\mathbb{N}\times W_{q}}\left\|e_{ j}\left(\frac{M_{n}^{\alpha}(T)}{|\alpha|_{W_{q}}}(x_{j})\right)e_{j} \right\|_{\infty}\] \[\leq \sum_{j=0}^{3}\frac{\lambda}{4}=\lambda.\]
Since \(\lambda>0\) and \(x\in L_{1}\cap\mathcal{M}\) were arbitrary, the result follows.
We will now extend the result to something more suitable for proving almost uniform convergence. Unfortunately, this extension is not exactly what one would expect; namely, \(\frac{1}{p}+\frac{1}{q}=1\) must be replaced by \(\frac{2}{p}+\frac{1}{q}=1\), and the result only uses one sequence at a time. The reason for this is that we follow the argument of [4, Proposition 4.1] and use Kadison's inequality and the fact that the b.u.e.m. result holds for \(x^{2}\in L_{p/2}\) (when \(x\in L_{p}^{+}\cap\mathcal{M}\)). Although the convergence is better, the corresponding \(W_{q}\) is smaller than the one previously obtained.
For our applications, considering only a single sequence at a time for the a.u. convergence results won't actually change any conclusions we obtain. Also, due to the fact that a less preferable set of weights will be used, we do not pursue stronger a.u. extensions at this time.
**Proposition 3.2**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), \(2<p<\infty\) and \(1<q<\infty\) satisfying \(\frac{2}{p}+\frac{1}{q}=1\), and \(\alpha\in W_{q}\). Then the weighted averages \((M_{n}^{\alpha}(T))_{n=1}^{\infty}\) are u.e.m. at zero on \((L_{p},\|\cdot\|_{p})\)._
Proof.: As in Proposition 3.1, assume without loss of generality that \(\alpha_{n}\geq 0\) for every \(n\geq 0\), so that each \(M_{n}^{\alpha}(T)\) is a positive map.
Assume \(\epsilon,\delta>0\). By Proposition 3.1, for the fixed sequence \(\alpha\) the averages \((M_{n}^{\alpha}(T))_{n=1}^{\infty}\) are b.u.e.m. at zero on \((L_{p/2},\|\cdot\|_{p/2})\) (a constant multiple of a b.u.e.m. family is again b.u.e.m.); let \(\gamma>0\) be the value corresponding to \(\epsilon\) and \(\delta^{2}\) in that definition. Let \(x\in L_{p}\cap\mathcal{M}^{+}\) be such that \(\|x\|_{p}<\sqrt{\gamma}\). Then \(\|x^{2}\|_{p/2}=\|x\|_{p}^{2}<\gamma\), so there exists \(e\in\mathcal{P}(\mathcal{M})\) with
\[\tau(e^{\perp})\leq\epsilon\ \text{and}\ \sup_{n\in\mathbb{N}}\|eM_{n}^{\alpha}(T)(x^{2})e\|_{\infty}\leq\delta^{2}.\]
Since \(x\in\mathcal{M}^{+}\) and \(M_{n}^{\alpha}(T)\) is a positive map, Kadison's inequality implies that
\[M_{n}^{\alpha}(T)(x)^{2}\leq M_{n}^{\alpha}(T)(x^{2}),\ \ \text{so that}\ \ eM_{n}^{\alpha}(T)(x)^{2}e\leq eM_{n}^{\alpha}(T)(x^{2})e.\]
Hence
\[\|M_{n}^{\alpha}(T)(x)e\|_{\infty}^{2}=\|eM_{n}^{\alpha}(T)(x)^{2}e\|_{\infty} \leq\|eM_{n}^{\alpha}(T)(x^{2})e\|_{\infty}\leq\delta^{2},\]
so that \(\sup_{n}\|M_{n}^{\alpha}(T)(x)e\|_{\infty}\leq\delta.\) Since \(x\in L_{p}\cap\mathcal{M}^{+}\) and \(\epsilon,\delta>0\) were arbitrary, the conclusion follows by [14, Theorem 3.2 and Lemma 4.1].
**Proposition 3.3**.: _(Cf. [5, Proposition 3.1]) Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), \(x\in L_{1}(\mathcal{M},\tau)\cap\mathcal{M}\), and \(\mathcal{A}\subset W_{1}\). Let \(\mathcal{C}\subset W_{1}\) denote the closure of \(\mathcal{A}\) with respect to the \(W_{1}\)-seminorm. If \((M_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converges b.a.u. (a.u.) for every \(\alpha\in\mathcal{A}\), then it converges b.a.u. (respectively, a.u.) for every \(\alpha\in\mathcal{C}\)._
Proof.: Assume \(\alpha=(\alpha_{n})_{n=0}^{\infty}\in\mathcal{C}\) and \(\epsilon>0\). Then there exists \(\beta=(\beta_{n})_{n=0}^{\infty}\in\mathcal{A}\) such that
\[\limsup_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_{k}-\beta_{k}|=\| \alpha-\beta\|_{W_{1}}<\epsilon.\]
Let \(N\in\mathbb{N}_{0}\) be such that \(\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_{k}-\beta_{k}|<\epsilon\) for every \(n\geq N\). Then for such \(n\) we also have that
\[\|M_{n}^{\alpha}(T)(x)-M_{n}^{\beta}(T)(x)\|_{\infty}\leq\frac{1}{n}\sum_{k=0 }^{n-1}|\alpha_{k}-\beta_{k}|\|T^{k}(x)\|_{\infty}\leq\epsilon\|x\|_{\infty}.\]
Since \((M_{n}^{\beta}(T)(x))_{n=1}^{\infty}\) converges b.a.u. (or a.u.) as \(n\to\infty\) by assumption, and since \(\epsilon>0\) was arbitrary, it follows by [6, Lemma 4.3] that \((M_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converges b.a.u. (respectively, a.u.) as well.
Since \(L_{1}\cap\mathcal{M}\) is dense in \(L_{p}(\mathcal{M},\tau)\) for every \(1\leq p<\infty\), we obtain the following.
**Corollary 3.1**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), and \(1<p,q<\infty\). Let \(\mathcal{A}\subset W_{q}\), and let \(\mathcal{C}\subset W_{q}\) be the \(W_{q}\)-seminorm closure of \(\mathcal{A}\). If \(\frac{1}{p}+\frac{1}{q}=1\) (respectively, \(\frac{2}{p}+\frac{1}{q}=1\)) and if \((M_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converges b.a.u. (respectively, a.u.) for every \(\alpha\in\mathcal{A}\) and for every \(x\in L_{p}(\mathcal{M},\tau)\), then it converges b.a.u. (respectively, a.u.) for every \(\alpha\in\mathcal{C}\) and \(x\in L_{p}(\mathcal{M},\tau)\)._
In [5, Theorem 4.7], it was shown that, given a semifinite von Neumann algebra \((M,\tau)\) with a separable predual, the averages \((M_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converge b.a.u. for every \(x\in L_{p}(\mathcal{M},\tau)\), \(T\in DS^{+}(\mathcal{M},\tau)\), and bounded Besicovich sequence \(\alpha\in B_{\infty}\), where \(1\leq p<\infty\). The convergence even occurs a.u. when \(2\leq p<\infty\); notably, it occurs a.u. for every \(x\in L_{1}\cap\mathcal{M}\).
Taking a step back, in order to prove those results, it was first shown that the weighted averages converge a.u. for all trigonometric polynomial weights and all \(x\in L_{1}\cap\mathcal{M}\). Using this in conjunction with the above results yields the following theorem, which generalizes the mentioned noncommutative results to \(q\)-Besicovich sequences, and extends the commutative result of [16, Theorem 3.5] to the noncommutative setting.
**Theorem 3.1**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra with a separable predual, \(T\in DS^{+}(\mathcal{M},\tau)\), \(1<p,q<\infty\), and \(\alpha\in B_{q}\) be a \(q\)-Besicovich sequence. If \(\frac{1}{p}+\frac{1}{q}=1\), then the weighted averages \((M_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converge b.a.u. for every \(x\in L_{p}(\mathcal{M},\tau)\). If instead \(\frac{2}{p}+\frac{1}{q}=1\), the convergence occurs a.u._
Proof.: We know by [6, Proposition 3.1] that \((M_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converges a.u. for every \(x\in L_{1}\cap\mathcal{M}\) and every \(\alpha\in\mathcal{T}\). Since, by definition, the closure of \(\mathcal{T}\) with respect to the \(W_{q}\)-seminorm is equal to \(B_{q}\), the result follows by Corollary 3.1.
We now discuss another result that may be improved by the above. First, we will need a little bit of notation.
Given \(T\in DS^{+}(\mathcal{M},\tau)\) and \(1<r<\infty\), using the Jacobs-de Leeuw-Glicksberg decomposition of the space one may write
\[L_{r}(\mathcal{M},\tau)=\overline{\text{span}(\mathcal{U}_{r}(T))}\oplus \mathcal{V}_{r}(T),\]
where the closure is with respect to the norm of \(L_{r}\) and
\[\mathcal{U}_{r}(T)=\Big{\{}x\in L_{r}(\mathcal{M},\tau):T(x)=\lambda x\text{ for some }\lambda\in\mathbb{T}\Big{\}},\]
\[\mathcal{V}_{r}(T)=\Big{\{}x\in L_{r}(\mathcal{M},\tau):T^{n_{j}}(x)\to 0 \text{ weakly for some }(n_{j})_{j=0}^{\infty}\subseteq\mathbb{N}_{0}\Big{\}}.\]
In [19], assuming that \(T^{n}(x)\to 0\) b.a.u. for every \(x\in\mathcal{V}_{r}(T)\) for some \(1<r<\infty\) and that \(\{T^{n}\}_{n=0}^{\infty}\) is b.u.e.m. at zero on \((L_{r},\|\cdot\|_{r})\), the author obtained the convergence of the averages for all sequences in \(W_{1^{+}}\cap H\). However, for any other \(p\neq r\), one could only guarantee the convergence on \(L_{p}\) when the weights were in \(W_{\infty}\cap H\) using the methods of that paper (unless the same assumptions also held on \(L_{p}\)).
These assumptions were justified in [19] through a large list of examples and related conditions. For example, if the restriction of \(T\in DS^{+}(\mathcal{M},\tau)\) to \(L_{2}\) is self adjoint, or, more generally, if \(T\) is normal as a Hilbert space operator on \(L_{2}(\mathcal{M},\tau)\) with \(\sigma(T^{n}|_{L_{2}})\subset[0,1]\) for some \(n\in\mathbb{N}\), then both assumptions hold for every \(1<r<\infty\). In this case, Theorem 3.2 below doesn't actually add anything new as all sequences in \(W_{1^{+}}\cap H\) can be used.
However, the types of operators considered in Theorems 2.7 and 4.6 of [2] only satisfy the above assumptions on \(L_{2}(\mathcal{M},\tau)\). Due to this, the b.a.u. version of the Wiener-Wintner type ergodic theorem in [19] only holds for weights in \(W_{1^{+}}\cap H\) on the corresponding noncommutative \(L_{2}\)-space. After reformulating Proposition 3.1 into a form more suitable to the methods of [19], in Theorem 3.2 below we will improve this result on \(L_{p}\) and extend the class of weights from \(W_{\infty}\cap H\) to \(W_{q}\cap H\) (with \(\frac{1}{p}+\frac{1}{q}=1\)).
In [19], given a semifinite von Neumann algebra \((\mathcal{M},\tau)\), \(1<p<\infty\), \(1\leq r\leq\infty\), \(\mathcal{W}\subseteq W_{r}\), and \(T\in DS^{+}(\mathcal{M},\tau)\), the family \((M_{n}^{\alpha}(T))_{n=1}^{\infty}\) was called \(\mathcal{W}\)_-b.u.e.m. at zero on \(L_{p}\)_ if there exists an increasing function \(h:[0,\infty)\to[0,\infty)\) such that, given \(\epsilon,\delta>0\), there exists \(\gamma>0\) such that \(\|x\|_{p}<\gamma\) implies the existence of \(e\in\mathcal{P}(\mathcal{M})\) such that
\[\tau(e^{\perp})\leq\epsilon\ \text{ and }\ \sup_{n}\|eM_{n}^{\alpha}(T)(x)e\|_{ \infty}\leq h(|\alpha|_{W_{r}})\delta.\]
In that article, this notion was used to prove a noncommutative Banach principle that allowed unbounded weights. For example, if the iterates \(\{T^{n}\}_{n=0}^{\infty}\) are b.u.e.m. at zero on \((L_{p},\|\cdot\|_{p})\), then this condition is satisfied with \(r=1\) and \(\mathcal{W}=W_{1}\). We will show that this condition is satisfied in general for \(r=q\) and \(\mathcal{W}=W_{q}\), where \(\frac{1}{p}+\frac{1}{q}=1\).
**Lemma 3.2**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), and \(1<p,q<\infty\) satisfy \(\frac{1}{p}+\frac{1}{q}=1\). Then the family of averages \((M_{n}^{\alpha}(T))_{n,\alpha}\) are \(W_{q}\)-b.u.e.m. at zero on \((L_{p},\|\cdot\|_{p})\)._
Proof.: Given \(\epsilon,\delta>0\), using \(\lambda=\delta\) in Proposition 3.1 and assuming \(\|x\|_{p}<\gamma\), where \(\gamma=(\epsilon\delta^{p}/4^{1+2p})^{1/p}\), it follows that there exists \(e\in\mathcal{P}(\mathcal{M})\) such that
\[\tau(e^{\perp})\leq\epsilon\text{ and }\sup_{(n,\alpha)\in\mathbb{N}\times W_{q}} \frac{1}{|\alpha|_{W_{q}}}\left\|eM_{n}^{\alpha}(T)(x)e\right\|_{\infty} \leq\delta.\]
Fix \(\alpha\in W_{q}\). Then, multiplying the right inequality by \(|\alpha|_{W_{q}}\) shows that
\[\sup_{n}\|eM_{n}^{\alpha}(T)(x)e\|_{\infty}\leq|\alpha|_{W_{q}}\delta.\]
Since \(\alpha\in W_{q}\) was arbitrary, the result follows by using \(h(s)=s\).
Given a semifinite von Neumann algebra \((\mathcal{M},\tau)\), \(1\leq p<\infty\), \(T\in DS^{+}(\mathcal{M},\tau)\), and \(\mathcal{W}\subseteq W_{1}\), we will write
\[bWW_{p}(\mathcal{W}):=\Big{\{}x\in L_{p}(\mathcal{M},\tau):\forall\epsilon>0\ \exists e\in\mathcal{P}(\mathcal{M})\text{ such that }\tau(e^{\perp})\leq\epsilon\text{ and }\]
\[(eM_{n}^{\alpha}(T)(x)e)_{n=1}^{\infty}\text{ converges in }\mathcal{M}\text{ for each }\alpha\in\mathcal{W}\Big{\}}.\]
This is the set of operators which satisfy a Wiener-Wintner type ergodic theorem on \(L_{p}\) for weights in \(\mathcal{W}\). Notably, when \(x\in bWW_{p}(\mathcal{W})\), the averages \(M_{n}^{\alpha}(T)(x)\) converge b.a.u. for every \(\alpha\in\mathcal{W}\) with projections dependent only on \(\epsilon>0\) and the set \(\mathcal{W}\) (and not on any particular sequence in \(\mathcal{W}\)).
**Theorem 3.2**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(1<r<\infty\), and \(T\in DS^{+}(\mathcal{M},\tau)\) be such that \((T^{n})_{n=0}^{\infty}\) is b.u.e.m. at zero on \((L_{r},\|\cdot\|_{r})\) and \(T^{n}(x)\to 0\) b.a.u. for every \(x\in\mathcal{V}_{r}(T)\). Then \(L_{r}(\mathcal{M},\tau)=bWW_{r}(W_{1^{+}}\cap H)\). Furthermore, if \(1<p,q<\infty\) satisfies \(\frac{1}{p}+\frac{1}{q}=1\), then \(L_{p}(\mathcal{M},\tau)=bWW_{p}(W_{q}\cap H)\)._
Proof.: The claim regarding \(L_{r}(\mathcal{M},\tau)=bWW_{r}(W_{1^{+}}\cap H)\) is exactly [19, Theorem 4.3]. From this we deduce that \(L_{1}\cap\mathcal{M}\subseteq bWW_{r}(W_{1^{+}}\cap H)\), and looking at the definition of \(bWW_{r}(W_{1^{+}}\cap H)\) one sees that \(L_{1}\cap\mathcal{M}\subseteq bWW_{p}(W_{q}\cap H)\).
For \(1<p,q<\infty\) with \(\frac{1}{p}+\frac{1}{q}=1\), Lemma 3.2 above shows that \((M_{n}^{\alpha}(T))_{n,\alpha}\) is \(W_{q}\cap H\)-b.u.e.m. at zero on \((L_{p},\|\cdot\|_{p})\). As such, Theorem 3.4 of [19] says that \(bWW_{p}(W_{q}\cap H)\) is closed in \(L_{p}(\mathcal{M},\tau)\). Since \(L_{1}\cap\mathcal{M}\) is both dense in \(L_{p}\) and contained in \(bWW_{p}(W_{q}\cap H)\), it follows that \(bWW_{p}(W_{q}\cap H)=L_{p}(\mathcal{M},\tau)\).
**Remark 3.1**.: It was proven by Litvinov in [15, Theorem 5.2] that, if \(\mathcal{M}\) is a von Neumann algebra with a normal faithful tracial state \(\tau\), \(T:L_{1}\to L_{1}\) is a normal positive ergodic homomorphism (where \(T\) ergodic means \(T(x)=x\) with \(x\in L_{2}\) implies \(x=c\mathbf{1}\) for some \(c\in\mathbb{C}\)) such that \(\tau\circ T=\tau\) and \(\|T(x)\|_{\infty}\leq\|x\|_{\infty}\) for every \(x\in\mathcal{M}\), then \(L_{1}(\mathcal{M},\tau)=bWW_{1}(\mathcal{T})\).
Similar to Theorem 3.2, using Lemma 3.2 and an argument similar to Corollary 3.1, we can generalize the Wiener-Wintner ergodic theorem of [15] to allow \(q\)-Besicovich sequences and state that \(L_{p}(\mathcal{M},\tau)=bWW_{p}(B_{q})\) when \(1<p,q<\infty\) and \(\frac{1}{p}+\frac{1}{q}=1\).
## 4. Convergence of Modified Weighted Averages
In this section, we will study more general types of weights using similar techniques as above by considering the asymptotics of the partial sums of the sequence. For example, if one wanted to consider averages of \(T^{k}(x)\) when weighted by \(k\), dividing the weighted sums by \(n\) might not lead to the most natural interpretation of the results. Indeed, since \(\sum_{k=0}^{n-1}k=\frac{n(n-1)}{2}\) for every \(n\), it may make more sense to consider averages of the form
\[\frac{2}{n(n-1)}\sum_{k=0}^{n-1}kT^{k}(x).\]
This modification allows for nicer behavior of the averages. For example, if one knew that \(T(\mathbf{1})=\mathbf{1}\), then each modified weighted average would also be \(\mathbf{1}\), while the standard averages (i.e. just dividing by \(n\)) would be unbounded. In other words, by taking properties of the asymptotics of the modulating sequence into consideration, we can allow even more general weights than in the previous section.
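Concretely, if \(T(\mathbf{1})=\mathbf{1}\) and \(\alpha_{k}=k\), then for every \(n\geq 2\)

\[\frac{1}{n}\sum_{k=0}^{n-1}kT^{k}(\mathbf{1})=\frac{n-1}{2}\,\mathbf{1},\qquad\text{while}\qquad\frac{1}{A_{n}}\sum_{k=0}^{n-1}kT^{k}(\mathbf{1})=\mathbf{1},\]

so the standard averages are unbounded in norm, whereas the modified averages are constant.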
We will not consider the weights \((k)_{k=0}^{\infty}\) in this section. Instead, the weights we consider here are inspired by those arising from number theory. In the commutative setting, this was first done by El Abdalaoui, Kulaga-Przymus, Lemanczyk, and de la Rue in [1], and later by Cuny and Weber in [7].
To set up the notation for this, consider a sequence \(\alpha=(\alpha_{n})_{n=0}^{\infty}\subset[0,\infty)\), and let \(A_{n}=\sum_{k=0}^{n-1}\alpha_{k}\) (which we will assume is always nonzero without loss of generality). With this, given \(T\in DS^{+}(\mathcal{M},\tau)\), we will consider the modified weighted averages \(A_{n}^{\alpha}(T)\) determined by
\[A_{n}^{\alpha}(T):=\frac{1}{A_{n}}\sum_{k=0}^{n-1}\alpha_{k}T^{k}\text{ for every }n\in\mathbb{N}.\]
The results in previous sections always assumed \(A_{n}=n\) for every \(n\) and ignored any other properties of the sequence \(\alpha\). As previously mentioned, these modified averages allow for different types of sequences \((\alpha_{n})_{n}\) as weights at the cost of \(A_{n}\) needing to be different to account for this. The two are actually equivalent when \(\lim_{n\to\infty}\frac{A_{n}}{n}\) exists, is finite, and is not \(0\).
For the most part, the proofs of the results below actually follow the same reasoning as the corresponding results of Cuny and Weber in [7]. The main change occurs in Proposition 4.1, while minor changes are also needed to prove Proposition 4.2. A similar adjustment proves Theorem 4.1.
We note that, unlike [7], we work with arbitrary \(T\in DS^{+}(\mathcal{M},\tau)\) when possible (instead of only those induced by measure-preserving transformations, which are the \(*\)-homomorphisms in the noncommutative setting) and we do not assume finiteness of \(\tau\). We also mention that we only generalize _some_ of the results in [7] to the noncommutative setting, mainly those in Section 2 of that paper; this is because they follow the reasoning of Proposition 3.1 above together with arguments similar to the commutative ones. This does not mean that the rest of that paper follows similarly. Finally, for our purposes proving convergence for one sequence at a time will suffice, so we will prove versions of the results that keep the arguments simpler.
**Proposition 4.1**.: _(Cf. [7, Lemma 2.1]) Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), and \(1<p,q<\infty\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\alpha=(\alpha_{k})_{k=0}^{\infty}\subset[0,\infty)\), and assume that \(A_{n}:=\sum_{k=0}^{n-1}\alpha_{k}\neq 0\) and that there exists \(C>0\) such that \(\sum_{k=0}^{n-1}\alpha_{k}^{q}\leq Cn\widetilde{A}_{n}^{q}\) for every \(n\in\mathbb{N}\), where \(\widetilde{A}_{n}=\frac{A_{n}}{n}\). Then the weighted averages_
\[A_{n}^{\alpha}(T):=\frac{1}{A_{n}}\sum_{k=0}^{n-1}\alpha_{k}T^{k}\]
_are b.u.e.m. at zero on \((L_{p},\|\cdot\|_{p})\)._
Proof.: The proof follows almost identically to that of Lemma 3.1 and Proposition 3.1. The main difference is that, in the proof of Lemma 3.1, after the inequality
\[\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)\leq\left(\sum_{k=0}^{n-1}\alpha_{k}^{q} \right)^{1/q}\left(\sum_{k=0}^{n-1}T^{k}(x^{p})\right)^{1/p}\]
(where the \(\frac{1}{n}\) terms were removed by multiplying both sides by \(n=n^{1/q}n^{1/p}\)) we can use the assumption \(\sum_{k=0}^{n-1}\alpha_{k}^{q}\leq Cn\widetilde{A}_{n}^{q}\) to find that
\[\left(\sum_{k=0}^{n-1}\alpha_{k}^{q}\right)^{1/q}\left(\sum_{k=0}^{n-1}T^{k}(x^{p})\right)^{1/p}\leq C^{1/q}n^{1/q}\widetilde{A}_{n}(nM_{n}(T)(x^{p}))^{1/p}.\]
Since \(n^{1/q}=\frac{n}{n^{1/p}}\) and \(n\widetilde{A}_{n}=A_{n}\), we see that
\[A_{n}^{\alpha}(T)(x)=\frac{1}{A_{n}}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)\leq C^{1/q}(M_{n}(T)(x^{p}))^{1/p}.\]
With this change, the conclusion follows by exactly the same reasoning.
Recall the little-o notation in asymptotics: if \(f,g:\mathbb{N}_{0}\to[0,\infty)\), then we write \(f(n)=o(g(n))\) if \(\lim_{n\to\infty}\frac{f(n)}{g(n)}=0\). The commutative version of the following is Corollary 2.4 in [7].
**Proposition 4.2**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), and \(1<p,q<\infty\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\alpha=(\alpha_{k})_{k=0}^{\infty}\subset[0,\infty)\), and assume that \(A_{n}:=\sum_{k=0}^{n-1}\alpha_{k}\neq 0\) and \(\widetilde{A_{n}}=\frac{A_{n}}{n}\) for every \(n\in\mathbb{N}\), and that \(\sum_{k=0}^{n-1}|\alpha_{k}-\widetilde{A_{n}}|^{q}=o(n\widetilde{A}_{n}^{q})\). Then the weighted averages_
\[A_{n}^{\alpha}(T)(x)=\frac{1}{A_{n}}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)\]
_converge b.a.u. for every \(x\in L_{p}(\mathcal{M},\tau)\)._
Proof.: It was shown in [7] that \(\alpha\) satisfies the assumption mentioned in Proposition 4.1. As such, by the noncommutative Banach principle, we can consider a dense subspace of \(L_{p}\) to prove this result; in particular, we will assume that \(x\in L_{p}\cap\mathcal{M}\).
By the triangle inequality and the fact that \(T\) is a contraction on \(\mathcal{M}\), we see that
\[\left\|\frac{1}{A_{n}}\sum_{k=0}^{n-1}(\alpha_{k}-\widetilde{A_{n}})T^{k}(x) \right\|_{\infty}\leq\frac{1}{A_{n}}\sum_{k=0}^{n-1}|\alpha_{k}-\widetilde{A_{ n}}|\|T^{k}(x)\|_{\infty}\leq\frac{\|x\|_{\infty}}{A_{n}}\sum_{k=0}^{n-1}| \alpha_{k}-\widetilde{A_{n}}|.\]
By Holder's inequality on \(\mathbb{R}^{n}\) and the little-o assumption, we find that
\[\frac{\|x\|_{\infty}}{A_{n}}\sum_{k=0}^{n-1}|\alpha_{k}-\widetilde{A_{n}}| \leq\frac{\|x\|_{\infty}n^{1-1/q}}{A_{n}}\left(\sum_{k=0}^{n-1}|\alpha_{k}- \widetilde{A_{n}}|^{q}\right)^{1/q}=o(1).\]
Thus \(\frac{1}{A_{n}}\sum_{k=0}^{n-1}(\alpha_{k}-\widetilde{A_{n}})T^{k}(x)\to 0\) uniformly as \(n\to\infty\).
By the noncommutative individual ergodic theorem for positive Dunford-Schwartz operators, there exists \(\widehat{x}\in L_{p}\) such that \(M_{n}(T)(x)\to\widehat{x}\) b.a.u. as \(n\to\infty\). Since
\[\frac{1}{A_{n}}\sum_{k=0}^{n-1}\widetilde{A_{n}}T^{k}(x)=\frac{\widetilde{A_{ n}}}{A_{n}}\sum_{k=0}^{n-1}T^{k}(x)=\frac{1}{n}\sum_{k=0}^{n-1}T^{k}(x)=M_{n}(T)(x),\]
we find that \(\frac{1}{A_{n}}\sum_{k=0}^{n-1}\widetilde{A_{n}}T^{k}(x)\to\widehat{x}\) b.a.u. as \(n\to\infty\).
Consequently, for any \(e\in\mathcal{P}(\mathcal{M})\) we find that
\[\left\|e\left(\frac{1}{A_{n}}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x)-\widehat{x}\right)e\right\|_{\infty}\leq\left\|\frac{1}{A_{n}}\sum_{k=0}^{n-1}(\alpha_{k}-\widetilde{A_{n}})T^{k}(x)\right\|_{\infty}+\|e(M_{n}(T)(x)-\widehat{x})e\|_{\infty}\,.\]
Thus, for any \(\epsilon>0\), let \(e\in\mathcal{P}(\mathcal{M})\) be a projection such that \(\tau(e^{\perp})\leq\epsilon\) and \(\|e(M_{n}(T)(x)-\widehat{x})e\|_{\infty}\to 0\). Consequently, the weighted averages \((A_{n}^{\alpha}(T)(x))_{n=1}^{\infty}\) converge b.a.u. to \(\widehat{x}\) for every \(x\in L_{p}\cap\mathcal{M}\).
A function \(g:\mathbb{N}\to\mathbb{C}\) is called additive if \(g(mn)=g(m)+g(n)\) whenever \(m,n\in\mathbb{N}\) are coprime.
**Theorem 4.1**.: _Let \((\mathcal{M},\tau)\) be a semifinite von Neumann algebra, \(T\in DS^{+}(\mathcal{M},\tau)\), and \(1<p,q<\infty\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Let \(g:\mathbb{N}\to\mathbb{N}\) be an additive function such that \(g(r)=1\) for every prime \(r\). Assume further that there exists \(\beta>0\) such that, for every \(m\in\mathbb{N}\) and prime \(r\),_
\[g(r^{m})\leq\beta m\ln(r).\]
_If \(\alpha_{0}=0\), \(\alpha_{n}=g(n)\) for every \(n\in\mathbb{N}\), and \(A_{n}=\sum_{k=0}^{n-1}\alpha_{k}\), then the weighted averages_
\[A_{n}^{\alpha}(T)(x)=\frac{1}{A_{n}}\sum_{k=0}^{n-1}\alpha_{k}T^{k}(x),\ n\geq 3,\]
_converge b.a.u. for every \(x\in L_{p}(\mathcal{M},\tau)\)._
As previously mentioned, the proof follows the exact same reasoning as Theorem 2.6 of [7] so long as one makes similar modifications to what was done in Proposition 4.2 above (namely, replacing a.e. pointwise arguments, like \(|f\circ\tau^{n}|\), with operator norms, like \(\|T^{k}(x)\|_{\infty}\), etc.). As such, we omit the proof of this.
As in [7], the assumptions in the above result shows that properties of the weighting sequences are all that is needed to get convergence of the corresponding averages. As the preliminary results have now been generalized to the noncommutative setting, the same sequences are good here as well. Some functions of note satisfying the above assumptions are given by, for every \(n\in\mathbb{N}\),
\[\omega(n):=\sum_{\begin{subarray}{c}p:\text{prime}\\ p|n\end{subarray}}1\ \text{ and }\ \Omega(n):=\sum_{\begin{subarray}{c}p:\text{prime}\\ p|n\end{subarray}}\max\{k:p^{k}|n\}.\]
In other words, \(\omega\) counts the number of distinct prime factors of \(n\), while \(\Omega\) counts the total number of prime factors of \(n\) with multiplicity. In addition to these being good weights, for the same reasoning as in [7], one can actually replace \(\omega(n)\) (or \(\Omega(n)\)) with \(\omega(n)^{m}\) (respectively, \(\Omega(n)^{m}\)) for each \(m\in\mathbb{N}\) and still obtain a similar conclusion. In fact, the sequence generated by any \(g\) in Theorem 4.1 may be replaced by \(g(n)^{m}\) for some \(m\in\mathbb{N}\).
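For concreteness, a small sketch (our own illustration; the function names are hypothetical) computing \(\omega(n)\) and \(\Omega(n)\) by trial division:

```python
def prime_multiplicities(n: int) -> dict:
    """Return {prime: multiplicity} for an integer n >= 2 via trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def omega(n: int) -> int:
    """Number of distinct prime factors of n."""
    return len(prime_multiplicities(n))

def big_omega(n: int) -> int:
    """Number of prime factors of n counted with multiplicity."""
    return sum(prime_multiplicities(n).values())

# 360 = 2^3 * 3^2 * 5, so omega(360) = 3 and Omega(360) = 6.
print(omega(360), big_omega(360))   # 3 6
```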
**Acknowledgement.** The author would like to express his gratitude to Dr. Semyon Litvinov for his feedback in earlier versions of this paper. The author would also like to thank Dr. Leonard Cadilhac for his input and suggestions that greatly improved the results of the paper.
|
2308.11756
|
Quasinormal modes and shadow in Einstein Maxwell power-Yang-Mills black hole
|
In the present paper, we investigate the quasinormal modes of an Einstein-Maxwell power-Yang-Mills black hole in four dimensions, considering a specific value of the power parameter $p = 1/2$. This particular case represents a black hole with both Abelian and Non-Abelian charges and is asymptotically non-flat. We begin by deriving the effective potential for both a neutral massless particle and a neutral Dirac particle using the aforementioned black hole solution. Subsequently, employing the sixth-order WKB approximation method, we calculate the (scalar) quasinormal modes. Our numerical analysis indicates that these modes are stable within the considered parameter range. This result is also confirmed using the eikonal approximation. Furthermore, we calculate the shadow radius for this class of BH and derive constraints on the electric and Yang-Mills charges ($Q, Q_{\rm YM}$) by using imaging observational data for Sgr A${^\star}$, provided by the Event Horizon Telescope Collaboration. We observe that as the electric charge $Q$ increases, the allowed range shifts towards negative values of $Q_{\rm YM}$. For instance, for the maximum value $Q\approx 1.1$ obtained, the allowed range becomes $-0.171 \lesssim Q_{\rm YM} \lesssim -0.087$ consistent with KECK and VLTI data, while still retaining a non-vanishing horizon.
|
Angel Rincon, Gabriel Gómez
|
2023-08-22T19:57:16Z
|
http://arxiv.org/abs/2308.11756v2
|
# Quasinormal modes and shadow in Einstein Maxwell power-Yang-Mills black hole
###### Abstract
In the present paper, we investigate the quasinormal modes of an Einstein-Maxwell power-Yang-Mills black hole in four dimensions, considering a specific value of the power parameter \(p=1/2\). This particular case represents a black hole with both Abelian and Non-Abelian charges and is asymptotically non-flat. We begin by deriving the effective potential for a neutral massless particle using the aforementioned black hole solution. Subsequently, employing the sixth-order WKB approximation method, we calculate the (scalar) quasinormal modes. Our numerical analysis indicates that these modes are stable within the considered parameter range. This result is also confirmed using the eikonal approximation. Furthermore, we calculate the shadow radius for this class of black hole and derive constraints on the electric and Yang-Mills charges \((Q,Q_{\rm YM})\) by using imaging observational data for Sgr A\({}^{*}\), provided by the Event Horizon Telescope Collaboration. We observe that as the electric charge \(Q\) increases, the allowed range shifts towards negative values of \(Q_{\rm YM}\). For instance, for the maximum value \(Q\approx 1.1\) obtained, the allowed range becomes \(-0.171\lesssim Q_{\rm YM}\lesssim-0.087\) consistent with KECK and VLTI data, while still retaining a non-vanishing horizon.
General relativity; Black holes; Quasinormal modes; Perturbations; shadow size
## I Introduction
Black hole (BH) solutions play an essential role in classical and alternative theories of gravity, which seek to describe the properties of spacetime [1; 2]. Any observational feature of BHs can serve as a compelling tool for testing gravity theories, particularly at the event horizon scale, and thus contribute to establishing the true nature of gravity. Interestingly, we have today convincing evidence for the existence of BHs provided by the Event Horizon Telescope (EHT) and the Very Large Telescope global networks [3; 4; 5; 6], the GRAVITY collaboration [7], and the LIGO-Virgo collaboration [8; 9], among other observational evidence. What can we conclude about the nature of gravity from these results? First, the predictions of General Relativity (GR) are consistent with all observational data within the current uncertainties [10]. Second, theories beyond Einstein's theory can also explain the observed phenomena [11; 12; 13; 14]. Hence, these results are encouraging for theories beyond GR, but no conclusive evidence has been found thus far to decisively support one theory over another.
In order to get further insights into the nature of gravity, it is convenient to investigate the so-called _quasinormal modes_. Roughly speaking, quasinormal modes (QNM) are energy dissipation modes of a perturbed black hole. These modes characterize perturbations within a field that gradually diminish over time [15; 16]. In simpler terms, quasinormal modes of a BH correspond to perturbed solutions of the field equations with complex frequencies, and their characteristics depend on the specific theoretical model under consideration. Consequently, a phenomenological strategy involves examining the properties and stability of quasinormal modes associated with a particular black hole solution. Notice that, from a theoretical perspective, perturbations within the spacetime of a black hole can be examined through two distinct approaches. The first approach involves introducing additional fields into the black hole spacetime. Alternatively, the second method involves perturbing the underlying metric of the black hole (referred to as the background). Furthermore, in the linear approximation, the first perturbation scenario can be simplified to the propagation of fields in the background of a black hole. In particular, for a given scalar field \(\Phi\) with a given mass \(\mu\) in the background of the metric \(g_{\mu\nu}\), the master differential equation is the Klein-Gordon equation [16]. Solving such a differential equation analytically is generally quite challenging because the effective potential must be in a simplified form to obtain the corresponding solution. A few examples where the QNMs can be obtained analytically can be consulted in [17; 18; 19; 20; 21]. Therefore, in order to make progress, numerical/semi-analytical approaches should be considered. Although in the next section we will briefly mention some approaches to obtain the QNMs, we can highlight a few conventional methods for finding the corresponding solutions. For instance: i) the WKB semi-analytical approach, ii) the Mashhoon method, iii) the Chandrasekhar-Detweiler method, and iv) the Shooting method, among
others. So, even though the literature is vast, we can mention some recent works in which the QNMs, in GR and beyond, are calculated. For instance, the reader can consult [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] and references therein.
Another intriguing feature of BHs that has gained attention, particularly following the release of imaging observations of the Sgr A\({}^{*}\) and M87\({}^{*}\) BHs, is their shadow. The shadow refers to the dark region observed in the vicinity of a black hole, surrounded by circular orbits of photons. This distinctive feature has been utilized as a potential discriminator between different BH solutions [14].
In the framework of GR, BHs are characterized solely by three physical parameters: mass \(M\), angular momentum \(J\), and (electric) charge \(Q\). This statement is known as the "no-hair theorem" [34]. The most general case of BH solutions corresponds to the Kerr-Newman solution. However, BHs are commonly assumed to be uncharged due, for instance, to the charge-neutralization process of astrophysical plasma. Nevertheless, recent observations provided by the EHT collaboration do not rule out the possibility that BHs may carry some degree of charge [6; 35], even in the context of more general theories of gravity [36]. Building a viable BH solution beyond GR is a non-trivial task since the theory itself must prevent any pathological behavior. This includes preserving the hyperbolic character of the field equations, avoiding the propagation of unwanted perturbation modes, and addressing other issues that may arise at the theoretical level. This scientific program has been a highly active topic of research, driven by the possibility of detecting deviations from GR, which would provide valuable insights into the nature of gravity.
We do not pretend to discuss here all classes of BH solutions. Instead, the main subject of this paper is on BH solutions involving non-Abelian gauge fields. The initial motivation for considering the coupling of the Yang-Mills theory to Einstein's gravity stems from the fact that they together provide a suitable framework for the existence of stationary, localized, and non-singular solutions known as solitons [37] (for soliton solutions in a more general massive Yang-Mills theory, see, for example, Ref. [38]). This is not achievable in separate scenarios. Subsequently, this idea was extended to construct BH solutions, resulting in BH with a Yang-Mills hair [39; 40]. Additional solutions involving the Yang-Mills theory can be found in [41; 42; 43; 44; 45; 46; 47; 48] and references therein.
Following the same principle employed in the study of nonlinear (Maxwell) electrodynamics [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66], the Einstein-Yang-Mills BH solutions were further extended to include power Yang-Mills solutions characterized by a non-Abelian topological charge [42] (see also [67; 68; 69] for further investigations). From a theoretical perspective, it is possible to couple the standard Maxwell theory and the power-law Yang-Mills theory to Einstein's gravity, thereby allowing for the existence of a more general class of BH with appealing features that can be potentially contrasted with current observational facilities. This proposal leads to BHs with both Abelian and non-Abelian charges or, equivalently, a modified version of the well-known Reissner-Nordstrom BH solution. In a preliminary paper, we have investigated the impact of the non-Abelian charge on the BH properties and established certain relationships between the charges. In this paper, we further investigate the quasinormal modes and shadow size of this type of BH, motivated by current observations, as discussed earlier. Specifically, the imaging observations provided by the EHT allow us to establish a more stringent constraint on the non-Abelian charge. Concretely, our findings indicate that a slightly larger electric charge is allowed compared to the standard case for a Yang-Mills charge \(Q_{\rm YM}\sim\mathcal{O}(-0.1)\), allowing the BH to maintain an event horizon. This paper is structured as follows. In Section 2, we review the main elements of the model and discuss the key properties of the resulting BH. Within this section, we analyze the behavior of massless scalar fields propagating in the spherically symmetric gravitational background. We employ the WKB method and investigate the eikonal limit to gain further insights into the dynamics. Additionally, we calculate the size of the BH shadow and set bounds on both charges from imaging observations. We adopt the metric signature \(-,+,+,+\), and work in geometrical units where the speed of light in vacuum and Newton's constant are set to unity, \(G=1=c\).
## II Background and scalar perturbations
### Charged black hole solutions in EMPYM theory
In this section, we will outline the key ingredients of the theory that leads to a novel non-linear black hole solution. Our investigation is performed within a 4-dimensional spacetime, incorporating three crucial elements: i) the Einstein-Hilbert term, ii) the Maxwell invariant, and iii) the power Yang-Mills invariant. Thus, the action that represents our scenario is:
\[S_{0}=\int\sqrt{-g}\ {\rm d}^{4}x\Bigg{[}\frac{1}{2\kappa}R-F_{\mu\nu}F^{\mu \nu}-(F^{(a)}_{\mu\nu}F^{\mu\nu}_{(a)})^{\rm p}\Bigg{]}. \tag{1}\]
We have considered the usual definitions, namely: i) \(G\) is Newton's constant, ii) \(\kappa\equiv 8\pi G\) is Einstein's constant, iii) \(g\) is the determinant of the metric tensor \(g_{\mu\nu}\), iv) \(R\) is the Ricci scalar, and v) \(p\) is a real parameter that introduces non-linearities. In addition, we have two extra tensors: i) the electromagnetic field strength \(F_{\mu\nu}\), and ii) the gauge strength tensor \(F^{(a)}_{\mu\nu}\)
both defined in terms of the potentials \(A_{\nu}\) and \(A_{\nu}^{(a)}\), respectively, and their corresponding expressions are:
\[F_{\mu\nu} \equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\,, \tag{2}\] \[F_{\mu\nu}^{(a)} \equiv\partial_{\mu}A_{\nu}^{(a)}-\partial_{\nu}A_{\mu}^{(a)}+ \frac{1}{2\sigma}C_{(b)(c)}^{(a)}A_{\mu}^{(b)}A_{\nu}^{(c)}\,, \tag{3}\]
It should be mentioned that the Greek indices run from 0 to \(3\) and \(a\) is the internal gauge index running from \(1\) to \(3\). Moreover, \(C_{(b)(c)}^{(a)}\) represents the structure constants of the 3-parameter Lie group \(\mathcal{G}\), \(A_{\mu}^{(a)}\) are the \(SO(3)\) gauge group Yang-Mills potentials, \(\sigma\) is an arbitrary coupling constant, and finally \(A_{\mu}\) is the conventional Maxwell potential. At this point, it is essential to point out the concrete form of \(\mathbf{A}^{(a)}\) and \(\mathbf{A}\). Thus, the first object is defined as
\[\mathbf{A}^{(a)}=\frac{q_{\mathsf{YM}}}{r^{2}}(x_{i}dx_{j}-x_{j}dx_{i}), \tag{4}\]
where \(2\leq j+1\leq i\leq 3\) and \(1\leq a\leq 3\). In addition, the radial coordinate is connected to \(x_{i}\) according to \(r^{2}=\sum_{i=1}^{3}x_{i}^{2}\). The second object, the Maxwell potential 1-form, is therefore given by
\[\mathbf{A}=\frac{Q}{r}dt, \tag{5}\]
where \(Q\) represents the electric charge and \(q_{\mathsf{YM}}\) denotes the YM charge. Varying the action with respect to the metric field we obtain Einstein's field equations, i.e.,
\[G_{\mu\nu}+\Lambda g_{\mu\nu}=T_{\mu\nu}, \tag{6}\]
where the energy-momentum tensor has two expected contributions: i) the matter content, \(T_{\mu\nu}^{\mathsf{M}}\), and ii) the Yang-Mills contribution, \(T_{\mu\nu}^{\mathsf{YM}}\), i.e.,
\[T_{\mu\nu}\equiv T_{\mu\nu}^{\mathsf{M}}+T_{\mu\nu}^{\mathsf{YM}}. \tag{7}\]
The last two contributions are defined in terms of \(F_{\mu\nu}\) and \(F_{\mu\nu}^{(a)}\) as follow
\[T_{\mu\nu}^{\mathsf{M}} =2F_{\mu}^{\lambda}F_{\nu\lambda}-\frac{1}{2}F_{\lambda\sigma}F^ {\lambda\sigma}g_{\mu\nu}, \tag{8}\] \[T_{\mu\nu}^{\mathsf{YM}} =-\frac{1}{2}g_{\alpha\mu}\left[\delta_{\nu}^{\alpha}\mathcal{F}_ {\mathsf{YM}}^{p}-4p\mathbf{Tr}\Big{(}F_{\nu\lambda}^{(a)}F^{(a)\alpha \lambda}\Big{)}\mathcal{F}_{\mathsf{YM}}^{p-1}\right]. \tag{9}\]
Varying the action with respect to the gauge potentials \(\mathbf{A}\) and \(\mathbf{A}^{(\mathbf{a})}\), we obtain the Maxwell and Yang-Mills equations, respectively
\[\mathrm{d}\Big{(}{}^{\star}\mathbf{F}\Big{)} =0, \tag{10}\] \[\mathbf{d}\Big{(}{}^{\star}\mathbf{F}^{(a)}\mathcal{F}_{\mathsf{ YM}}^{p-1}\Big{)}{+}\frac{1}{\sigma}C_{(b)(c)}^{(a)}\mathcal{F}_{\mathsf{YM}}^{p-1} \mathbf{A}^{(b)}\wedge{}^{\star}\mathbf{F}^{(c)} =0, \tag{11}\]
where \(\star\) denotes the Hodge dual. It is important to point out that the trace of the Yang-Mills gauge strength tensor takes the form:
\[\mathcal{F}_{\mathsf{YM}}=\frac{q_{\mathsf{YM}}^{2}}{r^{4}}, \tag{12}\]
which is positive, thus allowing us to consider all rational values of \(p\). It is evident that for \(p=1\), the formalism reduces to the standard Einstein-Yang-Mills theory. In what follows, we will consider a spherically symmetric space-time (in Schwarzschild coordinates), and we will write the line element according to
\[ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2} \theta d\phi^{2}), \tag{13}\]
where \(r\) is the radial coordinate. From Einstein's field equations together with the Maxwell and Yang-Mills equations we obtain
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+\frac{Q_{\mathrm{YM}}}{ r^{4p-2}}. \tag{14}\]
From the previous equation, we immediately notice that the Yang-Mills charge \(q_{\rm YM}\) is related to its normalized version \(Q_{\rm YM}\) as follows [42]
\[Q_{\rm YM}\equiv\frac{2^{p-1}}{4p-3}q_{\rm YM}^{2p}, \tag{15}\]
for \(p\neq 3/4\). The specific case of \(p=3/4\) poses certain challenges due to the emergence of a radial logarithmic dependency in the solution, making it impossible to obtain an analytical solution. Thus, we are left with the case \(p\neq 3/4\) for simplicity. In a previous work [70] we have discussed some possible \(p\)-exponents. In what follows, we deal with the case \(p=1/2\) given that: i) it is in line with the established energy conditions of general relativity and the causality condition, as was shown in [42], and ii) it modifies the structure of the Reissner-Nordstrom spacetime in a non-trivial but still manageable manner, unlike other cases that have been explored. Moreover, this case provides illuminating analytical solutions for the inner \(r_{-}\) and outer (event horizon) \(r_{+}\) radii:
\[r_{\pm}=\frac{M\pm\sqrt{M^{2}-Q^{2}(1+Q_{\rm YM})}}{1+Q_{\rm YM}}. \tag{16}\]
We henceforth call this solution the modified Reissner-Nordstrom (MRN) solution, with \(Q_{\rm YM}\neq-1\). In addition, notice that for this concrete value of the power \(\left(p=1/2\right)\), \(Q_{\rm YM}\) is a dimensionless parameter. The precise form of the event horizon radius holds significant importance as it establishes a well-defined relationship between the two charges, thereby preventing the occurrence of a naked singularity,1 among other astrophysical implications [70]. It yields
Footnote 1: The formation of a naked singularity in any gravitational theory is, however, not guaranteed by the vanishing of the horizon. Hence, a formal astrophysical collapse must be then carried out. This is of course beyond the scope of this paper.
\[Q_{\rm YM}>-1\ \wedge\ 0<\frac{Q}{M}<\sqrt{\frac{1}{1+Q_{\rm YM}}}. \tag{17}\]
The conventional restriction for the Reissner-Nordstrom black hole is covered in the previous expression and is consistently retrieved in the limit of \(Q_{\rm YM}\to 0\), giving \(Q/M<1\) as it should be. For the allowed range of values of both charges, the corresponding horizons \(r_{+}\) are completely determined.
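As a quick numerical illustration of the relations above, the following minimal Python sketch (the parameter values are illustrative assumptions, not values used in the paper) evaluates the lapse function of Eq. (14) for \(p=1/2\), the two horizon radii of Eq. (16), and the charge bound of Eq. (17).

```python
import numpy as np

def f_metric(r, M=1.0, Q=0.5, Qym=0.2, p=0.5):
    """Lapse function of Eq. (14); for p = 1/2 the Yang-Mills term is a constant shift."""
    return 1.0 - 2.0 * M / r + Q**2 / r**2 + Qym / r**(4 * p - 2)

def horizons(M=1.0, Q=0.5, Qym=0.2):
    """Inner and outer horizon radii of Eq. (16) for the p = 1/2 solution."""
    disc = M**2 - Q**2 * (1.0 + Qym)
    if disc < 0:
        raise ValueError("naked singularity: the bound of Eq. (17) is violated")
    return (M - np.sqrt(disc)) / (1.0 + Qym), (M + np.sqrt(disc)) / (1.0 + Qym)

if __name__ == "__main__":
    M, Q, Qym = 1.0, 0.5, 0.2
    rm, rp = horizons(M, Q, Qym)
    print(f"r_- = {rm:.6f}, r_+ = {rp:.6f}")
    print("f(r_+) =", f_metric(rp, M, Q, Qym))                   # vanishes at the horizon
    print("maximum allowed Q/M =", np.sqrt(1.0 / (1.0 + Qym)))   # Eq. (17)
```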
### Wave equation for scalar perturbations
We begin by examining the propagation of a test scalar field, denoted as \(\Phi\), in a fixed gravitational background within a four-dimensional spacetime. Additionally, we assume that the field is real. By considering the corresponding action \(S[g_{\mu\nu},\Phi]\), we can derive the following expression.
\[S[g_{\mu\nu},\Phi]\equiv\frac{1}{2}\int{\rm d}^{4}x\sqrt{-g}\Big{[}\partial^{ \mu}\Phi\partial_{\mu}\Phi\Big{]}\,. \tag{18}\]
From here, we find the standard Klein-Gordon equation [71; 72; 73; 74; 75; 76; 77]
\[\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi \right)=0. \tag{19}\]
To decouple and eventually solve the Klein-Gordon equation, we take advantage of the symmetries of the metric and propose as an ansatz the following separation of variables in spherical coordinates:
\[\Phi(t,r,\theta,\phi)=e^{-i\omega t}\frac{\psi(r)}{r}Y_{\ell m}(\theta,\phi). \tag{20}\]
Here, \(Y_{\ell m}(\theta,\phi)\) represents the spherical harmonics, which solely depend on the angular coordinates. The quasinormal frequency, denoted as \(\omega\), will be determined by selecting appropriate boundary conditions. Thus, the differential equation to be solved is:
\[\frac{\omega^{2}r^{2}}{f(r)}+\frac{r}{\psi(r)}\frac{d}{dr}\left[r^{2}f(r) \frac{d}{dr}\left(\frac{\psi(r)}{r}\right)\right]+\frac{1}{Y(\Omega)}\left[ \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{ \partial Y(\Omega)}{\partial\theta}\right)\right]+\frac{1}{\sin^{2}\theta} \frac{1}{Y(\Omega)}\frac{\partial^{2}Y(\Omega)}{\partial\phi^{2}}=0. \tag{21}\]
The associated angular part can be recast as
\[\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y (\Omega)}{\partial\theta}\right)+\frac{1}{\sin^{2}\theta}\frac{\partial^{2}Y( \Omega)}{\partial\phi^{2}}=-\ell(\ell+1)Y(\Omega), \tag{22}\]
where \(\ell(\ell+1)\) is the corresponding eigenvalue, and \(\ell\) is the angular degree. Combining the last two equations, we obtain a second-order differential equation for the radial coordinate. Now, considering the definition of the "tortoise coordinate" \(r_{*}\)
\[r_{*}\equiv\int\frac{\mathrm{d}r}{f(r)}\,, \tag{23}\]
we can re-write the resulting differential equation in its Schrodinger-like form, namely
\[\frac{\mathrm{d}^{2}\psi(r_{*})}{\mathrm{d}r_{*}^{2}}+\left[\omega^{2}-V(r_{*} )\right]\psi(r_{*})=0. \tag{24}\]
Here \(V(r)\) is the effective potential barrier defined as
\[V(r)=f(r)\Bigg{[}\frac{\ell(\ell+1)}{r^{2}}+\frac{f^{\prime}(r)}{r}\Bigg{]}, \tag{25}\]
where the prime denotes the derivative with respect to the radial variable. Last but not least, the wave equation must be supplemented by appropriate boundary conditions. In this case, such conditions are:
\[\Phi \rightarrow \exp(+i\omega r_{*}),\qquad r_{*}\rightarrow-\infty, \tag{26}\] \[\Phi \rightarrow \exp(-i\omega r_{*}),\qquad r_{*}\rightarrow+\infty. \tag{27}\]
Given the time dependence characterized by \(\Phi\sim\exp(-i\omega t)\), a frequency with a negative imaginary part indicates a decaying (stable) mode. Conversely, a frequency with a positive imaginary part indicates an increasing (unstable) mode. We show, in Fig. (1), the behavior of the effective potential barrier \(V(r)\) against the radial coordinate \(r\) for different values of the set of parameters \(\{\ell,Q,M,Q_{\mathsf{YM}}\}\). Thus, from Fig. (1), we can identify the following:
* Top-Left panel shows the effective potential \(V(r)\) for fixed \(\{\ell,Q,M\}\) and different values of the Yang-Mills charge \(Q_{\mathsf{YM}}\). We observe that when \(Q_{\mathsf{YM}}\) increases, the maximum of the potential increases, while it shifts to the left. All solutions converge at small radii because their associated horizons are equal, in contrast to the other cases depicted.
* Top-Right panel shows the effective potential \(V(r)\) for fixed \(\{Q,M,Q_{\mathsf{YM}}\}\) and different values of the angular degree \(\ell\). We observe that when \(\ell\) increases, the maximum of the potential increases and likewise shifts to the right.
* Bottom-Left panel shows the effective potential \(V(r)\) for fixed \(\{\ell,M,Q_{\mathsf{YM}}\}\) and different values of the charge \(Q\). We observe that when \(Q\) increases, the maximum of the potential increases, while it shifts to the left. In addition, the potentials tend to overlap for moderate and large values of \(r\).
* Bottom-Right panel shows the effective potential \(V(r)\) for fixed \(\{\ell,Q,Q_{\mathsf{YM}}\}\) and different values of the black hole mass \(M\). We observe that when \(M\) increases, the maximum of the potential decreases, and the potential shifts significantly to the right compared to the other panels. These qualitative trends can be checked numerically with the short sketch after this list.
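The sketch below, written under the same illustrative assumptions as before (parameter values chosen only for illustration), evaluates Eqs. (14), (16) and (25) for \(p=1/2\) and locates the peak of the scalar effective potential for the three values of \(Q_{\mathsf{YM}}\) used in the top-left panel of Fig. 1.

```python
import numpy as np

M0, Q0, ell = 1.0, 0.1, 2          # illustrative parameters

def f_metric(r, Qym):
    # Eq. (14) with p = 1/2: the Yang-Mills term reduces to a constant
    return 1.0 - 2.0 * M0 / r + Q0**2 / r**2 + Qym

def V_eff(r, Qym):
    # scalar effective potential of Eq. (25)
    df = 2.0 * M0 / r**2 - 2.0 * Q0**2 / r**3
    return f_metric(r, Qym) * (ell * (ell + 1) / r**2 + df / r)

if __name__ == "__main__":
    for Qym in (0.0, 0.1, 0.2):
        # event horizon of Eq. (16); sample the potential only outside it
        rp = (M0 + np.sqrt(M0**2 - Q0**2 * (1 + Qym))) / (1 + Qym)
        r = np.linspace(1.001 * rp, 30.0, 20000)
        V = V_eff(r, Qym)
        i = np.argmax(V)
        print(f"Q_YM = {Qym:.1f}: peak V = {V[i]:.4f} at r = {r[i]:.3f}")
```

The peak height grows and moves to smaller radii as \(Q_{\mathsf{YM}}\) increases, in line with the trend described for the top-left panel.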
### Numerical computation: WKB method
Exact analytical expressions for the quasinormal spectra of black holes can only be obtained in a limited number of cases. For example: i) When the effective potential barrier takes the form of the Poschl-Teller potential, as studied in references such as [78, 79, 80, 81, 82, 83]. ii) When the corresponding differential equation for the radial part of the wave function can be transformed into the Gauss' hypergeometric function, as explored in references [84, 85, 86, 87, 88, 89, 90]. Considering the complexity and non-trivial nature of the involved differential equation, it becomes necessary to rely on numerical or, at the very least, semi-analytical methods to compute the corresponding quasinormal frequencies. Consequently, numerous techniques have been developed for this purpose, some of which are commonly utilized. Specifically: i) The Frobenius method and its generalization, as referenced in [91, 92, 93]. ii) The method of continued fraction, along with its enhancements, is mentioned in [94, 17, 95]. iii) The asymptotic iteration method [96, 97, 98] among others. Additional details can be found in [16] for more comprehensive information. In the present paper, we will implement the well-known WKB semi-classical method to obtain the quasinormal
frequencies (see [99, 100, 101, 102, 103] for technical details). The WKB method is a commonly used semi-analytic approach for computing the quasinormal modes of black holes. The initial first-order computation was derived by Schutz and Will [99], followed by subsequent improvements made by Iyer and Will [100], who developed a semi-analytic formula incorporating second and third-order corrections. This method has demonstrated remarkable efficiency in determining the lowest overtones among the complex frequencies of an oscillating Schwarzschild black hole. The accuracy of the approximation improves with increasing values of the angular harmonic index \(\ell\), but deteriorates as the overtone index increases. Building upon these advancements, R.A. Konoplya extended the generalization up to the 6th order [104], while J. Matyjasek and M. Opala found the formulae from the 7th to the 13th order [105]. The method relies on the resemblance of (24) to the one-dimensional Schrodinger equation corresponding to a potential barrier. The WKB formula employs the matching of asymptotic solutions, which consist of a combination of ingoing and outgoing waves, along with a Taylor expansion centered around the peak of the potential barrier at \(x=x_{0}\). This expansion encompasses the region between the two turning points, which correspond to the roots of the effective potential \(U(x,\omega)\equiv V(x)-\omega^{2}\). In what follows, we will implement the WKB method to compute the QN spectra of 6th order, by means of the following expression
\[\omega_{n}^{2}=V_{0}+(-2V_{0}^{\prime\prime})^{1/2}\Lambda(n)-i\nu(-2V_{0}^{ \prime\prime})^{1/2}[1+\Omega(n)]\,, \tag{28}\]
where i) \(V_{0}^{\prime\prime}\) represents the second derivative of the potential at the maximum, ii) \(\nu=n+1/2\), iii) \(V_{0}\) symbolizes the maximum of the effective barrier, and iv) \(n=0,1,2,...\) is the overtone number. In addition, \(\Lambda(n)\) and \(\Omega(n)\) are lengthy and intricate functions of \(\nu\) (and of derivatives of the potential evaluated at the maximum), which is why we do not reproduce their explicit form here; they can be found, for instance, in [102]. Thus, to perform our computations, we have used a Wolfram Mathematica [106] notebook utilizing the WKB method at any order from one to six [107]. In addition, for a given angular degree \(\ell\), we will consider values \(n<\ell\) only. For higher-order WKB corrections (and recipes for simple, quick, efficient and accurate computations) see [107, 108]. Finally, notice that, as was pointed out (for instance by R. Konoplya
Figure 1: Effective potential barrier for scalar perturbations against the radial coordinate for the parameters shown in the panels. **Top Left panel:** Effective potential for fixed values of \(\{\ell,Q,M\}\) and \(Q_{\mathsf{YM}}=\{0.0,0.1,0.2\}\). **Top Right panel:** Effective potential for fixed values of \(\{Q,M,Q_{\mathsf{YM}}\}\) and \(\ell=\{1,2,3\}\). **Bottom Left panel:** Effective potential for fixed values of \(\{\ell,M,Q_{\mathsf{YM}}\}\) and \(Q=\{0.1,0.6,0.8\}\). **Bottom Right panel:** Effective potential for fixed values of \(\{\ell,Q,Q_{\mathsf{YM}}\}\) and \(M=\{1.0,1.5,2.0\}\).
[107]), the WKB series converges only asymptotically, so there is no mathematically strict criterion for estimating the error; however, the sixth/seventh order usually produces the best result. We summarize our results in Figures (2), (3), (4) and (5), as well as Tables 1 and 2, where the frequencies have been calculated numerically for different angular degrees \(\ell=1,2,3\). Based on the outcomes obtained through the WKB approximation, our results indicate the stability of all modes for the given numerical values. This feature will be supported through an alternative approach, namely the eikonal approximation. Comprehensive insights into these findings can be found in the Conclusions section, where we delve into further details.
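To make the procedure concrete, the sketch below applies only the lowest-order (Schutz-Will) version of the formula, \(\omega^{2}\approx V_{0}-i\,(n+1/2)\sqrt{-2V_{0}^{\prime\prime}}\), with the second derivative taken with respect to the tortoise coordinate and evaluated numerically at the peak of the barrier. It is meant only as a rough cross-check of the 6th-order values reported in the tables (agreement at the level of a few percent can be expected for the fundamental mode); the parameter choices are illustrative assumptions.

```python
import numpy as np

M, Q, ell = 1.0, 0.1, 2            # illustrative parameters

def f(r, Qym):
    return 1.0 - 2.0 * M / r + Q**2 / r**2 + Qym            # Eq. (14), p = 1/2

def V(r, Qym):
    fp = 2.0 * M / r**2 - 2.0 * Q**2 / r**3
    return f(r, Qym) * (ell * (ell + 1) / r**2 + fp / r)     # Eq. (25)

def wkb1(Qym, n=0):
    """Lowest-order (Schutz-Will) WKB estimate of the quasinormal frequency."""
    rp = (M + np.sqrt(M**2 - Q**2 * (1 + Qym))) / (1 + Qym)  # event horizon, Eq. (16)
    r = np.linspace(1.01 * rp, 40.0, 400001)
    dr = r[1] - r[0]
    Vr = V(r, Qym)
    # second derivative with respect to the tortoise coordinate: d/dr* = f d/dr
    d2V = f(r, Qym) * np.gradient(f(r, Qym) * np.gradient(Vr, dr), dr)
    i0 = np.argmax(Vr)
    return np.sqrt(Vr[i0] - 1j * (n + 0.5) * np.sqrt(-2.0 * d2V[i0]))

if __name__ == "__main__":
    for Qym in (0.0, 0.5):
        w = wkb1(Qym)
        print(f"Q_YM = {Qym:.1f}:  omega ~ {w.real:.4f} - {abs(w.imag):.4f} i")
```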
Figure 2: QNMs for all cases investigated with \(M=1\), \(\ell=\{3,2,1\}\) and \(n=\{0,1\}\). **Left column**: Real part of \(\omega\) against the Yang-Mills charge \(Q_{\rm YM}\). **Right column**: Imaginary part of \(\omega\) against the Yang-Mills charge \(Q_{\rm YM}\). The color code is: i) solid black line for \(n=0\) and ii) dashed red line for \(n=1\). iii) dot-dashed cyan line \(n=2\). We have assumed \(Q=0.1\).
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \(Q_{\text{VM}}\) & \(n\) & \multicolumn{2}{c}{\(\ell=1\)} & \multicolumn{2}{c}{\(\ell=2\)} & \multicolumn{2}{c}{\(\ell=3\)} \\ \hline & 0 & 0.0086756 - 0.000963972 1 & 0.0149487 - 0.000962866 1 & 0.0211130 - 0.000962586 1 \\ -0.9 & 1 & & 0.0148804 - 0.002894360 1 & 0.0210645 - 0.002890650 1 \\ & 2 & & & 0.0209681 - 0.004827370 1 \\ \hline \hline & 0 & 0.0247323 - 0.0038626 1 & 0.0423986 - 0.0038539 1 & 0.0598047 - 0.00385168 1 \\ -0.8 & 1 & & 0.0420159 - 0.0116074 1 & 0.0595314 - 0.01157810 1 \\ & 2 & & & 0.0589934 - 0.01937310 1 \\ \hline \hline & 0 & 0.0457903 - 0.00870564 & 0.0781065 - 0.00867671 1 & 0.110030 - 0.00866926 1 \\ -0.7 & 1 & & 0.0770624 - 0.02618310 1 & 0.109281 - 0.02608530 1 \\ & 2 & & & 0.107817 - 0.04373080 1 \\ \hline \hline & 0 & 0.0710409 - 0.0155026 1 & 0.120584 - 0.0154348 1 & 0.169651 - 0.0154173 1 \\ -0.6 & 1 & & 0.118462 - 0.0466639 1 & 0.168121 - 0.0464348 1 \\ & 2 & & & 0.165156 - 0.0779912 1 \\ \hline \hline & 0 & 0.100036 - 0.0242629 & 0.168983 - 0.0241318 1 & 0.237442 - 0.0240978 1 \\ -0.5 & 1 & & 0.165313 - 0.0730911 1 & 0.234783 - 0.0726488 1 \\ & 2 & & & 0.229672 - 0.1222420 1 \\ \hline \hline & 0 & 0.132484 - 0.0349955 1 & 0.222740 - 0.034771 1 & 0.312583 - 0.0347126 1 \\ -0.4 & 1 & & 0.217007 - 0.105504 1 & 0.308409 - 0.1047490 1 \\ & 2 & & & 0.300454 - 0.1765700 1 \\ \hline \hline & 0 & 0.168181 - 0.047709 1 & 0.281447 - 0.0473559 1 & 0.394475 - 0.0472636 1 \\ -0.3 & 1 & & 0.273103 - 0.1439420 1 & 0.388370 - 0.1427580 1 \\ & 2 & & & 0.376830 - 0.2410550 1 \\ \hline \hline & 0 & 0.206975 - 0.0624116 1 & 0.344792 - 0.0618898 1 & 0.482659 - 0.0617529 1 \\ -0.2 & & & 0.333256 - 0.1884430 1 & 0.474177 - 0.1866950 1 \\ & 2 & & & 0.458208 - 0.3157790 1 \\ \hline \hline & 0 & 0.248747 - 0.0791103 1 & 0.412529 - 0.0783763 1 & 0.576767 - 0.0781822 1 \\ -0.1 & 1 & & 0.397196 - 0.2390410 1 & 0.565439 - 0.2365830 1 \\ & 2 & & & 0.544386 - 0.4008180 1 \\ \hline \hline & 0 & 0.293407 - 0.0978114 1 & 0.484455 - 0.0968185 1 & 0.676499 - 0.0965534 1 \\
0.0 & 1 & & 0.464698 - 0.2957730 1 & 0.661832 - 0.2924410 1 \\ & 2 & & & 0.634804 - 0.4962460 1 \\ \hline \hline & 0 & 0.340877 - 0.118520 1 & 0.560403 - 0.117220 1 & 0.781601 - 0.116689 1 \\
0.1 & 1 & & 0.535576 - 0.358672 1 & 0.763082 - 0.354298 1 \\ & 2 & & & 0.729246 - 0.602136 1 \\ \hline \hline & 0 & 0.391099 - 0.141240 1 & 0.640230 - 0.139584 1 & 0.891858 - 0.139129 1 \\
0.2 & 1 & & 0.609672 - 0.427770 1 & 0.868955 - 0.422148 1 \\ & 2 & & & 0.827468 - 0.718554 1 \\ \hline \hline & 0 & 0.444021 - 0.165975 1 & 0.723814 - 0.163913 1 & 1.007080 - 0.163338 1 \\
0.3 & 1 & & 0.686851 - 0.503099 1 & 0.979248 - 0.496037 1 \\ & 2 & & & 0.929258 - 0.845569 1 \\ \hline \hline & 0 & 0.499599 - 0.192726 1 & 0.811049 - 0.190212 1 & 1.12711 - 0.189495 1 \\
0.4 & 1 & & 0.766996 - 0.584689 1 & 1.09378 - 0.575975 1 \\ & 2 & & & 1.33443 - 0.983244 1 \\ \hline \hline & 0 & 0.557803 - 0.221494 1 & 0.901842 - 0.218482 1 & 1.2518 - 0.217604 1 \\
0.5 & 1 & & 0.850004 - 0.67257 1 & 1.2124 - 0.661981 1 \\ & 2 & & & 1.14284 - 1.13164 1 \\ \hline \hline & 0 & 0.618597 - 0.252281 1 & 0.996109 - 0.248728 1 & 1.38103 - 0.247667 1 \\
0.6 & 1 & & 0.935783 - 0.766769 1 & 1.33496 - 0.754074 1 \\ & 2 & & & 1.25433 - 1.29082 1 \\ \hline \hline & 0 & 0.681956 - 0.285086 1 & 1.09378 - 0.280953 1 & 1.51467 - 0.279684 1 \\
0.7 & 1 & & 1.02426 - 0.867312 1 & 1.46133 - 0.852272 1 \\ & 2 & & & 1.36878 - 1.46084 1 \\ \hline \hline & 0 & 0.747864 - 0.319906 1 & 1.19478 - 0.315159 1 & 1.65262 - 0.313657 1 \\
0.8 & 1 & & 1.11535 - 0.974224 1 & 1.5914 - 0.956592 1 \\ & 2 & & & 1.48609 - 1.64174 1 \\ \hline \hline & 0 & 0.816295 - 0.356742 1 & 1.29906 - 0.351349 1 & 1.79479 - 0.34959 1 \\
0.9 & 1 & & 1.20900 - 1.087530 1 & 1.72506 - 1.06705 1 \\ & 2 & & & 1.60616 - 1.83360 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quasinormal frequencies of scalar perturbations computed with the 6th-order WKB method for \(M=1\) and \(Q=0.1\), for different values of \(Q_{\rm YM}\), overtone number \(n\), and angular degree \(\ell\).
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \(Q_{\text{YM}}\) & \(n\) & \multicolumn{2}{c}{\(\ell=1\)} & \multicolumn{2}{c}{\(\ell=2\)} & \multicolumn{2}{c}{\(\ell=3\)} \\ \hline & 0 & 0.00871068 - 0.000965239 1 & 0.0150091 - 0.000964138 1 & 0.0211983 - 0.000963859 1 \\ -0.9 & 1 & & 0.0149411 - 0.002898150 1 & 0.0211499 - 0.002894460 1 \\ & 2 & & & 0.0210540 - 0.004833670 1 \\ \hline \hline & 0 & 0.0249345 - 0.00387261 & 0.0427446 - 0.00386398 1 & 0.0602924 - 0.00386178 1 \\ -0.8 & 1 & & 0.0423655 - 0.01163720 1 & 0.0600217 - 0.01160820 1 \\ & 2 & & & 0.0594888 - 0.01942250 1 \\ \hline \hline & 0 & 0.0463581 - 0.00873895 1 & 0.0790722 - 0.00871039 1 & 0.111389 - 0.00870304 1 \\ -0.7 & 1 & & 0.0780432 - 0.02628180 1 & 0.110651 - 0.02618540 1 \\ & 2 & & & 0.109209 - 0.04389380 1 \\ \hline \hline & 0 & 0.0722284 - 0.0155804 1 & 0.122592 - 0.0155138 1 & 0.172474 - 0.0154966 1 \\ -0.6 & 1 & & 0.120512 - 0.0468933 1 & 0.170973 - 0.0466689 1 \\ & 2 & & & 0.160667 - 0.073687 1 \\ \hline \hline & 0 & 0.102149 - 0.0244122 1 & 0.172537 - 0.0242842 1 & 0.242430 - 0.0242510 1 \\ -0.5 & 1 & & 0.168959 - 0.0735298 1 & 0.239837 - 0.0730991 1 \\ & 2 & & & 0.234855 - 0.1229610 1 \\ \hline \hline & 0 & 0.135881 - 0.0352487 1 & 0.228421 - 0.0350311 1 & 0.320544 - 0.0349743 1 \\ -0.4 & 1 & & 0.222865 - 0.1062450 1 & 0.316498 - 0.105515 1 \\ & 2 & & & 0.308788 - 0.177778 1 \\ \hline \hline & 0 & 0.173271 - 0.0481029 1 & 0.289911 - 0.0477632 1 & 0.406319 - 0.0476741 1 \\ -0.3 & 1 & & 0.281872 - 0.1450910 1 & 0.404037 - 0.1439510 1 \\ & 2 & & & 0.389321 - 0.2429190 1 \\ \hline \hline & 0 & 0.214218 - 0.0629859 1 & 0.356772 - 0.0624889 1 & 0.499397 - 0.0623572 1 \\ -0.2 & 1 & & 0.345728 - 0.1901120 1 & 0.491277 - 0.1884430 1 \\ & 2 & & & 0.476060 - 0.3184740 1 \\ \hline \hline & 0 & 0.258657 - 0.0799073 1 & 0.428832 - 0.0792154 1 & 0.599513 - 0.079030 1 \\ -0.1 & 1 & & 0.414250 - 0.2413500 1 & 0.588739 - 0.239019 1 \\ & 2 & & & 0.568719 - 0.404524 1 \\ \hline \hline & 0 & 0.306551 - 0.0988743 1 & 0.505966 - 0.0979492 1 & 0.706469 - 0.0976977 1 \\
0.0 & 1 & & 0.487306 - 0.2988410 1 & 0.692615 - 0.2957070 1 \\ & 2 & & & 0.667091 - 0.5011410 1 \\ \hline \hline & 0 & 0.357881 - 0.119892 1 & 0.588087 - 0.118696 1 & 0.820117 - 0.118365 1 \\
0.1 & 1 & & 0.564807 - 0.362617 1 & 0.802751 - 0.358530 1 \\ & 2 & & & 0.771026 - 0.608384 1 \\ \hline \hline & 0 & 0.412644 - 0.142963 & 0.675133 - 0.141459 1 & 0.940349 - 0.141034 \\
0.2 & 1 & & 0.646694 - 0.432702 1 & 0.919034 - 0.427505 1 \\ & 2 & & & 0.880425 - 0.726304 1 \\ \hline \hline & 0 & 0.470850 - 0.168088 1 & 0.767067 - 0.166242 1 & 1.067090 - 0.165709 1 \\
0.3 & 1 & & 0.732935 - 0.509113 1 & 1.041390 - 0.502645 1 \\ & 2 & & & 0.995229 - 0.854935 1 \\ \hline \hline & 0 & 0.532513 - 0.195268 1 & 0.863869 - 0.193046 1 & 1.20029 - 0.192388 1 \\
0.4 & 1 & & 0.823523 - 0.591860 1 & 1.16977 - 0.583959 1 \\ & 2 & & & 1.114541 - 0.094300 1 \\ \hline \hline & 0 & 0.597671 - 0.224495 1 & 0.965538 - 0.221869 1 & 1.33994 - 0.221070 1 \\
0.5 & 1 & & 0.918468 - 0.680945 1 & 1.30416 - 0.671445 1 \\ & 2 & & & 1.24098 - 1.144400 1 \\ \hline \hline & 0 & 0.66636 - 0.255765 1 & 1.07209 - 0.2527090 1 & 1.48601 - 0.251753 1 \\
0.6 & 1 & & 1.0178 - 0.77636100 1 & 1.44457 - 0.765098 1 \\ & 2 & & & 1.37197 - 1.305230 1 \\ \hline \hline & 0 & 0.738623 - 0.289068 1 & 1.18354 - 0.285559 1 & 1.63854 - 0.284430 1 \\
0.7 & 1 & & 1.12156 - 0.878081 1 & 1.59101 - 0.864899 1 \\ & 2 & & & 1.50844 - 1.476750 1 \\ \hline \hline & 0 & 0.814518 - 0.324389 1 & 1.29994 - 0.320411 1 & 1.179756 - 0.31902 1 \\
0.8 & 1 & & 1.22981 - 0.986085 1 & 1.74354 - 0.970822 1 \\ & 2 & & & 1.65047 - 1.658910 1 \\ \hline \hline & 0 & 0.894148 - 0.361698 1 & 1.42135 - 0.357252 1 & 1.96311 - 0.355727 1 \\
0.9 & 1 & & 1.34265 - 1.100300 1 & 1.90222 - 1.082830 1 \\ & 2 & & & 1.79815 - 1.851630 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Same as Table 1, but for \(Q=0.5\).
### QNMs in the eikonal limit
The eikonal regime is obtained when \(\ell\gg 1\). In this situation the WKB approximation becomes increasingly accurate (although it already provides remarkably precise predictions even for small values of \(\ell\)), which is why analytical expressions for the corresponding quasinormal frequencies can be obtained. Specifically, when \(\ell\to\infty\), the angular momentum term dominates the expression for the effective potential, so the latter takes the form
\[V(r)\approx\frac{f(r)\ell^{2}}{r^{2}}\equiv\ell^{2}g(r), \tag{29}\]
where we have introduced a new function \(g(r)\equiv f(r)/r^{2}\) for simplicity. We now need the point at which the potential attains its maximum, labeled here by \(r_{1}\). To keep the article self-contained, we include a few details on the connection between (circular) null geodesics and the eikonal approximation, showing how the point \(r_{1}\) can be obtained. The standard procedure used to compute the geodesics in the spacetime (13) can be consulted in [109]. Summarizing, we restrict our attention to equatorial orbits, with a Lagrangian of the form
\[2\mathcal{L}=-f(r)\,\dot{t}^{2}+\frac{1}{f(r)}\dot{r}^{2}+r^{2}\dot{\phi}^{2}, \tag{30}\]
where \(\phi\) is the angular coordinate. Notice that we have consistently taken the same signature \((-,+,+,+)\). The generalized momenta, coming from the latter Lagrangian, are
\[p_{t} =-f(r)\,\dot{t}\equiv-E=\mathrm{const}\,, \tag{31}\] \[p_{\phi} =r^{2}\,\dot{\phi}\equiv L=\mathrm{const}\,,\] (32) \[p_{r} =\frac{1}{f(r)}\dot{r}\,. \tag{33}\]
As the Lagrangian is independent of \(t\) and \(\phi\), then \(p_{t}\) and \(p_{\phi}\) are two integrals of motion. Solving (31)-(32) for \(\dot{t}\) and \(\dot{\phi}\), we get
\[\dot{\phi}=\frac{L}{r^{2}},\qquad\dot{t}=\frac{E}{f(r)}\,. \tag{34}\]
The Hamiltonian is given by
\[2\mathcal{H}=2\Big{(}p_{t}\dot{t}+p_{\phi}\dot{\phi}+p_{r}\dot{r}-\mathcal{L} \Big{)}, \tag{35}\]
or, equivalently
\[2\mathcal{H}=-E\dot{t}+L\dot{\phi}+\frac{1}{f(r)}\dot{r}^{2}=\delta_{1}= \mathrm{const}\,. \tag{36}\]
Figure 3: QNMs for all cases investigated with \(M=1\), \(\ell=\{3,2,1\}\) and \(Q=0.1\). The figures show the negative imaginary part of the frequency against the real part of the frequency for i) \(n=0\), left panel ii) \(n=1\), middle panel, and iii) \(n=2\), right panel.
Notice that \(\delta_{1}=0\) represents null geodesics and \(\delta_{1}=1\) describes massive particles. In what follows, we will restrict ourselves to the case \(\delta_{1}=0\), i.e., massless particles. Then, substituting Eq. (34) into (36) and using the definition \(\dot{r}^{2}=\mathcal{V}(r)\), we obtain
\[\mathcal{V}(r)=E^{2}-f(r)\frac{L^{2}}{r^{2}}\,. \tag{37}\]
Figure 4: QNMs for all cases investigated with \(M=1\), \(\ell=\{3,2,1\}\) and \(n=\{0,1\}\). **Left column**: Real part of \(\omega\) against the Yang-Mills charge \(Q_{\text{YM}}\). **Right column**: Imaginary part of \(\omega\) against the Yang-Mills charge \(Q_{\text{YM}}\). The color code is: i) solid black line for \(n=0\) and ii) dashed red line for \(n=1\). iii) dot-dashed cyan line \(n=2\). We have assumed \(Q=0.5\).
The conditions \(\mathcal{V}(r)=0\) and \(\mathcal{V}^{\prime}(r)=0\) for circular null geodesics lead, respectively, to:
\[\frac{E}{L}=\pm\sqrt{\frac{f(r_{1})}{r_{1}^{2}}}\,, \tag{38}\]
and
\[2f(r_{1})-r_{1}\frac{df(r)}{dr}\Bigg{|}_{r_{1}}=0. \tag{39}\]
The last equation, Eq. (39), is precisely required to obtain the critical value \(r_{1}\).
The pioneering work on this topic, including the idea and formalism, can be found in Reference [110].2 The expression for the quasinormal modes in the eikonal regime reads:
Footnote 2: For the study of QNMs in the eikonal limit beyond Einstein Relativity, we refer the reader to reference [111].
\[\omega(\ell\gg 1)=\Omega_{c}\ell-i\left(n+\frac{1}{2}\right)|\lambda_{L}|, \tag{40}\]
where \(\lambda_{L}\) and \(\Omega_{c}\) are, respectively, the Lyapunov exponent and the coordinate angular velocity at the unstable null geodesic, defined as follows [110]
\[\lambda_{L} \equiv\sqrt{\frac{1}{2}f(r_{1})r_{1}^{2}\Bigg{(}\frac{\mathrm{d}^ {2}}{\mathrm{d}r^{2}}\frac{f(r)}{r^{2}}\Bigg{)}\Bigg{|}_{r=r_{1}}}=r_{1}^{2} \sqrt{\frac{g^{\prime\prime}(r_{1})g(r_{1})}{2}}, \tag{41}\] \[\Omega_{c} \equiv\frac{\dot{\phi}(r_{1})}{\dot{t}(r_{1})}=\frac{\sqrt{f(r_{1 })}}{r_{1}}=\sqrt{g(r_{1})}. \tag{42}\]
Notice that \(\lambda_{L}\) measures the rate of convergence or divergence of null rays in the ring's vicinity; in other words, \(\lambda_{L}\) is the decay rate of the unstable circular null geodesics. For this case we can obtain exact analytic expressions for \(\{\lambda_{L},\Omega_{c}\}\). As they are quite involved, for the purposes of the present analysis we instead show approximate expressions at leading order in \(Q\) and \(Q_{\mathrm{YM}}\). These are given by
\[|\lambda_{L}| \approx\frac{1}{3\sqrt{3}M}\Bigg{[}\Bigg{(}1+\frac{Q^{2}}{18M^{2}} \Bigg{)}+\Bigg{(}2+\frac{Q^{2}}{6M^{2}}\Bigg{)}Q_{\mathsf{YM}}+\Bigg{(}1+ \frac{Q^{2}}{6M^{2}}\Bigg{)}Q_{\mathsf{YM}}^{2}\Bigg{]}+\mathcal{O}(Q^{3},Q_{ \mathsf{YM}}^{3}), \tag{43}\] \[\Omega_{c} \approx\frac{1}{3\sqrt{3}M}\Bigg{[}\Bigg{(}1+\frac{Q^{2}}{6M^{2} }\Bigg{)}+\Bigg{(}\frac{3}{2}+\frac{5Q^{2}}{12M^{2}}\Bigg{)}Q_{\mathsf{YM}}+ \Bigg{(}\frac{3}{8}+\frac{5Q^{2}}{16M^{2}}\Bigg{)}Q_{\mathsf{YM}}^{2}\Bigg{]} +\mathcal{O}(Q^{3},Q_{\mathsf{YM}}^{3}). \tag{44}\]
The WKB approximation of 1st order produces the same expression mentioned above for \(\{\Omega_{c},\lambda_{L}\}\), see for instance [112]. Be aware that photons, in the presence of nonlinear electromagnetic sources, follow null trajectories, but of an effective
Figure 5: QNMs for all cases investigated with \(M=1\), \(\ell=\{3,2,1\}\) and \(Q=0.5\). The figures show the negative imaginary part of the frequency against the real part of the frequency for i) \(n=0\), left panel ii) \(n=1\), middle panel, and iii) \(n=2\), right panel.
geometry [113; 114; 115]. Therefore, the formulas for \(\Omega_{c}\) and \(\lambda_{L}\) remain unchanged. From (40), we notice that the Lyapunov exponent determines the imaginary part of the modes while the angular velocity determines the real part of the modes. In concrete, analytic expressions for the spectrum are found to be
\[\omega_{R}(\ell\gg 1) \equiv\text{Re}(\omega)=\Omega_{c}\ell\,, \tag{45}\] \[\omega_{I}(\ell\gg 1) \equiv\text{Im}(\omega)=-\left(n+\frac{1}{2}\right)\left|\lambda_{ L}\right|, \tag{46}\]
To better quantify the impact of both charges on the angular velocity and the Lyapunov exponent, given by Eqs. (43)-(44) respectively, we vary these parameters for a fixed mass \(M=1\). The result is shown in Fig. (6), from which we can infer the following:
* The angular velocity, \(\Omega_{c}\), exhibits a monotonic increase with the Yang-Mills charge, \(Q_{\text{YM}}\), for the two electric cases considered here (\(Q=0.1\) and \(Q=0.5\)). Consequently, since \(\Omega_{c}\) is proportional to the real part of \(\omega\) (\(\Omega_{c}\propto\text{Re}(\omega)\)), it follows that the real part of \(\omega\) increases as well.
* The absolute value of the Lyapunov exponent, \(\left|\lambda_{L}\right|\), shows a monotonic increase for the considered numerical values of the charges (\(Q=0.1\) and \(Q=0.5\)) as the Yang-Mills charge is varied. Furthermore, we observe that the Lyapunov exponent remains relatively unchanged when the electric charge \(Q\) is varied for small values of \(Q_{\text{YM}}\), resembling the behavior observed in the case of the standard Reissner-Nordstrom black hole. Finally, since \(\left|\lambda_{L}\right|\) is proportional to the negative of the imaginary part of \(\omega\) (\(\left|\lambda_{L}\right|\propto-\text{Im}(\omega)\)), we can conclude that the black hole is stable against scalar perturbations, given that \(\text{Im}(\omega)<0\).
The features observed in the eikonal limit, where the spectra were analytically computed, are consistent with the trends depicted in Figures (2),(3),(4) and (5), where the frequencies were numerically computed for low angular degrees \(\ell=1,2,3\). Thus, we can conclude that the behavior is quite similar to the results computed using the WKB approach in the previous section.
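The eikonal quantities can also be obtained without the leading-order expansions of Eqs. (43)-(44). The minimal sketch below (parameter values are illustrative assumptions) solves the light-ring condition of Eq. (39) numerically, builds \(\Omega_{c}\) and \(|\lambda_{L}|\) from Eqs. (41)-(42) (the magnitude is taken, since \(g^{\prime\prime}\) is negative at the maximum of \(g\)), and assembles the eikonal frequency of Eq. (40).

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0

def f(r, Q, Qym):
    return 1.0 - 2.0 * M / r + Q**2 / r**2 + Qym             # Eq. (14), p = 1/2

def g(r, Q, Qym):
    return f(r, Q, Qym) / r**2

def eikonal(Q, Qym, ell=100, n=0):
    # light-ring condition, Eq. (39): 2 f(r1) - r1 f'(r1) = 0
    cond = lambda r: 2.0 * f(r, Q, Qym) - (2.0 * M / r - 2.0 * Q**2 / r**2)
    r1 = brentq(cond, 1.5, 10.0)
    Omega_c = np.sqrt(g(r1, Q, Qym))                          # Eq. (42)
    h = 1e-5                                                  # finite-difference step for g''
    gpp = (g(r1 + h, Q, Qym) - 2.0 * g(r1, Q, Qym) + g(r1 - h, Q, Qym)) / h**2
    lam = r1**2 * np.sqrt(0.5 * abs(gpp) * g(r1, Q, Qym))     # |lambda_L|, Eq. (41)
    omega = Omega_c * ell - 1j * (n + 0.5) * lam              # Eq. (40)
    return r1, Omega_c, lam, omega

if __name__ == "__main__":
    for Qym in (-0.1, 0.0, 0.1):
        r1, Om, lam, _ = eikonal(Q=0.1, Qym=Qym)
        print(f"Q_YM = {Qym:+.1f}: r1 = {r1:.4f}, Omega_c = {Om:.5f}, |lambda_L| = {lam:.5f}")
```

For \(Q=0.1\) and \(Q_{\mathsf{YM}}=0\) this returns \(\Omega_{c}\approx 0.193\) and \(|\lambda_{L}|\approx 0.193\), consistent with the leading-order expressions of Eqs. (43)-(44).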
### Black hole shadow
The BH shadow is a dark region surrounded by circular photon orbits known as the _photon sphere_. The radius of the photon orbit is defined as
\[r_{\text{ph}}=2g_{tt}(r_{\text{ph}})\left(\frac{dg_{tt}}{dr}\right)^{-1}\bigg{|} _{r=r_{\text{ph}}}. \tag{47}\]
Considering the metric solution Eq. (14), we can compute the photon radius. It reads
\[r_{\text{ph}}=\frac{3M+\sqrt{9M^{2}-8Q^{2}(1+Q_{\text{YM}})}}{2(1+Q_{\text{ YM}})}. \tag{48}\]
Figure 6: QNMs in the eikonal limit: **Left panel:** Angular velocity vs the Yang-Mills charge assuming \(Q=0.1\) and \(Q=0.5\) for \(M=1\). **Right panel:** Lyapunov exponent against the Yang-Mills charge assuming \(Q=0.1\) and \(Q=0.5\) for \(M=1\).
The radius of the BH shadow is defined as the minimal impact parameter of photons escaping from the BH [116]. Photons with smaller impact parameters will eventually cross the horizon and fall onto the singularity. The shadow radius can be calculated in terms of the photon sphere as [117]
\[r_{\rm sh}=\frac{r_{\rm ph}}{\sqrt{-g_{tt}(r_{\rm ph})}}. \tag{49}\]
Considering again the metric solution Eq. (14) and Eq. (48), the shadow radius for this class of modified RN black hole is
\[r_{\rm sh}=\frac{\sqrt{2}M\left(\sqrt{9-8Q^{2}(1+Q_{\rm YM})}+3\right)}{(1+Q_{\rm YM})\sqrt{\frac{4Q^{2}(1+Q_{\rm YM})+\sqrt{9-8Q^{2}(1+Q_{\rm YM})}-3}{Q^{2}}}}. \tag{50}\]
It is illustrative to see some limit cases. For instance, when \(Q_{\rm YM}\to 0\), we recover the standard RN BH solution
\[r_{\rm sh}=\frac{\sqrt{2}M\left(\sqrt{9-8Q^{2}}+3\right)}{\sqrt{\frac{4Q^{2}+ \sqrt{9-8Q^{2}}-3}{Q^{2}}}}, \tag{51}\]
while the limit \(Q\to 0\) leads to the purely power Yang-Mills case
\[r_{\rm sh}=\frac{3\sqrt{3}M}{1+Q_{\rm YM}}. \tag{52}\]
This can also be interpreted as a modification of the shadow radius for the Schwarzschild BH solution. The EHT collaboration has imaged the central BH at the center of the elliptical galaxy M87 [3, 5]. This data is consistent with theoretical predictions of GR for the shadow of the Kerr BH. These unprecedented observations were followed by the image of Sgr A\({}^{\star}\) [4, 6], with a bright ring also consistent with a Kerr BH geometry. For Sgr A\({}^{\star}\), the constraints on the shadow size are
\[4.5M\lesssim r_{\rm sh}\lesssim 5.5M, \tag{53}\]
for the Keck telescope, and
\[4.3M\lesssim r_{\rm sh}\lesssim 5.3M, \tag{54}\]
for the VLTI telescope [6]. By using these observational values, we can place constraints on the \((Q,Q_{\rm YM})\) parameter space through Eq. (50). This is shown in the right panel of Fig. 7. The existence of a negative Yang-Mills charge \(Q_{\rm YM}\approx-0.17\) allows for a maximum electric charge \(Q\approx 1.1\) that is consistent with both VLTI and KECK data. It is worth noting that, in contrast, the maximum allowed charge for the standard RN case is \(Q\approx 0.9\), in agreement with [35]. To illustrate these findings, we present the behavior of the shadow radius (Eq. (50)) as a function of the Yang-Mills charge for specific values of the electric charge \(Q\) in the left panel of Fig. 7. The dotted curve represents the case \(Q=0\), corresponding to the purely Yang-Mills scenario. This case yields allowed ranges of \(-0.013\lesssim Q_{\rm YM}\lesssim 0.134\) and \(-0.037\lesssim Q_{\rm YM}\lesssim 0.100\), consistent with VLTI and KECK data, respectively. These data impose stringent constraints on the considered scenario. As the electric charge \(Q\) increases, the allowed range shifts towards negative values of \(Q_{\rm YM}\). For instance, for the maximum value \(Q\approx 1.1\), the allowed range becomes slightly wider, with \(-0.171\lesssim Q_{\rm YM}\lesssim-0.087\). In general, for larger values of \(Q\), \(Q_{\rm YM}\) must take very small negative values to be in agreement with the observational data. Finally, notice that the intersection of the vertical solid line with all curves denotes the standard RN case, except for the case \(Q=0\), which corresponds to the Schwarzschild case with a shadow radius of \(r_{\rm sh}/M=3\sqrt{3}\approx 5.196\).
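A minimal numerical cross-check of this constraint analysis can be written directly from Eqs. (48)-(49) and the bounds of Eqs. (53)-(54); the sketch below evaluates the shadow radius for a few \((Q,Q_{\rm YM})\) pairs (values chosen only for illustration) and tests them against the Keck and VLTI intervals.

```python
import numpy as np

def f(r, M, Q, Qym):
    return 1.0 - 2.0 * M / r + Q**2 / r**2 + Qym              # Eq. (14), p = 1/2

def shadow_radius(M, Q, Qym):
    """Photon-sphere radius (Eq. 48) and shadow radius (Eq. 49) for the p = 1/2 solution."""
    r_ph = (3.0 * M + np.sqrt(9.0 * M**2 - 8.0 * Q**2 * (1.0 + Qym))) / (2.0 * (1.0 + Qym))
    return r_ph, r_ph / np.sqrt(f(r_ph, M, Q, Qym))

if __name__ == "__main__":
    M = 1.0
    keck, vlti = (4.5, 5.5), (4.3, 5.3)                       # Eqs. (53)-(54), in units of M
    for Q, Qym in [(0.0, 0.0), (0.5, -0.1), (0.5, 0.1), (1.1, -0.17)]:
        _, r_sh = shadow_radius(M, Q, Qym)
        ok = keck[0] <= r_sh <= keck[1] and vlti[0] <= r_sh <= vlti[1]
        print(f"Q = {Q:.2f}, Q_YM = {Qym:+.2f}:  r_sh/M = {r_sh:.3f}  "
              f"({'within' if ok else 'outside'} both bounds)")
```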
While this paper was being prepared, a similar study of the quasinormal modes and black hole shadow has been carried out for the Einstein power-Yang-Mills case with a positive cosmological constant [118], considering a power \(3/4<p<3/2\) and positive values of the associated charge. Consequently, a direct comparison with our findings is not possible, since we took \(p=1/2\) and included the Maxwell term. However, we do observe a similar trend in the quasinormal modes as \(Q\) increases in that case, even for larger values of the power, as shown in all our plots.
## III Conclusions
Testing the properties of BHs provides a valuable approach to unraveling the nature of gravity theories in the strong-field regime. In particular, the final stage of binary BH mergers is characterized by quasi-normal modes, which depend primarily
on the properties of the BH and, consequently, on the underlying gravitational theory. By studying these quasi-normal modes, we can gain insights into the fundamental nature of gravity and test the predictions of different gravitational theories in the extreme conditions near BHs. Thus, in the present paper, we have addressed two main issues: i) we investigated how an Einstein-Maxwell power-Yang-Mills black hole responds to scalar perturbations in four-dimensional spacetime for the interesting case of the power \(p=1/2\); ii) we analyzed the behavior of the black hole's shadow in terms of its electric and Yang-Mills charges, deriving observational constraints from data related to Sgr A\({}^{*}\).
In particular, we have depicted graphical representations of the effective potential barrier against the radial coordinate, \(r\), varying the set of parameters \(\{Q_{\text{YM}},\ell,Q,M\}\) individually while keeping the others fixed. Subsequently, we have computed the quasinormal modes (QNMs) of scalar perturbations both numerically (employing the WKB semi-analytic approximation) and analytically (in the eikonal limit as \(\ell\to\infty\)). We thoroughly examined the influences of the electric charge \(Q\), the Yang-Mills charge \(Q_{\text{YM}}\), the overtone number \(n\), and the angular degree \(\ell\). Our results reveal the following:
* From the QNMs computations, and for the range of parameters used, we can ensure the black hole is stable against scalar perturbations. The latter is true because \(\text{Im}(\omega)<0\).
* From Figs. (2) and (4) we observe that \(\text{Re}(\omega)\) is more sensitive to changes when \(Q_{\text{YM}}\) increases, i.e., the impact of the Yang-Mills "charge" on the QNMs is only relevant for positive values of \(Q_{\text{YM}}\).
On the other hand, we have also calculated the shadow radius for this class of BHs and examined particularly the influence of the Yang-Mills charge \(Q_{\text{YM}}\). We found that, for a given electric charge \(Q\), the shadow radius is a monotonically decreasing function of the Yang-Mills charge \(Q_{\text{YM}}\). Thus, large and negative values of \(Q_{\text{YM}}\) lead to an increase in the shadow size, while positive values significantly reduce the shadow radius. This effect can be constrained by comparing with the observed values of the shadow radius of Sgr A\({}^{*}\) obtained from VLTI and KECK telescopes. As both positive and negative values are allowed within the current precision, the shadow radius can be either larger or smaller compared to the Schwarzschild case. However, as the electric charge increases, \(Q_{\text{YM}}\) must take smaller negative values to remain consistent with the observational data. Moreover, by satisfying the current bound on the shadow radius, we are able to impose constraints on the parameter space \((Q,Q_{\text{YM}})\), as illustrated in Fig. 7. Therefore, the observational data does not rule out the possibility of a BH possessing both electric and gauge charges, with the latter being of a topological nature. Further investigations of BHs, particularly in the context of gravitational wave physics, will help to more robustly test the theoretical predictions associated with this class of BHs. The impact of both the electric and magnetic charges on the image formation of this BH, using various accretions models, will be valuable in distinguishing the distinct characteristics of this BH. This idea was recently explored in the context of pure power Yang-Mills case [119]. Thus, using current and future observational data of BHs is a promising strategy in the field of gravitational physics, enabling further insights and advancements in our understanding of BH phenomena.
## IV Acknowledgments
A. R. acknowledges financial support from the Generalitat Valenciana through PROMETEO PROJECT CIPROM/2022/13. A. R. is funded by the Mara Zambrano contract ZAMBRANO 21-25 (Spain). G. G. acknowledges financial support from
Figure 7: **Left panel:** shadow radius for the EMPYM solution, Eq. (50), as a function of the Yang-Mills charge \(Q_{\text{YM}}\) for different values of the electric charge \(Q\), as indicated in the legend. The dotted (purple) curve corresponds to the purely power Yang-Mills case, i.e. with vanishing electric charge \(Q=0\), as given by Eq. (52). A maximum value of \(Q=1.1\), solid (red) curve, requires a very small negative value of \(Q_{\text{YM}}\). The intersection with the vertical solid line represents the shadow radius of the standard RN BH, Eq. (51). **Right panel:** Allowed parameter space, taking into consideration the constraints given by Eqs. (53) and (54) for KECK and VLTI, respectively.
|
2308.09119
|
ICAR: Image-based Complementary Auto Reasoning
|
Scene-aware Complementary Item Retrieval (CIR) is a challenging task which
requires to generate a set of compatible items across domains. Due to the
subjectivity, it is difficult to set up a rigorous standard for both data
collection and learning objectives. To address this challenging task, we
propose a visual compatibility concept, composed of similarity (resembling in
color, geometry, texture, and etc.) and complementarity (different items like
table vs chair completing a group). Based on this notion, we propose a
compatibility learning framework, a category-aware Flexible Bidirectional
Transformer (FBT), for visual "scene-based set compatibility reasoning" with
the cross-domain visual similarity input and auto-regressive complementary item
generation. We introduce a "Flexible Bidirectional Transformer (FBT)"
consisting of an encoder with flexible masking, a category prediction arm, and
an auto-regressive visual embedding prediction arm. And the inputs for FBT are
cross-domain visual similarity invariant embeddings, making this framework
quite generalizable. Furthermore, our proposed FBT model learns the
inter-object compatibility from a large set of scene images in a
self-supervised way. Compared with the SOTA methods, this approach achieves up
to 5.3% and 9.6% in FITB score and 22.3% and 31.8% SFID improvement on fashion
and furniture, respectively.
|
Xijun Wang, Anqi Liang, Junbang Liang, Ming Lin, Yu Lou, Shan Yang
|
2023-08-17T17:55:54Z
|
http://arxiv.org/abs/2308.09119v1
|
# ICAR: Image-based Complementary Auto Reasoning
###### Abstract
Scene-aware Complementary Item Retrieval (CIR) is a challenging task which requires to generate a set of compatible items across domains. Due to the subjectivity, it is difficult to set up a rigorous standard for both data collection and learning objectives. To address this challenging task, we propose a _visual compatibility_ concept, composed of _similarity_ (resembling in color, geometry, texture, and etc.) and _complementarity_ (different items like table vs chair completing a group). Based on this notion, we propose a _compatibility learning_ framework, a category-aware Flexible Bidirectional Transformer (FBT), for visual "scene-based set compatibility reasoning" with the _cross-domain_ visual similarity input and auto-regressive complementary item generation. We introduce a "Flexible Bidirectional Transformer (FBT)," consisting of an encoder with flexible masking, a category prediction arm, and an auto-regressive visual embedding prediction arm. And the inputs for FBT are cross-domain visual similarity invariant embeddings, making this framework quite generalizable. Furthermore, our proposed FBT model learns the inter-object compatibility from a large set of scene images in a self-supervised way. Compared with the SOTA methods, this approach achieves up to **5.3%** and **9.6%** in FITB score and **22.3%** and **31.8%** SFID improvement on fashion and furniture, respectively.
## 1 Introduction
Online shopping catalogs provide great convenience, such as searching and comparing similar items. However, while customers can easily compare similar items, they often miss out on browsing complementary items in the e-shopping experience. Millions of online images offer a new opportunity to shop with inspirational home decoration ideas or outfit matching. But retrieving stylistically compatible products from these online images for set matching can be an overwhelming process. The ability to recommend visually complementary items becomes especially important when shopping for home furniture and clothing. The subjectivity involved makes visual compatibility even more difficult to model computationally.
In this work, we aim to address the _visual scene-aware Complementary Item Retrieval_ (CIR) [17] task. In this task (as shown in Figure 2), we attempt to model the human ability to select a set of objects from cross-domain pools, given a scene image, objects in the scene, and object categories. Therefore, we propose a _visual compatibility_ concept, consisting of two key elements: _similarity_ and _complementarity_. Visual similarity and complementarity, however, can sometimes contradict each other. Items that look similar (in color, geometry, texture, etc.) may not be complementary (different items like a dinner table vs a sofa) when putting them into a set. Items that complement each other do not necessarily look similar (e.g. an outfit set in contrasting colors). The ambiguous definition of visual complementarity is a major challenge. This ambiguity makes it difficult to rigorously define an objective and creates an extra challenge for collecting such datasets when designing a data-driven method.
To address these issues, we first propose a "compatibility learning" framework to model the visual _similarity_ and _complementarity_. To the best of our knowledge, we are among the first to show qualitatively that our model based on this framework can generalize to unseen domains (domains the model was not trained on, as shown in Figure 1 and Figure 6). For the scene-based CIR task, it is complex to learn both the cross-domain similarity and complementarity. Therefore, we use cross-domain visual similarity invariant embeddings in our framework. Many previous CIR works Han et al. (2017); Kang et al. (2019) also start from some type of learned embedding. But failing to model the visual similarity creates extra complexity for the complementarity learning. Secondly, we propose to use self-supervised learning for visual complementarity reasoning by introducing an auto-regressive transformer based architecture. Given the difficulty of defining style complementarity mathematically, we propose a solution based on the assumption that the items that appear in the inspirational scene images are compatible with each other.
Building upon the aforementioned premises, we present a novel self-supervised transformer-based learning framework (overview shown in Fig. 3). Our model effectively learns both the similarity and complementarity between a set of items. Our model does not require extra complementary labels. In addition, compared to prior work that models complementary items as _pairs_ or a _sequence of items_, we model them as _unordered sets_. We carefully design our compatibility learning model. First, we ensure that the learned embedding both contains and extracts all the necessary information for the compatibility learning. Second, we make full use of the Transformer's ability to reason about the interactions between these learned embeddings. To model flexible-length unordered set generation with cross-domain retrieval, we propose a new Flexible Bidirectional Transformer (FBT). In this FBT model, we model the unordered set generation using a random shuffling and masking technique. In addition, we introduce a category prediction arm and a cross-domain retrieval arm to the transformer encoder. The added category prediction branch helps the model to reason about the complementary item types. As pointed out in Vasileva et al. (2018), the category embedding representation of each item carries the notion of both similarity and complementarity. Our proposed model, compared with prior work that also models multi-item compatibility using neural networks, such as the works by Li et al. (2017), Han et al. (2017) and Sarkar et al. (2022), does not require the partial sets to be given.
We validate our method on the CIR benchmark datasets, including Shop The Look (STL) Kang et al. (2019), Exact Street2Shop Hadi Kiapour et al. (2015) and DeepRooms Gadde et al. (2021). Our method consistently outperforms the state-of-the-art methods. More importantly, we notice that most of the prior CIR work is evaluated via the Fill-In-The-Blank (FITB) metric Han et al. (2017) or with humans in the loop. The FITB metric can reflect the model's ability in cross-domain retrieval, but it does not measure the complementarity of a set. Human-in-the-loop evaluation, however, is limited in scale and prone to biases if not conducted thoughtfully. To address these issues, we propose a new CIR evaluation metric: "Style Frechet Inception Distance" (SFID) (see supplementary for details).
In summary, the key contributions of this work include:
* **Visual compatibility** is defined based on _similarity_ and _complementarity_ for the Scene-aware Complementary Item Retrieval (CIR) task and a new _compatibility learning framework_ is designed to solve this task.
* For the **compatibility learning framework**, a category-aware _Flexible Bidirectional Transformer_ (_FBT_) is introduced for visual scene-based set compatibility reasoning with the _cross-domain_ visual similarity input and auto-regressive complementary item generation.
## 2 Related Work
**Visual Similarity Learning** Visual similarity learning has been a central topic in computer vision. The goal is to mimic the human ability to find visually similar objects or scenes. This is particularly studied in image retrieval El-Nouby et al. (2021); Radenovic et al. (2018); Teh et al. (2020); Cheng et al. (2021) - finding images under a certain definition of similarity. Prior to retrieving similar clothing, researchers also studied how to detect and segment clothing from real-life images Yamaguchi et al. (2012); Yang and Yu (2011); Gallagher and Chen (2008). With clothing detection or segmentation, similar clothing retrieval has been explored via style analysis Hsiao and Grauman (2017); Kiapour et al. (2014); Simo-Serra and Ishikawa (2016); Yu et al. (2012); Yamaguchi et al. (2013); Kiapour et al. (2014); Di et al. (2013). In the meantime, Liu et al. (2012, 2012) pioneered ways to do cross-domain retrieval, which takes real-life images and retrieves similar clothing from target images in a different domain, such as well-staged product images. Recent fashion retrieval tasks can be further categorized based on the input information, such as images Kalantidis et al. (2013); Liu et al. (2016); Simo-Serra and Ishikawa (2016); Zhai et al. (2017); Hadi Kiapour et al. (2015); Tran et al. (2019), clothing attributes Ak et al. (2018); Di et al. (2013), and videos Cheng et al. (2017).
**Visual Complementarity Learning** Visual complementarity learning, unlike visual similarity learning, is much more
Figure 2: **Scene-aware Complementary Item Retrieval Task Illustration. Given a query scene image, (optional) scene objects and item categories, the task goal is to generate a cross-domain set of stylistically compatible items.**
ambiguous. There are a couple of research directions: pairwise complementary item retrieval [21, 22, 23, 24, 25, 26], set complementary prediction (no cross domain retrieval) [27, 28, 29, 27, 26], set complementary prediction (no cross domain retrieval) [28, 29, 27, 26, 25, 26], set complementary item retrieval [29, 27, 28, 29], personalized set complementary item prediction (requires user input) [26, 27, 28, 29, 27, 26, 25, 26] and multi-modal complementary item prediction [27]. All these prior work focus on feature representation learning. Another line of works [28, 29, 27, 26, 25, 26] focus on learning multiple sub-embedding based on different properties for both similarity and compatibility. More recently, Transformer [28] has demonstrated strong performance across various natural language processing and computer vision tasks. Kang [29] proposes to use CNN visual classification features and attention mechanism. Later on, Sarkar [2] uses Transformer and CNN-based image classification features for compatibility learning. Similarly, Chen [29] applies Transformer together with CNN image classification feature to learn a mapping between user picked item pool and a set of most compatible items from that pool. Unlike all the work above, we build our visual compatibility model which focuses on both similarity and complementarity.
**Learning Framework** Many researchers have studied and explored building a cascaded learning framework. The cascaded method here means learning how to encode the data and then modeling the statistics of this encoding. Many of the methods proposed for the CIR task can also be categorized as two-stage models. But almost all of them use the image classification training target as the first-stage feature extractor [10, 21, 22, 23]. Taraviya [26] proposes a two-stage model for personalized pairwise complementary item recommendation, where a feature embedding specially designed for customer preferences is learned in the first stage. In our compatibility learning, we set the cross-domain visual similarity embedding as input, and design the FBT for complementary set generation. We show empirically that visual similarity features, compared to features learned for image classification, are better suited for CIR. Our design makes our model surpass the prior work in the scene-aware CIR task.
## 3 Method
**Problem Statement** Given a scene image \(\mathcal{I}\), a set of unordered objects \(\mathcal{O}=\{o_{i}\}_{i=0}^{N},o_{i}\in\mathcal{D}_{A}\) in the scene and a set of unordered object categories \(\mathcal{C}=\{c_{i}\}_{i=0}^{L}\), the problem is to retrieve, across domains, a set of complementary objects \(\mathcal{X}=\{x_{i}\}_{i=0}^{L},x_{i}\in\mathcal{D}_{B}\). The objects in this generated set need to be visually compatible with each other and of a style visually similar to the input scene image \(\mathcal{I}\). Here we use \(\mathcal{D}_{A}\) and \(\mathcal{D}_{B}\) to denote the two different visual domains, \(L\) to represent the number of objects to retrieve during inference, and \(N\) is the number of scene objects. The difference between the two domains \(\mathcal{D}_{A}\) and \(\mathcal{D}_{B}\) can be quantified by the Frechet distance \(\mathcal{F}\) between them being larger than a certain threshold \(\theta\).
### Conditional Compatibility Auto Reasoning
We formulate the problem of generating a set of objects \(\mathcal{X}=\{x_{i}\}_{i=0}^{L}\), conditioned on the scene image \(\mathcal{I}\) and a specified set of categories, as computing the likelihood (Eq. 1) of creating the object set \(\mathcal{X}\) given the scene image \(\mathcal{I}\), the objects in the scene \(\mathcal{O}\), and the set of categories \(\mathcal{C}\). We model the probability of generating the unordered set \(\mathcal{X}\) as the sum over the probabilities of generating the set in any permutation \(\hat{\mathcal{X}}\):
\[p(\mathcal{X}_{i}|\mathcal{I},\mathcal{O},\mathcal{C})=\sum_{\hat{\mathcal{X}} \in\Phi(\mathcal{X}_{i})}p(x_{i}|x_{0},\dots,x_{i-1},\mathcal{I},\mathcal{O}, \mathcal{C}),i\leq L \tag{1}\]
where \(\Phi(\mathcal{X})\) includes all the permutations of the target object set \(\mathcal{X}\) given all the permutations of the categories \(\mathcal{C}\), and \(L\) is the maximum number of items composing a set. For each permutation of \(\mathcal{X}\), the set generation becomes a sequence generation problem. We model the sequence generation as an auto-regressive process. In the auto-regressive process, the next item in the set is generated conditioned on the prior items. This auto-regressive process is statistically formulated as the product of the probabilities:
\[p(x_{i}|x_{0},\dots,x_{i-1})=\prod_{j}^{j<(i-1)}p(x_{j}|x_{0},\dots,x_{j-1}). \tag{2}\]
To learn to conditionally generate the best set of objects, our model learns to maximize the log likelihood of the probability, \(p(\mathcal{X}|\mathcal{I},\mathcal{O},\mathcal{C})\),
\[\log p(\mathcal{X}_{i}|\mathcal{I},\mathcal{O},\mathcal{C})=\sum_{\hat{ \mathcal{X}}\in\Phi(\mathcal{X}_{i})}\sum_{j}^{j<i}\log p(x_{i}|x_{<j}, \mathcal{I},\mathcal{O},\mathcal{C}) \tag{3}\]
To approximate the log likelihood Eq. 3, we propose a two-stage learning framework.
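For intuition, the inference-time behaviour implied by Eq. 2 can be written as a simple greedy loop: at each step a model predicts the next item's category and visual embedding from the scene and the items selected so far, and the nearest catalog item is retrieved. The sketch below is only illustrative; `predict_next` is a stand-in for the trained model described later, and all names and sizes are assumptions.

```python
import numpy as np

def generate_set(predict_next, scene_emb, pool_embs, pool_cats, L):
    """Greedy auto-regressive retrieval: pick one complementary item per step."""
    chosen = []
    for _ in range(L):
        cat, emb = predict_next(scene_emb, [pool_embs[i] for i in chosen])
        # restrict retrieval to the predicted category, then take the nearest neighbour
        candidates = np.where(pool_cats == cat)[0]
        if len(candidates) == 0:
            candidates = np.arange(len(pool_embs))
        dists = np.linalg.norm(pool_embs[candidates] - emb, axis=1)
        chosen.append(int(candidates[np.argmin(dists)]))
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool_embs = rng.normal(size=(100, 16)).astype(np.float32)
    pool_cats = rng.integers(0, 5, size=100)
    dummy_model = lambda scene, picked: (int(rng.integers(0, 5)), rng.normal(size=16))
    print(generate_set(dummy_model, scene_emb=None, pool_embs=pool_embs,
                       pool_cats=pool_cats, L=3))
```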
### Compatibility Learning Framework
**Visual Similarity Learning** To relieve the complexity of learning both the visual complementarity and similarity directly from the pixel domain, we propose to separate them into two stages. In the first stage, our model focuses on visual similarity learning. As shown in Figure 4, we apply a CNN-based (ResNet50) visual similarity model [10] with a normalized softmax loss [10] and a soft-margin triplet loss [11] (Hermans, Beyer, and Leibe 2017) (refer to the Supplementary for more details). With this model, we project the scene image \(\mathcal{I}\), the objects in the image \(\mathcal{O}\), and the item images in the retrieval pool \(\mathcal{X}\) onto this embedding:
\[\{\mathbf{I},\mathbf{O},\mathbf{X}\}=g(\{\mathcal{I},\mathcal{O},\mathcal{X} \}),\mathcal{I}\in\mathbb{R}^{3},\mathcal{O}\in\mathbb{R}^{3},\mathcal{X}\in \mathbb{R}^{3} \tag{4}\]
where \(g\) is our visual similarity model. This projection helps our second stage model to converge faster, similar in spirit to sequential optimization. We also show empirically (refer
to Sec. Similarity Learning Results for details) that the visual similarity embedding is best suited for learning cross-domain visual compatibility.
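A minimal sketch of what such a first-stage projection \(g(\cdot)\) can look like is given below: a ResNet-50 backbone followed by a linear projection and L2 normalisation, so that scene crops and catalog images share one cross-domain similarity space. The embedding dimension and the projection head are assumptions made for illustration; the actual model and losses follow [10] and the Supplementary.

```python
import torch
import torch.nn.functional as F
from torchvision import models

class SimilarityEmbedder(torch.nn.Module):
    """Sketch of the visual-similarity projection g(.) used as the first stage."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = torch.nn.Identity()          # keep the 2048-d pooled feature
        self.backbone = backbone
        self.proj = torch.nn.Linear(2048, dim)

    def forward(self, images):                     # images: (B, 3, H, W)
        feats = self.backbone(images)
        return F.normalize(self.proj(feats), dim=-1)

if __name__ == "__main__":
    g = SimilarityEmbedder()
    print(g(torch.randn(2, 3, 224, 224)).shape)    # torch.Size([2, 128])
```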
**Complementarity Reasoning with Flexible Bidirectional Transformer** At the second stage, we propose a new Flexible Bidirectional Transformer (FBT) (see Figure 5 for _conditional cross-domain unordered set generation_). We choose Transformer model (Vaswani et al., 2017) as the core architecture to learn the inter-object compatibility. The vanilla Transformer model (Vaswani et al., 2017), as it is originally proposed for modeling ordered sequence structured data, such as languages and images, is insufficient for our task.
We introduce: (1) random shuffling together with random-length sequence masking for set generation; (2) a category prediction arm to better model the category distribution for a set of objects; and (3) a visual embedding prediction arm for visual compatibility modeling. During inference, our FBT model generates an unordered set auto-regressively (Eq. 2 and shown in the green part of Figure 3). Inspired by the CLS token proposed in the Vision Transformer (ViT) (Dosovitskiy et al., 2020), we also use a trainable variable, denoted as \(\mathbf{q}\), to extract inter-token relations,
\[\mathbf{q}^{\prime} =e(\mathbf{EI},\Phi(\mathbf{EO});\mathbf{Eq}) \tag{5}\] \[=\text{MLP}(\text{MSA}(\mathbf{EI},\Phi(\mathbf{EO}),\mathbf{E} \mathbf{e},\mathbf{Eq}))\] \[\mathbf{EO} =[\mathbf{E}\mathbf{o}_{1},\mathbf{E}\mathbf{o}_{2},\dots, \mathbf{E}\mathbf{o}_{M},\text{MASK}],\]
where \(\mathbf{q}^{\prime}\) denotes the corresponding output of the trainable input token \(\mathbf{q}\), \(\mathbf{e}\) is the end token, \(\Phi\) is the masking operation, \(e()\) represents the Transformer encoder with MSAs (Multi-headed Self-Attention layers) and MLPs (Multi-Layer Perceptrons), \(\mathbf{E}\) is the linear projection, and \(M\) is the unmasked sequence length. The output \(\mathbf{q}^{\prime}\) is then used for predicting both the category \(\mathbf{c}_{M+1}\) (Eq. 6) and the visual embedding of the next item \(\mathbf{x}_{M+1}\) (Eq. 7).
\[\mathbf{\hat{c}} =\text{MLP}(\mathbf{q}^{\prime}) \tag{6}\] \[\mathbf{\hat{x}_{M+1}} =\text{MLP}[\mathbf{q}^{\prime},\mathbf{\hat{c}}] \tag{7}\]
The output category embedding \(\mathbf{\hat{c}}\) is supervised via the Cross-Entropy loss, and the visual feature embedding \(\mathbf{\hat{x}}\) is supervised using a triplet loss (Yang et al., 2019). To form a triplet, the anchor is the predicted embedding \(\mathbf{\hat{x}_{M+1}}\), with the target item's embedding \(\mathbf{x_{M+1}}\) as the positive and a randomly selected same-category object's embedding as the negative.
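The single prediction step of Eqs. (5)-(7) can be sketched in PyTorch as follows. This is a simplified illustration rather than the authors' implementation: it omits the end token and the random shuffling/masking used during training, feeds the softmaxed category logits to the embedding head as \(\mathbf{\hat{c}}\), and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class FBTStep(nn.Module):
    """Simplified next-item prediction: scene token + partial item set + query token."""
    def __init__(self, dim=128, n_cats=81, heads=4, layers=2):
        super().__init__()
        self.proj = nn.Linear(dim, dim)                       # linear projection E
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.query = nn.Parameter(torch.zeros(1, 1, dim))     # trainable token q
        self.cat_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_cats))
        self.emb_head = nn.Sequential(nn.Linear(dim + n_cats, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, scene_emb, item_embs):
        # scene_emb: (B, dim) similarity embedding of the scene image (start token)
        # item_embs: (B, M, dim) embeddings of the unmasked / already selected items
        B = scene_emb.size(0)
        tokens = torch.cat([scene_emb.unsqueeze(1), item_embs,
                            self.query.expand(B, -1, -1)], dim=1)
        out = self.encoder(self.proj(tokens))
        q_out = out[:, -1]                                    # output at the query token
        cat_logits = self.cat_head(q_out)                     # Eq. (6)
        next_emb = self.emb_head(torch.cat([q_out, cat_logits.softmax(-1)], dim=-1))  # Eq. (7)
        return cat_logits, next_emb

if __name__ == "__main__":
    model = FBTStep()
    cats, emb = model(torch.randn(2, 128), torch.randn(2, 4, 128))
    print(cats.shape, emb.shape)       # torch.Size([2, 81]) torch.Size([2, 128])
```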
Figure 4: **VSIM: Visual Similarity Model.**
Figure 5: **FBT: Flexible Bidirectional Transformer.** We randomly sample \(M\in[0,N]\) items from the total item number \(N\) of items (in a scene) as input set, and the \((M+1)_{th}\) item not in the input set as output target. We put the scene embedding at the beginning of input set, and take the scene embedding as the start token \(EI\). We set a zero vector as the end token \(E_{e}\).
Figure 3: **ICAR Model Overview.** In similarity learning, we apply a CNN-based model (Jun et al., 2019) to learn visual similarity features across the two domains. The learned features are used both for complementarity reasoning in the complementarity-learning stage and for cross-domain retrieval. With the learned features, in the complementarity-learning stage we propose a Flexible Bidirectional Transformer (FBT) model to learn multi-object visual compatibility.
One challenge in feature learning is embedding-space collapse, where points in the embedding space become too close to one another, lowering the representation capacity. To avoid this issue, we apply a differential entropy regularizer [14] that maximizes the distance between each point and its closest neighbor in the embedding space. The regularizer is defined as follows:
\[\text{Reg}=-\frac{1}{N}\sum_{i=1}^{N}\log\left(\min_{j\neq i}D(z_{i},z_{j})\right), \tag{8}\]
where \(D(z_{i},z_{j})\) is the L2 distance between samples \(i\) and \(j\) divided by 4, so that it lies between 0 and 1, and \(z\) denotes the input feature embeddings.
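A minimal sketch of the regularizer in Eq. 8 is given below, assuming the scaled L2 distance is used directly as \(D\); whether the distance or its square is divided by 4 is not fully specified in the text.

```python
import torch

def entropy_regularizer(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Differential-entropy style regularizer of Eq. 8 for a batch of embeddings.

    z: (N, D) feature embeddings. The L2 distance is scaled by 1/4, following
    the paper's convention, before taking the nearest-neighbour minimum.
    """
    n = z.shape[0]
    d = torch.cdist(z, z, p=2) / 4.0                      # pairwise scaled L2 distances
    mask = torch.eye(n, dtype=torch.bool, device=z.device)
    d = d.masked_fill(mask, float("inf"))                 # exclude the i == j pairs
    nearest = d.min(dim=1).values                         # distance to closest neighbour
    return -torch.log(nearest + eps).mean()

# Toy usage on a batch of 256 unit-normalised 128-d embeddings.
z = torch.nn.functional.normalize(torch.randn(256, 128), dim=1)
print(entropy_regularizer(z))
```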
## 4 Experiments
### Setup
**Benchmark Datasets:** In the following experiments, we evaluate our proposed ICAR using four datasets. DeepRooms [1] is a large-scale (1.4 million), high-quality, human-annotated object detection dataset covering a total of 81 fine-grained furniture and home-product categories with 210K room-scene images. STL-Home [13] includes 24,022 interior home design images and 41,306 home decor items, which can generate 93,274 scene-product pairs. STL-Fashion [13] contains 72,189 fashion-product pairs from its 47,739 fashion images and 38,111 product images. Exact Street2Shop [1] provides 10,608 fashion-product pairs from its 10,482 fashion images and 5,238 product images, with bounding boxes of the products in the scene images.
**Implementation:** All the models in our experiments are trained using AdamW [12] with a cosine learning-rate schedule decaying from an initial learning rate of \(2\times 10^{-4}\) to 0. Models are trained for 500 epochs with a batch size of 256. We use 1 negative sample in the triplet loss, and weights of 1.0, 1.0, and 0.05 for the cross-entropy, triplet, and regularizer losses, respectively.
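A minimal sketch of this optimisation setup is shown below; the model and the three per-batch loss terms are placeholders standing in for the full ICAR model and the cross-entropy (Eq. 6), triplet (Eq. 7), and regularizer (Eq. 8) losses.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(128, 128)          # placeholder for the full ICAR model
optimizer = AdamW(model.parameters(), lr=2e-4)
scheduler = CosineAnnealingLR(optimizer, T_max=500, eta_min=0.0)  # 500 epochs -> 0

w_ce, w_triplet, w_reg = 1.0, 1.0, 0.05    # reported loss weights

for epoch in range(500):
    # One batch of size 256 would be processed here; the loss terms below are
    # placeholders for the actual per-batch losses.
    ce_loss = torch.tensor(0.0, requires_grad=True)
    triplet_loss = torch.tensor(0.0, requires_grad=True)
    reg_loss = torch.tensor(0.0, requires_grad=True)
    loss = w_ce * ce_loss + w_triplet * triplet_loss + w_reg * reg_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```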
### Evaluation Metrics
To evaluate our proposed model's performance on scene-aware cross-domain CIR, we benchmark our method using the previously proposed Fill-In-The-Blank (FITB) metric [15] and our newly proposed SFID metric. FITB accuracy is measured by counting the percentage of times the model correctly picks the ground-truth object from the candidate pool. Following [13], we set the number of candidates to 2 for STL and Street2Shop and 3 for DeepRooms. For a fair comparison, we apply the same method for setting up the FITB candidate pool across all the experiments. While FITB estimates the model's ability to retrieve the best object, it fails to capture the compatibility among a set of items. To compensate, we propose a new distribution-distance-based metric: **Style FID (SFID)**. It is difficult to define visual style directly, as it covers various aspects including color, texture, shape, and so on. Instead, we use the _visual similarity distribution_ between the generated set and the designed sets of items as the measurement of stylistic compatibility. Similar to FID [10], we apply the Frechet distance \(\mathcal{F}\) to measure the distance between distributions. Instead of directly estimating the pixel-value distribution, we apply a feature extractor that projects the pixel values onto an embedding, so that the _computed distribution_ focuses on style-related features, including colors, edges, textures, patterns, and shapes. We define
\[\text{SFID Score}=\mathcal{F}(f(\mathcal{X}),f(\mathcal{Y})), \tag{9}\]
\[\mathcal{F}(\mathbf{X},\mathbf{Y})=\|\mu_{X}-\mu_{Y}\|^{2}+\mathrm{tr}\left(\sigma_{X}+\sigma_{Y}-2\sqrt{\sigma_{X}\sigma_{Y}}\right),\]
where \(\mathcal{X}\) is a generated set of objects, \(\mathcal{Y}\) is a set of well-designed (ground-truth) objects, the function \(f\) is a feature extractor, and \(\mu\) and \(\sigma\) are the mean and covariance of the extracted features. Please refer to the supplementary material for more details about SFID.
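The SFID computation follows the standard Frechet-distance recipe used for FID. A sketch is given below; random features stand in for the style extractor \(f\), whose exact form and set construction are described in the supplementary material.

```python
import numpy as np
from scipy import linalg

def sfid(feat_x: np.ndarray, feat_y: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets (Eq. 9).

    feat_x: (Nx, D) features f(X) of generated sets of items.
    feat_y: (Ny, D) features f(Y) of ground-truth designed sets.
    """
    mu_x, mu_y = feat_x.mean(axis=0), feat_y.mean(axis=0)
    cov_x = np.cov(feat_x, rowvar=False)
    cov_y = np.cov(feat_y, rowvar=False)
    covmean = linalg.sqrtm(cov_x @ cov_y)
    if np.iscomplexobj(covmean):           # drop numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((mu_x - mu_y) ** 2)
                 + np.trace(cov_x + cov_y - 2.0 * covmean))

# Toy usage with random 64-d features standing in for the style extractor f.
rng = np.random.default_rng(0)
print(sfid(rng.normal(size=(200, 64)), rng.normal(size=(200, 64))))
```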
### Compatibility Learning Results
**Quantitative Results** Here we present quantitative evaluation results on four benchmark datasets of different scales. We first evaluate our algorithms on the largest dataset, DeepRooms [1]. As shown in Tables 1 and 3, ICAR improves performance over the state-of-the-art by **9.5%** in FITB and by 11.2 points (about **23.3%**) in SFID score. To further test the effectiveness of our category embedding, we compare the FITB results when the category is or is not specified at inference time: if the category is not specified, the FITB score drops from \(87.1\%\) to \(83.9\%\).
In the other three datasets, the human annotators also label products that have a similar style to the observed product and are compatible with the scene. This suggests that the scene images and the products may not have a one-to-one correspondence, thereby increasing the difficulty of style matching. As shown in Table 2 and Table 3, ICAR outperforms the SOTA by 5.3% on STL-Fashion and 9.6% on STL-Home in the FITB metric, and by 3.4 points (22.3%) on STL-Fashion and 2.9 points (31.8%) on STL-Home in the SFID metric.
Our random-length masking approach outperforms fixed-length masking for unordered set generation, with or without the target category given.
**Visual Compatibility Embedding Validation** To validate our model's ability to learn visual style implicitly, we perform a t-SNE analysis on the learned embeddings of scene images sampled from the STL-Home and STL-Fashion datasets. In Figure 7, we show the clustering results on the left and scene images from the clusters in columns 2-5. The cluster labels are computed using k-means (\(K=6\)). We find some typical interior design styles, such as the industrial, rustic, and contemporary styles, in the clusters. More interestingly, we also find that our model pays attention to color: for example, in home scene-image cluster C, our model learns to extract a Morandi-type color scheme. Given that our model is trained in a self-supervised way, we observe some mixing of styles within some of the clusters.
**Human Perception Validation** We conduct user studies by asking interior design experts to rate (Good: items are compatible; Neutral: one or two items are incompatible) 100 generated sets of furniture item images from random sampling, OutfitTransformer (Sarkar et al., 2022), our method (ICAR), and the ground truth for different datasets. As shown in Figure 8, ICAR outperforms all other methods, and these results are consistent with our SFID scores (Table 3). We further compute the Pearson correlation coefficient to measure the association between SFID and the human rating score (normalized as ground-truth score / method score). We obtain an average Pearson correlation of 0.7 (on a scale of [-1, 1]), demonstrating a strong positive association between our SFID score and human perception.
### Similarity Learning Results
Here we validate the effectiveness of focusing on visual similarity learning in the first stage. We compare our method against four different types of learning targets, i.e., image classification, image reconstruction, image generation, and image representation. For each type of learning target, we choose a state-of-the-art method (shown in the first column of Table 5), except for image representation, for which we train the model in a contrastive-learning manner. All the models in this experiment are trained on the DeepRooms (Gadde et al., 2021) dataset. As shown in Table 5, our model performs best when the first stage focuses on visual similarity learning.
## 5 Conclusion, Limitation, and Future Work
In this paper, we introduce a _compatibility learning framework_ based on _visual similarity_ and _complementarity_, using a novel category-aware "Flexible Bidirectional Transformer" (FBT), to effectively retrieve a set of stylistically compatible items across domains given a scene-image query. This learning framework is also generalizable and can be extended to other types of conditional cross-domain CIR tasks.
While our results show a promising direction, there is more to be explored. First, other information, such as text descriptions and video content, could be used as compatibility signals. Second, the metric used to measure style can be broadened to encode regional preferences and cultural influences, and the interpretation of styles can be expanded to include more global and diverse perspectives.
Figure 8: **Human Ratings on Different Datasets. Our SFID score correlates better than SOTA with human judgement.**
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & FITB \(\uparrow\) & Learning Target Type \\ \hline ICAR - VQGAN & \(73.1\) & Reconstruction \\ ICAR - Swin & \(80.5\) & Classification \\ ICAR - BEiT & \(73.3\) & Generation \\ ICAR - Contrastive Learning & \(75.0\) & Image Representation \\ ICAR - Visual Similarity & \(87.1\) & Retrieval \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Similarity Learning: visual similarity learning is the most suitable for scene-based CIR. VQGAN (Esser et al., 2021), Swin (Liu et al., 2022), BEiT (Bao et al., 2021)**
Figure 7: **Learned Scene Image Embedding Clustering Results.** To validate the style implicitly learned by our network, the first column shows the t-SNE clustering of 2k randomly sampled STL-Home and STL-Fashion test-split scene images, and columns 2-5 show scene images from the clusters.
|
2301.05228
|
FLARES IX: The Physical Mechanisms Driving Compact Galaxy Formation and
Evolution
|
In the FLARES (First Light And Reionisation Epoch Simulations) suite of
hydrodynamical simulations, we find the high redshift ($z>5$) intrinsic
size-luminosity relation is, surprisingly, negatively sloped. However, after
including the effects of dust attenuation we find a positively sloped UV
observed size-luminosity relation in good agreement with other simulated and
observational studies. In this work, we extend this analysis to probe the
underlying physical mechanisms driving the formation and evolution of the
compact galaxies driving the negative size-mass/size-luminosity relation. We
find the majority of compact galaxies ($R_{1/2, \star}< 1 \mathrm{pkpc}$),
which drive the negative slope of the size-mass relation, have transitioned
from extended to compact sizes via efficient centralised cooling, resulting in
high specific star formation rates in their cores. These compact stellar
systems are enshrouded by non-star forming gas distributions as much as
$100\times$ larger than their stellar counterparts. By comparing with galaxies
from the EAGLE simulation suite, we find that these extended gas distributions
`turn on' and begin to form stars between $z=5$ and $z=0$ leading to increasing
sizes, and thus the evolution of the size-mass relation from a negative to a
positive slope. This explicitly demonstrates the process of inside-out galaxy
formation in which compact bulges form earlier than the surrounding discs.
|
William J. Roper, Christopher C. Lovell, Aswin P. Vijayan, Dimitrios Irodotou, Jussi K. Kuusisto, Jasleen Matharu, Louise T. C. Seeyave, Peter A. Thomas, Stephen M. Wilkins
|
2023-01-12T18:59:59Z
|
http://arxiv.org/abs/2301.05228v2
|
# FLARES IX: The Physical Mechanisms Driving Compact Galaxy Formation and Evolution
###### Abstract
In the Flares (First Light And Reionisation Epoch Simulations) suite of hydrodynamical simulations, we find the high redshift (\(z>5\)) intrinsic size-luminosity relation is, surprisingly, negatively sloped. However, after including the effects of dust attenuation we find a positively sloped UV observed size-luminosity relation in good agreement with other simulated and observational studies. In this work, we extend this analysis to probe the underlying physical mechanisms driving the formation and evolution of the compact galaxies driving the negative size-mass/size-luminosity relation. We find the majority of compact galaxies (\(R_{1/2,\star}<1\)pkpc), which drive the negative slope of the size-mass relation, have transitioned from extended to compact sizes via efficient centralised cooling, resulting in high specific star formation rates in their cores. These compact stellar systems are enshrouded by non-star forming gas distributions as much as \(100\times\) larger than their stellar counterparts. By comparing with galaxies from the Eagle simulation suite, we find that these extended gas distributions 'turn on' and begin to form stars between \(z=5\) and \(z=0\) leading to increasing sizes, and thus the evolution of the size-mass relation from a negative to a positive slope. This explicitly demonstrates the process of inside-out galaxy formation in which compact bulges form earlier than the surrounding discs.
keywords: galaxies: high-redshift - galaxies: formation - galaxies: evolution - galaxies: star formation
## 1 Introduction
Galaxy sizes are the macroscopic culmination of a myriad of internal and external multi-scale physical mechanisms, such as galaxy mergers, instabilities, gas accretion, gas transport, star formation and feedback processes (Conselice, 2014). Many of these processes take place below the resolution of cosmological hydrodynamic simulations that have large enough periodic volumes to yield statistically significant galaxy samples at high redshift. Models describing these small-scale physical mechanisms are so-called'sub-grid' models (Somerville and Dave, 2015). Galaxy sizes are a powerful diagnostic of the performance of a sub-grid model since, unlike stellar masses or luminosities, they are not only dependent on how much stellar mass is formed but also where this mass is distributed within a galaxy, and thus the properties of the local environment in the Interstellar Medium (ISM).
The intrinsic size-luminosity relation and the stellar size-mass relation both describe the underlying distribution of stellar mass in galaxies. At low redshift (\(z<2\)), simulations produce a positive relation between increasing size and increasing mass. Furlong et al. (2017) measured the size-mass relation in Eagle finding a positive relation which flattens at \(z=2\), and an increase in size with decreasing redshift over the range \(z=0-2\). Both findings are in good agreement with the observations referenced therein. Furlong et al. (2017) found that passive galaxies at \(z<2\) evolved in size by migration of stellar particles, rather than by star formation or merging mechanisms.
Observations have shown a similar redshift evolution in the size of galaxies from the Epoch of Reionisation to the present day. In the low redshift Universe (\(z<3\)), star-forming galaxies typically have sizes of the order \(1-30\) pkpc (Zhang and Yang, 2019; Kawinwanichakij et al., 2021), with a positive size-luminosity relation (van der Wel et al., 2014; Suess et al., 2019; Kawinwanichakij et al., 2021). However, van der Wel et al. (2014) found a substantial population of compact and massive (\(R<2\) pkpc, \(M_{\star}>10^{11}\)M\({}_{\odot}\)) galaxies at \(z=1.5-3\).
At \(z>5\) a number of Hubble Space Telescope (HST) studies have found bright star-forming galaxies with compact half-light radii of 0.5-1.0 pkpc (Oesch et al., 2010; Grazian et al., 2012; Mosleh
et al., 2012; Ono et al., 2013; Huang et al., 2013; Holwerda et al., 2015; Kawamata et al., 2015; Shibuya et al., 2015; Kawamata et al., 2018; Holwerda et al., 2020). The aforementioned massive compact galaxies found in van der Wel et al. (2014) could be descendants of these early compact systems that are yet to undergo a process driving size increase, such as stellar migration or inside-out star formation. These HST studies produced size-luminosity relations with positive slopes, but there is appreciable scatter in the reported slopes. The subset of these studies that include lensed sources (e.g. Kawamata et al., 2018) produce steeper slopes and resolve compact galaxies at lower luminosities than accessible to non-lensing surveys, which yield flatter size-luminosity relations.
In Roper et al. (2022) we found a negative intrinsic stellar size-luminosity relation at \(z>5\) in the First Light And Reionisation Epoch Simulations (Flares, Lovell et al., 2021; Vijayan et al., 2021), a suite of 40 cosmological hydrodynamical zoom simulations using the Evolution and Assembly of Galaxies and their Environments (Eagle) model (Schaye et al., 2015; Crain et al., 2015). However, after including the effects of dust attenuation, the observed size-luminosity relation exhibited a positive slope, in good agreement with current HST observations (Oesch et al., 2010; Grazian et al., 2012; Mosleh et al., 2012; Ono et al., 2013; Huang et al., 2013; Holwerda et al., 2015; Kawamata et al., 2015; Shibuya et al., 2015; Kawamata et al., 2018; Holwerda et al., 2020). This increase in slope and reversal of the trend between size and luminosity was found to be caused by the concentration of compact dust distributions obscuring the brightest regions of compact galaxies. We also probed the size-luminosity relation as a function of wavelength, finding a flattening of the relation (and indeed a shift to a negative trend) with reddening wavelength, reflecting the underlying size-mass relation. A negative high redshift intrinsic UV size-luminosity/size-mass relation is not unique to the Eagle model, with a negative intrinsic stellar size-luminosity relation present in the BlueTides simulation (Feng et al., 2016) at \(z\geq 7\) (Marshall et al., 2021), and the Illustris-TNG simulations (Pillepich et al., 2018) at \(z=5\) (Popping et al., 2021), while a constant or negative stellar size-mass relation, dependent on the measurement method, was found in the Simba simulation (Dave et al., 2019) at \(z=6\) (Wu et al., 2020).
Explicit confirmation or contradiction of a negative high redshift size-mass relation by observations remains an open question. With the infra-red capabilities of JWST we should soon be able to probe the less obscured rest-frame optical emission of galaxies at high redshift. Roper et al. (2022) showed it is possible to probe a constant observed size-luminosity relation at \(z<5\) using JWST's F444W filter on NIRCam, while MIRI can probe the negative regime of the observed size-luminosity relation. Using this observational power we will soon ascertain if the negative size-mass relation in simulations is representative of the true Universe.
The evolution from the compact high redshift Universe to the extended low redshift Universe has been studied extensively in both simulations and observations. Star-forming galaxies at \(0.5\lesssim z\lesssim 2.2\) have been shown to grow predominantly inside-out via star formation, as opposed to growth via merger-driven accretion. This trend is particularly evident at the massive end (\(\log(M_{\star}/\mathrm{M}_{\odot})\gtrsim 10\)), with evidence that they may be concurrently quenching from the inside-out and building bulges (Tacchella et al., 2015, 2018; Nelson et al., 2016, 2019; Wilman et al., 2020; Matharu et al., 2022). At \(z\sim 0\), the picture is less clear, with some evidence of inside-out growth (e.g. Munoz-Mateos et al., 2007), but also a tendency for star-forming regions and stellar discs to be found to have the same size (James et al., 2009; Fossati et al., 2013). Upcoming JWST data should be able to complete this picture at least out to \(z\sim 5\).
Given the surprising high redshift size-mass/luminosity relation results consistently found across various simulations at high redshift, it is imperative we understand the mechanisms that yield these unexpected trends. Regardless of whether future observations confirm or rule out this behaviour, a thorough investigation is necessary to elucidate the processes taking place. Such an investigation will aid the interpretation of observations and form the starting point for modifications to the theoretical models should this behaviour be unique to simulations. In this work, we address this by probing the physical mechanisms driving the formation and evolution of compact galaxies, utilising the environmental coverage of Flares to yield a representative description of size evolution from the high redshift Universe (\(z>5\)) to the present day.
This article is structured as follows: In Section 2 we detail the Flares simulations, in Section 3 we discuss stellar and gas half-mass radii and their size-mass relations, in Section 4 we investigate the formation of compact stellar systems, and in Section 5 we explore the physical mechanisms driving size evolution from the Epoch of Reionisation to the present day. Finally, in Section 6 we present our conclusions. Throughout this work, we assume a Planck year 1 cosmology (\(\Omega_{m}=0.307\), \(\Omega_{\Lambda}=0.693\), \(h=0.6777\), Planck Collaboration et al. (2014)).
## 2 First Light and Reionisation Epoch Simulations (Flares)
Flares is a suite of 40 cosmological hydrodynamical zoom simulations focusing on the Epoch of Reionisation (Lovell et al., 2021; Vijayan et al., 2021). Each resimulation is a spherical region with radius 14 cMpc/h, selected from a (3.2 Gpc)\({}^{3}\) "parent" dark matter only (DMO) simulation (Barnes et al., 2017). The size of the parent simulation enables a region selection methodology capturing the rarest, most over-dense regions in the Universe where it is thought the most massive galaxies form (Chiang et al., 2013; Lovell et al., 2018). We select the 40 regions at \(z=4.67\), at which point the most extreme overdensities are only mildly non-linear, thus preserving the over-density hierarchy across the full resimulation redshift range. The resimulations span a wide range of overdensities (\(\delta=-0.479\to 0.970\); see Table A1 of Lovell et al., 2021) with a bias towards extreme overdensities to guarantee a statistically significant sample of galaxies accessible to the JWST and other upcoming next generation telescopes such as the Euclid Space Telescope (Euclid Collaboration et al., 2022), the Nancy Grace Roman Space Telescope (Wang et al., 2022), and the Extremely Large Telescope (Ramsay et al., 2021). We store outputs (snapshots) at integer redshifts from \(z=15\) to \(z=5\), inclusive.
Flares employs the AGNdT9 variant of the Eagle sub-grid model (Schaye et al., 2015; Crain et al., 2015). We choose the AGNdT9 variant since it produces similar mass functions to the fiducial model while better reproducing the hot gas of groups and clusters (Barnes et al., 2017), environments which are more prevalent in the extreme overdensities focused on in Flares. We adopt the fiducial resolution of the Eagle model with a dark matter and an initial gas particle mass of \(m_{\rm dm}=9.7\times 10^{6}\) M\({}_{\odot}\) and \(m_{\rm g}=1.8\times 10^{6}\) M\({}_{\odot}\) respectively, and a gravitational softening length of \(2.66\) ckpc at \(z\geq 2.8\). We present a discussion of the features of the Eagle model important to this work in Section 2.1.
The Eagle model was calibrated using the \(z=0\) galaxy mass function, the mass-size relation for discs, and the gas mass-halo mass relation, but is in good agreement with a number of low redshift observables not used for calibration (e.g. Furlong et al., 2015; Trayford
et al., 2015; Lagos et al., 2015). Despite being calibrated at low redshift the Eagle model performs exceptionally well at high redshift. Indeed, previous Flares papers have shown a good agreement with: the galaxy stellar mass function (Lovell et al., 2021), the observed UV luminosity function at \(z\geq 5\)(Vijayan et al., 2021, 2022), HST constraints on galaxy sizes at \(z\geq 5\)(Roper et al., 2022), and the evolution of galaxy colours with redshift (Wilkins et al., 2022). In addition, we have also presented galaxy populations at the redshift 'frontier' (\(z>10\); Wilkins et al., 2022) and the behaviour of star formation and metal enrichment histories during the Epoch of Reionisation (Wilkins et al., 2022).
Since the 40 resimulations are both a sub-sample of the environments in the parent DMO simulation and biased to extreme overdensities we apply a statistical weighting scheme to reproduce the true distribution of environments in the universe. We achieve this by applying an overdensity based weighting to each region and then producing composite distribution functions. For a detailed description of this weighting scheme please refer to the introductory Flares paper, Lovell et al. (2021).
### The Eagle Model
The Eagle model used in Flares is described in detail in Schaye et al. (2015) and Crain et al. (2015); here we give a brief explanation of the elements of the model that are pertinent to this work and refer the reader to the original works for more detail.
#### 2.1.1 Radiative cooling
Radiative cooling and photoheating are implemented on an element-by-element basis following Wiersma et al. (2009) via a look up table. This includes all 11 elements found to be important for radiative cooling and photoheating: H, He, C, N, O, Ne, Mg, Si, S, Ca, and Fe. Wiersma et al. (2009) tabulated the rates as a function of density, temperature and redshift assuming the gas to be in ionisation equilibrium and exposed to the Cosmic Microwave Background and a UV/X-ray background from galaxies and quasars (Haardt & Madau, 2001). These look up tables were produced using cloudy version 07.02 (Ferland et al., 1998).
This radiative cooling implementation comes with some caveats that are important to keep in mind in the context of this work. The assumption of ionisation equilibrium and the absence of ionising radiation from local sources may cause overestimated cooling rates in rapidly cooling gas and in gas that has recently been irradiated by an AGN (Oppenheimer & Schaye, 2013). The model also ignores self-shielding, which could yield underestimated cooling rates in dense gas. Given that much of the star formation during the Epoch of Reionisation happens in pristine dense gas, the underestimate due to self-shielding somewhat counteracts the overestimate introduced by assuming ionisation equilibrium in these regions.
#### 2.1.2 Star formation
Star formation is implemented following Schaye & Dalla Vecchia (2008), adopting the metallicity dependent density threshold of Schaye (2004). It utilises the observed Kennicutt-Schmidt star formation law (Kennicutt, 1998), reformulated in terms of pressure to avoid a dependence on the equation of state. This modified Kennicutt-Schmidt law is implemented stochastically, assuming a Chabrier initial mass function (IMF; Chabrier, 2003).
Due to resolution limitations, it is necessary to invoke a star formation density threshold, the minimum density at which a stellar particle can be formed, defined as
\[n_{\rm H}^{*}(Z)=10^{-1}\left[\rm cm^{-3}\right]\left(\frac{Z}{0.002}\right)^ {-0.64} \tag{1}\]
where \(n_{H}=10^{-1}\) cm\({}^{-3}\) is the critical volume density and \(Z\) is the gas metallicity. For low metallicities this diverges, so an upper limit of \(n_{\rm H}^{*}=10\) cm\({}^{-3}\) is imposed. Furthermore, a gas density threshold of 57.7 times the cosmic mean is implemented to prevent star formation in low overdensity gas at very high redshift.
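A minimal Python sketch of this threshold, including the imposed cap at \(10\) cm\({}^{-3}\), is given below for illustration.

```python
import numpy as np

def sf_density_threshold(Z, n_cap=10.0):
    """Metallicity-dependent star formation density threshold of Eq. 1.

    Z: gas metallicity (mass fraction). Returns n_H^* in cm^-3, capped at
    10 cm^-3 to avoid the divergence at very low metallicity.
    """
    n_star = 0.1 * (np.asarray(Z, dtype=float) / 0.002) ** (-0.64)
    return np.minimum(n_star, n_cap)

# The threshold is 0.1 cm^-3 at Z = 0.002 and rises towards the cap for
# increasingly pristine gas.
print(sf_density_threshold([0.002, 1e-4, 1e-6]))
```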
#### 2.1.3 Stellar feedback
Stars, particularly those that are massive and short lived, interact with the ISM by injecting energy via stellar winds, radiation and supernovae. In the Eagle model these energetic events, or stellar feedback events, are implemented following the stochastic thermal feedback method proposed by Dalla Vecchia & Schaye (2012). In this implementation, the temperature increment (\(\Delta T\)) of a heated gas particle is specified explicitly and the contribution to a neighbouring gas particle is defined probabilistically.
Stellar feedback happens only once: when a stellar particle reaches an age of 30 Myr, a neighbouring SPH particle can be heated with a probability that depends on the fraction of the total energy from core-collapse supernovae per unit stellar mass that is injected. On average,
\[f_{\rm th}=f_{\rm th,min}+\frac{f_{\rm th,max}-f_{\rm th,min}}{1+\left(\frac{Z}{0.1\,{\rm Z}_{\odot}}\right)^{n_{Z}}\left(\frac{n_{\rm H,birth}}{n_{\rm H,0}}\right)^{-n_{n}}}, \tag{2}\]
where \(f_{\rm th,min}\) and \(f_{\rm th,max}\) are asymptotes which can be chosen to tune stellar feedback in the model. The inclusion of the metallicity term, where \(Z_{\odot}=0.0127\) is the solar metallicity and \(n_{Z}=2/\ln 10\), accounts for the metallicity dependence of thermal losses in the ISM due to metal-line cooling. The inclusion of the density term, where \(n_{\rm H,birth}\) is the stellar birth density (the density inherited by the stellar particle from its parent gas particle), \(n_{n}=n_{Z}=2/\ln 10\), and the density pivot \(n_{\rm H,0}=0.67\) cm\({}^{-3}\), helps mitigate the inefficiency of feedback in highly enriched dense environments, such as compact galaxy cores. For low metallicities and high densities, \(f_{\rm th}\) asymptotes to \(f_{\rm th,max}\) (i.e. stellar feedback is maximal), while at high metallicities and low densities \(f_{\rm th}\) asymptotes to \(f_{\rm th,min}\) (i.e. stellar feedback is minimal). In the fiducial EAGLE models \(f_{\rm th,max}=3\) and \(f_{\rm th,min}=0.3\). Crain et al. (2015) showed that the choice of \(f_{\rm th,max}\) is the more influential of the two, and choosing a value larger than unity better reproduces galaxy stellar mass functions at low stellar masses.
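The feedback fraction of Eq. 2 can be sketched as follows, assuming the fiducial parameter values quoted above (\(f_{\rm th,max}=3\), \(f_{\rm th,min}=0.3\), \(n_{\rm H,0}=0.67\) cm\({}^{-3}\), \(n_{n}=n_{Z}=2/\ln 10\), \(Z_{\odot}=0.0127\)).

```python
import numpy as np

Z_SUN = 0.0127                              # solar metallicity used in the model

def feedback_fraction(Z, n_H_birth, f_min=0.3, f_max=3.0, n_H_0=0.67,
                      n_Z=2.0 / np.log(10.0), n_n=2.0 / np.log(10.0)):
    """Stellar feedback energy fraction f_th of Eq. 2."""
    x = (np.asarray(Z, dtype=float) / (0.1 * Z_SUN)) ** n_Z \
        * (np.asarray(n_H_birth, dtype=float) / n_H_0) ** (-n_n)
    return f_min + (f_max - f_min) / (1.0 + x)

# Low metallicity and high birth density: f_th tends to f_th,max (maximal feedback).
print(feedback_fraction(Z=1e-4, n_H_birth=10.0))
# High metallicity and low birth density: f_th tends to f_th,min.
print(feedback_fraction(Z=0.02, n_H_birth=0.01))
```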
### Structure Finding
We follow the same structure extraction method used in the Eagle project and all previous Flares papers, explained in McAlpine et al. (2016).
* Dark Matter overdensities are identified using a Friends-Of-Friends (FOF) algorithm with a linking length of \(\ell=0.2\bar{x}\), where \(\bar{x}\) is the mean inter-particle separation.
* Baryonic particle species are then assigned to the halo containing their nearest dark matter neighbour.
* The FOF groups of both baryonic and dark matter particles are then refined by the Subfind algorithm (Springel et al., 2001; Dolag et al., 2009) to produce substructures (galaxies).
To refine FOF groups into galaxies, Subfind finds saddle points in the density field of a group. In pathological cases this can lead to undesirable splitting of genuine galaxies with regions of extreme density. These can lead to spurious galaxies which are often mainly comprised of a single particle type. Although these pathological cases make up \(<0.1\%\) of all galaxies with \(M_{\star}>10^{8}\) M\({}_{\odot}\) at \(z=5\), we nonetheless undo the erroneous splitting. To do so, we identify galaxies with no contributions in stellar, gas or dark matter components and recombine them into their nearest parent 'central' substructure from which they were excised, if there is such a substructure within 30 pkpc.
In rare instances, tidal stripping can cause diffuse populations of particles at large radii. These separated populations can have extreme effects on derived galaxy properties. To mitigate the effect of this, we only consider particles within a 30 pkpc aperture centred on the galaxy's centre of potential, in line with previous Eagle and Flares projects. All galaxy properties presented in this work are derived using this aperture unless explicitly stated otherwise.
## 3 Galaxy Half Mass Radii
In this section, we present 3-dimensional size-mass relations using half mass radii (\(R_{1/2}\)) derived from the particle distribution. The half mass radius is the radius of a sphere centred on the centre of potential enclosing half the total mass (of a particular particle species) in a 30 pkpc aperture. Throughout this work, we only present half mass radii for galaxies with at least 100 stellar particles within the 30 pkpc aperture (\(N_{\star}>100\)), unless stated otherwise. This translates to an approximate stellar mass threshold of \(M_{\star}\sim 10^{8}\)M\({}_{\odot}\) and ensures the stellar distribution is well sampled and the derived half mass radii are robust.
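A minimal sketch of this half mass radius measurement, applied to a single particle species within the 30 pkpc aperture, is given below; the array layouts and units are assumptions for illustration.

```python
import numpy as np

def half_mass_radius(pos, mass, centre, aperture=30.0):
    """3D half mass radius: radius of the sphere, centred on the centre of
    potential, enclosing half the mass of one particle species found within
    a 30 pkpc aperture. Positions and the centre are in pkpc."""
    r = np.linalg.norm(pos - centre, axis=1)
    inside = r < aperture
    r, m = r[inside], mass[inside]
    order = np.argsort(r)
    cum = np.cumsum(m[order])
    return r[order][np.searchsorted(cum, 0.5 * cum[-1])]

# Toy usage: 1000 equal-mass particles scattered around the origin.
rng = np.random.default_rng(1)
print(half_mass_radius(rng.normal(scale=2.0, size=(1000, 3)),
                       np.ones(1000), np.zeros(3)))
```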
### Stellar Half Mass Radii
Figure 1 shows the stellar size-mass relation at \(z=5\) from Flares (top panel) and \(z=0\) from Eagle (bottom panel). The Flares galaxies in the upper panel are statistically weighted using the overdensity of their resimulation region to ensure the resimulated sample is representative of the population. For a detailed description of this weighting, we direct the reader to Lovell et al. (2021). At low redshift, the relation has a positive slope, as demonstrated in Furlong et al. (2017). However, in Flares we find a negatively sloped and bi-modal stellar size-mass relation. Although not shown here, the bi-modality and negative trend are evident for all snapshots at \(z\geq 5\)(Roper et al., 2022) and agree with the findings of other works (Wu et al., 2020; Marshall et al., 2021; Popping et al., 2021). For galaxies in Flares at \(z=5\) to become comparable in size to those found in Eagle at \(z=0\) they will have to become at least ten, or for the most massive galaxies at least 50, times larger. This bi-modality implies two scenarios: there are two distinct populations of galaxies with separate evolution tracks, or galaxies start as diffuse low mass systems which later evolve to become compact high mass systems. We investigate which scenario is applicable in Section 4.1.
### Gas Half Mass Radii
If high redshift stellar distributions are compact, then one might expect that the gas from which they form should also be compact. Figure 2 shows the gas size-stellar mass relation at \(z=5\) for all galaxies in Flares with \(N_{\star}>100\). In contrast to the stellar size-mass relation, we find the relation between gas size and stellar mass is constant with a large scatter to small gas radii at fixed stellar mass. For galaxies with \(M_{\star}/\mathrm{M}_{\odot}>10^{9}\) there again appears to be a bi-modality in gas size. Note, however, that this bi-modality is different to that present in Figure 1 where galaxies with \(M_{\star}/\mathrm{M}_{\odot}>10^{10}\) populate only the compact population.
To explicitly compare stellar distributions to gas distributions we compare in the upper panel of Figure 3 the gas half mass radii, including all gas present in the galaxy, to stellar half mass radii at \(z=5\) for all galaxies in Flares with \(N_{\star}>100\). From this, we can see that compact stellar distributions are associated with extended non-star forming gas distributions in most cases. These extended gas distributions can be as much as \(\sim 100\times\) larger than the stellar component. We can now see the bi-modality in Figure 2 is due to massive galaxies which are not associated with extended gas
Figure 1: Upper panel: The stellar size-mass relation at \(z=5\) for all galaxies in Flares with \(N_{\star}>100\). The hexbins are coloured by the weighted number density derived using the Flares weighting scheme. Lower panel: The stellar size-mass relation at \(z=0\) for all galaxies in Eagle-REF with \(N_{\star}>100\). The hexbins are coloured by the number of galaxies in a bin. In both panels, the green curves show the 50th percentile of the distribution.
distributions but instead have compact gas components comparable to their stellar component.
One possible explanation for the difference between the gas distributions of compact galaxies could be that the galaxies with compact gas distributions are newly formed, and are yet to accrete the extended gas distributions of their gas-enshrouded brethren. In the lower panel of Figure 3 we probe this by colouring the hexbins by the mean age of galaxies in each hexbin, where we have defined a galaxy's age as the initial-stellar-mass-weighted average of stellar particle ages. This shows no discernible difference between the age of galaxies with compact or extended gas distributions. We can therefore confidently rule out the accretion hypothesis.
A possible numerical explanation for the split between compact and diffuse gas distributions is that the halo finder has erroneously split galaxies apart from the extended gas distributions surrounding them. To ascertain if this is the case we show stacked images of the gas distribution in Figure 4 split by gas half mass radius (left column: \(R_{1/2,\rm gas}<1\,\rm kpc\), right column: \(R_{1/2,\rm gas}>1\,\rm kpc\)). Each panel has a width of 60 pkpc and a pixel resolution equal to the softening length of the simulation. The top row shows the difference between the compact and extended gas distributions as defined by SUBFIND, whereas the bottom row shows the stacks but includes all stellar particles within 30 pkpc of each galaxy's centre (defined by the centre of potential). If the compact gas population were the result of a misidentification by the halo finder, we would expect to see similarly extended profiles in both panels in the bottom row. However, the compact gas distribution stack is noticeably less extended. So we can conclude this bi-modality is not the result of numerical effects in structure definition.
These compact stellar distributions associated with compact gas distributions are therefore legitimate structures. However, of the 425 galaxies with compact gas distributions, 418 are satellites, leaving only 7 central galaxies. These satellites have likely undergone tidal stripping and are unlikely to persist to low redshift in their current compact gas-poor form without undergoing mergers. It is worth noting that although the majority of compact gas distributions are associated with satellites, it is not true that the majority of satellites have compact gas distributions. On the other hand, the centrals with compact stellar and gas distributions will not be able to accrete gas and continue forming stars in the near future, and could lead to gas-poor passive compact sources in the future. These could be indicative of one possible evolution track that leads to the quiescent red nuggets seen in observations at lower redshifts (e.g. van Dokkum et al., 2008; Damjanov et al., 2009; Taylor et al., 2010).
## 4 Compact galaxy formation
In this section we probe the formation of massive compact galaxies in the Eagle model. We showed in Roper et al. (2022) that the compact galaxies driving the negative slope of the size-mass relation (see Figure 1) produce a positively sloped size-luminosity relation due to the effects of dust. This size-luminosity relation is in good agreement with observations. We now explore the mechanism driving the formation of these massive compact galaxies, and explore why they only exist above a certain stellar mass threshold (\(M_{\star}>10^{9}\rm M_{\odot}\)).
Figure 3: A comparison of stellar half mass radii and gas half mass radii for all galaxies in Flares at \(z=5\) with more than 100 stellar particles. The hexbins in the upper panel show the number density weighted using the Flares weighting scheme. The lower panel shows the mean galaxy age in each hexbin, as defined by the initial stellar mass weighted age. The dashed lines show \(R_{\rm gas}/R_{\star}\) in powers of ten to aid interpretation.
Figure 2: The gas size-stellar mass relation at \(z=5\) for all galaxies in Flares with \(N_{\star}>100\). The hexbins are coloured by the weighted number density derived using the Flares weighting scheme and the solid green line shows the 50th percentile of the distribution.
To probe the size and mass evolution of individual galaxies we employ the MErger Graph Algorithm (MEGA, Roper et al., 2020)1 to construct merger graphs from the SUBFIND halo catalogue. Throughout the following sections we define:
Footnote 1: MEGA has been updated to construct merger graphs for hydrodynamical simulations, in addition to purely \(N\)-body codes. It can also now use halos defined by other algorithms rather than those produced with its own phase space halo finder. Note, however, that merger graphs generated from other input halo catalogues will not benefit from the temporally motivated approach utilised in MEGA.
* A progenitor of a galaxy as any galaxy in the previous snapshot which contributed at least 10 particles (of any type) to the galaxy.
* The main progenitor as that which contributed the most particles (of any type) to the galaxy.
* The main branch as the chain of main progenitors starting with a galaxy and ending with its earliest main progenitor.
* The root as the first galaxy in the main branch at \(z=5\).
For the purposes of graph construction, all SUBFIND halos are included down to a particle count of 20, unlike the sample used to probe morphological quantities, which is limited to those with \(N_{\bullet}>100\). Throughout all subsequent plots, all galaxies considered in snapshot B obey the \(N_{\bullet}>100\) threshold but progenitors are presented without applying any limit. It is worth noting that the cadence of Flares snapshots is sub-optimal for merger graph construction. However, this does not affect the robustness of main branch definitions and only poses a problem if the merger graphs were to be used for semi-analytical modelling or assessing the impact of mergers.
### Evolution across the size-mass relation
To identify the evolution of galaxies causing the bi-modality in the \(z=5\) size-mass relation (Figure 1) we present the evolution across the size-mass relation of galaxies with stellar half mass radii \(<1\) pkpc at \(z=5\) in Figure 5. To do so we plot each individual main branch for these galaxies rooted at \(z=5\), splitting into initial size and final mass bins to aid interpretation. These main branches yield two evolution paths for compact galaxies at \(z=5\).
In the first formation path, all progenitors along the main branch are compact including their earliest low mass progenitor. Although these galaxies remain compact throughout their evolution, they exhibit a positive size evolution. We denote this formation path the compact formation path.
In the second, more dominant formation path, galaxies begin as diffuse low mass galaxies which then decrease in size between adjacent snapshots. We denote this formation path the diffuse formation path. From this, we can conclude that in reality, both the formation scenarios proposed in Section 3 are applicable, but each is applicable in different redshift regimes.
The population with compact formation paths are galaxies forming at the earliest times (\(z>10\)). Gas at these redshifts has undergone minimal enrichment requiring a higher density threshold to form stellar particles (see Section 2.1.2). The higher density threshold leads to star formation delayed until gas has collapsed further, yielding concentrated starbursts and thus initial compact stellar half mass radii.
Once the gas distribution has been sufficiently enriched star formation can proceed at lower densities, giving rise to galaxies with the diffuse formation path. These galaxies enter the catalogue at \(z\leq 10\) as diffuse systems before undergoing a mechanism causing a decrease in size, yielding a compact galaxy. Compact galaxies in the lowest final mass bin (left hand column) undergo this transition between \(z=6\) and \(z=5\). Those in the higher final mass bins become compact at earlier redshifts, with many in the highest mass bin (right hand column) exhibiting a positive size evolution after joining the compact distribution. This relation between final mass and transition redshift shows that not only is this an ongoing process at \(z=5\) but also that the size transition takes place in a particular stellar mass regime. Once compact these galaxies continue to form stellar mass and increase in size as they do so.
In Figure 5 there are a small number of outliers with pathological increases in size and, in some cases, decreases in stellar mass, such as those in the top right panel with half mass radii \(>1\) pkpc. These are examples of structure finding issues discussed in detail in Srisawat et al. (2013). In cases such as these, interacting galaxies have either been temporarily amalgamated, leading to increased sizes and masses, or mass associated with a galaxy has been temporarily misassigned to a neighbouring galaxy.
### Changes in size
The diffuse formation path in Figure 5 contains a transition between the diffuse and compact regimes evident in the size-mass relation shown in Figure 1. This transition is governed by a mechanism taking place at a particular stellar mass. In this section we investigate the mechanism driving this transition from diffuse stellar components to compact stellar components.
Figure 4: Stacked images comparing the gas distributions of galaxies in all Flares regions at \(z=5\) with compact stellar distributions and extended or compact gas distributions. The left hand column shows galaxies with compact gas distributions, where \(R_{1/2,\mathrm{gas}}<1\) pkpc and \(R_{1/2,\mathrm{\bullet}}<1\) pkpc, while the right hand column shows galaxies with extended gas distributions, where \(R_{1/2,\mathrm{gas}}\geq 1\) pkpc and \(R_{1/2,\mathrm{\bullet}}<1\) pkpc. The top row shows the particles identified by SUBFIND to belong to the galaxy while the bottom row shows all particles within 30 pkpc of the galaxy's centre of potential. Each image is 60 pkpc in width, the pixel resolution is the softening length of the simulation and gas particles have not been smoothed over their kernels.
Figure 5: The stellar size-mass relation evolution for galaxies which have compact stellar distributions at \(z=5\) (\(R_{1/2}<1\) pkpc). The galaxies are divided into size and mass bins, where the columns are binned by the stellar mass of the galaxy at \(z=5\), and the rows are binned by the initial stellar half mass radius of a galaxy when it enters the sample. Points are coloured by their redshift.
In Figure 6 we show the specific star formation rate (sSFR) of galaxies at \(z=5\) as a function of the change in stellar size between \(z=6\) and \(z=5\), defined as a ratio between a galaxy's size and their main progenitor's size at \(z=6\). We define the sSFR using all stellar particles formed within 100 Myr in a 30 pkpc aperture. A size ratio \(>1\) suggests a galaxy has increased in size while a size ratio \(<1\) suggests a galaxy has decreased in size.
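A minimal sketch of this sSFR measurement is given below; using initial particle masses for the star formation rate and the aperture stellar mass for the denominator is an assumption about details not spelled out in the text.

```python
import numpy as np

def specific_sfr(initial_masses, ages_myr, radii_pkpc,
                 window_myr=100.0, aperture=30.0):
    """sSFR from stellar particles formed within the last 100 Myr inside a
    30 pkpc aperture, returned in Gyr^-1."""
    in_ap = radii_pkpc < aperture
    young = in_ap & (ages_myr < window_myr)
    sfr = initial_masses[young].sum() / (window_myr * 1e6)   # Msun / yr
    m_star = initial_masses[in_ap].sum()                     # aperture stellar mass
    return 1e9 * sfr / m_star

# Toy usage with ~EAGLE-resolution particle masses.
rng = np.random.default_rng(2)
print(specific_sfr(np.full(5000, 1.8e6), rng.uniform(0, 1000, 5000),
                   rng.uniform(0, 40, 5000)))
```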
We can see that galaxies which were diffuse at \(z=6\) and remain diffuse at \(z=5\) (left hand panel) have no clear trend between their sSFR and change in size. Galaxies which remain compact between \(z=6\) and \(z=5\) (right hand panel) show both decreases (\(R_{1/2}^{5}/R_{1/2}^{6}<1\)) and increases (\(R_{1/2}^{5}/R_{1/2}^{6}>1\)) in size; the majority of the latter have lower sSFRs than the former. In contrast, galaxies which transition from diffuse to compact (central panel) are highly star forming with a subtle negative trend between sSFR and \(R_{1/2}^{5}/R_{1/2}^{6}\). The tail of passive galaxies (sSFR \(<0.1\)) exhibits small increases in size (\(R_{1/2}^{5}/R_{1/2}^{6}\sim 1.2\)). This size increase is likely due to stellar migration, where stellar particles move into larger orbits, as found at low redshift in Furlong et al. (2017). These passive galaxies are explored in detail in Lovell et al. (2022).
All galaxies undergoing the transition between diffuse and compact regimes have high sSFRs, while galaxies not undergoing this transition can exhibit a range of behaviours. From this we can conclude that the mechanism driving this transition is centrally concentrated star formation rather than stellar migration.
We now probe the changes in the gas distribution in Figure 7. This shows the change in stellar half mass radius vs the change in gas half mass radius at \(z=5\). In each panel, we see little correlation between the change in stellar size and the change in gas size. A small number of galaxies' gas distributions shrink by a factor of \(\sim 10\), but these galaxies show no trends in change in stellar size. These shrinking gas distributions are those associated with the compact stellar distributions in Figure 3, explicitly showing the change in gas half mass radius due to tidal stripping of the satellites in that population (as discussed in Section 3.2). There is a subset of diffuse galaxies that remain diffuse (left hand panel) where a decrease in size of the stellar component is accompanied by a decrease in size of the gas component; this is driven by gas cooling, leading to collapsing gas clouds and thus concentrated star formation, which causes a decrease in stellar size (we present more on this in Section 4.4).
The lack of gas half mass radius change in Figure 7 should come as no surprise given the extended nature of gas distributions detailed in Figure 3. The vast majority of the gas distributions are spatially distinct from the regions of compact star formation and are thus unaffected by the energy injected by stellar feedback.
#### 4.2.1 The role of stellar feedback
It was shown in Crain et al. (2015) that stellar feedback in the Eagle model is inefficient at high redshift, leading to high sSFRs in regions of high density. However, the large localised SFRs necessary to transition from the diffuse regime to the compact regime shown in Figure 6 will dump disproportionate amounts of thermal energy into their dense surroundings. Here we probe the effect of stellar feedback during the size transition process and thus further assess the efficiency of stellar feedback in this regime.
In Figure 8 we show the effect of stellar feedback on both the stellar distributions and gas distributions. Here we plot the change in half mass radius as a function of the change in integrated stellar feedback energy between compact galaxies at \(z=5\) and their progenitors at \(z=6\). We define the integrated stellar feedback energy following Davies et al. (2020) (Eq. 4 therein) as
\[E_{\star\mathrm{fb}}=\sum_{i=0}^{i=N_{\star}}1.74\times 10^{49}\,\mathrm{erg}\left(\frac{m_{\star,\mathrm{init},i}}{1\,\mathrm{M}_{\odot}}\right)f_{\mathrm{th},i}, \tag{3}\]
where \(m_{\star,\mathrm{init},i}\) is the initial mass of a stellar particle and \(f_{\mathrm{th},i}\) is the feedback fraction of individual stellar particles given by Equation 2. We obtain the integrated stellar feedback energy by summing over the contribution of all stellar particles in a galaxy within a 30 pkpc aperture. Note that, given this definition, the ratio \(E_{\star\mathrm{fb}}^{5}/E_{\star\mathrm{fb}}^{6}\) can be less than unity if stellar particles are lost between \(z=5\) and \(z=6\). Instances where \(E_{\star\mathrm{fb}}^{5}/E_{\star\mathrm{fb}}^{6}<1\) are either due to galaxy interactions or misidentification by the halo finder.
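Equation 3 amounts to a weighted sum over stellar particles, sketched below; restricting the sum to the 30 pkpc aperture is an assumption consistent with the other galaxy properties used in this work.

```python
import numpy as np

def integrated_feedback_energy(initial_masses, f_th, radii_pkpc, aperture=30.0):
    """Integrated stellar feedback energy of Eq. 3, in erg.

    initial_masses: (N,) initial stellar particle masses in Msun.
    f_th: (N,) per-particle feedback fractions from Eq. 2.
    radii_pkpc: (N,) distances from the centre of potential in pkpc.
    """
    sel = radii_pkpc < aperture
    return np.sum(1.74e49 * initial_masses[sel] * f_th[sel])

# A single 1.8e6 Msun particle with f_th = 3 contributes ~9.4e55 erg.
print(integrated_feedback_energy(np.array([1.8e6]), np.array([3.0]), np.array([1.0])))
```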
The upper row of Figure 8 clearly shows little correlation between
Figure 6: The specific star formation rate of galaxies as a function of the ratio between stellar half mass radii at \(z=5\) and their main progenitors' at \(z=6\). Each panel shows galaxies undergoing different phases of size evolution, left to right: galaxies which were diffuse at \(z=6\) and remain diffuse at \(z=5\); galaxies which were diffuse at \(z=6\) and are compact at \(z=5\); and galaxies which were compact at \(z=6\) and remain compact at \(z=5\). Hexbins are coloured by weighted number density using the Flares region weighting scheme.
the change in integrated stellar feedback and the change in gas half mass radius. This is due to the aforementioned extent of the gas distribution: the effects of stellar feedback remain localised in the core and have no effect on the greater distribution. However, the same is not true for the stellar distribution. In the lower row of Figure 8 we present the change in stellar half mass radius as a function of the change in integrated stellar feedback. In the central panel, containing galaxies undergoing the size transition, we find a clear trend where larger increases in \(E_{\star\mathrm{fb}}\) yield larger decreases in stellar half mass radius. For galaxies remaining compact (right hand panel) we find a shallower trend at larger change in size ratios with significant scatter at fixed feedback energy ratio. Notably, even in extreme cases stellar feedback is still too inefficient to slow down star formation.
The galaxies which remain diffuse (left hand panel) exhibit no trend between change in size and integrated stellar feedback. In this regime not only are fewer stars forming, and therefore fewer stellar feedback events, but the gas is also significantly less dense yielding a lower \(f_{\mathrm{th}}\). It is therefore unsurprising we see no trend in this regime.
### Dynamical effects
Given the mechanism controlling the formation of compact galaxies detailed in the previous section, we now probe the dynamics and attempt to determine why the diffuse to compact transition takes place at \(M_{\bullet}/\mathrm{M}_{\odot}\sim 10^{9}\). To do so, we define the total binding energy of a galaxy as
\[E_{\mathrm{bind}}=G\sum_{i=0}^{i=N_{\bullet}}\sum_{j=i+1}^{j=N_{\bullet}}\frac {m_{i}m_{j}}{\sqrt{r_{ij}^{2}+\varepsilon^{2}}}, \tag{4}\]
where \(G\) is the gravitational constant, \(m_{i/j}\) are the masses of particles \(i\) and \(j\), \(r_{ij}\) is the distance between particles \(i\) and \(j\), and \(\varepsilon\) is the softening of the simulation in physical coordinates. We calculate this quantity including all particle species identified to be part of a galaxy.
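A direct, if slow, sketch of Eq. 4 is shown below; the value of \(G\) and the unit system are illustrative choices, and any consistent set of units works.

```python
import numpy as np

G_KPC = 4.301e-6   # G in pkpc (km/s)^2 / Msun (illustrative unit choice)

def total_binding_energy(pos, mass, softening, G=G_KPC):
    """Softened total binding energy of Eq. 4 over all particles of a galaxy.

    pos: (N, 3) positions in pkpc, mass: (N,) masses in Msun, softening: the
    physical softening length in pkpc. Result in Msun (km/s)^2.
    """
    e_bind = 0.0
    for i in range(len(mass) - 1):
        dr = pos[i + 1:] - pos[i]
        r2 = np.einsum("ij,ij->i", dr, dr)
        e_bind += G * mass[i] * np.sum(mass[i + 1:] / np.sqrt(r2 + softening**2))
    return e_bind

# Toy usage: 500 equal-mass particles in a ~1 pkpc blob, with the 2.66 ckpc
# softening converted to physical units at z = 5.
rng = np.random.default_rng(3)
print(total_binding_energy(rng.normal(scale=1.0, size=(500, 3)),
                           np.full(500, 1.8e6), softening=2.66 / 6.0))
```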
In the upper panel of Figure 9 we present the stellar mass dependence of the total binding energy (Equation 4). There is a clear positive linear relation between total binding energy and stellar mass with a large scatter at low masses which decreases with increasing stellar mass. The binding energy for the massive galaxies at which we see the compact transition (\(M_{\bullet}/\mathrm{M}_{\odot}\sim 10^{9}\)) exhibits a small drop away from the linear trend of the lower mass galaxies. This can be attributed to the effects of stellar feedback in these highly star forming galaxies. Although stellar feedback is too inefficient and localised to strongly affect the morphology of a galaxy it nonetheless affects the gas distribution in the region where gravitational attraction between particles is greatest, and thus where the total binding energy is most sensitive to effects on the gas distribution.
We further investigate the interplay between total binding energy and stellar feedback in the lower panel of Figure 9, which shows the ratio between total binding energy and integrated stellar feedback (detailed in Section 4.2.1) as a function of stellar mass. Here we can explicitly see the drop in total binding energy relative to integrated stellar feedback for the highly star forming compact galaxies with \(M_{\bullet}/M_{\odot}>10^{9}\). Note that these galaxies are nonetheless bound, as the injected thermal energy from stellar feedback is radiated away by the efficient cooling in star forming regions on short enough timescales.
From Figure 9 we can see no obvious features in the overall dynamics of the entire galaxy that enable the efficient localised star formation that drives the compactification of galaxies at \(M_{\bullet}/M_{\odot}\sim 10^{9}\). We can thus conclude that this transition is not a dynamical mechanism affecting both baryonic and dark matter components of a galaxy.
### Cooling radii
To find the cause of the mass threshold at which the size transition takes place we need to probe the behaviour and distribution of star forming gas particles. To do this we employ a 'cooling radius' (\(R_{\mathrm{cool}}\)). We define a 'cooling radius' of a galaxy as the gas half mass radius weighted by gas density. This is effectively a measure of the size of the star forming gas distribution. We use this definition to account for variations in density within star forming regions and take into account gas particles which soon will be star forming.
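One plausible reading of this definition, in which each gas particle's mass is weighted by its local density before the half mass radius is computed, is sketched below; the exact weighting used may differ in detail.

```python
import numpy as np

def cooling_radius(gas_pos, gas_mass, gas_density, centre, aperture=30.0):
    """Density-weighted gas half mass radius ('cooling radius').

    Each gas particle's mass is weighted by its local density before the
    half mass radius is computed, so dense, soon-to-be star-forming gas
    dominates the measurement.
    """
    r = np.linalg.norm(gas_pos - centre, axis=1)
    sel = r < aperture
    r, w = r[sel], (gas_mass * gas_density)[sel]
    order = np.argsort(r)
    cum = np.cumsum(w[order])
    return r[order][np.searchsorted(cum, 0.5 * cum[-1])]
```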
We present the ratio between cooling radii and stellar half mass
Figure 7: Change in stellar half mass radii vs change in gas half mass radii between galaxies at \(z=5\) and their main progenitors at \(z=6\). Each panel shows galaxies undergoing different phases of size evolution, left to right: galaxies which were diffuse at \(z=6\) and remain diffuse at \(z=5\), galaxies which were diffuse at \(z=6\) and are compact at \(z=5\), and galaxies which were compact at \(z=6\) and remain compact at \(z=5\). Hexbins are coloured by weighted number density using the Flares region weighting scheme.
radii in Figure 10. Here we see a number of features of note. Firstly, there is a low mass distribution (\(M_{\star}\lesssim 10^{8.8}\) M\({}_{\odot}\)) with a large scatter centred on a ratio of unity; these are the diffuse clumpy systems prevalent at low stellar masses that are not undergoing a transition in size. There is then a second distribution of galaxies with stellar masses in the range \(10^{8.8}\lesssim M_{\star}/M_{\odot}\lesssim 10^{9.8}\) with a negative relation between the ratio of cooling radius and stellar half mass radius; these are galaxies where efficient localised cooling has begun, allowing gas to collapse to high densities below the size of the stellar distribution. Once collapsed to sufficient density these localised cool regions enable high sSFRs, which lead to the significant stellar mass growth associated with decreases in stellar size (Figure 6). These galaxies are the galaxies with decreasing gas and stellar sizes evident in Figure 7. Once enough stellar mass has been formed in the compact star forming gas distribution the gas and stellar size measurements become comparable and the ratio tends towards unity. This process has been completed for most galaxies by the time they reach stellar masses of \(10^{10}\) M\({}_{\odot}\).
These distributions are exemplified by the curve showing the \(50^{\rm th}\) percentile. The trend starts at a ratio of unity at \(M_{\star}=10^{8}\) M\({}_{\odot}\) and then exhibits a clear dip in the stellar mass regime dominated by galaxies undergoing the transition from diffuse to compact stellar distributions. The trend then returns to a ratio of unity for galaxies at stellar masses \(>10^{10}\) M\({}_{\odot}\).
We can also see a number of outliers to this trend with \(R_{\rm gas,1/2}/R_{\star,1/2}>1\) and \(M_{\star}>10^{9.5}\)M\({}_{\odot}\). These are massive galaxies with compact bulges, where regions of their extended gas distribution have begun collapsing, leading to the transition from compact to extended sources. We discuss this process in detail in Section 5.
The reason for the specific stellar mass at which we see the diffuse to compact transition is now clear: it is this mass at which gas is capable of cooling efficiently enough to form highly localised regions of sufficient sSFR to form compact stellar systems. We note that, although we cannot show the effects of resolution on this transition2, it takes place significantly above the mass resolution of the simulation
Figure 8: Upper panel: Gas half mass radii ratios as a function of integrated stellar feedback ratios. In both rows, each panel shows galaxies undergoing different phases of size evolution, left to right: galaxies which were diffuse at \(z=6\) and remain diffuse at \(z=5\), galaxies which were diffuse at \(z=6\) and are compact at \(z=5\), and galaxies which were compact at \(z=6\) and remain compact at \(z=5\). The integrated feedback energy is defined by Equation 3, and the hexbins are coloured by weighted number density using the Flares region weighting scheme.
and thus should be robust to resolution effects. The compact distributions are, however, near the spatial resolution of the simulation; this could place a lower bound on the sizes that are possible in Flares. The resolution of the underlying dark matter distribution could also lead to spurious collisional heating, leading to kinematic and morphological effects (e.g. Wilkinson et al., 2022). We will investigate the effects of resolution explicitly in Flares-2 with a subset of high resolution simulations using the SWIFT open-source simulation code (Schaller et al., 2018).
## 5 Compact Galaxy Evolution
In this section, we explore how galaxies evolve from compact systems at high redshift, with a negative size-mass relation, to the extended systems prevalent at the present day, with a positive size-mass relation.
### Birth Property Evolution
Macroscopic changes in galaxy properties, such as size, are tracers of physical mechanisms taking place at much smaller scales within, particularly those taking place in their cores at high redshift, as shown in Section 4. To probe the small scale physical mechanisms driving galaxy size evolution we present the stellar formation properties of all stellar particles in Flares3 and Eagle AGNdT9 in Figure 11. Here we bin these stellar particles by the redshift of their formation to show the redshift evolution of the density and enrichment of star forming environments. The background of Figure 11 shows the feedback fraction \(f_{\rm th}\) (described in Section 2.1.3) for each combination of birth density and metallicity, while the dashed line shows the star formation threshold.
Footnote 3: Note that Flares is biased towards regions of high overdensity in which the most massive galaxies form at \(z>5\). However, these massive galaxies are the subject of this investigation and thus the bias does not affect any conclusions.
In the right hand panel of Figure 11 we can see the first stars to form in the simulation do so at low density and low metallicity (\(Z_{\rm birth}\sim 10^{-3}\), \(n_{\rm H}\sim 10^{1}{\rm cm}^{-3}\)), with a small contribution by stellar particles forming in the earliest compact cores of massive galaxies at high metallicity and density (\(Z_{\rm birth}\sim 10^{-1.5}\), \(n_{\rm H}\sim 10^{4}{\rm cm}^{-3}\)). These early cores have already started to cool and collapse at these redshifts, further aiding their enrichment, and creating a feedback loop of star formation and enrichment.
Between \(7.5\leq z<10\) compact core star formation begins to dominate as the most massive galaxies begin to form their compact
Figure 10: The ratio between gas density weighted gas half mass radii (cooling radii, \(R_{\rm cool}\)) and stellar half mass radii as a function of stellar mass at \(z=5\). The solid line denotes the weighted 50th percentile of this distribution. Hexbins are coloured by weighted number density using the Flares weighting scheme. Here we have placed an additional cut on the sample such that included galaxies have both \(N_{\star}>100\) and \(N_{\rm gas}>100\) to ensure size measurements are robust.
Figure 9: Upper panel: The gravitational binding energy as a function of stellar mass for all galaxies at \(z=5\). Lower panel: The ratio between gravitational binding energy and the integrated stellar feedback energy as a function of stellar mass for all galaxies at \(z=5\). In both panels the gravitational binding energy is defined by Equation 4, the integrated feedback energy is defined by Equation 3, and the hexbins are coloured by weighted number density using the Flares region weighting scheme. The solid line shows the 50\({}^{\rm th}\) percentile of the distribution.
cores in earnest. Here, we see a shift from the low metallicity and density locus to one of high density and metallicity corresponding to the star formation taking place in massive galaxies' compact cores. For stellar particles formed between \(5\leq z<7.5\), when the majority of compact galaxies undergo the transition from diffuse to compact, this high density and metallicity locus dominates with only a small contribution from lower densities. This tail of low density star formation has a tighter distribution in terms of metallicity, a reflection of the enrichment of the wider gas environment from previous star formation.
From \(z=5\) to \(z=0\) (two left hand panels of Figure 11) the locus of high density and metallicity star formation shifts to low density with a spread in metallicity from intermediate (\(Z_{\rm birth}\sim 10^{-2.5}\)) to high values (\(Z_{\rm birth}\sim 10^{-2}\)). By \(0\leq z<2.5\) this low density locus is well established, and has shifted to higher values of metallicity (\(Z_{\rm birth}\sim 10^{-2}\)). In this epoch, even gas in low density environments has been significantly enriched enough to enable star formation.
Recall Figure 3, where the majority of compact galaxies were shown to be enshrouded by extended non-star forming gas distributions. When combined with the evolution of stellar birth properties in Figure 11 we can see a clear correlation, where low density and low metallicity gas in the extended gas distributions around compact galaxies begins to form stellar particles after \(z=5\). This is enabled not only by the extended gas distributions reaching the required densities at a later epoch than the compact cores, but also by sufficient mixing of metals from the enriched core as AGN and stellar feedback efficiency increases. This is the onset of inside-out star formation as seen at lower redshift (Tacchella et al., 2015, 2016; Nelson et al., 2016; Tacchella et al., 2018; Nelson et al., 2019; Wilman et al., 2020; Matharu et al., 2022), but beginning at \(z=5\), earlier than has yet been observed.
Focusing on the location of the stellar birth property locus relative to the feedback fraction, \(f_{\rm th}\), shown in the background of Figure 11, we can see that \(f_{\rm th}\) is maximal for the majority of high redshift star formation. Only at \(z<5\) does the locus of star formation include densities and metallicities at which \(f_{\rm th}\) is not maximal. Despite this, we have shown in Section 4.2.1 that even the maximal feedback fraction in the fiducial Eagle model is incapable of affecting the wider surroundings at high redshift. We will investigate the effects of allowing larger feedback fractions in this regime in a future work that modifies the subgrid model.
### Birth property dependence on large-scale environment
As the universe evolves, stellar particles enrich their local environment and stellar birth metallicity increases. One of the main strengths of the Flares approach is the high dynamic range of environments covered by the resimulations. Using this we can investigate the birth properties in specific large scale environments, defined by the overdensity of a region (overdensity smoothed over a spherical aperture with a radius of 15 cMpc h\({}^{-1}\)), and shed light on the onset of enrichment and local density in specific environments. In Figure 12 and Figure 13 we present the median redshift evolution of stellar birth metallicity and stellar birth density, respectively, for all stellar particles formed by \(z=5\) in Flares and all stellar particles formed by \(z=0\) for both Eagle REF and AGNdT9. The Flares sample is split into overdensity bins (environments). The Eagle samples are representative of mean overdensity environments.
In terms of stellar birth metallicity evolution (Figure 12) we find little distinguishes each environment at the earliest times (\(z\geq 17\)) beyond stochastic star formation where a small number of stars are formed. Following this, the most over-dense regions with \(\log_{10}(1+\delta)>0.05\) begin forming enriched stellar particles after \(z\sim 15\). From \(z\sim 10\) onwards the environmental dependence on stellar birth metallicity is well established, with the enrichment of new stellar particles at fixed redshift increasing with increasing overdensity. This environmental dependence can be interpreted as delayed enrichment of star forming gas in lower overdensity environments relative to more over-dense environments. The most massive and early forming galaxies are strongly biased towards the highest overdensities, hence this dependence on environment. The Eagle curves are both consistent with the mean overdensity bins from Flares, yielding a strong evolution in stellar birth metallicity which is mimicked at earlier times by the most over-dense regions.
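As an illustration of how the environment-binned curves in Figures 12 and 13 can be constructed, the sketch below computes a region-weighted median of stellar birth metallicity in bins of formation redshift for each overdensity bin; the array names and the use of region weights as particle weights are assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, 0.5 * cdf[-1])]

def birth_metallicity_evolution(z_form, z_birth, region_weight,
                                log_overdensity, od_edges, z_edges):
    """Weighted median Z_birth per formation-redshift bin, split by environment."""
    curves = {}
    for lo, hi in zip(od_edges[:-1], od_edges[1:]):
        in_env = (log_overdensity >= lo) & (log_overdensity < hi)
        medians = []
        for zlo, zhi in zip(z_edges[:-1], z_edges[1:]):
            sel = in_env & (z_form >= zlo) & (z_form < zhi)
            medians.append(weighted_median(z_birth[sel], region_weight[sel])
                           if sel.any() else np.nan)
        curves[(lo, hi)] = np.array(medians)
    return curves
```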
In contrast to stellar birth metallicity, stellar birth density (Figure 13) exhibits environmental dependence at early times beyond stochasticity. As early as \(z=15\) we can see under-dense regions form stars at noticeably lower densities than over-dense environments. This naturally follows from the definition of an environment by overdensity; a more over-dense region contains more mass, and thus achieves higher densities earlier. By \(z=10\) this early environmental dependence is no longer present. Early enrichment in the over-dense regions (see Figure 12) aids low density star formation, allowing lower stellar birth densities. The early onset and loss of the environmental dependence of stellar birth density due to enrichment
Figure 11: The stellar formation properties that control star formation and feedback in the subgrid model. The contours show the distribution of stellar birth properties for all stellar particles, split into redshift bins. The background shows the feedback fraction given by Equation 2. The dashed line indicates the star formation threshold defined in Equation 1. Bins where \(z<5\) contain only stellar particles from the Eagle AGNdT9 sample, while bins where \(z\geq 5\) contain both stellar particles from Flares and Eagle AGNdT9, where the AGNdT9 variant has been used to best match the transition from Flares at high redshift to Eagle at low redshift.
at \(z>10\) shows a clear transition between regimes where gas density, and later gas metallicity, dominate the star formation law.
We then see a strong environmental dependence establish itself once again in Figure 13 between \(z\sim 8-9\) as the compact cores of the most massive galaxies, biased towards the most over-dense regions, begin to form in earnest. In the lowest density environments, these galaxies do not form by \(z=5\) and, even in mean density regions, they are few in number at this redshift4. This explains the strong dependence on environment present at \(z=5\), with the most over dense regions increasing in stellar birth density while lower density regions exhibit a decrease. From \(z=5\) onwards we can see a clear evolution towards lower stellar birth density as star forming gas is enriched, enabling star formation at lower densities. This is also the regime where the low density extended gas distributions begin to form stars, causing the increase in size of massive compact galaxies.
Footnote 4: Flares contains infinitely many more \(M_{\star}/M_{\odot}>10^{10}\) galaxies at \(z=5\) than the periodic Eagle simulations.
### Spatial Distribution Evolution
To explicitly link the shift of star formation towards a low density and moderate metallicity locus at \(z<5\) with star formation in the extended gas distributions around compact galaxies, we investigate the spatial distribution of star formation. To do so we calculate profiles of sSFR (relative to the stellar mass of the whole galaxy) at specific redshifts
Figure 14: Specific star formation rate (sSFR) profiles binned by redshift for all Flares galaxies at \(z\geq 5\) (solid lines), all Eagle-REF galaxies at \(z\lesssim 5\) (dotted lines), and all Eagle-AGNdT9 galaxies at \(z\lesssim 4\) (dashed lines) with \(M_{\star}/M_{\odot}\geq 10^{9}\). Each curve represents the \(50^{\rm th}\) percentile of all sSFR profiles in a redshift bin. Eagle-AGNdT9 at \(z=5\) has too few galaxies in this mass/redshift regime due to its volume and was thus omitted.
Figure 12: The redshift evolution of stellar birth metallicity in Flares (solid lines) and both Eagle REF (dotted line) and AGNdT9 (dashed line). The Flares galaxies are divided into overdensity bins to show the environmental dependence of stellar birth metallicity. Each curve represents the \(50^{\rm th}\) percentile of the underlying distribution. Unlike the plots of integrated galactic quantities, all stellar particles formed in the simulation are included. We truncate this plot at \(Z_{\rm birth}=0.005\) to better show the environmental dependence; both REF and AGNdT9 continue monotonically increasing to \(Z_{\rm birth}\sim 0.022\) at \(z=0\), with AGNdT9 forming at marginally higher metallicities than REF.
Figure 13: The redshift evolution of stellar birth density in Flares (solid lines) and both Eagle REF (dotted line) and AGNdT9 (dashed line). The Flares galaxies are divided into overdensity bins to show the environmental dependence of stellar birth density. Each curve represents the \(50^{\rm th}\) percentile of the underlying distribution. Unlike the plots of integrated galactic quantities, all stellar particles formed in the simulation are included.
by binning all stellar particles formed in the last 100 Myr before a snapshot into annuli. We present the median specific star formation rate (sSFR) profiles for galaxies with \(M_{\star}/M_{\odot}\geq 10^{9}\) in each snapshot (\(0\leq z\leq 10\)) in Figure 14.
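A sketch of this profile construction for one galaxy and snapshot is given below, assuming particle-level stellar masses, positions, and ages are available (all names are illustrative).

```python
import numpy as np

def ssfr_profile(star_pos, star_mass, star_age_myr, centre,
                 m_star_total, r_edges_kpc, window_myr=100.0):
    """sSFR(R) [yr^-1]: star formation rate in each radial annulus over the
    last `window_myr`, normalised by the total stellar mass of the galaxy."""
    recent = star_age_myr < window_myr                    # formed in last 100 Myr
    r = np.linalg.norm(star_pos[recent] - centre, axis=1)
    formed_mass, _ = np.histogram(r, bins=r_edges_kpc,
                                  weights=star_mass[recent])
    sfr_per_annulus = formed_mass / (window_myr * 1e6)    # Msun / yr
    return sfr_per_annulus / m_star_total
```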
At \(z>5\) there is a clear peak of star formation at small radii, corresponding to compact core star formation in massive galaxies. This peak of star formation decreases with decreasing redshift as the compact cores become less dominant. For \(z<6\) there is an increase in the star formation taking place at \(R>1\) ckpc. By \(z\leq 4\) the peak of core star formation has entirely disappeared and is replaced by a peak at \(R>1\) ckpc. This is a direct representation of the shift from the high redshift regime, where compact core star formation is dominant, to the low redshift regime, where extended gas distribution star formation dominates. This shift between regimes yields the transition shown in Figure 1 from a negative size-mass relation to a positive size-mass relation. We can summarise these two regimes as a high redshift (\(z>5\)) epoch of compact core (or bulge) star formation and a low redshift (\(z<5\)) epoch of inside-out star formation.
## 6 Conclusions
In this work, we have demonstrated the physical mechanisms in Flares and the Eagle sub-grid model that drive the formation of the compact galaxies responsible for the negative slope of the size-mass relation at \(z\geq 5\). Specifically, we find:
* There are two formation paths for massive compact galaxies at \(z\geq 5\): a compact formation path for galaxies forming at the earliest epochs (\(z>10\)), and a diffuse formation path for galaxies forming at later times (\(z<10\)).
* In the compact evolution path, progenitors of massive compact galaxies at \(z=5\) form at the earliest times (\(z>10\)) in pristine environments with little to no metal enrichment. Due to the lack of enrichment they form stars at high densities and thus begin as low mass compact systems that stay compact for their entire evolution to \(z=5\).
* In the diffuse evolution path, progenitors of massive compact galaxies at \(z=5\), forming after gas has become partially enriched (\(z<10\)), are capable of forming stars at lower densities, and thus begin as low mass diffuse systems which become compact at later times.
* The transition from diffuse to compact galaxy is driven by runaway star formation in their cores. Gas in a region of the diffuse galaxy begins efficient cooling, reaching high densities, enabling highly localised efficient star formation. This star formation then enriches the gas in the core of the galaxy, further enhancing star formation. Once this process has taken place, the galaxy has a compact stellar core with a high specific star formation rate and thus a compact stellar distribution.
We have also presented an evolutionary path for compact galaxies forming at \(z\geq 5\), explaining how these galaxies evolve to yield extended sources seen at the present day, and the shift from the surprising negative high redshift stellar size-mass relation to the observed positive relation at lower redshifts. We find:
* Galaxies at \(z>5\) with compact stellar distributions are, in the majority of cases, surrounded by non-star forming gas distributions up to \(\sim 100\) times larger than the stellar component. These extended gas distributions become enriched at later times and become capable of forming stars, increasing the stellar size of galaxies.
* There are two broad regimes of star formation: a high redshift (\(z>5\)) regime dominated by compact core star formation and a low redshift (\(z<5\)) regime of inside-out star formation.
* Star formation in the compact core regime takes place in gas at high density and high metallicity between \(5\leq z\leq 10\).
* Star formation in the inside-out regime takes place in gas at low density and moderate to high metallicity between \(0\leq z\leq 5\).
* A minority of \(z>5\) compact stellar distributions are associated with comparably compact gas distributions. These galaxies are neither excessively young nor numerical artefacts of the halo finding process. The vast majority are satellite systems; however, a small number are central galaxies. These compact galaxies could be the progenitors of the so-called 'red nuggets' observed at intermediate redshifts.
We probe the environmental dependence of stellar birth properties and find a strong dependence rooted in the environmental dependence of galaxy stellar mass. At fixed redshift, high overdensity regions form stellar particles at higher densities (starting at \(z\sim 8-9\)) and higher metallicities (starting at \(z\sim 10\)) than lower overdensity regions. The most overdense regions also undergo a period of high density star formation at early times (\(12\leq z\leq 20\)). At \(5\leq z\leq 7\) the highest overdensity regions strongly deviate from the mean and low overdensity regions, forming stars at increasing density while all other regions begin to form stars at decreasing density.
Given the wealth of observational data coming from JWST, this insight into the physical mechanisms governing the formation and evolution of galaxies in the Epoch of Reionisation is invaluable to interpreting upcoming observational samples, enabling the mapping from observational properties to underlying physical mechanisms. Combined with the predicted size-luminosity relations across the spectrum in Roper et al. (2022), the mechanisms detailed in this work can be used to shed light on the formation and evolution of galaxies at the highest redshifts, linking the low redshift Universe to the high redshift Universe.
Not only are these insights important to interpreting observations, they are also invaluable to future sub-grid models, particularly if the negative size-mass relation is found to be isolated to simulations. Current models are designed to work well at low redshift (e.g. Schaye et al., 2015; Crain et al., 2015; Furlong et al., 2017) and, as shown in previous Flares studies amongst many others, perform surprisingly well at high redshift. Regardless of this favourable performance, given the ever increasing frontier of high redshift astronomy, having robust high redshift motivated models for galaxy formation in simulations is imperative to keep up with the observed samples, and to aid their interpretation. Understanding the behaviour of modern sub-grid models in this epoch is an important first step towards achieving this.
## Acknowledgements
We thank the Eagle team for their efforts in developing the Eagle simulation code. We also wish to acknowledge the following open source software packages used in the analysis: scipy (Virtanen et al., 2020), Astropy (Robitaille et al., 2013), CMasher (van der Velden, 2020), and matplotlib (Hunter, 2007).
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. The Eagle simulations were performed
using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruyeres-le-Chatel.
WJR acknowledges support from an STFC studentship. CCL acknowledges support from a Dennis Sciama fellowship funded by the University of Portsmouth for the Institute of Cosmology and Gravitation. DI acknowledges support by the European Research Council via ERC Consolidator Grant KETJU (no. 818930). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140.
We list here the roles and contributions of the authors according to the Contributor Roles Taxonomy (CRediT)5. **William J. Roper**: Conceptualization, Data curation, Methodology, Investigation, Formal Analysis, Visualization, Writing - original draft. **Christopher C. Lovell, Aswin P. Vijayan**: Data curation, Writing - review & editing. **Dimitrios Irodotou, Jussi Kuusisto, Jasleen Matharu, Louise Seeyave**: Writing - review & editing. **Peter Thomas**: Conceptualization, Resources, Writing - review & editing. **Stephen M. Wilkins**: Conceptualization, Writing - review & editing.
Footnote 5: [https://credit.niso.org/](https://credit.niso.org/)
## Data Availability
A portion of the data used to produce this work can be found online: flaresimulations.github.io/data. Much of the analysis used the raw data produced by the simulation which can be made available upon request. All of the codes used for the data analysis are public and available at github.com/WillRoper/flares-sizes-phys.
|
2301.03881
|
Why People Skip Music? On Predicting Music Skips using Deep
Reinforcement Learning
|
Music recommender systems are an integral part of our daily life. Recent
research has seen a significant effort around black-box recommender based
approaches such as Deep Reinforcement Learning (DRL). These advances have led,
together with the increasing concerns around users' data collection and
privacy, to a strong interest in building responsible recommender systems. A
key element of a successful music recommender system is modelling how users
interact with streamed content. By first understanding these interactions,
insights can be drawn to enable the construction of more transparent and
responsible systems. An example of these interactions is skipping behaviour, a
signal that can measure users' satisfaction, dissatisfaction, or lack of
interest. In this paper, we study the utility of users' historical data for the
task of sequentially predicting users' skipping behaviour. To this end, we
adapt DRL for this classification task, followed by a post-hoc explainability
(SHAP) and ablation analysis of the input state representation. Experimental
results from a real-world music streaming dataset (Spotify) demonstrate the
effectiveness of our approach in this task by outperforming state-of-the-art
models. A comprehensive analysis of our approach and of users' historical data
reveals a temporal data leakage problem in the dataset. Our findings indicate
that, overall, users' behaviour features are the most discriminative in how our
proposed DRL model predicts music skips. Content and contextual features have a
lesser effect. This suggests that a limited amount of user data should be
collected and leveraged to predict skipping behaviour.
|
Francesco Meggetto, Crawford Revie, John Levine, Yashar Moshfeghi
|
2023-01-10T10:07:29Z
|
http://arxiv.org/abs/2301.03881v1
|
# Why People Skip Music? On Predicting Music Skips using Deep Reinforcement Learning
###### Abstract.
Music recommender systems are an integral part of our daily life. Recent research has seen a significant effort around black-box recommender based approaches such as Deep Reinforcement Learning (DRL). These advances have led, together with the increasing concerns around users' data collection and privacy, to a strong interest in building responsible recommender systems. A key element of a successful music recommender system is modelling how users interact with streamed content. By first understanding these interactions, insights can be drawn to enable the construction of more transparent and responsible systems. An example of these interactions is skipping behaviour, a signal that can measure users' satisfaction, dissatisfaction, or lack of interest. In this paper, we study the utility of users' historical data for the task of sequentially predicting users' skipping behaviour. To this end, we adapt DRL for this classification task, followed by a post-hoc explainability (SHAP) and ablation analysis of the input state representation. Experimental results from a real-world music streaming dataset (Spotify) demonstrate the effectiveness of our approach in this task by outperforming state-of-the-art models. A comprehensive analysis of our approach and of users' historical data reveals a temporal data leakage problem in the dataset. Our findings indicate that, overall, users' behaviour features are the most discriminative in how our proposed DRL model predicts music skips. Content and contextual features have a lesser effect. This suggests that a limited amount of user data should be collected and leveraged to predict skipping behaviour.
Spotify, Music, Skipping, User Behaviour, Prediction, Deep Reinforcement Learning
Footnote †: ccs: Information systems Recommender systems.
cursor (Steintein et al., 2017)) to predict satisfaction and engagement (Steintein et al., 2018; D'Amico et al., 2019). By understanding these interactions, insights can be drawn for the construction of more transparent and responsible systems. The skipping is a signal that can measure users' satisfaction, dissatisfaction or lack of interest, and engagement with the platform (Steintein et al., 2019). In a _lean-back_ formulation, the MRSs are often designed to be more conservative, prioritising _exploitation over exploration_ to minimise negative feedback (in this context, skips) (Steintein et al., 2019). Thus, one of their goals may be determined as recommending songs that yield the highest listening activity (i.e. no skip). However, understanding the users' skipping behaviour is still an under-explored domain (Steintein et al., 2018; Steintein et al., 2019; Steintein et al., 2019). It is a challenging problem due to its noisy nature: a skip may suggest a negative interaction, but a user may skip a song that they like because they recently heard it elsewhere. In this work, we aim to understand why people skip by comprehensively analysing the utility of users' historical data. In particular, we analyse the impact and effect of the users' behaviour (e.g., the user action that leads to the current playback to start), listening content (i.e., the listened song), and contextual (e.g., the hour of the day) features in the classification task of predicting the users' music skipping behaviour. We propose a novel approach that leverages and adapts DRL for this classification task. This is to most closely reflect how a DRL-based MRS could learn to detect music skips.
Prior works in analysing the skipping behaviour revealed an universal behaviour in skipping across songs, with geography, audio fluctuations or musical events, and contextual listening information affecting how people skip music (Steintein et al., 2019; Steintein et al., 2019; Steintein et al., 2019; Steintein et al., 2019). Recently, the effectiveness of deep learning models has also been explored for the task of predicting the users' sequential skipping behaviour in song listening sessions (Steintein et al., 2019; Stein et al., 2019; Stein et al., 2019; Stein et al., 2019; Stein et al., 2019; Stein et al., 2019). While they made a significant contribution towards this direction, their process is usually seen as an independent and static procedure. They may not account for the dynamic nature of the users' behaviour, and do not intuitively optimise for the long-term potential of user satisfaction and engagement (Steintein et al., 2019; Stein et al., 2019; Stein et al., 2019; Stein et al., 2019; Stein et al., 2019). Overall, this motivates the investigation of the DRL's applicability in predicting music skips and a comprehensive investigation on the relation of the skipping signal with users' behaviour, listening context, and content. This paper aims to investigate the following two important research questions: _can DRL be applied to the users' music skipping behaviour prediction task, and if so, would it be more effective in the music skip prediction task than deep learning state-of-the-art models?_ (**RQ1**); _what historical information is considered discriminative and serves as a high-quality indicator for the model to predict why people skip music?_ (**RQ2**). To investigate our RQs, we have conducted an extensive study on a real-world music streaming dataset (Spotify). Our comprehensive analysis demonstrates the effectiveness of our approach and a temporal data leakage problem in the historical data. Overall, our findings indicate that the most discriminative features for our proposed DRL model to predict music skips are some users' behaviour features, with content and contextual features reporting a lesser effect. This suggests that a limited amount of user data can be leveraged to predict this behaviour, thereby offering implications in the building of novel user-centred MRSs and responsible data collection procedures. This is a necessary step in creating a holistic representation of the listeners' preferences, interests, and needs. The main contributions of this paper are:
* We demonstrate the applicability and effectiveness of DRL in predicting users' skipping behaviour from listening sessions. A framework is devised to extend the DRL's applicability to perform this classification and offline learning. This is the first time that DRL has been explored in this task. The effectiveness of our approach is empirically shown on a real-world music streaming dataset (Spotify). Our proposed approach outperforms state-of-the-art models in terms of Mean Average and First Prediction Accuracy metrics.
* We perform a comprehensive post-hoc (SHAP) and ablation analysis of our approach to study the utility of users' historical data in detecting music skips. We reveal a temporal data leakage problem in the historical data. Further, our results indicate that overall users' behaviour features are the most prominent and discriminative in how the proposed DRL model predicts music skips. The listening content and context features are reported to have a lesser effect.
## 2. Related Work
A successful MRS needs to meet the users' various requirements at any given time (Stein et al., 2019; Stein et al., 2019; Stein et al., 2019). Thus, user modelling is a key element. A line of research has tried to untangle the relationship between personality and the users' musical preferences (Stein et al., 2019; Stein et al., 2019; Stein et al., 2019). Volokhin and Agichtein (Volkin and Agichtein, 2019) introduced the concept of music listening intents and showed that intent is distinct from context (user's activity). A different, and arguably complementary, research direction is trying to understand and model how users interact with the underlying platform. This is a long-standing and under-researched problem of online streaming services (Stein et al., 2019). An example of these interactions is the skips between songs. Its modelling and understanding during music listening sessions plays a crucial role in understanding users' behaviour (Stein et al., 2019). The skips are often the only information available to the underlying MRS, and therefore they are used as a proxy to infer music preference (Stein et al., 2019).
The skipping signal has already been used in prior works, as a measure in heuristic-based playlist generation systems (Stein et al., 2019; Stein et al., 2019), user satisfaction (Stein et al., 2019; Stein et al., 2019), relevance (Stein et al., 2019), or as a counterfactual estimator (Stein et al., 2019). Furthermore, given its universality and presence in other domains, recent research has also investigated its effect in ads on social media platforms (Stein et al., 2019; Stein et al., 2019; Stein et al., 2019). Despite being abundant in quantity, it is a noisy implicit signal (Stein et al., 2019; Stein et al., 2019). A skipped track does not necessarily imply a negative preference. Multiple hypotheses can be formulated on why users skip songs, with recent research suggesting that people manifest an universal behaviour in skipping across songs, dictated by time, geography, and reaction to audio fluctuations or musical events (Stein et al., 2019; Stein et al., 2019; Stein et al., 2019). Moreover, it has been shown in (Stein et al., 2019) that people who usually listen to songs in their entirety, show higher listening duration that those who do not. Most recently, Meggetto et al. (Stein et al., 2019) proposed a clustering-based approach that clearly identifies four user types with regards to their session-based skipping activity. These types, namely _listener_, _listen-then-skip_, _skip-then-listen_, and _skipper_, are influenced by the length of the listening session, time of the day, and playlist type. The main limitation of these prior works is that they explore the relation between listening context and content with the skipping behaviour. They do not explore how the user interactions with the
platform influence the detection of skips. This is a limitation that this work addresses.
In 2019, Spotify identified music skip prediction as an important challenge and organised the _Sequential Skip Prediction Challenge_1 to explore approaches that could alleviate this problem. The challenge focused on predicting whether individual tracks encountered in a listening session will be skipped or not. To respond to this challenge, several deep-neural networks (Beng et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) and supervised learning (Kang et al., 2019) models were proposed. Afchar and Hennequin (2019) proposed using interpretable deep neural networks for skip interpretation via feature attribution. Whilst neural networks, and in particular Recurrent Neural Networks (RNNs), have been shown to effectively model sequential data, they consider the procedure as a static process. They do not intuitively provide a mechanism for the long-term optimisation of user satisfaction and engagement, continuous learning, and the modelling of the dynamic nature of the user's behaviour (Srivastava et al., 2014; Li et al., 2015; Li et al., 2016; Li et al., 2017). Therefore, it is a case where DRL is required, an investigation and application of which has never been explored before. A research gap this work aims to address.
Footnote 1: [https://www.aicrowd.com/challenges/spotify-sequential-skip-prediction-challenge](https://www.aicrowd.com/challenges/spotify-sequential-skip-prediction-challenge)
The _Sequential Skip Prediction Challenge_ is a binary classification task. Despite receiving limited attention to date, DRL has been shown to be suitable and effective in classification tasks. It can assist classifiers in learning advantageous features (Kang et al., 2019; Wang et al., 2019) and select high-quality instances from noisy data (Kang et al., 2019). Wiering et al. (2019) demonstrate that RL is indeed suitable for classification. Their model slightly outperforms existing classifiers, but training time and extra computational requirements are major drawbacks. With the recent advances in the field, a body of research is showing the superiority of DRL-based approaches for classification tasks (Kang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). In particular, the authors in (Wang et al., 2019; Wang et al., 2019) show that a Vanilla Deep Q-Network (DQN) (Vaswani et al., 2017) approach is superior and more robust to state-of-the-art algorithms.
In this work, we explore, for the first time, the applicability of DRL in the task of sequentially predicting users' music skipping behaviour. This is motivated by the limitations of existing approaches and the advantages of DRL. By comprehensively analysing users' historical data, we study its utility and effect in our approach for this task. This work is the first step in understanding why people skip music.
## 3. Approach
In this section, we present our framework to facilitate the application of DRL to the problem of sequentially predicting users' skipping behaviour from listening sessions. To do so, we model this problem as a Markov Decision Process (MDP) and introduce a mechanism in the RL problem formulation to correctly exploit logged interactions and thus perform offline learning. The details of this framework are as follows:
**State**: it is the record-level representation of a listening session at a discrete time step (i.e., position in the session). The state, i.e. a record in a listening session, includes various user's contextual information about the stream, their interaction history with the platform, and information about the track that the user listened to. An episode is the entire listening session, with sessions containing from 10 up to at most 20 records.
**Actions**: it is a discrete action space which is a binary indicator of whether the current track is going to be skipped or not by the corresponding user. Effectively, the problem formulation can also be thought of as a binary classification problem \(A=\{0,1\}\), where 0 represents a no skip operation and 1 represents a skip.
**Reward**: a positive reward of 1 is given for a correctly predicted skip classification, 0 reward (i.e., no penalty) otherwise.
Motivated by the discrete action space and off-policy requirements of the music skip prediction task, we leverage DQN2. These requirements preclude the use of algorithms such as Deep Deterministic Policy Gradient (continuous action space) and Proximal Policy Optimization (on-policy learning). Whilst the problem is formulated as an MDP, it is partially observable (POMDP) by definition. This is because only partial information about the listening context and of the user is available (Li et al., 2016). Hence, in our problem formulation, we consider MDP and POMDP to be equivalent. This means that we do not perform any further processing of the state representation (e.g., masking of some features).
Footnote 2: Due to space limitations, we refer the readers to (Vaswani et al., 2017) for the necessary background and overview of the algorithm.
This classification formulation can be seen as a guessing game, where a positive reward is given for a correct guess, and no penalty is given for an incorrect one. Long-term optimisation via discount factor \(\gamma\) can be thought of as a way to correctly guess as many records in an episode as possible. Since there is a sequential correlation among records within an episode (i.e., a music listening session), a high \(\gamma\) value should be used. This corresponds to optimisation on the total number of correct guesses in an episode (long-term) rather than optimisation on the immediate ones (short-term). By taking into account previous points in time and the past interactions with the environment, the DRL agent makes fully informed decisions.
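The formulation above can be written as a small gym-style environment over one logged listening session; the 70-dimensional state vectors and the skip_2 labels are assumed to come from the preprocessing described in Section 4.1.3, and this is a sketch rather than the authors' released code.

```python
import numpy as np

class SkipPredictionEnv:
    """One episode = one listening session (10 to 20 records).
    State  : the 70-dimensional feature vector of the current record.
    Action : 0 = no skip, 1 = skip.
    Reward : 1 for a correct prediction, 0 otherwise (online formulation)."""

    def __init__(self, session_states, session_skips):
        self.states = np.asarray(session_states)   # shape (T, 70)
        self.skips = np.asarray(session_skips)     # shape (T,), skip_2 labels
        self.t = 0

    def reset(self):
        self.t = 0
        return self.states[self.t]

    def step(self, action):
        reward = 1.0 if action == self.skips[self.t] else 0.0
        self.t += 1
        done = self.t >= len(self.states)
        next_state = None if done else self.states[self.t]
        return next_state, reward, done, {}
```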
### Offline Mechanism
The DQN's standard training procedure is entirely online. Online learning is an iterative process where the agent collects new experiences by interacting with the environment, typically with its latest learned policy. That experience is then used to improve the agent's policy. However, exploiting logged data may be helpful and informative for the agent as a form of (pre)training. In offline learning (Batch RL (Srivastava et al., 2014)), the agent's task is instead defined as learning from a static dataset. Policies are learnt from logged data, and no interactions with the underlying environment are required. Whilst our prior formulation would work in an online learning setting, it presents a major problem when performing offline learning. A misclassification would cause a transition to a new state, which is, however, not part of the original trajectory and thus not represented in the dataset as well. The agent will generate and associate a (discounted) cumulative reward to a wrongly generated trajectory that is substantially different from the original. Thus, a pure offline algorithm has to exclusively rely on the transitions that are stored in the dataset provided in the input. From our initial formulation, we need to account for those out-of-distribution actions.
Within the definition of the reward function itself, the out-of-distribution, untruthful action is marked as invalid and, if sampled
by the agent throughout learning, it causes the current episode to be terminated. In other words, an incorrect guess (0 reward) leads to a terminal state. This simple constraint forces a minimisation of estimation errors and therefore it avoids the creation of potential estimation mismatches. As such, the untruthful action that causes the current episode to terminate avoids the future propagation of incorrect bootstrapped return estimations in the Temporal Difference target. This is to minimise the distributional shift issues due to differences between the agent's policy and the behaviour policy. More specifically, it explicitly ensures that regardless of the next sampled action, the current policy \(\pi(a^{\prime}|s^{\prime})\) is as close as possible to the behaviour distribution \(\pi_{\beta}(a^{\prime}|s^{\prime})\). The Q-function is queried as little as possible on out-of-distribution and unseen actions since this will eventually increase errors in the estimations.
This error, i.e. "extrapolation error" (Krishnan et al., 2017), is introduced when an unrealistic and erroneous estimation is given to state-action pairs. This is caused when action \(a^{\prime}\) from estimate \(Q(s,a)\) is selected, and the consequent state-action pair \((s^{\prime},a^{\prime})\) is inconsistent with the dataset due to the pair being unavailable. It provides a source of noise that can induce a persistent overestimation bias and that cannot be corrected, in an off-policy setting, due to the inability to collect new data (Krishnan et al., 2017; Krishnan et al., 2017). Directly utilising DQN in an offline setting may result in poorer performance and a resemblance to overfitting (Zhu et al., 2018). Our proposed mechanism minimises these errors. It is important to note that the "correct" action is not forcefully fed to the agent as in Behaviour Cloning based approaches. We let the agent deterministically decide as if it were a live interaction with the environment, thus keeping the general workflow of the original algorithm intact. This provides a single interface to easily transition from offline to online learning and vice versa.
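Building on the environment sketch above, the offline mechanism amounts to a modified step in which an incorrect guess still receives zero reward but also terminates the episode, so that bootstrapped targets are never propagated along trajectories that are absent from the logged data (again a sketch under the same assumptions, not the released implementation).

```python
class OfflineSkipPredictionEnv(SkipPredictionEnv):
    """Offline (logged-data) variant: a wrong prediction is treated as an
    out-of-distribution action and immediately ends the episode."""

    def step(self, action):
        correct = action == self.skips[self.t]
        reward = 1.0 if correct else 0.0
        self.t += 1
        done = (not correct) or self.t >= len(self.states)
        next_state = None if done else self.states[self.t]
        return next_state, reward, done, {}
```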
Finally, it is important to note that the aim of this work is to enhance our understanding of why people skip music and identify the high-quality features for its detection. To this end, we analyse the applicability of DRL in predicting this behaviour. We leave further tailoring of the approach to the music skip prediction task and an evaluation with recently proposed offline model-free algorithms (Beng et al., 2017; Liu et al., 2017; Liu et al., 2017; Liu et al., 2017) for future work. Nevertheless, our proposed approach requires no architectural or algorithmic modifications. It offers the potential for a swift transition from online to offline learning and vice versa. It can also be considered as a swift pre-training of an agent that can later be deployed online for continual learning.
## 4. Experimental Settings
### Dataset
We conduct our experiments on the real-world Music Streaming Sessions Dataset (MSSD) provided by Spotify (Spotify, 2017). The publicly available training set consists of approximately 150 million logged streaming sessions, collected over 66 days between July 15th and September 18th 2018. Each day comprises ten logs, where each log includes streaming listening sessions uniformly sampled at random throughout the entire day. Sessions contain from 10 up to at most 20 records and are defined as sequences of songs/tracks that a user has listened to (one record per song). Each record includes various contextual information about the stream (e.g., the playlist type) and the user's interaction history with the platform (e.g., scrubbing, which is the number of seek forward/back within the track). Although the track titles are not available, descriptive audio features and metadata are provided for them (e.g., acousticness, valence, and year of release). It is important to note that there is no user identification, nor access to demographic or geographical information. Hence, by not knowing whether two sessions have been played by the same user or by two different users, this study revolves around the modelling and understanding of the users' skipping behaviour.
#### 4.1.1. Temporal Correlation
There is no temporal correlation among listening sessions, i.e. the sessions are not presented in historical order, which is reflected in the chance of consecutive sessions having a considerably different hour of the day (e.g., morning and evening). Also, there is no order to the ten logs within a given day (i.e., the 1st log of the first day does not necessarily occur before the 2nd of the same day). This does not preclude the potential applicability of DRL for the skip prediction task since the hour of the day in which a song was played is provided. Thus, it allows for the modelling of skipping behaviour dependent on the hour of the day.
#### 4.1.2. Creation of Training and Test Sets
In this work, we only leverage the training set since, in the test set, most of the metadata and the skipping attributes used as ground truth in our evaluation are not provided. By selecting logs from the original training set, statistics for our training and test datasets are presented in Table 1. As it can be seen from the statistics, the ratio of skip values for all sets is balanced between True and False values. This balanced distribution is an intrinsic property of the dataset and of any of the available logs. Due to the large amount of data, and therefore computational and execution time requirements, the first four logs of the first available day are used for training. Testing is performed on various logs in order to test the models' generalisability for different days. Except for T1, which is the 5th and next immediate consecutive log after the training set collection, all the other logs are of a random index, day and/or month. This random selection approach is justified by the fact that there is no temporal correlation among logs of the same day. This is to show the generalisation capabilities of our proposed approach and to allow for the comprehensive analysis of the importance of the users' historical data.
#### 4.1.3. Data Preprocessing
All available features, with a full description available in (Spotify, 2017), are included in the state representation, except for the skip features, session and song identifiers. Categorical features, such as the playlist type and the user's actions that lead to the current track being played or ended, are one-hot encoded. All the audio features are standardised to have a distribution with a mean value of 0 and a standard deviation of 1. Overall, this results in a state representation consisting of 70 features. A sketch of this preprocessing is given after the feature list below. For ease of discussion, they are grouped as follows:
**User Behaviour (UB)**:
* **Reason End (RE)** is the cause of the current playback ending. This is a one-hot encoded feature that thus groups various encoded features such as _Trackdone_, _Backbtn_, _Fwdbtn_, and _Endplay_.
* **Reason Start (RS)**. Similar to _Reason End_, it is the type of actions that cause the current playback to start.
* **Pauses (PA)** is the length of the pause in between playbacks. It consists of _No_, _Short_, and _Long Pause_.
* **Scrubbing (SC)** is the number of seeking forward or backward during playback. They correspond respectively to _Num Seekfwd_ and _Num Seekback_.
* **Playlist Switch (PS)** indicates whether the user changed playlist for the current playback.
**Context (CX)**:
* **Session Length (SL)** is the length of the listening session.
* **Session Position (SP)** is the position of the track within the session.
* **Hour of Day (HD)** is the hour of the day in which the playback occurred ([0..23]).
* **Playlist Type (PT)** is the type of the playlist that the playback occurred within. Examples are _User Collection_, _Personalized Playlist_, and _Radio_.
* **Premium (PR)** indicates whether the user was on premium or not.
* **Shuffle (SH)** indicates whether the track was played with shuffle mode activated.
**Content (CN)**. This third and final category groups all the **Track (TR)** metadata and features, as they constitute the only content-based information in the MSSD. It includes 28 features such as _Beat Strength_, _Key_, _Duration_, and the eight _Acoustic Vectors_ ([0..7]).
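As referenced before the feature list, a minimal preprocessing sketch is shown below, assuming the session log and track features have been joined into a single pandas DataFrame; the column names are illustrative approximations of the MSSD schema rather than its exact field names.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

def build_state_representation(df: pd.DataFrame) -> pd.DataFrame:
    # Drop identifiers and every skip label from the input state.
    drop_cols = [c for c in df.columns
                 if c.startswith("skip") or c in ("session_id", "track_id", "not_skipped")]
    states = df.drop(columns=drop_cols)

    # One-hot encode categorical features (playlist type, start/end reasons).
    categorical = ["context_type", "reason_start", "reason_end"]
    states = pd.get_dummies(states, columns=categorical)

    # Standardise continuous audio features to zero mean and unit variance.
    audio_cols = [c for c in states.columns if c.startswith("acoustic_vector")]
    audio_cols += [c for c in ("beat_strength", "duration", "key") if c in states.columns]
    states[audio_cols] = StandardScaler().fit_transform(states[audio_cols])
    return states
```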
### Evaluation Metrics
To perform an evaluation of our proposed approach, we adopt the evaluation metrics from the _Spotify Sequential Skip Prediction Challenge_. This is also to provide a fair comparison with the selected baselines, since they were proposed on this challenge and for the following task: _given a listening session, predict whether the individual tracks encountered in the second half of the session will be skipped by a particular user_. Therefore, every second half of a session in the selected test set is used for prediction. If a session has an odd number of records, the mid-value is rounded up. This is motivated by the fact that an accurate representation of the user's immediately preceding interactions can inform future recommendations generated by the music streaming service. Hence, it is important to infer whether the current track is going to be skipped as well as subsequent tracks in the session. First Prediction Accuracy and Mean Average Accuracy are adopted as metrics.
**First Prediction Accuracy (FPA)** is the accuracy at predicting the first interaction for the second half of each session.
**Mean Average Accuracy (MAA)** is defined as:
\[MAA=\frac{\sum\limits_{i=1}^{T}A(i)L(i)}{T} \tag{1}\]
where \(T\) is the number of tracks to be predicted within the given session, \(A(i)\) is the accuracy up to position \(i\) of the sequence, and \(L(i)\) indicates whether the \(i^{th}\) prediction is correct or not. Intuitively, in these evaluation metrics higher importance is given to early predictions. In our setting, however, we do not exploit this specification in the problem formulation. Instead, the agent is instructed to optimise the total number of correct predictions in the session. This is to keep the system's specifications simple and easily adaptable to different metrics and/or tasks. In the dataset schema, prediction is based on the _skip_2_ feature. It indicates a threshold on whether the user played the track only briefly (no precise threshold is provided) before skipping to the next song in their session.
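For concreteness, a sketch of both metrics for a single session, following Equation 1; `pred` and `truth` hold the predictions and skip_2 labels for the second half of the session, as described above.

```python
import numpy as np

def first_prediction_accuracy(pred, truth):
    """FPA: accuracy of the very first prediction of the second half."""
    return float(pred[0] == truth[0])

def mean_average_accuracy(pred, truth):
    """MAA = (1/T) * sum_i A(i) * L(i), with A(i) the accuracy up to position i
    and L(i) = 1 if the i-th prediction is correct, 0 otherwise."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    correct = (pred == truth).astype(float)                             # L(i)
    running_acc = np.cumsum(correct) / np.arange(1, len(correct) + 1)   # A(i)
    return float(np.sum(running_acc * correct) / len(correct))
```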
### Models
#### 4.3.1. Baselines
To identify state-of-the-art baselines on the music skip prediction task, we performed an extensive search on prior works that utilise the MSSD dataset. We identified the following 4 of the top-5 ranked submissions to the _Spotify Sequential Skip Prediction Challenge_ and presented at the WSDM Cup 2019 Workshop:
* **Multi-task RNN**: RNN-based approach that predicts multiple implicit feedbacks (multi-task) (Zhu et al., 2019).
* **Multi-RNN**: Multi-RNN with two distinct stacked RNNs where the second makes the skip predictions based on the first, which acts as an encoder (Krizhevsky et al., 2017).
* **Temporal Meta-learning**: A sequence learning, meta-learning, approach consisting of dilated convolutional layers and highway-activations (Krizhevsky et al., 2017).
* **Weighted RNN**: RNN architecture with doubly stacked LSTM layers trained with a weighted loss function (Krizhevsky et al., 2017).

They respectively reported the 1st, 2nd, 3rd, and 5th best overall performance on the Spotify Challenge, with Multi-task RNN being the strongest and Weighted RNN being the weakest baselines. The exclusion of the 4th overall best model on the challenge in our evaluation is because no manuscript and code repository were found. For the selected baselines, we use the code accompanying the papers (GitHub links available in cited manuscripts). We then
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & Date & log(s) \# & \# of records & \# of sessions & skip (\%) \\ \hline Training Set & 15/07/2018 & [0, 3] & 11,927,861 & 711,838 & 51.20\% \\ Test Set (T1) & 15/07/2018 & 4 & 2,991,438 & 178,419 & 51.21\% \\ Test Set (T2) & 19/07/2018 & 8 & 3,395,883 & 204,145 & 50.53\% \\ Test Set (T3) & 27/07/2018 & 0 & 3,447,209 & 207,060 & 50.76\% \\ Test Set (T4) & 10/08/2018 & 6 & 3,407,685 & 205,267 & 50.42\% \\ Test Set (T5) & 09/09/2018 & 1 & 2,588,711 & 155,617 & 51.48\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of datasets used for experiments after pre-processing. log(s) # indicate which log(s) are selected out of the available ten. skip (%) refers to the ratio between True and False values.
reproduced their results by running their provided public code locally, to the best of our abilities and with an optimised set of parameters. However, despite our best efforts, we obtained consistently worse results than the ones in the Spotify Challenge public leaderboard and/or accompanying papers. The test set used in the challenge is not fully released. No ground truth is available, thereby not allowing for a local evaluation. However, given our procedure for the creation of the train and test sets (Section 4.1.2), i.e. the training is performed on the first available day and the evaluation is for different days/months, we make the strong assumption that the overall data distribution of our selected test sets and the one used in the public challenge are similar. For a fair comparison, we thus report the results from the public leaderboard since they are better than the ones from our local evaluation.
#### 4.3.2. DQN Architecture
For this work, we explored nine state-of-the-art DQN architectures. By adhering to our proposed framework, they have been thoroughly investigated in the users' music skipping behaviour prediction task. They are the Vanilla (Vanilla, 2017), Double (Double, 2017), Dueling (Dueling, 2017), and their respective n-step learning variants (Vaswani et al., 2017). Partially observable architectures have also been explored, with observations stacking (Vanilla, 2017) and Gated Recurrent Units (GRU) and Long Short-Term Memory (LSTM) based architectures (Kal
slightly inferior performance compared to Weighted RNN. Overall, we note that all the baselines perform consistently worse in our local evaluation than in the public challenge. We observe decreases in performance of 4.9, 16.2, 4.9, 0.8 (%) and 2.4, 8.2, 2.2, 0.4 (%) in MAA and FPA for Multi-task RNN, Multi-RNN, Temporal Meta-learning, and Weighted RNN respectively. Therefore, in Table 2, we report results in terms of MAA and FPA metrics for our proposed DQN approach alongside the baselines' public results from the Spotify Challenge; these public results are better than those we obtained locally, so reporting them provides as fair a comparison as possible. Our proposed approach exhibits significant improvements over all baselines on both MAA and FPA metrics, registering performance increases of \(17\%\) and \(7\%\) in MAA and FPA respectively with respect to Multi-task RNN, the best performing baseline from the public challenge.
Overall, our results demonstrate the validity and applicability of DRL to predict users' music skipping behaviour. A Vanilla DQN architecture can outperform the more complex deep learning based state-of-the-art models. Furthermore, the results and a thorough analysis, omitted from this paper due to space limitations, also indicate that convergence is achieved using a significantly lower number of episodes, at around \(2\times 10^{5}\) (\(\sim 1/4\) of the episodes in the training set). This suggests that our proposed approach is sample efficient and converges swiftly, thereby also addressing the well-known problem of DRL being computationally intensive and slow to learn. In contrast to the selected baselines, our approach does not require GPU access. The low variability in performance across multiple runs and during the learning process also indicates stable and effective learning.
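To make the setup concrete, the sketch below illustrates how a vanilla DQN-style agent can be cast as a sequential skip predictor. It is a minimal sketch rather than the exact configuration used in this work: the network size, learning rate, discount factor, \(\pm 1\) reward for correct/incorrect predictions, and the feature dimension are illustrative assumptions.

```python
# Minimal sketch of a vanilla DQN-style skip predictor (PyTorch).
# Assumptions: 2 actions = {no-skip, skip}; reward of +1/-1 for a correct/incorrect
# prediction; `n_features` playback features per state. All values are illustrative.
import random
import torch
import torch.nn as nn

n_features, n_actions = 18, 2
q_net = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimiser = torch.optim.Adam(q_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
epsilon, gamma = 0.1, 0.9  # exploration rate and discount factor (illustrative)

def train_step(state, next_state, skip_label, done):
    """One TD update: the action is the predicted label, the reward its correctness."""
    q_values = q_net(state)
    if random.random() < epsilon:                      # epsilon-greedy exploration
        action = random.randrange(n_actions)
    else:
        action = int(torch.argmax(q_values).item())
    reward = 1.0 if action == skip_label else -1.0
    with torch.no_grad():
        bootstrap = 0.0 if done else gamma * q_net(next_state).max().item()
    target = q_values.detach().clone()
    target[action] = reward + bootstrap                # TD target for the taken action
    loss = loss_fn(q_values, target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return action, loss.item()

# Example update on random data:
s, s_next = torch.randn(n_features), torch.randn(n_features)
train_step(s, s_next, skip_label=1, done=False)
```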
### Identification of Temporal Data Leakage
In the previous section, we compared our proposed DQN against the selected baselines in order to demonstrate its validity. By performing as fair a comparison as possible, empirical results indicate the superiority of our approach. However, this benchmarking introduced errors into the model. This is because, as described in Section 4.4.3, we recognise that there are data leaking features in MSSD. The _SL_ feature informs the model of how many songs a given user will listen to. This should not be made available because it is impossible to know how many songs a user will listen to in their current listening session. Further, the _RE_ features provide information about how the current stream ends. This information should also not be exposed to the model. However, to provide a fair comparison with the baselines, which include these features in their input representation, we did not remove them despite acknowledging the issue.
The temporal data leakage problem is validated by Figure 1, which reports the analysis of the average impact on model output (SHAP) of all features in the input state representation. It can be noted how the most discriminative feature to detect music skips is _RE Trackdone_, followed by _RS Trackdone_, _RS Fwdbtn_, and _Short PA_. _SL_ is also found to have some impact (ranked 19th). It is clear that the proposed DQN considers these features to be of high quality and prominent importance for predicting the users' music skipping behaviour. However, they introduce a data leaking problem. After removing them from the input state representation, we observe a decrease in performance for our proposed DQN of \(16\%\) and \(11\%\) in MAA and FPA respectively. Further, we observe decreases in performance of 5.2, 26.2, 7.6, 0.6 (%) and 3.5, 28.4, 6.0, 1.3 (%) in MAA and FPA for Multi-task RNN, Multi-RNN, Temporal Meta-learning, and Weighted RNN respectively (differences calculated between the results obtained in our local evaluation after removal of the features and those reported in the public challenge). Overall, these results validate our initial intuition and demonstrate the data leakage problem. This finding provides a strong implication for a future outlook on creating attentive data collection procedures for transparent measurements of user behaviours: offline benchmarks should be as truthful a reflection as possible of real-world (online) tasks.
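A sketch of how such a SHAP analysis can be reproduced is given below. It assumes `q_net` is a trained Q-network (e.g., as in the previous sketch) and that `X_background`, `X_explain`, and `feature_names` hold background states, states to explain, and the MSSD feature labels; the model-agnostic KernelExplainer is used here for illustration and may differ from the exact explainer employed in this work.

```python
# Minimal SHAP sketch. Assumed to exist: a trained `q_net`, numpy arrays
# `X_background` (background states) and `X_explain` (states to explain),
# and a list `feature_names` with the MSSD feature labels.
import numpy as np
import shap
import torch

def skip_score(x):
    """Model wrapper returning the Q-value of the 'skip' action for a batch of states."""
    with torch.no_grad():
        q = q_net(torch.as_tensor(x, dtype=torch.float32))
    return q[:, 1].numpy()

explainer = shap.KernelExplainer(skip_score, shap.sample(X_background, 100))
shap_values = explainer.shap_values(X_explain, nsamples=200)

# Rank features by mean absolute impact on the model output (as in Figure 1).
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")

# Beeswarm-style summary with positive/negative impact values (as in Figure 2).
shap.summary_plot(shap_values, X_explain, feature_names=feature_names)
```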
### The Role of User Behaviour, Context, and Content in Detecting Music Skips (RQ2)
In this final section, we aim to address our main research question: why do people skip music? To this end, we acknowledge and thus remove the leaking features from the state representation to enable a correct modelling of the users' music skipping behaviour.
#### 5.3.1. **User Behaviour (UB)**
Figure 2 reports the SHAP feature importance analysis of the proposed DQN on the "corrected" state representation. It can be observed that how the user interacted with the underlying platform to start the current playback (i.e., the _RS_ type) is considered to be the most discriminative feature to detect music skips. _Trackdone_ and _Fwdbtn_ are the highest negatively and
\begin{table}
\begin{tabular}{l l|c c|c c} \hline \hline & & \multicolumn{2}{c|}{_MAA_} & \multicolumn{2}{c}{_FPA_} \\ \hline & & Mean & 95\% CI & Mean & 95\% CI \\ \hline \multicolumn{6}{c}{(tabulated values not recoverable from the source)} \\ \hline \hline \end{tabular}
\end{table}
Table 2. MAA and FPA results (mean and 95\% CI) for the proposed DQN compared against the baselines' public results from the Spotify Challenge.
positively correlated features in predicting a skip. They correspond to the user starting the current playback after having listened to the previous playback in full or having pressed the forward button (i.e., skip) on it. These findings validate the recent observations by Meggetto et al. (2018). By considering their defined listener and skipper user types, we hypothesise that the user behaviour that indicates a user's membership in one of these two types is a _RS Trackdone_ or _Fwdbtn_. From our results, it is clear that how a person interacted with the previous song appears to greatly affect the DRL's ability to detect how they will interact next. Another UB that appears to have a prominent effect is the pause in between playbacks. A _Short PA_ and a _No PA_ are shown to strongly and weakly suggest a music skip respectively. In the case of a _Long PA_, our results strongly indicate that the user will not skip their current song. This finding validates our initial hypothesis. It may correspond to a person searching the catalogue for a song they would like to listen to, hence the long pause. Therefore, it is intuitive that such a song may not be skipped. However, the strength of a short pause as an indicator of music skips is surprising. This may be explained by a user's exploratory state, where they browse the catalogue and briefly listen to multiple songs until they find a match for their needs.
#### 5.3.2. **Context (Cx)**
We observe that users who listen in _Shuffle_ mode and/or with a _Premium_ account are associated with less skipping activity. Listening with a _User Collection_ PT is associated with a higher skipping rate. It is also shown that listening under a _Personalised Playlist_ or _Radio_ is subject to more listening and thus less skipping activity. This finding could suggest higher user engagement; however, this is not possible to quantify here, and further evaluation is required in order to understand this phenomenon. It could also be explained by the noisy nature of the skipping activity and the possibility, as in the example of radio listening, of passive (background) consumption of the music. Although the _PT_ findings appear to partially validate prior work (Dubna et al., 2018), in our ablation analysis we see that their removal from the state representation registers no significant effect on the DRL's ability to predict music skips.
#### 5.3.3. **Content (Cn)**
The only content-based features in the MSSD are related to the track being listened to by the user (_TR_). The correlation between skipping activity and the _TR_ features is less obvious, since they appear to be less discriminative and prominent in detecting music skips. _Beat Strength_ and _Key_, although mostly centred around a zero impact, suggest that a high beat strength is associated with more listening, and a high-pitched song (_Key_) with higher chances of skipping. Further, longer songs (_Duration_) are usually associated with higher listening activity, although they may also correspond to skips. However, in our ablation analysis, we observe no effect on the DQN's performance when all _TR_ features are removed. We find this surprising, since it appears
Figure 1. SHAP features importance analysis of the proposed DQN. The categorisation of the features and an explanation of the used acronyms is described in Section 4.1.3. Features are ranked in order of importance and they are reported as “[Name] | [Category] |[Type]”.
\begin{table}
\begin{tabular}{c l|c c|c c} \hline \hline & \multicolumn{2}{c|}{_MAA_} & \multicolumn{2}{c}{_FPA_} \\ \hline & \multicolumn{2}{c|}{Mean} & \multicolumn{2}{c|}{95\% CI} & \multicolumn{2}{c}{Mean} & \multicolumn{2}{c}{95\% CI} \\ \hline & \multicolumn{1}{c|}{Corrected State} & 0.664 & [0.662 - 0.666] & 0.773 & [0.772 - 0.774] \\ \hline & \multicolumn{1}{c|}{Reason Start (RS)} & 0.389 (\({}^{**}\)) & [0.378 - 0.400] & 0.479 (\({}^{**}\)) & [0.464 - 0.494] \\ & \multicolumn{1}{c|}{Pauses (PA)} & 0.659 (\({}^{*}\)) & [0.657 - 0.661] & 0.769 (\({}^{*}\)) & [0.768 - 0.770] \\ & \multicolumn{1}{c|}{Scrubbing (SC)} & 0.659 & [0.655 - 0.663] & 0.770 (\({}^{*}\)) & [0.768 - 0.772] \\ & \multicolumn{1}{c|}{Playlist Switch (PS)} & 0.662 & [0.659 - 0.665] & 0.773 & [0.772 - 0.774] \\ \hline & \multicolumn{1}{c|}{Hour of Day (HD)} & 0.663 & [0.661 - 0.665] & 0.773 & [0.772 - 0.774] \\ & \multicolumn{1}{c|}{Playlist Type (PT)} & 0.663 & [0.661 - 0.665] & 0.772 & [0.771 - 0.773] \\ & \multicolumn{1}{c|}{Premium (PR)} & 0.664 & [0.662 - 0.666] & 0.773 & [0.772 - 0.774] \\ & \multicolumn{1}{c|}{Shuffle (SH)} & 0.663 & [0.660 - 0.666] & 0.774 & [0.773 - 0.775] \\ \hline & \multicolumn{1}{c|}{Track (TR)} & 0.664 & [0.661 - 0.667] & 0.773 & [0.772 - 0.774] \\ \hline \hline \end{tabular}
\end{table}
Table 3. MAA and FPA results for our ablation analysis on the proposed DQN on the corrected state representation. The reported results are the average across all test sets and the 95% CIs. (\({}^{*}\)) and (\({}^{**}\)) indicate that the selected type of features had a statistically significant effect in performance in the proposed DQN (on a “corrected state”) on MAA or FPA. This is based on confidence levels (\(p<.05\)) and (\(p<.001\)) respectively.
Figure 2. SHAP features importance analysis with positive (skip) and negative (no skip) impact values of the proposed DQN on a “corrected” state representation (i.e., after addressing temporal data leakage). The Feature Value axis refers to high or low observational values. For Boolean features (e.g., _RS Trackdone_), high/red is a True value, and low/blue is False. The categorisation of the features and an explanation of the used acronyms is described in Section 4.1.3. Features are ranked in order of importance and they are reported as “[Name] | [Category] | [Type]”.
to contradict prior research suggesting that audio characteristics influence how people skip music (Kumar et al., 2017; Wang et al., 2018).
#### 5.3.4. Ablation Analysis
In order to validate our findings and to demonstrate the impact, whether statistically significant or not, that these features have on the DQN's performance, in Table 3 we report the results of the ablation analysis. We performed paired t-tests on the prediction accuracy of the proposed DQN (on the "corrected" input state representation) with and without each of the selected types of features (e.g., \(RS\)). We use (*) and (**) to denote that the removal of the selected type of features had a statistically significant effect on the performance of the proposed DQN in terms of MAA or FPA, based on confidence levels (\(p<.05\)) and (\(p<.001\)) respectively. We note how the \(RS\) feature type, as previously shown in Figure 2, is the highest quality estimator to detect music skips. Its removal registers a decrease in performance of \(28\%\) and \(29\%\) in MAA and FPA respectively. The \(PA\) features also register a significant impact. All the remaining features, including the CX and CN categories, do not appear to show a statistically significant effect on the DQN's performance. These results, therefore, suggest that a limited amount of users' data can indeed be leveraged to predict the users' music skipping behaviour, with only the \(RS\) and \(PA\) user behaviours showing a statistically significant effect.
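A minimal sketch of this significance test is shown below; the accuracy values are illustrative placeholders rather than the actual per-test-set scores.

```python
# Paired t-test sketch for the ablation analysis. Assumption: `acc_full` and
# `acc_ablated` hold per-test-set MAA (or FPA) scores of the DQN with the full
# "corrected" state and with one feature group (e.g. RS) removed; values are illustrative.
import numpy as np
from scipy.stats import ttest_rel

acc_full    = np.array([0.665, 0.663, 0.666, 0.662, 0.664])
acc_ablated = np.array([0.390, 0.385, 0.392, 0.388, 0.391])

t_stat, p_value = ttest_rel(acc_full, acc_ablated)
stars = "**" if p_value < 0.001 else "*" if p_value < 0.05 else ""
print(f"t = {t_stat:.2f}, p = {p_value:.4g} {stars}")
```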
## 6. Discussion & Conclusions
In this work, we aim to understand why people skip music. To carry out such an analysis, we first proposed to leverage DRL for the task of sequentially predicting users' skipping behaviour in song listening sessions. By first understanding how a DRL model learns individual user behaviours, we can then help the process of explaining recommendations of a DRL-based MRS. To this end, we extended the applicability of DRL to this classification task. Results on a real-world music streaming dataset (Spotify) indicate the validity of our approach, which outperforms state-of-the-art deep learning based models in terms of MAA and FPA metrics (**RQ1**). Having empirically shown the effectiveness of our proposed approach, our main post-hoc and ablation analyses revolve around a comprehensive study of the utility and effect of users' historical data in how the proposed DRL model detects music skips (addressing **RQ2**).
Our findings indicate that how users interact with the platform is the most discriminative indicator for an accurate detection of skips (i.e., \(RS\) and \(PA\)). Surprisingly, the listening CX and CN features explored in this work do not appear to have an effect on the DRL model for the prediction of music skips. Our analysis also reveals a temporal data leakage problem derived from some features in the dataset and used in the public challenge, since they provide information from the future that should not be made available to a live predictive system. Overall, this work shows that an accurate representation of the users' skipping behaviour can be achieved by leveraging a limited amount of user data. This offers strong implications for the design of novel user-centred MRSs based on a minimal selection of high-quality data features, which avoids introducing errors and biases. The results and a thorough analysis of our proposed approach indicate sample efficiency, swift convergence, and long-term stability. With convergence reached using a significantly lower number of episodes, training time can be greatly reduced by early termination. With no GPU access required (in contrast to the state-of-the-art deep learning based models), our approach also clearly addresses the well-known limitation of DRL being a computationally intensive approach. These findings and the consistent performance with no signs of instability make this work of great interest for future research.
With the importance of modelling and understanding the users' skipping behaviour, we believe this work to be an important step towards improving user modelling techniques. An accurate representation of the skipping behaviour can provide an invaluable stream of information to the underlying recommendation process. For example, we expect our findings, e.g. the \(RS\) type, to be highly relevant in the downstream task of capturing, in real time, a user's skipping type (Kumar et al., 2017). By extending our approach to predict and understand other user behaviours, we can create a holistic representation of the listeners' preferences, interests, and needs. We also advocate for thoughtful considerations when collecting and then presenting data to a model for measuring user behaviours. With increasingly rising concerns around users' data collection and privacy, the need for minimal data collection is paramount. Our proposed approach can be extended in future works to predict _when_ a song is likely to be skipped. This level of information could allow predicting the moments in a song where skips are most likely to occur, which could be of great value for the underlying platform. Considering _how_ a user's emotions or current psychological state affect their skipping behaviour is also an interesting avenue for further research. With access to richer behavioural data and non-anonymised listening sessions, another line of research can investigate the relation between the skipping signal and individual users' preferences (e.g., situation-aware MRS). Finally, although not the aim of this work, performance improvements are to be expected by further tailoring our approach to the music skip prediction task. Given the user-based exploratory nature of this work, we leave further experimentation and evaluations with emerging DRL model-free offline algorithms and architectures (e.g., extending our analysis to transformer-based DRL models (Wang et al., 2018)) for future investigation.
###### Acknowledgements.
This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/R513349/1].
|
2307.07969
|
Seis2Rock: A Data-Driven Approach to Direct Petrophysical Inversion of
Pre-Stack Seismic Data
|
The inversion of petrophysical parameters from seismic data represents a
fundamental step in the process of characterizing the subsurface. We propose a
novel, data-driven approach named Seis2Rock that utilizes optimal basis
functions learned from well log information to directly link band-limited
petrophysical reflectivities to pre-stack seismic data. Seis2Rock is composed
of two stages: training and inference. During training, a set of optimal basis
functions are identified by performing singular value decomposition on one or
more synthetic AVO gathers created from measured or rock-physics synthesized
elastic well-logs. In inference, seismic pre-stack data are first projected
into a set of band-limited petrophysical properties using the previously
computed basis functions; this is followed by regularized post-stack seismic
inversion of the individual properties. In this work, we apply the Seis2Rock
methodology to a synthetic dataset based on the Smeaheia reservoir model and
the open Volve field dataset. Numerical results reveal the ability of the
proposed method in recovering accurate porosity, shale content, and water
saturation models. Finally, the proposed methodology is applied in the context
of reservoir monitoring to invert time-lapse, pre-stack seismic data for water
saturation changes.
|
Miguel Corrales, Hussein Hoteit, Matteo Ravasi
|
2023-07-16T07:42:36Z
|
http://arxiv.org/abs/2307.07969v2
|
# Seis2Rock: A Data-Driven Approach to Direct Petrophysical Inversion of Pre-Stack Seismic Data
###### Abstract
The inversion of petrophysical parameters from seismic data represents a fundamental step in the process of characterizing the subsurface. We propose a novel, data-driven approach named Seis2Rock that utilizes optimal basis functions learned from well log information to directly link band-limited petrophysical reflectivities to pre-stack seismic data. Seis2Rock is composed of two stages: training and inference. During training, a set of optimal basis functions are identified by performing singular value decomposition on one or more synthetic AVO gathers created from measured or rock-physics synthesized elastic well-logs. In inference, seismic pre-stack data are first projected into a set of band-limited petrophysical properties using the previously computed basis functions; this is followed by regularized post-stack seismic inversion of the individual properties. In this work, we apply the Seis2Rock methodology to a synthetic dataset based on the Smeaheia reservoir model and the open Volve field dataset. Numerical results reveal the ability of the proposed method in recovering accurate porosity, shale content, and water saturation models. Finally, the proposed methodology is applied in the context of reservoir monitoring to invert time-lapse, pre-stack seismic data for water saturation changes.
Pre-stack Seismic inversion Petrophysical properties Data-driven
## 1 Introduction
Determining petrophysical parameters from seismic data is critical to any hydrocarbon, geothermal, and CO\({}_{2}\) sequestration project. We usually refer to seismic reservoir characterization as the framework under which inversion methods that aim to estimate any form of rock parameters are developed [1]. Two approaches are commonly used to retrieve petrophysical parameters from pre-stack seismic data. The first, referred to as _sequential or cascaded inversion_, inverts seismic data for elastic parameters; this is followed by a step of rock physics inversion [2, 3]. The second approach, named _joint inversion_, aims to estimate elastic and petrophysical parameters simultaneously; this is usually performed in a Bayesian setting using Monte-Carlo sampling methods due to the complex and nonlinear nature of the
associated modelling operators [4]. In both cases, elastic and petrophysical parameters are linked through empirical relationships retrieved from well logs, core data, laboratory measurements, theoretical rock physics models, or a combination of them. In the context of reservoir modelling, the process of creating a direct link between petrophysical and elastic parameters is usually referred to as Petro-Elastic Modeling (PEM) [5, 6]. On the other hand, pre-stack (or Amplitude Variation with Offset (AVO)) seismic data can be directly modelled from elastic parameters via the nonlinear Zoeppritz equation [7] or by one of its linear approximations [8, 9]. Though they are easier to interpret and invert, these linear approximations tend to be valid only for weak parameter contrasts and small angles.
The nonlinear relationships linking petrophysical properties to seismic pre-stack amplitudes, in addition to the band-limited nature of seismic data and the inevitable presence of noise, render the seismic to rock parameters a severely ill-posed inverse problem [10]. Recent advancements in deep learning have opened new exciting research avenues to handle such nonlinearities. Notable examples of post-stack inversion include the development of a convolutional neural network (CNN) for seismic impedance inversion by [11] and an innovative unsupervised deep-learning method for porosity estimation from post-stack seismic data proposed by [12]. Moreover, [13] introduced a robust deep learning-based seismic inversion workflow that leverages temporal convolutional networks to transform sequences of post-stack seismic data into a series of predicted acoustic impedance. In the context of pre-stack inversion, [14] proposed a CNN guided by the physics of the pre-stack (or post-stack) modelling operator, resulting in more accurate and efficient predictions. Likewise, [13] utilized two convolutional neural networks to extract petrophysical properties from pre-stack seismic data, further expanding the capabilities of neural network-based inversion methods. The first network, a _direct end-to-end CNN_, emulates the joint inversion approach and exclusively outputs petrophysical properties. In contrast, the second network, a _cascaded CNN_, retrieves both elastic and petrophysical properties as implemented in the conventional sequential approach.
Nevertheless, neural networks are well known to be data-hungry, a feature that may not always align with the availability of a limited set of log data (especially in fields with limited well coverage). An alternative data-driven approach to pre-stack seismic inversion was proposed by [15] under the name of Optimal linear AVO Approximation (OptAVO). This method also uses a priori information in the form of available well-logs and generated seismic reflection curves by means of the nonlinear Zoeppritz equation. Singular Value Decomposition (SVD) is then performed to identify a set of optimal basis functions that are later used to invert pre-stack seismic data into their corresponding elastic parameters. Because a linear relation between elastic parameters and seismic data is found from the data itself, this approach can extract information from seismic amplitudes at a wider angle range than classical model-based approaches. In order to correctly handle wavelet effects in the inversion process, [16] extended the OptAVO approach to band-limited seismic data. Similar to the original approach, explicit inversion of the poorly conditioned AVO operator is circumvented, reducing the impact of noise on the estimated elastic parameters. Compared to the original approach, band-limited OptAVO can retrieve full-bandwidth instead of relative, elastic subsurface models. As an example, [17] used optimal basis functions to retrieve elastic parameters of the subsurface for estimation of CO\({}_{2}\) saturation using the Sleipner seismic data.
In this work, we introduce a data-driven approach to rock physics inversion that extends the capabilities of band-limited OptAVO to perform direct inversion of pre-stack seismic data for petrophysical parameters. We refer to this new approach as **Seis2Rock**. Depending on the availability of elastic (i.e., sonic, shear sonic, and density) well-log data, two variants of Seis2Rock can be identified: first, when elastic well-log data are unavailable, a rock-physics model must be introduced to synthesise such parameters from petrophysical well logs prior to seismic modelling. Second, when elastic well-log data are available, the method becomes fully data-driven in that seismic modelling can be directly performed using the available elastic parameters. In both cases, a training stage is first employed, where a set of elastic properties are used to create a synthetic pre-stack gather employing the Zoeppritz equation. Then, this synthetic gather is used as input to an SVD process, creating a data-driven link between petrophysical parameters and seismic amplitudes. Finally, at the inference stage, petrophysical properties can be estimated by applying the SVD eigenvectors to the pre-stack seismic data and back-projecting the estimated optimal coefficients into the petrophysical parameters of choice (i.e., porosity, shale content, and water saturation). Similar to band-limited OptAVO, the wavelet and time derivative effects are compensated for in an additional step of post-stack seismic inversion.
The remainder of this paper is organized as follows. First, we present the theory of the Seis2Rock methodology. This is subsequently validated numerically using a synthetic example created based on Smeaheia's reservoir model. This example is intended to assess the capabilities of our method to retrieve petrophysical properties from pre-stack seismic data, as well as its ability to estimate changes in water saturation in time-lapse settings. Here, we assume that elastic well logs are not available, and therefore perform two steps of modeling in training. A second example is presented based on the field Volve dataset: first, we rigorously describe the sequence of pre-processing steps needed to prepare
the data for Seis2Rock. In contrast to the synthetic example, a rock-physics model is not required in this case as the well-log measurements already include elastic-petrophysical pair information, making the method entirely data-driven. The effectiveness of Seis2Rock when dealing with the Volve data is tested by extracting two fences along wells NO 15/9-19 BT2 and NO 15/9-19 A. Initially, the optimal basis functions and coefficients are obtained using well-log information from well NO 15/9-19 BT2. Subsequently, additional well-log information from well NO 15/9-19 A is included in the training process. Finally, we present an analysis and discussion of the results and conclude with a summary of our main findings.
## 2 The Seis2Rock Method
Seis2Rock is a data-driven approach to petrophysical inversion [18]. Its main goal is to find a direct mapping between physical coefficients (elastic or petrophysical) and seismic AVO responses (Figure 1). Here the physical coefficients are the so-called petrophysical reflectivities (\(r_{\phi},r_{V_{sh}},r_{Sw}\)), defined as the vertical (time or depth) derivative of the petrophysical properties. Once this direct mapping is found at well locations, the reverse process is applied to all other locations in the seismic data to be inverted.
More specifically, the proposed method is composed of two main stages: training and inference. Training refers to the process of obtaining a set of optimal basis functions from pre-stack data modelled at a (small) number of well locations. Inference represents the application of such basis functions to the entire seismic pre-stack data to be inverted. The overall process is summarized in Figure 2.
Figure 1: Seis2Rock descriptive goal. Mapping between petrophysical reflectivities and pre-stack seismic amplitudes.
Figure 2: Descriptive summary of Training and Inference stages proposed in Seis2Rock.
### Training stage
In the training stage (see Figure 2 and 3), Seis2Rock aims to find a set of optimal basis functions linking contrasts in petrophysical parameters, also referred to herein as _petrophysical reflectivities_, to pre-stack seismic data. Such petrophysical parameters must be available at one or more well locations as they represent the key training data used by Seis2Rock. If there is a lack of well logs of elastic parameters, the first step of Seis2Rock is represented by the definition of a representative Rock Physics Model (RPM) linking petrophysical and elastic properties; in this work we consider the Hertz-Mindlin model to estimate dry-rock properties such as effective bulk modulus \(K_{dry}\) and effective shear modulus \(\mu_{dry}\) as a function of pressure \(P\), porosity \(\phi\), coordination number \(C\), shear modulus \(\mu_{min}\) and Poisson ratio \(\nu_{min}\) of a mixture of minerals.
\[K_{dry}=\left[\frac{C^{2}(1-\phi)^{2}\mu_{min}^{2}}{18\pi^{2}(1-\nu_{min})^{2}}P\right]^{\frac{1}{3}} \tag{1}\]
\[\mu_{dry}=\frac{5-4\nu_{min}}{5(2-\nu_{min})}\left[\frac{3C^{2}(1-\phi)^{2}\mu_{min}^{2}}{2\pi^{2}(1-\nu_{min})^{2}}P\right]^{\frac{1}{3}} \tag{2}\]
Mixing of the minerals (bulk modulus \(K_{min}\) and shear modulus \(\mu_{min}\)) is performed using the Voigt-Reuss-Hill average [2]. The bulk moduli \(K_{fl}\) and density \(\rho_{fl}\) of the mixture of fluids are obtained in a similar fashion. Then, Gassmann fluid substitution equations are applied to obtain fluid-saturated moduli and densities:
\[K_{sat}=K_{dry}+\frac{\left(1-\frac{K_{dry}}{K_{min}}\right)^{2}}{\frac{\phi}{K_{fl}}+\frac{1-\phi}{K_{min}}-\frac{K_{dry}}{K_{min}^{2}}} \tag{3}\]
\[\mu_{sat}=\mu_{dry} \tag{4}\]
\[\rho=\rho_{min}(1-\phi)+\rho_{fl}\phi \tag{5}\]
Finally, the wave velocities \(V_{p}\) and \(V_{s}\) are computed as follows:
\[V_{p}=\sqrt{\frac{K_{sat}+\frac{4}{3}\mu_{sat}}{\rho}} \tag{6}\]
\[V_{s}=\sqrt{\frac{\mu_{sat}}{\rho}} \tag{7}\]
Equations 1 to 7 represent the chosen nonlinear rock-physics model, compactly defined in the following as \(g\). Here porosity (\(\phi\)), shale content (\(V_{sh}\)), and water saturation (\(S_{w}\)) represent the parameters of the model, while \(\xi\) is used to group the set of hyperparameters (\(C\), \(K\), \(\mu\), \(\nu\), \(P\), and \(T\)). Finally, given the computed elastic properties (\(V_{p}\), \(V_{s}\), and \(\rho\)), the synthetic pre-stack seismic data can be obtained via the Zoeppritz equation followed by convolution with the source wavelet. Ultimately, the seismic AVO gather \(d\) can be briefly defined as a function of both the nonlinear Zoeppritz equation \(f\) and the rock-physics model \(g\):
\[d\left(\theta,t\right)=f\left(g(\phi,V_{sh},S_{w};\xi)\right) \tag{8}\]
Finally, we note that when both elastic and petrophysical information are available in the form of well logs, there is no need to create a rock-physics model, and we can directly apply the **Rock Physics Inversion** framework illustrated in Figure 3.
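For illustration, a minimal numpy sketch of the rock-physics model \(g\) in equations 1-7 is given below; the Voigt-Reuss-Hill mixing of minerals and fluids is omitted for brevity (a single effective mineral and fluid are assumed), and the input values in the example are illustrative.

```python
# Minimal sketch of the rock-physics model g (equations 1-7), numpy version.
# Simplification: a single effective mineral and fluid are assumed (the
# Voigt-Reuss-Hill mixing step is omitted); inputs in SI units.
import numpy as np

def hertz_mindlin(phi, P, mu_min, nu_min, C=9.0):
    """Dry-rock bulk and shear moduli (equations 1-2)."""
    k_dry = (C**2 * (1 - phi)**2 * mu_min**2 * P
             / (18 * np.pi**2 * (1 - nu_min)**2)) ** (1 / 3)
    mu_dry = ((5 - 4 * nu_min) / (5 * (2 - nu_min))
              * (3 * C**2 * (1 - phi)**2 * mu_min**2 * P
                 / (2 * np.pi**2 * (1 - nu_min)**2)) ** (1 / 3))
    return k_dry, mu_dry

def gassmann_velocities(k_dry, mu_dry, phi, k_min, k_fl, rho_min, rho_fl):
    """Fluid substitution and elastic velocities (equations 3-7); mu_sat = mu_dry."""
    k_sat = k_dry + (1 - k_dry / k_min) ** 2 / (
        phi / k_fl + (1 - phi) / k_min - k_dry / k_min**2)
    rho = rho_min * (1 - phi) + rho_fl * phi
    vp = np.sqrt((k_sat + 4.0 / 3.0 * mu_dry) / rho)
    vs = np.sqrt(mu_dry / rho)
    return vp, vs, rho

# Example: a clean sand at 20 MPa with brine in the pores (illustrative values).
k_dry, mu_dry = hertz_mindlin(phi=0.25, P=20e6, mu_min=44.6e9, nu_min=0.06)
vp, vs, rho = gassmann_velocities(k_dry, mu_dry, 0.25, 37.6e9, 2.2e9, 2650.0, 1000.0)
```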
Following the formulation of the problem and the notation used in [16], SVD is then applied on the synthesised seismic AVO gather to obtain so-called optimal basis functions:
\[\widetilde{\mathbf{D}}-\widetilde{\mathbf{D}}_{\mathbf{b}}=\mathbf{F}\mathbf{\Lambda}\mathbf{V}=\mathbf{F}\mathbf{C} \tag{9}\]
where the matrix \(\widetilde{\mathbf{D}}\) is the modeled seismic gather of size \(N_{\theta}\times N_{t_{w}}\) corresponding to the chosen dictionary \(\widetilde{\mathbf{M}}\) of petrophysical parameters of size \(n_{m}\times N_{t_{w}}\). Here \(N_{t_{w}}\) refers to the size of the vertical axis used for both the well logs and seismic data and \(N_{\theta}\) is the number of angles. Finally, \(n_{m}\) refers to the number of independent petrophysical parameters used in the rock physics model. Similarly, \(\widetilde{\mathbf{D}}_{\mathbf{b}}\) represents the seismic AVO gather modelled which is also
used to obtain the background model \(\widetilde{\mathbf{M}}_{\mathbf{b}}\) (i.e., a smoothed version of the petrophysical parameters \(\widetilde{\mathbf{M}}\)). Singular vectors are placed in matrices \(\mathbf{F}\) and \(\mathbf{V}\) of size \(N_{\theta}\times N_{t_{w}}\) and \(N_{t_{w}}\times N_{t_{w}}\), respectively, whilst singular values are placed along the diagonal of the matrix \(\mathbf{\Lambda}\) of size \(N_{t_{w}}\times N_{t_{w}}\). Matrices \(\mathbf{\Lambda}\) and \(\mathbf{V}\) are conveniently combined to form the matrix of optimal coefficients \(\mathbf{C}\), which act as the weights of the basis functions stored along each column of the matrix \(\mathbf{F}\) to form the data \(\widetilde{\mathbf{D}}-\widetilde{\mathbf{D}}_{\mathbf{b}}\). When the SVD process is applied to the modeled seismic data, the singular values tend to quickly decay to zero, meaning that the contribution of the different basis functions to the reconstruction of the data rapidly decreases; therefore it is possible to consider a small subset of basis functions \(p<N_{\theta}\), and write an approximate relation as follows:
\[\widetilde{\mathbf{D}}-\widetilde{\mathbf{D}}_{\mathbf{b}}\approx\mathbf{F}_{ \mathbf{p}}\mathbf{C}_{\mathbf{p}} \tag{10}\]
where \(\mathbf{F}_{\mathbf{p}}\) and \(\mathbf{C}_{\mathbf{p}}\) are matrices of size \(N_{\theta}\times p\) and \(p\times N_{t_{w}}\), respectively. Equation 10 marks the end of the training process and provides the optimal basis functions \(\mathbf{F}_{\mathbf{p}}\) for the inference process. Although we have considered here petrophysical logs for a single well, Seis2Rock can be easily extended to accommodate for the availability of multiple wells. This can be accomplished by concatenating \(n\) well profiles one after the other whilst ensuring a smooth transition in the properties between two consecutive wells. As a consequence, the sizes of the corresponding data \(\widetilde{\mathbf{D}}\) and dictionary \(\widetilde{\mathbf{M}}\) matrices become \(N_{\theta}\times n\cdot N_{t_{w}}\) and \(n_{m}\times n\cdot N_{t_{w}}\).
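A compact numpy sketch of this training step (equations 9-10) could look as follows, assuming `D` and `D_b` are the modelled and background AVO gathers at the well(s), each of shape \((N_{\theta}, N_{t_w})\).

```python
# Training-stage sketch (equations 9-10). Assumptions: `D` and `D_b` are the
# modelled and background AVO gathers, both of shape (n_theta, n_tw).
import numpy as np

def train_basis(D, D_b, p):
    """Return the first p optimal basis functions F_p and coefficients C_p."""
    F, s, Vt = np.linalg.svd(D - D_b, full_matrices=False)   # (D - D_b) = F diag(s) Vt
    F_p = F[:, :p]                                           # n_theta x p
    C_p = np.diag(s[:p]) @ Vt[:p, :]                         # p x n_tw
    return F_p, C_p

# For multiple wells, concatenate the gathers along the time axis before the SVD,
# e.g. D = np.concatenate([D_well1, D_well2], axis=1).
```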
### Inference stage
The inference process aims to convert a seismic pre-stack dataset, covering an extensive geographical area distant from the control well(s), into a set of petrophysical property models. Here we consider for simplicity a single location, and define \(\mathbf{D}\) and \(\mathbf{M}\) to be matrices of size \(N_{\theta}\times N_{t}\) and \(n_{m}\times N_{t}\), respectively. Since \(\mathbf{F}_{\mathbf{p}}\) is an orthonormal matrix by construction, the optimal coefficients for any seismic gather \(\mathbf{D}\) (i.e., at a given spatial location) can be obtained as follows:
\[\mathbf{C}_{\mathbf{p}}=\mathbf{F}_{\mathbf{p}}^{\mathbf{T}}(\mathbf{D}- \mathbf{D}_{\mathbf{b}}) \tag{11}\]
Similarly to the training process, a consistent background model \(\mathbf{M}_{\mathbf{b}}\) is required to model a background synthetic dataset \(\mathbf{D}_{\mathbf{b}}\) to be subtracted from the recorded data. These optimal coefficients are subsequently back-projected into a band-limited representation of the physical petrophysical parameters \(\mathbf{B}\) (see Appendix A for the full derivation of the back-projection process). Finally, an inverse problem is solved to undo the effect of the wavelet \(\mathbf{W}\) and time-derivative \(\mathbf{T}\) operators:
\[\mathbf{M}=(\mathbf{W}\mathbf{T})^{-1}\mathbf{C}_{\mathbf{p}}^{\mathbf{T}} \mathbf{H}_{\mathbf{p}}^{\mathbf{T}}\mathbf{W}\widetilde{\mathbf{R}}=(\mathbf{ W}\mathbf{T})^{-1}\mathbf{B} \tag{12}\]
Figure 3: Descriptive workflow of the Seis2Rock methodology, which is composed of training and inference. When the well-log information consists of elastic and petrophysical properties, there is no need to define a Rock-Physics Model.
where \(\widetilde{\mathbf{R}}\) is the matrix containing the petrophysical reflectivities from the well log used in training of size \(n\cdot N_{t}\times n_{m}\), and \(\mathbf{H_{p}}=\mathbf{V_{p}^{T}}\mathbf{\Lambda_{p}^{-1}}\) is a matrix of size \(N_{t}\times p\). Note that the right-hand side of equation 12 can be interpreted as a series of post-stack seismic inversions (one post-stack inversion per petrophysical parameter), which are solved here using the PyLops computational framework [19]. Moreover, whilst we have considered a single location for simplicity in this derivation, the final step of inversion is usually carried out for all spatial locations at the same time, such that spatial regularization in the form of Laplacian or Total Variation (e.g., [20]) can be introduced.
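A plain numpy sketch of the inference step is given below. It assumes `F_p` comes from the training stage, `D` and `D_b` are the field and background gathers at one location, `b` is one back-projected band-limited petrophysical reflectivity trace (Appendix A), and `wav` is the wavelet; the paper solves this final step with PyLops, whereas here a dense least-squares solve with a simple second-derivative smoother stands in for the spatial Laplacian regularization.

```python
# Inference-stage sketch (equations 11-12). Assumptions: `F_p` from training,
# `D`/`D_b` field and background gathers at one location (n_theta x n_t),
# `b` a back-projected band-limited trace, `wav` the wavelet, `eps` illustrative.
import numpy as np

# Equation 11: optimal coefficients of the field gather at this location.
C_p_field = F_p.T @ (D - D_b)                          # shape: p x n_t

def invert_trace(b, wav, eps=1.0):
    """Undo wavelet and time-derivative effects on one band-limited trace (eq. 12)."""
    n = b.size
    T = np.eye(n, k=1) - np.eye(n)                     # first-difference (derivative) operator
    W = np.zeros((n, n))                               # convolution matrix of the centred wavelet
    half = len(wav) // 2
    for j, w in enumerate(wav):
        W += w * np.eye(n, k=j - half)
    A = W @ T
    L = np.eye(n, k=-1) - 2 * np.eye(n) + np.eye(n, k=1)   # smoothing regularizer
    A_reg = np.vstack([A, eps * L])
    b_reg = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_reg, b_reg, rcond=None)[0]
```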
## 3 Results
### Synthetic data
The proposed method is first assessed on a synthetic example. The porosity model is constructed based on a 2D section of the Smeaheia reservoir model, whilst the shale content model is stochastically generated using a normal random distribution conditioned on the porosity values. Likewise, the water saturation model is assumed to follow a normal random distribution above the water contact (Depth=4895 m). The system is assumed to be occupied only by oil and water, and the reservoir conditions are assumed to be a pressure of 24.1 MPa, a temperature of 50 °C, a water salinity of 10,000 ppm, and an oil gravity of 20 API. Figure 4a shows the true petrophysical models.
Figure 4: Synthetic 2D example based on the Smeaheia reservoir model. a) Petrophysical properties. b) Derived elastic properties using the RPM. c) AVO synthetic gather using the Zoeppritz equation.
#### 3.1.1 Direct Petrophysical Inversion
In this case, different vertical pillars in the model are assumed to correspond to wells. Based on the information in the selected well logs (\(\phi,V_{sh},S_{w}\)) and the hyperparameters \(\xi=\{K_{sand}=37.6\times 10^{9}\) Pa, \(\mu_{sand}=44.6\times 10^{9}\) Pa, \(\rho_{sand}=2.65\) g/cm\({}^{3}\); and \(K_{shale}=20.9\times 10^{9}\) Pa, \(\mu_{shale}=30.6\times 10^{9}\) Pa, and \(\rho_{shale}=2.58\) g/cm\({}^{3}\}\), the rock-physics model in equations 1-7 is used to compute the elastic parameters in the different wells and in the entire 2D section (Figure 4b). Such elastic parameters are further used to model reflection coefficients for angles ranging from \(0^{\circ}\) to \(50^{\circ}\) (Figure 4c). Whilst angles beyond \(30^{\circ}\) cannot be handled by conventional linear approximations, this example is created to prove that our methodology can handle such angles and is therefore more successful in recovering strong contrasts. Finally, the reflection coefficients are band-passed using a Ricker wavelet with a central frequency of 20 Hz to produce the seismic AVO gathers, which are used as input to the SVD process as well as the data to be inverted.
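A small sketch of this modelling step is given below; the reflection coefficients are assumed to have been computed beforehand (e.g., with the Zoeppritz equation), and the 4 ms sampling interval is an illustrative assumption.

```python
# Seismic modelling sketch: band-limit angle-dependent reflection coefficients
# with a 20 Hz Ricker wavelet. Assumption: `refl` is an (n_theta, n_t) array of
# Zoeppritz reflection coefficients; dt = 4 ms is illustrative.
import numpy as np

def ricker(f0=20.0, dt=0.004, n=81):
    """Zero-phase Ricker wavelet with central frequency f0 (Hz)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def model_gather(refl, wav):
    """Convolve each angle trace with the wavelet to obtain the AVO gather."""
    return np.array([np.convolve(r, wav, mode="same") for r in refl])

# Example usage: gather = model_gather(refl, ricker())
```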
In the following setup, three experiments are performed assuming knowledge of petrophysical well logs at one, two, or three vertical profiles (\(x=281.6,1182.8,1971.32\) m). By including the wells progressively, we wish to analyze the impact of having more information in the training process on the final inversion results. Regardless of the number of wells available in the training process, \(p=6\) coefficients are chosen to decompose the data, as in equation 10. Using equation 11, the retrieved optimal basis functions are used to estimate the band-limited coefficients. Finally, to obtain the petrophysical parameters, each band-limited petrophysical reflectivity model is inverted by means of spatially regularized post-stack inversion, where a Laplacian operator is used as a regularizer. Furthermore, we do not impose any constraint forcing the outputs of the petrophysical inversion process to remain within the range [0,1] during the iterations.
Figure 5 shows the inverted petrophysical parameters for the cases with one, two, and three vertical profiles, exhibiting high accuracy in all cases. In addition, the absolute error is visualized in Figure 7 to further assess the quality of the reconstruction process. Moreover, the mean square error (MSE), relative residual error (RRE), and peak signal-to-noise ratio (PSNR) of the inverted parameters are computed as a function of the number of wells included in the training process, showing an improvement in the quality of the inversion as more information is added to the training process (Figure 6). From this figure, we can also observe that porosity is the best-resolved parameter, followed by water saturation and shale content; this is a direct consequence of the fact that elastic parameters (and therefore seismic data) are more sensitive to porosity and saturation variations in the pores than to rock-type variations in the matrix.
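The three metrics can be computed as in the sketch below; these are their standard definitions, assumed to match the ones used for Figure 6.

```python
# Quality metrics sketch (MSE, RRE, PSNR); standard definitions assumed.
# `m_true` and `m_inv` are the true and inverted property models (numpy arrays).
import numpy as np

def metrics(m_true, m_inv):
    mse = np.mean((m_true - m_inv) ** 2)
    rre = np.linalg.norm(m_true - m_inv) / np.linalg.norm(m_true)
    psnr = 10.0 * np.log10(np.max(m_true) ** 2 / mse)
    return mse, rre, psnr
```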
In addition, the inversion results along the well at location x=1971.3 m are presented in Figure 8b. The recovered properties are highly accurate when compared to the ground truth, except in places where high contrasts are present; this issue could be circumvented using another type of regularization, as explained in more detail in the Discussion section. Finally, Figure 8a presents a comparison between the true petrophysical coefficients and the petrophysical coefficients obtained using only the first p=6 coefficients (or singular values).
Figure 5: Results of inversion stacking wells during the training. a) Example of Petrophysical coefficients obtained when using one well to build the optimal basis functions. b) Set of background models used for inversion. Panels c), d), and e) show the inversion results when adding one, two, and three wells on the training stage respectively. The vertical white lines represent the location of the well logs extracted to build the optimal basis functions.
Figure 6: Metrics summarizing the quality of the inversion process as the number of wells used to build the basis functions increases. From left to right, mean square error (MSE), relative residual error (RRE), and peak signal-to-noise ratio (PSNR). Water saturation improves while adding more wells for training in this example.
Figure 7: Difference between the true and the inverted petrophysical models when using a) one, b) two, and c) three wells in the training stage respectively.
Figure 8: Inversion results along well \(x=1971.3\)\(\mathrm{m}\). a) Band-limited petrophysical coefficients (p=6) and b) results of petrophysical inversion.
#### 3.1.2 Seis2Rock as water saturation tracker
Next, we further investigated the capabilities of Seis2Rock in the context of geophysical monitoring using pre-stack time-lapse seismic data. A second dataset was created by shifting the oil-water contact to Depth=100. Note that the training dataset, and therefore the basis functions, remain unchanged. The inversion results presented in Figure 9 show that our method can produce almost the same porosity and shale content values as the baseline data, as well as a highly accurate estimation of the oil-water contact movement.
Figure 9c shows the difference between the petrophysical inversion results before and after the water displacement has taken place. Seis2Rock is able to identify and recover such fluid displacement.
Figure 9: Inversion results for the model with updated oil-water contact. a) Petrophysical coefficients (B) obtained. b) Inversion results after water displacement (monitor). c) Difference between inverted properties before (baseline), and after (monitor) the oil-water contact was displaced.
### Field data
In order to apply Seis2Rock to a field dataset, one must have access to one or more wells with a well-log suite comprising of petrophysical and, ideally, elastic parameters, as well as time or depth pre-stack seismic offset (or preferably angle) gathers. The Volve dataset contains pre-stack seismic data in the offset domain and a vast collection of well logs with both petrophysical and elastic parameters from two wells. Therefore, Seis2Rock is employed here in a purely data-driven fashion, eliminating the need for a rock physics model. However, the application of the Seis2Rock methodology is not straightforward and a series of pre-processing steps is necessary to obtain accurate results. A summary of the pre-processing sequence adopted in this work is illustrated in Figure 10.
#### 3.2.1 Pre-processing Volve data
To begin with, we identify the three inputs required to apply the Seis2Rock workflow: pre-stack seismic data in the offset domain, a migration velocity model, and well logs containing elastic and petrophysical information. Wells NO 15/9-19 BT2 and NO 15/9-19 A provide appropriate data for our investigation. More specifically, well NO 15/9-19 BT2 is a dry well (fully water saturated), whilst well NO 15/9-19 A is partially filled with oil. However, as their trajectories deviate (Figure 11), we need to extract seismic data along 2D fences passing through the well paths. This is essential to enable the use of well-log information for comparison purposes after performing inversion.
Given these field conditions, the pre-processing framework for our study, as illustrated in Figure 10, begins with converting the pre-stack seismic data from the offset domain to the angle domain. We start the process by identifying the coordinates (ilines and xlines) corresponding to the deviated well paths. These coordinates serve as a basis for extracting the pre-stack data and migration velocity, limited only to the relevant ilines and xlines. The obtained pre-stack offset and velocity fences are both used to create the angle gathers. Figure 12 presents the 2D fences along the two well logs used in this study. It shows the data in the offset domain, its conversion to the angle domain, and the subsection to be used for inversion.
The next phase of our study entails the establishment of reliable background petrophysical models that serve as an initial guess for the final step of post-stack seismic inversion. We employ the petrophysical well-log information and velocity model at well locations to determine three different linear relationships. The resulting linear relationships are applied to the entire velocity model, which is converted into a set of petrophysical background models as required.
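A sketch of this step is given below; the variable names are assumptions, and a simple least-squares linear fit per property is used for illustration.

```python
# Background-model sketch. Assumptions: `vel_well` is the migration velocity
# along the well path, `phi_well`/`vsh_well`/`sw_well` the (smoothed) logs, and
# `vel_model` the full velocity model; a linear fit per property is illustrative.
import numpy as np

def linear_background(vel_well, prop_well, vel_model):
    """Fit prop = a*vel + b at the well and apply it to the whole velocity model."""
    a, b = np.polyfit(vel_well, prop_well, deg=1)
    return a * vel_model + b

phi_b = linear_background(vel_well, phi_well, vel_model)
vsh_b = linear_background(vel_well, vsh_well, vel_model)
sw_b  = linear_background(vel_well, sw_well,  vel_model)
```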
The last part of the pre-processing sequence is intended to create synthetic AVO gathers for the training phase on Seis2Rock. These synthetic AVO gathers should closely resemble those from the field data. To achieve this, we convert the well-logs to the same resolution as the pre-stack seismic data, followed by statistical wavelet extraction and amplitude calibration using the available pre-stack seismic gathers. As an example, the synthetic gather of well NO 15/9-19 BT2 is presented and compared to the real gather in Figure 13. Though some differences can be observed between the two gathers, the key events are successfully modelled and present similar AVO responses to those in the
Figure 10: Summary of the pre-processing framework applied to the Volve dataset.
field data.
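One common way to carry out such a statistical wavelet extraction is sketched below (a zero-phase wavelet built from the average amplitude spectrum of the traces); this is an assumed, generic implementation rather than the exact procedure used here, and the amplitude calibration against the synthetic gather is left out.

```python
# Statistical wavelet extraction sketch (assumed, generic approach): a zero-phase
# wavelet is built from the average amplitude spectrum of the seismic traces.
# `traces` is an (n_traces, n_t) array; amplitude calibration is done separately.
import numpy as np

def statistical_wavelet(traces, n_wav=81):
    """Average amplitude spectrum -> symmetric zero-phase wavelet of length n_wav."""
    spec = np.mean(np.abs(np.fft.rfft(traces, axis=-1)), axis=0)
    wav = np.fft.fftshift(np.fft.irfft(spec))          # zero-phase wavelet in time
    centre = wav.size // 2
    wav = wav[centre - n_wav // 2: centre + n_wav // 2 + 1]
    return wav / np.abs(wav).max()                     # normalised; calibrate amplitude later
```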
#### 3.2.2 Seis2Rock inversion in Volve
When applying Seis2Rock to the processed data, two primary steps are involved. The first step involves the inversion of only the gathers located at the well position to ascertain the method's validity. Subsequently, the methodology is extended to cover the pre-stack data extracted along the well fences.
Figure 11: Well trajectories.
Figure 12: Fence data obtained along wells NO 15/9-19 A and NO 15/9-19 BT2. The offset domain, angle domain, and a subsection of the angle domain extracted for inversion are presented from left to right. The red line denotes where the well log information is collected.
#### Seis2Rock inversion along the well
The accuracy of our method is first tested by extracting the pre-stack gather passing through well NO 15/9-19 BT2. The petrophysical and elastic well-log data are employed in the training stage to obtain the optimal basis functions and coefficients. We initially decided to work with a single gather to evaluate the ability of Seis2Rock to manage the noise present in the data and to assess whether we can reconstruct the petrophysical well logs used in training. The results of the Seis2Rock inversion are compared to the ground truth in Figures 13d and e. Notably, the proposed methodology effectively reconstructs the well-log profile from a considerably smooth background model with p=3 optimal basis functions. We opt for this number as it successfully reduces the effect of noise in the data, whereas larger values of p produce poorer results.
#### Seis2Rock inversion along 2D fences
After successfully validating the ability of Seis2Rock to handle field data, we perform inversion on the pre-stack data along the two well fences extracted during the pre-processing stage. Initially, for the fence associated with well NO 15/9-19 BT2, we use the optimal basis functions computed in the previous section, which utilize well-log data from the same well. Despite high noise levels within the data, Seis2Rock effectively reconstructs porosity and shale content along the fence. However, given the lack of hydrocarbon information within the saturation well log (refer to Figure 14b), the inverted saturation model is almost identical to the background model. Subsequently, additional well-log information from well NO 15/9-19 A is incorporated in the training stage. Due to the presence of an oil zone in this well, the inverted water saturation model changes significantly, delineating potential hydrocarbon zones that were not identified in the previous inversion result (Figure 15b). However, some of these potential hydrocarbon zones may be erroneously identified due to the additional information provided by the second well in training. It is crucial to note that the outcomes in Figures 14b and 15b were derived using the same background model (Figure 14c). Thus, the hydrocarbon zones disclosed in Figure 15b are attributed to changes in the training data (due to the introduction of a second well), rather than to a different background model causing water saturation changes.
Figure 13: Seis2Rock inversion results along well NO 15/9-19 BT2 of the Volve field. a) Pre-stack data along the well trajectory. b) Close-up in a depth window where the well-log data is available. c) Synthetic gather created from the well-log data. d) Porosity (\(\phi\)) inversion results. e) Shale content (\(\mathrm{V_{sh}}\)) inversion results. Water saturation is not inverted because the well-log presents a constant value for \(\mathrm{S_{w}}=1\).
Next, for the fence associated with well NO 15/9-19 A, the optimal basis functions are computed using solely the information contained within this well, and then also adding well NO 15/9-19 BT2. Figures 16 and 17 present the inversion results obtained for porosity, shale content, and water saturation, alongside their respective background models. The estimated models exhibit lateral continuity and areas of high porosity and hydrocarbon content.
Figure 14: Seis2Rock inversion results on the fence along well NO 15/9-19 BT2 of the Volve field. a) Petrophysical coefficients data, with the red line showing the well trajectory. b) Inversion results using only the information from well NO 15/9-19 BT2 in training. c) Background models used for inversion.
Figure 15: Seis2Rock inversion results on the fence along well NO 15/9-19 BT2 of the Volve field. a) Petrophysical coefficients data, with the red line showing the well trajectory. b) Inversion results using both wells NO 15/9-19 BT2 and NO 15/9-19 A in training. c) Background models used for inversion.
Figure 16: Seis2Rock inversion results for the 2D fence along well NO 15/9-19 A. a) Petrophysical coefficients data, with the red line showing the well trajectory. b) Inversion results obtained using the well-log information from well NO 15/9-19 A in training. c) Background models used for inversion.
Figure 17: Seis2Rock inversion results for the 2D fence along well NO 15/9-19 A. a) Petrophysical coefficients data, with the red line showing the well trajectory. b) Inversion results obtained using the well-log information from wells NO 15/9-19 A and NO 15/9-19 BT2 in training. c) Background models used for inversion. The construction of these models for inversion was obtained by utilizing a smoothed variant of the water saturation log, which effectively accounted for the presence of oil. This smoothed representation was duplicated across the two-dimensional fence to establish a robust background model.
#### 3.2.3 Seis2Rock inversion in 3D
In addition to our primary analyses, we also implemented a 3D inversion for each petrophysical parameter (porosity, shale content, and water saturation), thereby generating 3D petrophysical coefficient data for each specific parameter. For this case, well-log information corresponding to wells NO 15/9-19 A and NO 15/9-19 BT2 was used to build the optimal basis functions in the Seis2Rock framework. The depth of the analysis area ranged from \(2400m\) to \(3300m\). In a similar vein, we utilized Laplacian regularization as a part of our inversion scheme. The results of each petrophysical parameter, along with the \(\mathbf{B}\) coefficients tailored for porosity, are showcased in Figure 18.
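As a hedged illustration of this regularization choice, the sketch below sets up a toy 1D inversion with a Laplacian (second-difference) penalty solved as an augmented least-squares problem; the operator, wavelet, noise level, and the weight `eps` are synthetic stand-ins and are not taken from the Volve workflow.

```python
# Hedged sketch of Laplacian-regularised least-squares inversion, in the spirit of
# the post-stack inversion step used here. The 1D model, wavelet, noise level and
# eps are synthetic stand-ins, not taken from the Volve workflow.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n = 300
m_true = np.cumsum(rng.standard_normal(n)) * 0.02 + 0.2   # smooth synthetic profile

# Forward operator: first derivative followed by convolution with a Gaussian wavelet
D = np.diff(np.eye(n), axis=0)                 # (n-1, n) first-difference operator
wav = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
col = np.zeros(n - 1); col[:11] = wav[10:]
W = toeplitz(col)                              # symmetric (zero-phase) convolution matrix
G = W @ D
d = G @ m_true + 0.05 * rng.standard_normal(n - 1)

# Laplacian regularisation: minimise ||G m - d||^2 + eps^2 ||L m||^2
L = np.diff(np.eye(n), n=2, axis=0)            # (n-2, n) second-difference operator
eps = 2.0
A = np.vstack([G, eps * L])
b = np.concatenate([d, np.zeros(n - 2)])
m_inv = np.linalg.lstsq(A, b, rcond=None)[0]

print("relative model error:", np.linalg.norm(m_inv - m_true) / np.linalg.norm(m_true))
```

The augmented system simply stacks the data misfit and the scaled Laplacian, which is why the recovered models tend to be smooth, as discussed later in Section 4.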
The Seis2Rock inversion outcomes for the designated 3D area of interest are detailed as follows: part a) presents the petrophysical B coefficients specifically for porosity. Although we have also formulated similar data constructs for shale content and water saturation, these components are not depicted in the current figure. The results corresponding to the inversion of porosity, shale content, and water saturation are displayed in parts b), c), and d) respectively.
In conclusion, Seis2Rock accurately reconstructs petrophysical properties, particularly porosity and shale content, despite the high noise levels in the data. However, the water saturation displays inferior inversion results, potentially due to the limited variability of water saturation in the training data.
Figure 18: Seis2Rock inversion results for the 3D area of interest in the Volve dataset. a) Petrophysical coefficients B for porosity. Similar data terms are constructed for shale content and water saturation, however they are not included in the figure. Porosity, shale content, and water saturation results are shown in b), c), and d) respectively.
## 4 Discussion
Seis2Rock is a novel data-driven method for direct petrophysical inversion of pre-stack seismic data. A peculiar characteristic of the proposed method is that it relies on simple (linear) algebraic operations to invert an underlying nonlinear relationship (equation 8). Interestingly, the final step of the algorithm can be interpreted as a post-stack seismic inverse problem (although applied to so-called band-limited petrophysical parameters): since this is a workhorse for quantitative characterization of the subsurface, many algorithms have been developed over the years that we can directly benefit from. The retrieved petrophysical models are relatively smooth for both synthetic and field data; however, the high contrasts in petrophysical properties are underestimated in some regions. Whilst we attribute this behavior mostly to the choice of a Laplacian regularization and least-squares solvers, future work will focus on addressing this limitation by employing alternative regularization and inversion techniques. Additionally, future research efforts could be directed towards imposing constraints on the iterative outcomes of the inversion process to ensure they remain within the specified bounds of the expected petrophysical values.
For example, Total Variation (TV) regularization could be used to enhance blockiness in the recovered subsurface model. However, since TV regularization introduces a non-smooth functional in the loss function, it cannot be easily minimized with standard least-squares solvers. Proximal solvers, such as the alternating direction method of multipliers (ADMM), can efficiently optimize these functionals [21]. Moreover, integrating clustering or segmentation constraints into the inversion process, as exemplified in the joint inversion-segmentation approach of [22], fosters the selection of models predominantly composed of a set of expected rock units or facies. In the petrophysical inversion context, segmentation may be particularly beneficial as the link between facies and petrophysical properties is more direct than the link between facies and acoustic/elastic parameters considered in [22]. In addition, our framework also permits the integration of promising new deep learning-based algorithms like the Plug-and-Play method with CNN-based denoisers [23] and its probabilistic extension [24, 25].
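For concreteness, a minimal ADMM sketch for a TV-regularized linear inversion is given below; the forward operator, the blocky model, and the parameters `lam`, `rho`, and `n_iter` are illustrative assumptions rather than values used in this work, and the sketch only demonstrates the variable splitting and soft-thresholding steps that make the non-smooth TV term tractable.

```python
# Hedged sketch of ADMM for a TV-regularised linear inversion,
# min_m 0.5*||G m - d||^2 + lam*||D m||_1, which favours blocky models.
# Synthetic 1D example; lam, rho and n_iter are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 200
m_true = np.zeros(n)
m_true[50:120] = 1.0
m_true[120:170] = -0.5                      # piecewise-constant "blocky" model

G = rng.standard_normal((150, n)) / np.sqrt(n)   # generic forward-operator stand-in
d = G @ m_true + 0.02 * rng.standard_normal(150)

D = np.diff(np.eye(n), axis=0)              # first-difference operator (TV)
lam, rho, n_iter = 0.02, 1.0, 300

soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
z = np.zeros(n - 1); u = np.zeros(n - 1)
M = np.linalg.inv(G.T @ G + rho * D.T @ D)  # small problem: direct inverse is fine

for _ in range(n_iter):
    m = M @ (G.T @ d + rho * D.T @ (z - u)) # m-update (quadratic subproblem)
    z = soft(D @ m + u, lam / rho)          # z-update (soft thresholding of D m + u)
    u = u + D @ m - z                       # scaled dual update

print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```

Only the z-update involves the non-smooth term, and it reduces to an element-wise soft thresholding, which is what makes proximal splittings of this kind attractive compared with plain least-squares solvers.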
Apart from the final inversion step, Seis2Rock can be applied in an entirely data-driven fashion when well information includes both petrophysical and elastic parameters, as shown in the field data example. However, when elastic parameters are not measured at well locations, a representative rock physics model can be integrated to provide the link between petrophysical and elastic parameters; whilst this step allows Seis2Rock to be applied to a much wider set of use cases, it introduces additional complexity due to the uncertain nature of rock physics models. Future research will investigate how to embed sensitivity analysis and uncertainty quantification of the RPM and its hyperparameters into the Seis2Rock process. Finally, part of our method's robustness is attributed to its capacity to mitigate noise effects in seismic data by selecting an appropriate number of coefficients (\(p\)) when constructing the optimal basis functions and coefficients. However, the selection of this parameter is user-dependent; consequently, incorporating \(p\) into subsequent uncertainty quantification analysis is a viable approach.
## 5 Conclusions
Seis2Rock is an efficient and robust technique for petrophysical inversion. The introduced approach relies on singular value decomposition as a way to identify a set of optimal basis functions from pre-stack seismic data modelled at one or a small number of well locations. Such basis functions are later used to project pre-stack seismic data into band-limited petrophysical reflectivities; these reflectivities can be ultimately inverted for full-bandwidth petrophysical parameters by simply solving a post-stack seismic inversion per parameter. The proposed approach contrasts with data-hungry deep learning models, which require extensive amounts of synthetic data to establish a connection between petrophysical parameters and pre-stack data. Additionally, the flexibility to select the number of optimal coefficients allows Seis2Rock to handle data with various degrees of noise. Results on synthetic data indicate that Seis2Rock can directly invert petrophysical properties from seismic pre-stack data, and that porosity, water saturation, and shale content can be recovered with a moderate to high degree of accuracy. When applied to field datasets, Seis2Rock relies on the application of pre-processing steps to construct synthetic AVO gathers that closely mimic the field data. The results obtained in this work on the Volve dataset suggest that Seis2Rock can effectively recover petrophysical information, even in the presence of high noise levels. In this regard, using a smaller number of optimal basis functions helps to manage noise levels. Finally, similarly to any other data-driven method, Seis2Rock's optimal basis functions may perform deficiently when applied on seismic datasets with geological settings that differ from those of the training data (e.g., far away from the control well).
## Acknowledgment
This publication is based on work supported by King Abdullah University of Science and Technology (KAUST) and the DeepWave consortium. The authors also thank Equinor and partners for providing access to the Smeaheia and Volve datasets.
## Appendix A Seis2Rock back-projection of band limited optimal coefficients
This Appendix provides a mathematical explanation of the back-projection process of Seis2Rock, where optimal coefficients are converted into band-limited petrophysical parameters. To begin, we postulate the existence of a 3-term linear modelling equation linking the petrophysical parameters of interest to pre-stack seismic data. We do so both for the data \(\tilde{d}(t_{j},\theta)\) associated with the parameters coming from the available well-logs, as well as for the data \(d(t_{j},\theta)\) we wish to invert for:
\[\tilde{d}(t_{j},\theta) \approx\sum_{i=1}^{N_{t}}w(t_{j}-t_{i})\left[\alpha\tilde{r_{1}}( t_{i})+\beta\tilde{r_{2}}(t_{i})+\gamma\tilde{r_{3}}(t_{i})\right] \tag{13}\] \[d(t_{j},\theta) \approx\sum_{i=1}^{N_{t}}w(t_{j}-t_{i})\left[\alpha r_{1}(t_{i})+ \beta r_{2}(t_{i})+\gamma r_{3}(t_{i})\right] \tag{14}\]
or in a matrix-vector notation as:
\[\mathbf{\tilde{d}} =\alpha\mathbf{W\tilde{r_{1}}}+\beta\mathbf{W\tilde{r_{2}}}+\gamma \mathbf{W\tilde{r_{3}}} \tag{15}\] \[\mathbf{d} =\alpha\mathbf{W\mathbf{r_{1}}}+\beta\mathbf{W\mathbf{r_{2}}}+ \gamma\mathbf{W\mathbf{r_{3}}} \tag{16}\]
Here \(\alpha\), \(\beta\), \(\gamma\) depend on the type of linearization, \(r_{1}\), \(r_{2}\), \(r_{3}\) are reflectivity coefficients associated with the petrophysical parameters of interest (i.e., porosity, shale content, water saturation). \(\mathbf{W}\) is the convolutional operator that applies the wavelet \(w(t)\) to the reflectivities, and the symbol \(\sim\) is used to indicate parameters coming from the well-log information. Note that in this derivation we have omitted for simplicity the background dataset.
We also expand the Seis2Rock modelling operator in equation 9 as:
\[d_{j}(t_{j},\theta)\approx c_{1}f_{1}(t_{j},\theta)+c_{2}f_{2}(t_{j},\theta)+c _{3}f_{3}(t_{j},\theta)+\cdots=\sum_{k=1}^{p}c_{k}f_{k}(t_{j},\theta) \tag{17}\]
From [15] we can write each basis function \(f\) as follows:
\[f_{k}(t_{j},\theta) =\sum_{j=1}^{N_{t}}h_{jk}\tilde{d}(t_{j},\theta)\] \[=\sum_{j=1}^{N_{t}}h_{jk}\sum_{i=1}^{N_{t}}w(t_{j}-t_{i})\left[ \alpha\tilde{r_{1}}(t_{i})+\beta\tilde{r_{2}}(t_{i})+\gamma\tilde{r_{3}}(t_{i })\right] \tag{18}\] \[=\alpha\sum_{j=1}^{N_{t}}h_{jk}\sum_{i=1}^{N_{t}}w(t_{j}-t_{i}) \tilde{r_{1}}(t_{i})+\beta\sum_{j=1}^{N_{t}}h_{jk}\sum_{i=1}^{N_{t}}w(t_{j}-t _{i})\tilde{r_{2}}(t_{i})+\gamma\sum_{j=1}^{N_{t}}h_{jk}\sum_{i=1}^{N_{t}}w(t_ {j}-t_{i})\tilde{r_{3}}(t_{i})\]
Now we can insert the basis functions in equation 18 into equation 17:
\[d_{j}(\theta)\approx \alpha\sum_{k=1}^{p}c_{k}\sum_{j=1}^{N}h_{jk}\sum_{i=1}^{N}w(t_{j }-t_{i})\tilde{r_{1}}(t_{i})+\] \[\beta\sum_{k=1}^{p}c_{k}\sum_{j=1}^{N}h_{jk}\sum_{i=1}^{N}w(t_{j }-t_{i})\tilde{r_{2}}(t_{i})+ \tag{19}\] \[\gamma\sum_{k=1}^{p}c_{k}\sum_{j=1}^{N}h_{jk}\sum_{i=1}^{N}w(t_{j }-t_{i})\tilde{r_{3}}(t_{i})\]
Expressing equation 19 for all time samples in the matrix vector notation:
\[\mathbf{d}=\alpha\mathbf{c^{T}H^{T}W\tilde{r_{1}}}+\beta\mathbf{c^{T}H^{T}W\tilde {r_{2}}}+\gamma\mathbf{c^{T}H^{T}W\tilde{r_{3}}} \tag{20}\]
and considering the terms with the same coefficients \(\alpha\), \(\beta\), \(\gamma\) in equations 16 and 20, we obtain:
\[\mathbf{W}\mathbf{r}_{i}=\mathbf{c}^{T}\mathbf{H}^{T}\mathbf{W}\tilde{\mathbf{r}}_{i}\quad i=1,2,3\ (\mathrm{number\ of\ petrophysical\ parameters}) \tag{21}\]
This set of equations could be written in a compact form for the whole time sequence and three petrophysical parameters if we define \(\mathbf{R}=[\mathbf{r_{1}}(t),\mathbf{r_{2}}(t),\mathbf{r_{3}}(t)]\) and \(\mathbf{C}=[\mathbf{c_{1}}(t),\mathbf{c_{2}}(t),\mathbf{c_{3}}(t)]\):
\[\mathbf{W}\mathbf{R}=\mathbf{C^{T}H^{T}W\tilde{R}} \tag{22}\]
Finally, to obtain the reflectivities we can simply divide by the wavelet on each side:
\[\mathbf{R}=\mathbf{W^{-1}C^{T}H^{T}W\tilde{R}} \tag{23}\]
Similarly, to obtain the petrophysical parameters, the derivative operator can also be inverted from \(\mathbf{R}\), leading to equation 12.
|
2306.08973
|
Scattering of relativistic electron beams by the anode mesh in
high-current vircators
|
In a virtual cathode oscillator, the scattering of a high-current
relativistic electron beam by the anode mesh leads to formation of an electron
cloud near the anode. The cloud particles possess low energy and large spread
in velocities caused by multiple scattering and ionization losses. The
electrons captured by the cloud do not participate in the oscillations of the
virtual cathode and partially block a vircator. As a result, the amplitude of
the electric field oscillations is reduced. In order to increase the
oscillation amplitude, the thickness of the anode mesh should be equal to the
mean free path for electrons in the mesh material.
|
Sergei Anishchenko, Vladimir Baryshevsky, Alexandra Gurinovich
|
2023-06-15T09:08:00Z
|
http://arxiv.org/abs/2306.08973v1
|
# Scattering of relativistic electron beams by the anode mesh in high-current vircators
###### Abstract
In a virtual cathode oscillator, the scattering of a high-current relativistic electron beam by the anode mesh leads to formation of an electron cloud near the anode. The cloud particles possess low energy and large spread in velocities caused by multiple scattering and ionization losses. The electrons captured by the cloud do not participate in the oscillations of the virtual cathode and partially block a vircator. As a result, the amplitude of the electric field oscillations is reduced. In order to increase the oscillation amplitude, the thickness of the anode mesh should be equal to the mean free path for electrons in the mesh material.
Keywords: vircator, multiple scattering, ionization losses, radiation losses
PACS: 41.75.-i, 34.50.Bw
## I Introduction
The interaction of charged particles with matter plays an important role in various fields of science and technology. Radiation therapy, design of modern particle detectors, and radiation protection of spacecrafts are impossible without a thorough analysis of the passage of charged particles through matter and quantitative treatment of multiple scattering and energy losses.
The interaction of charged particles with matter is also important in high-current electronics dealing with the propagation of high-power ion and electron beams in electrodynamic structures [1]. Indeed, some of the beam particles, when they inevitably hit one of the structural elements (anode mesh, collector, drift tube, etc.), are reflected due to multiple scattering, return into the interaction area, and continue to interact with the electromagnetic fields inside the system. For example, about a quarter of the particles normally incident on a steel element of the structure are reflected. If the incidence angle differs significantly from normal, then the fraction of reflected particles is much higher. The spectrum of kinetic energy for reflected particles extends from zero to the initial energy. Since the number of incident electrons is comparable with the number of reflected ones, the latter can significantly affect the operation of high-current devices.
A large number of theoretical results devoted to multiple scattering of particles are unsuitable for studying the interaction of high-current electron beams with matter. First, many studies use a small-angle approximation [2; 3; 4; 5; 6; 7; 8]. Secondly, the approximate values of elastic electron-atom scattering cross sections differ significantly from those obtained by numerical calculation at energies < 1 MeV [9]. Thirdly, multiple scattering cannot be considered separately from ionization [10; 11; 12; 13; 14] and radiation [15; 16] losses. These losses are significant in the case of "thick" electrodynamic structures.
A consistent quantitative description of particle-matter interaction can be obtained using Monte-Carlo simulations [17; 18; 19; 20; 21; 22; 23; 24; 25] based on the most rigorous Goudsmit-Saunderson multiple scattering theory [3]. The algorithms used in numerical calculations should be, on the one hand, sufficiently accurate, and, on the other hand, fast.
In this paper, we describe an approach to modeling the interaction of electron beams with electrodynamic structures in high-current electronic devices. The approach will be validated by calculating electron transmission and reflection coefficients and comparing the obtained results with the numerical simulations [20; 22] and experimental data [26; 27] published in the literature.
After validation, the numerical method will be integrated into a one-dimensional program for modeling high-current devices with an oscillating virtual cathode (VC). We will demonstrate that the scattering of relativistic particles by the anode mesh leads to the formation of an electron cloud near the anode. The cloud particles possess a large energy spread and cause a significant decrease in the amplitude of field oscillations in vircators. It will be shown that the use of an anode mesh with a thickness approximately equal to the electron mean free path in the anode material leads to a decrease in the number of cloud particles and could contribute to an increase in the oscillation amplitude.
## II Passage of relativistic electrons through matter
### Monte-Carlo simulation
The basis of numerical simulation of the electron-matter interaction is the Monte-Carlo method [22; 23; 24; 19], which can be described as follows. The trajectory of each electron in matter is divided into many small segments. Within each segment, the electron energy is assumed to be constant. When passing from one segment to another, the particle changes its energy in accordance with the theory of ionization and radiation losses [14]. The angular distribution of the scattered particle is described by the Goudsmit-Saunderson distribution [19]. The calculation is carried out until the particle either leaves the substance or loses a significant part of its initial energy (in practice, calculations stop when the electron energy approaches 10 keV).
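A schematic version of such a condensed-history loop is sketched below (illustration only, not the authors' code): the constant stopping power and the Gaussian angular kick are crude stand-ins for the tabulated ionization/radiation losses and the Goudsmit-Saunderson angular distribution, and all numerical values are invented.

```python
# Schematic condensed-history loop (illustration only, not the authors' code).
# The track is split into short steps; on each step the energy is reduced by a
# stopping-power term and the direction diffuses by a sampled scattering angle.
# The constant dE/dx and the Gaussian angular kick are crude stand-ins for the
# tabulated losses and the Goudsmit-Saunderson distribution used in practice.
import numpy as np

rng = np.random.default_rng(3)

def track_electron(E0_keV, thickness_cm, step_cm=1e-4,
                   dEdx_keV_per_cm=1.2e4, theta_rms_per_step=0.05):
    """Follow one electron through a slab; return 'transmitted', 'reflected' or 'absorbed'."""
    E, z, theta = E0_keV, 0.0, 0.0          # theta: angle to the slab normal
    while E > 10.0:                          # stop near 10 keV, as in the text
        z += np.cos(theta) * step_cm
        if z < 0.0:
            return "reflected"
        if z > thickness_cm:
            return "transmitted"
        E -= dEdx_keV_per_cm * step_cm       # crude constant ionization loss
        theta += theta_rms_per_step * rng.standard_normal()  # crude angular diffusion
    return "absorbed"

results = [track_electron(500.0, 30e-4) for _ in range(2000)]
for outcome in ("transmitted", "reflected", "absorbed"):
    print(outcome, results.count(outcome) / len(results))
```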
A significant drawback of the Monte-Carlo method is the computational complexity. To calculate the Goudsmit-Saunderson distribution, it is necessary to sum a large number of terms. Each term contains an integral of a rapidly oscillating function. This circumstance significantly complicates the use of the standard Monte-Carlo method in high-current electronics due to the huge number of particles used in the simulation of high-current devices.
A way out of this situation could be a method in which the distribution function of the scattering angles is found using a simple formula. Such a method was proposed in [25]. It is based on rigorous theoretical results obtained by Lewis [28]. According to [28], the most significant mean values depend only on the transport cross-sections: the first one
\[\sigma_{1}=2\pi\int_{0}^{\pi}(1-\cos\chi)\frac{d\sigma(\chi)}{d\Omega}\sin\chi d\chi \tag{1}\]
and the second one
\[\sigma_{2}=2\pi\int_{0}^{\pi}\frac{3}{2}(1-\cos^{2}\chi)\frac{d\sigma(\chi)}{d \Omega}\sin\chi d\chi. \tag{2}\]
Here, \(\frac{d\sigma(\chi)}{d\Omega}\) is the differential elastic electron-atom cross-section. For example, the average longitudinal velocity \(v\) and the root-mean-square deviation of the transverse velocity in a
segment of length \(s\)\({}^{1}\) change in accordance with the following formulas, respectively:
Footnote 1: The length of \(s\) must be much less than the interval over which the particle energy changes significantly.
\[v\langle\cos\theta\rangle=v\exp(-n\sigma_{1}s) \tag{3}\]
and
\[v^{2}\left(1-\langle\cos^{2}\theta\rangle\right)=v^{2}\left(1-\frac{1+2\exp(-n\sigma_{2}s)}{3}\right). \tag{4}\]
The symbol \(n\) denotes the particle density, and \(\theta\) is the polar angle between two particle velocity vectors. The first vector corresponds to the particle entering the segment \(s\) and the second one to the particle exiting it.
The idea stated in [25] is as follows. If a simple distribution leads to the formulas (3) and (4), then the computational complexity of the Monte-Carlo method is significantly reduced. At the same time, the results of modeling particle passage through matter do not change. This was convincingly demonstrated by the authors of [25], who calculated the angular distribution of electrons after passing through gold foils of various thicknesses.
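The quantities in equations (1)-(3) are straightforward to evaluate numerically. The sketch below does so for a screened-Rutherford differential cross-section with an assumed screening parameter `eta_s`; this cross-section is only a stand-in for the numerically computed elastic electron-atom cross-sections used in the actual simulations.

```python
# Numerical illustration of equations (1)-(3). The screened-Rutherford dsigma/dOmega
# and the screening parameter eta_s are assumptions used purely as a stand-in for
# the numerically computed elastic electron-atom cross sections.
import numpy as np
from scipy.integrate import quad

eta_s = 1e-3                                   # assumed screening parameter

def dsigma_dOmega(chi):                        # screened Rutherford, arbitrary units
    return 1.0 / (1.0 - np.cos(chi) + 2.0 * eta_s) ** 2

def transport_xsec(order):
    if order == 1:
        weight = lambda chi: 1.0 - np.cos(chi)                 # equation (1)
    else:
        weight = lambda chi: 1.5 * (1.0 - np.cos(chi) ** 2)    # equation (2)
    integrand = lambda chi: weight(chi) * dsigma_dOmega(chi) * np.sin(chi)
    return 2.0 * np.pi * quad(integrand, 0.0, np.pi)[0]

sigma1, sigma2 = transport_xsec(1), transport_xsec(2)
print("sigma_1 =", sigma1, "  sigma_2 =", sigma2)

# Equation (3): decay of the mean longitudinal direction cosine over a step s.
# Only the dimensionless product n*sigma_1*s matters here (units are arbitrary).
n_density = 1.0
for s in (0.001, 0.01, 0.05):
    print(f"s = {s}:  <cos(theta)> = {np.exp(-n_density * sigma1 * s):.4f}")
```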
### Reflection and transmission coefficients
To verify the described approach to simulation of the interaction of electrons with matter, we investigated electron scattering by thick foils made of various materials (Be, C, Al, Fe, Ag, Au, U) and calculated reflection coefficients. The energy of particles normally incident on a target varied from 0.1 to 2 MeV. Comparison of our simulation results and experimental data [26; 27] demonstrates better agreement for heavy elements (left plot in figure 1).
Simulation results obtained for light elements (right plot in figure 1) deviate from experimental data to a greater extent: this is due to the neglect of electron-electron collisions in the simulation, which increase the multiple-scattering angle by a factor of approximately \(1+1/Z\). The nuclear charge in the denominator indicates that this effect is more significant for light elements. The difference between simulation results and experiment for heavy elements (see the curves for gold and uranium in the left plot of figure 1) could be explained by rearrangement of the valence electron shells in solid matter.
Figure 1: Electron reflection coefficients: solid lines depict our simulation results, points correspond to experimental data [26; 27].
Figures 2 and 3 compare the simulation results with the calculations [22] for transmission and reflection coefficients of electrons scattered by aluminum and beryllium foils of different thicknesses; a comparison with experimental data [27] is also provided. Our results are in good agreement with the calculations [22]. However, the experimentally obtained data for the reflection coefficient (right plots in Figures 2 and 3) differ from both our simulations and the calculations [22], which is most likely due to the neglect of electron-electron collisions.
Figure 4 shows the dependence of the reflection coefficient on the kinetic energy of a particle incident on an aluminum plate. Three solid curves present the simulation results obtained for different angles of incidence. At energy values up to \(\sim 2\) MeV our simulation results are in good agreement with both calculation [20] and experimental data [27]. At the energy \(\sim 2\) MeV, the reflection coefficients given in [20] demonstrate a noticeable decrease in contrast to our calculations. Such behavior applies to the case of normal incidence.
Figure 3: Transmission (left) and reflection (right) coefficients for electrons scattered by beryllium foils of various thicknesses: solid lines correspond to the simulation results; red, green and black dots are the calculations from the paper [22]; blue dots depict experimental data [27]
Figure 2: Transmission (left) and reflection (right) coefficients for electrons scattered by aluminum foils of various thicknesses: solid lines correspond to the simulation results; red, green and black dots are the calculations from the paper [22]; blue dots depict experimental data [27]
The slight deviation of our simulation results from those presented in Berger's paper [20] could be associated with different approaches to the treatment of multiple scattering. We used the approach [25] based on rigorous results obtained by Lewis [28] from the Goudsmit-Saunderson theory, while Berger resorted to the approximate Molière theory. Note that the experimental data obtained at normal incidence (\(\theta=0^{\circ}\)) (figure 1) do not show such a decrease in the reflection coefficient.
Thus, the simulation results obtained in this work demonstrate good agreement with experimental data and numerical calculations published in the literature. Some discrepancy (\(\sim 20\%\)) with the experimental data is observed in the reflection coefficient for electrons with energy \(\sim 0.1\) MeV scattered by beryllium foil.
## III VC oscillations
The approach developed to simulate the passage of electrons through matter was integrated into a one-dimensional code designed to simulate high-current devices with an oscillating virtual cathode. This code calculates the motion of relativistic electrons in a self-consistent longitudinal electric field by the particle-in-cell method. The computational domain consists of two parts (figure 5): the cathode-anode gap and the drift space. The potential of the right wall of the drift space can be either equal to the anode potential (in the case of a vircator) or to the cathode potential (in the case of a reflex triode). Injection of particles into the system is carried out under conditions of unlimited emission capability of the cathode. This condition corresponds to the regime of explosive electron emission that takes place in high-current accelerators.
Figure 4: Dependencies of the electron reflection coefficient on the kinetic energy at different angles of incidence on an aluminum plate: solid lines depict our simulation results and dots show calculation results presented in [20].
Figure 5: One-dimensional model of a high-current device with an oscillating VC.
The initial version of the PIC code took no account of scattering by the anode mesh: the anode was assumed to be semitransparent and was characterized by a single parameter, the geometric transparency \(\eta\), which varied from zero to one. When a particle passed through the anode, its charge was multiplied by the transparency coefficient. This procedure corresponded to the partial absorption of particles by the anode mesh. Neither the reflection of particles, nor the deceleration of particles in the anode material, nor the spread of particle velocities due to multiple scattering was taken into account by the code. In what follows, we will refer to the simulation model just described as the absorption model. Figures 6 and 7 show the electric field oscillations, the spectrum, and the phase portrait of the beam in the vircator in the absorption model. The geometric transparency is \(\eta=0.7\).
In the updated version of the PIC code, scattering of particles by the anode mesh was added by accounting for the electron-matter interaction. (We will refer to the new simulation model as the scattering model.) The anode mesh was described by three parameters: the geometric transparency of the mesh, the anode material, and the anode thickness \(d\). The geometric transparency \(\eta\) was introduced in terms of the probability \((1-\eta)\) for a particle to hit the anode mesh, which was assumed to be a steel foil of thickness \(d\) with holes. The ratio of the sum of hole areas to the total foil area defined the geometric transparency \(\eta\).
Thus, a particle passing through the anode is scattered with probability \(1-\eta\) rather than \(\eta\). Particle scattering by the anode is calculated in exactly the same way as scattering by a solid foil\({}^{2}\). The scattered (unabsorbed) particle enters either the cathode-anode gap or the drift space with a random component of the transverse momentum and with a reduced longitudinal velocity due to ionization energy loss. In this case, the time of particle motion inside the anode material is neglected: since the characteristic thickness \(d\) is much smaller than the characteristic size of the cathode-anode gap, the time which the particle spends inside the metal can be ignored.
Figure 6: Electric field in the VC region and electric field spectrum. Absorption model.
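The anode-crossing logic of the scattering model can be summarized by the following sketch; the absorb/reflect/transmit probabilities, the energy-loss fraction, and the transverse kick are placeholder numbers, standing in for the Monte-Carlo foil-interaction routine described above.

```python
# Minimal sketch of the anode-crossing step in the scattering model (illustrative
# numbers only): with probability eta the electron passes through a hole unchanged;
# otherwise it interacts with the foil. The absorb/reflect/transmit probabilities,
# the energy-loss fraction and the transverse kick are placeholders for the
# Monte-Carlo foil routine described in Section II.
import numpy as np

rng = np.random.default_rng(4)

def cross_anode(vz, vperp, eta=0.7,
                p_absorb=0.3, p_reflect=0.2, dE_frac=0.3, kick_rms=0.2):
    """Return (alive, vz, vperp) after an attempted crossing of the anode mesh."""
    if rng.random() < eta:                       # geometric transparency: clean pass
        return True, vz, vperp
    u = rng.random()
    if u < p_absorb:                             # stopped inside the foil
        return False, 0.0, 0.0
    speed = np.hypot(vz, vperp) * np.sqrt(1.0 - dE_frac)          # crude ionization loss
    vperp_new = vperp + kick_rms * speed * rng.standard_normal()  # multiple scattering
    forward = u >= p_absorb + p_reflect          # otherwise the particle is reflected
    vz_mag = np.sqrt(max(speed**2 - vperp_new**2, 0.0))
    vz_new = np.sign(vz) * vz_mag if forward else -np.sign(vz) * vz_mag
    return True, vz_new, vperp_new

# Example: a population of particles attempting one crossing at normal incidence
outcomes = [cross_anode(1.0, 0.0) for _ in range(10000)]
alive = sum(o[0] for o in outcomes)
print("fraction surviving one crossing:", alive / len(outcomes))
```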
Figures 8 and 9 show the electric field oscillations, spectra and phase portraits for steel anodes of different thicknesses (the thicknesses are indicated on the plots in units of the mean free path \(s_{0}\)). The geometric transparency of the anode mesh is \(\eta=0.7\). Simulations of vircators showed that electron scattering by the anode mesh leads to the formation of a cloud of low-energy electrons with a large spread in velocities. These electrons appear due to the loss of longitudinal momentum after scattering by the anode mesh. Because of this deceleration, these electrons do not have enough energy to reach either the real or the virtual cathode. As a result, they oscillate in the potential well between the cathode and the virtual cathode until complete absorption. Scattering by the anode leads to a random change in the phase of electron oscillations. This change prevents particles from generating coherent oscillations. In addition, the electron cloud partially shields the anode and blocks the vacuum diode. As a consequence, the number of electrons that can participate in collective oscillations becomes smaller. As a result, the amplitude of oscillations in a vircator decreases.
The vircator operation can be improved if the thickness of the partitions in the anode mesh is chosen to be close to the electron path length in the anode material. In this case, the number of scattered electrons passing through the anode could be significantly reduced. As a consequence, fewer cloud particles would contribute to the decrease of the oscillation amplitude.
Figure 7: Phase portrait of an electron beam in the absorption model. The longitudinal coordinates and particle velocities are shown along the abscissa in the units of the cathode-anode gap and along the ordinate in units of speed of light, respectively.
## IV Conclusion
Multiple scattering of a relativistic electron beam by an electrodynamic structure, accompanied by ionization and radiation energy losses, significantly influences the operation of a high-current virtual cathode oscillator. In a vircator, the scattering of a high-current relativistic electron beam by the anode mesh leads to the formation of an electron cloud near the anode. The cloud particles possess low energy and a large spread in velocities caused by multiple scattering and ionization losses. The electrons captured by the cloud do not participate in the oscillations of the virtual cathode and partially block the vircator. As a result, the amplitude of the electric field oscillations is reduced. In order to increase the oscillation amplitude, the thickness of the wires forming the anode mesh should be equal to the mean free path for electrons in the mesh material. Using an anode mesh that satisfies this requirement makes it possible to minimize the number of scattered electrons passing through the anode.
Figure 8: Electric field and its spectrum in the VC region.
Figure 9: Phase portraits of the electron beam at different thicknesses of the steel anode. The phase portraits shown on the left and on the right correspond to the thickness \(0.8s_{0}\) and \(0.2s_{0}\), respectively. The longitudinal coordinates and particle velocities are shown along the abscissa in the units of the cathode-anode gap and along the ordinate in units of speed of light, respectively
We applied the approach to modeling multiple scattering described in [25] in our simulation code to calculate the influence of electron ionization losses and multiple scattering by the anode mesh on the particle dynamics in a vircator. This approach significantly reduces the calculation time compared with the most popular approach based on the Goudsmit-Saunderson distribution.
|
2301.12754
|
Why stars inflate to and deflate from red giant dimensions, II: replies
to critics
|
In a 1992 paper of ours the role of opacity-driven thermal instabilities in
shaping the course of stellar evolution was amply illustrated. This included
the classical issue of ``{\it why stars become red giants"} as well as the
subsequent formation of extended ``Cepheids" {\it blue loops} during the helium
burning phases. Our explanation of these evolutionary phenomena has been
occasionally dismissed with just a few words in refereed or not refereed
publications. In a most recent case, the fact that, through the years, I did
not reply to these criticisms is interpreted as evidence that they were well
founded. In this paper it is made clear that this is not at all the case, the
leading role of such instabilities is instead reaffirmed and the criticisms are
shown to be insubstantial.
|
Alvio Renzini
|
2023-01-30T10:00:20Z
|
http://arxiv.org/abs/2301.12754v1
|
# Why stars inflate to and deflate from red giant dimensions, II: replies to critics
###### Abstract
In a 1992 paper of ours the role of opacity-driven thermal instabilities in shaping the course of stellar evolution was amply illustrated. This included the classical issue of "_why stars become red giants_" as well as the subsequent formation of extended "Cepheids" _blue loops_ during the helium burning phases. Our explanation of these evolutionary phenomena has been occasionally dismissed with just a few words in refereed or not refereed publications. In a most recent case, the fact that, through the years, I did not reply to these criticisms is interpreted as evidence that they were well founded. In this paper it is made clear that this is not at all the case, the leading role of such instabilities is instead reaffirmed and the criticisms are shown to be insubstantial.
keywords: stars: evolution - stars: interiors - Hertzsprung-Russell and colour-magnitude diagrams
## 1 Introduction
The founding fathers of stellar evolution, Subrahmanyan Chandrasekhar, Martin Schwarzschild, Alan Sandage and Fred Hoyle, set up the stage, made the early discoveries and introduced the _language_ of stellar evolution. But they left a problem unsolved, which is usually formulated as a question: "why stars _become_ red giants?"\({}^{1}\) Then, in the early 'sixties, adequate computers became available and, thanks to scientists such as Rudolf Kippenhahn, Pierre Demarque and Icko Iben, fully realistic stellar evolutionary sequences began to be produced. Computers had no difficulty whatsoever in making red giants, and in accounting, for instance, for the so-called Hertzsprung gap, i.e., the paucity of stars in between the main sequence (MS), spectral types O to F, and the red giant branch (RGB), spectral types K and M. Models swept quickly through the gap and found rest only as red giants. Problem solved? Yes and no. The clear success of the models satisfied almost everybody, and the question _Why?_ became kind of academic and few continued to pay attention to it. A common wisdom was that it is not easy to predict, without computers, what the solutions are of a system of four non-linear differential equations.
Footnote 1: The reason to use italics fonts for this verb will be made clear in the course of the paper.
A first hint came to me when realising what was happening in the evolution of a \(3\,M_{\odot}\) star in one of Iben's classical papers (Iben, 1965). In its Figure 6, one can see that some 10 Myr after hydrogen exhaustion in the core, the surface luminosity of the star was starting to drop while the nuclear luminosity (provided by the hydrogen burning shell) was still increasing for another \(\sim 2\) full Myr, and only then did it start to drop as well. So, something was happening to the envelope that the core did not know yet. The envelope had become incapable of transmitting outwards and radiating away all the energy being generated in the interior, i.e., the envelope had become thermally unstable! This incapacity was clearly causing radiative energy to be trapped in the envelope, which in turn was causing its runaway expansion. Thus, a thermal instability of the stellar envelopes was first proposed as the driver of stars to red giant dimensions (Renzini, 1984; Iben & Renzini, 1984). This was the beginning of the story.
## 2 Runaway inflations and deflations of stellar envelopes
This explanation was more effectively formulated in a paper with the same title as the present one (Renzini et al., 1992), along with the presentation of stellar evolutionary sequences and toy envelope models illustrating what physically happens to the stars.\({}^{2}\) I could find no better way of summarising the paper than to use its very abstract, which is reproduced here:
Footnote 2: Most of these calculations were made with Iben’s stellar evolution code that Icko kindly provided to us.
"We demonstrate that a unique physical process is responsible for the runaway expansion of stars to red giant dimension, as well as for their subsequent recollapse leading to the so-called blue loops. In response from an increasing luminosity from the core, the stellar envelope expands keeping its thermal equilibrium3 insofar
the envelope thermal conductivity increases. However, expansion implies local cooling, ion recombination, and thus increasing opacity, in such a way that a time comes when further expansion causes a drop in thermal conductivity in the envelope. As the luminosity transferred outward and radiated away from the surface drops, thermal equilibrium is broken and an increasing fraction of the core luminosity is trapped in the envelope, causing further expansion and further drop of the thermal conductivity: the resulting runaway inflation of the envelope brings the star into the red giant region of the H-R diagram, and thermal equilibrium is not restored until convection penetrates inwards and the whole envelope becomes convective. The reverse process is responsible for the formation of the blue loops. During the early helium-burning phase, the core luminosity decreases and the star descends along the Hayashi track. By contraction the envelope heats up, the heavy elements ionize and the opacity drops. As the inner part of the envelope returns to radiative equilibrium, the envelope departs again from thermal equilibrium, since by contraction the temperature increases, the heavy ions ionize, the opacity drops, the thermal conductivity increases, and so do the radiative energy losses. Thus the envelope catastrophically deflates inside the potential well of the star. We present detailed analyses of these runaway inflations and deflations, and apply those concepts to achieve a deeper physical understanding of several major features of stellar evolution, etc..."
So, from the beginning ours was not just an explanation of why stars _become_ giants, but also of why some stars retreat from a red giant size and experience extended blue loops during their core-helium burning phase, another unexplained issue at that time. Lauterborn, Refsdal & Roth (1971) had indeed realized that what they called a "secular instability" was developing "during" the loops, but could not identify its physical origin. Our 1992 paper has been often criticized, but only for the red giant issue, while the loops part was ignored. This was somehow disappointing, because our ambition had been to _unify_ the two problems as a manifestation of a single process: a thermal instability arising in stellar envelopes, where opacity plays a leading role. For many years I left the criticisms unanswered, hoping that time would settle the issue, but this did not happen. Once every few years a paper appears cursorily dismissing our explanation and venturing into attempts to find the real one, but never claiming success. The latest case is a paper by Miller Bertolami (2022) that has recently appeared on the ArXiv, where one reads: "The discussions about why stars become red giants have sometimes turned into heated debates (Sugimoto 1997; Faulkner 1997; Sugimoto & Fugimoto 2005; Faulkner 2005), while some other times authors have ignored criticism, and continue to develop ideas (Renzini et al. 1992; Renzini & Ritossa 1994) that had already been seriously questioned by other researchers (Weiss 1989; Iben 1993)". So, our explanation was dismissed on the basis that others had already claimed that it is bogus, and the fact that we did not further react is interpreted as an implicit admission that the criticism was well founded. So, this paper is meant to make clear that this is not at all the case. I still believe that the explanations in our 1992 paper are correct and describe why (some) stars _become_ red giants, and others don't.
## 3 The Critics
In this section the criticisms made by the mentioned authors are quoted and commented upon. Let's start from Weiss (1989). In his paper Achim Weiss examines the criterion for thermal stability of stellar envelopes that I had proposed in 1984 and he found some cases in which apparently it would not apply. In particular, an approximate rendition for the criterion was found inaccurate. This last point was admitted in our 1992 paper and the other criticisms were answered. But in the end Weiss concluded: "I agree with Renzini's discussion on the differences in redward evolution (Renzini 1984) of stars of different masses, different metal content and his conclusion about the connection with stellar opacities. Also, this paper confirms that the expansion is a pure envelope phenomenon that has to be initiated somehow (e.g. by a rapid contraction of the core and the presence of a burning shell). It might be acceptable that the final expansion in some stars after having reached a maximum luminosity is indeed a thermal runaway effect connected with some specific opacity features." So, rather than demolish it, Weiss confirmed the proposed physical origin of the inflation to red giants.
Sugimoto & Fujimoto (2000) confine their criticism to the following sentences: "Renzini et al. (1992) and Renzini & Ritossa (1994) have argued that thermal instability in the envelope, which results in the deep penetration of surface convection, is the cause of the expansion to a red giant. It is true that the red giant branch is separated from the branch of stable blue giants without surface convection by the well-known Hertzsprung gap, as first noted by Kozlowski (1971) and Lauterborn (1972) and also seen from Figure 7. A star does undergo thermal instability in the envelope when crossing the gap. This is not the cause of the large expansion, however, but is a result of potentially extraordinarily large radii, as already noted by many authors (Yahil & van den Horn 1995; Whitworth 1989; Fujimoto & Iben 1991; Iben 1993)". So, Sugimoto and Fujimoto concur that a thermal instability develops in the envelope, however it would not be the thermal instability that drives the expansion but rather the instability would be driven by the _willingness_ of the star to assume a stable giant configuration that terrestrial mathematicians had proved to exist. I don't know how to otherwise interpret this sentence.
According to Fujimoto & Iben (1991): "Renzini (1984) discusses the thermal instability in the envelope as a cause of the red giant structure." This sentence does not properly represent what said in my 1984 paper, where the thermal instability is seen the cause of the _expansion_ to the red giant structure, rather than of the existence of static red giant solution to the four equations. Then Fujimoto and Iben go on: "... during the evolution to a red giant, a thermal instability in the envelope occurs and the surface convective zone extends deeply into the interior. however, the envelope instability is just an episode which occurs when the surface temperature grows low enough in the course of expansion, forcing the helium ionization zone inward (in the case of helium stars) or forcing the hydrogen and helium ionization zones inward (in the case of hydrogen stars), and causing the effective polytropic index to increase. This instability is not itself responsible for the development of a red giant structure (see also Weiss 1989)." However, this is not what Weiss concluded, as reported above, if by "development" one means _expansion_ of a dwarf to become a giant. If instead they meant _existence_ of a red giant structure solution to the equations, then the same comment to the point of view of Sugimoto & Fujimoto applies, as above. Incidentally, in our 1992 paper it is shown that through the expansion the polytropic index in the envelope does not change much at all (see Figure 9 there).
Iben (1993) had four objections to our 1992 paper, partly following the lines of Fujimoto & Iben (1991). Contrary to what is stated by Miller Bertolami (2022), we did not leave the four objections unanswered, but in Renzini & Ritossa (1994) we demonstrated all four points to be invalid, so there is no need to expand here
and the interested reader may look directly at Iben's views and our replies.
## 4 Static stellar models
In the course of their evolution, stars either expand or contract all the time. Why are they doing so? On the MS and beyond, what drives stellar evolution is nuclear burning in the deep interior, whereas the envelope has a passive role; it has _just_ to adjust to the changing conditions in the core. However, details of the core internal structure are irrelevant, as there is no _underground_ communication between the envelope and the core. What matters to the envelope are just three core quantities, namely, its mass, radius and luminosity, which set the boundary conditions at the base of the envelope. Of the three, by far the dominant one for the driving of envelope changes is luminosity. For example, take a star during its core hydrogen-burning phase: in the conversion of hydrogen into helium three particles disappear, pressure would drop (because \(P=nkT\)), so the core, by losing pressure support, is forced to contract. In doing so it heats up, thus being able to maintain its hydrostatic equilibrium. As core density and temperature increase, so does the nuclear luminosity, and this is the trigger of envelope expansion. Suppose the envelope does not change at all while the nuclear luminosity increases. The envelope would receive from its bottom more energy than it is able to emit from the surface, hence its total energy (thermal plus gravitational) would increase: it expands. In general, when the core luminosity increases the star expands, when it decreases the star contracts (at least insofar as the envelope is close to thermal equilibrium). Anyway, during these changes deviations from hydrostatic equilibrium are tiny and indeed we speak of _quasi-static_ evolving configurations.
Several papers devoted to the issue "why stars _become_ red giants?" deal instead with fully static configurations, i.e., having set to zero the gravitational energy generation (the so-called \(\epsilon_{\rm g}\)). This is equivalent to freezing evolution and exploring instead potential stellar structures in full thermal equilibrium that may or may not be realized in Nature. As such, these studies may answer the question "what is the structure of red giants", but having switched off evolution they are by construction incapable of answering the other question, i.e., how such a structure _becomes_ established in real stars, and in stellar evolutionary sequences alike. The key point is that the word _become_ implies _evolution_, not static, unevolving states. The transition to red giant can only be understood in a full evolutionary context.
Most of the papers mentioned in the previous section deal instead with static models, and thus are intrinsically unable to answer the usual question. Fairly enough, Whitworth (1989) entitled his paper "Why red giants are giant". This is indeed a question that static models can answer. But it intrinsically cannot account for evolutionary transitions. Not only were static models used, but a specific subset of them (e.g., Yahil & van den Horn 1995; Fujimoto & Iben 1991; Sugimoto & Fugimoto 2005; Faulkner 2005). Perhaps with the ambition to succeed where the founding fathers have failed, they tried to use their same old tools, specifically the (in)famous homology invariants \(U\) and \(V\). For what matters here, suffice it to say that \(U\) and \(V\) are built using only two of the four equations, namely those for hydrostatic equilibrium and mass conservation. No energy generation. No luminosity, i.e., no energy transfer through the star. Hence, in such an approach the drivers of evolution are expunged altogether. I dare to say that virtually nothing about stellar evolution can be understood by cruising the \(UV\) plane. Well, in the absence of better tools the founding fathers got something out of it. Not much, but something. However, after \(\sim 60\) years of full stellar evolution through computers the regression to the rudimentary tools of the fathers sounds inexplicable to me. Apparently, there was a kind of reluctance to look at what in fact happens inside stars from computer outputs, as if knowledge achieved in that way was lacking some sort of sublimity. Perhaps these feelings were best expressed by Faulkner (2005), when saying "The end result is that the post-main-sequence developments of all stars - low-mass, intermediate-mass and high-mass - as they expand to become giants, are finally seen to be example of one underpinning fact: that dense cores with surrounding shells naturally follow hydrogen exhaustion. While this has been known all along from oft-repeated computer calculations, we now know why analytically. That matters to true theorists." (p. 150).
## 5 A chicken and egg problem?
Miller Bertolami (2022)\({}^{5}\) fully endorses Faulkner's (2005) criticism of our physical explanation of the dwarf-to-giant transition, so I feel obliged to comment on such criticism. I cut and paste here all his points relative to us, and comment on them.
Footnote 5: It has been thanks to this paper that I discovered Faulkner’s article of 2005, as it never appeared on ArXiv.
"For the most part, such stars [massive post-main-sequence stars] are out of thermal equilibrium, but not thermally unstable as Renzini has claimed, alone or in concert, in a series of papers, e.g. Renzini et al. (1992)." (p. 190) In his whole article there is no attempt at elucidating what would be the difference between being out of thermal equilibrium and being thermally unstable, specifically in stars expanding to red giants. In the early hydrogen shellburning phase stars can be in or very close to thermal equilibrium for quite some time (see the run of nuclear and surface luminosity in Iben's Figure 6), until suddenly they start to depart precipitously and increasingly from thermal equilibrium. What is this if not a thermal instability?
A few pages later, he states: "We now see that it is most definitively not an envelope thermal instability that drives the expansion of the entire star, as Renzini, either alone or with a succession of (sometimes ascending) colleagues has long asserted. To the absolute contrary, the envelopes are sent out of equilibrium by an expansion imperative dictated by developments in the deepest and densest interiors of the stars." (p. 196).
Then Faulkner goes on: "We can now contrast this with Renzini's proposed explanation for the expansion of stars to the red-giant branch, an explanation he advanced particularly for intermediate mass stars. Renzini suggested that a star of intermediate mass starts to expand from the main sequence - a fact taken as a given - recombination occurring in its cooler regions increasing the opacity there, leading to absorption of luminous energy from below. That in turn expands the star's outermost layers still further, leading to more cooling and yet more absorption, as the hypothesized recombination wave sweeps inward through the star. A thermal instability develops from the outside inwards, for which the luminosity dips are advanced as evidence of this absorption of flux in the outermost regions. He has furthermore claimed that this opacity effect is naturally smaller when there are fewer heavy elements in a star, and that this is why such stars do not expand as much. In this explanation, once the initial expansion from the main sequence has made the
outermost layers cool enough, the natural behavior of those layers takes over and promotes the major, fast transition to the red-giant branch itself. Thus the behavior of the outermost layers leads the star to the giant branch."
This rendition of our interpretation of the phenomenon of the evolution to red giants (for intermediate mass stars) is fair enough, though Faulkner doesn't spare us a couple of picks\({}^{6}\).
Footnote 6: Does anybody doubt that stars expand during their MS phase? Or, are we the first to _claim_ that metals contribute to stellar opacity? That ion recombination takes place was not a "hypothesis" but a fact that can be easily verified on stellar evolutionary sequences.
But then Faulkner continues: "I have shown above, instead, that the behavior of the central regions is the main driver. Far from leading the rest of the star along, the envelope regions are _lagging behind_ where they would have been had there been time for them to reach either a complete new equilibrium mandated by the change in central conditions (in particular the increasing central condensation) or some'moving target' analogue." (p. 199-200)
Here Faulkner is half right, hence half wrong. It is obviously true that the ultimate drivers of stellar evolution are the nuclear transformations taking place in the core (including the shell). It is the ensuing increase in luminosity emanating from the core that drives the initial expansion of the envelope, during the early times after hydrogen shell ignition. But once the thermal instability suddenly erupts, it is the envelope that takes the lead. As mentioned in Section 1, it is the luminosity radiated by the envelope that starts dropping, while the shell (nuclear) luminosity still keeps increasing for a while. Then, with some delay, the nuclear luminosity also starts dropping fast, which is due to the feedback effect from the envelope. As documented in our 1992 paper, with the runaway expansion of the envelope, its weight on the shell drops, i.e., when the accelerated expansion, started from the surface, sweeps inside all the way to reach the upper part of the shell, then density and temperature in the upper shell drop and so does the generation of nuclear energy.\({}^{7}\) In this phase the envelope is clearly _leading_ and the core (the shell) is _lagging_, to the extent that the envelope expansion almost succeeds in switching off the shell.
Footnote 7: This is what Iben (1965) called the “shell narrowing phase”, see Iben’s Figure 6.
Faulkner's argument seems instead to be that the core would be changing too rapidly for the envelope to follow, and then the envelope would lag behind. Hence, there would be no genuine thermal instability in the envelope. But this is not what happens, especially in intermediate-mass stars: Past shell ignition there is a long period during which the core shrinks slowly, nuclear and surface luminosities are almost identical, and yet the thermal instability suddenly erupts (see again Iben's Figure 6). Moreover, in our 1992 paper we already countered this option: in Figure 4 there we showed indeed that the thermal instability erupts, no matter how slowly the core luminosity increases. In other words, it is not that the core would be evolving too rapidly for the envelope to adjust, it is instead that the envelope becomes incapable of transferring outwards all the energy being provided by the core. However, in stars more massive than \(\sim 10\,M_{\odot}\), after hydrogen exhaustion the core contracts very rapidly, out of thermal equilibrium, shell ignition is violent and there is no approach to thermal equilibrium in the envelope, which instead is immediately pushed into runaway expansion. Still, the recombination wave starts from near the surface adding the usual thermal instability to an already complex structure which is out of thermal equilibrium everywhere, from the core, through the shell and then in the envelope.
So, the question of whether it is the core or the envelope that comes first cannot be reduced to a chicken and egg problem. During most of the evolution of a star it is the stellar core that leads and the envelope follows, struggling to adjust. But when envelope thermal instabilities develop, and they do, it is the envelope itself that drives further changes, including those deep in the shell, and does so with the short, thermal timescale. The relative roles of core and envelope in determining the transition to red giants were clearly stated in the conclusions of our 1992 paper, where it was said: "We have demonstrated that stars become red giants in response to the increasing luminosity being provided by the core, and that the runaway expansion - when it takes place - is triggered by the thermal conductivity of the envelope reaching a maximum and then decreasing. The decrease of thermal conductivity is caused by the opacity increase promoted by the recombination of heavy ions in the envelope, as the envelope itself expands and cools." (Point 1 of the concluding section). Ultimately, it is the core that drives the envelope to the edge of its catastrophic thermal instability, which however first erupts near the surface and then drills through the star until it manages to expand even the burning shell itself. All this can be easily verified; it suffices to look, and with humility pay attention, at what happens inside models in stellar evolutionary sequences.
## 6 Stars that do not become red giants
The statement above "... the post-main-sequence developments of all stars -low-mass, intermediate-mass and high-mass- as they expand to become giants..." (Faulkner 2005) is not correct. Not all stars become red giants during their post-MS phase. Section 5.3 of our 1992 paper was dedicated to the effect of metallicity on the evolution to red giant configurations. If metal opacity is the culprit, one expects a big effect of metal abundance on the phenomenon. And indeed the effect is big. Even before 1992, it was known that intermediate- and high-mass stars of low (or zero) metallicity do _NOT_ become red giants during their post-MS phase, but fail to incur the thermal instability and ignite helium in the core as blue giants. In this respect, as evidence we quoted Stothers & Chin (1977), Tornambe (1984) and Bertelli, Bressan & Chiosi (1985), see also Cassisi & Castellani (1993). These stars were developing a large central concentration, a steep density and molecular weight gradient at the edge of their hydrogen-exhausted core, ignited the shell, reached and surpassed the Schonberg-Chandrasekhar limit, just like their more metal-rich counterparts, but failed to become red giants. Perhaps it is worth repeating here that "We emphasize that in no other proposed explanation of _why stars become red giants_ one has ever attempted to answer the question '_why very metal poor (intermediate-mass) stars do NOT become red giants?_', a question which instead finds its most natural answer in the frame of our physical interpretation" (Renzini et al. 1992). And this remains fully valid even to these days, thirty years later, in particular for those articles that have dismissed our demonstration as reported in the previous sections.
Just to illustrate this further, Figure 1 shows several stellar evolutionary sequences, as described in the caption. Let us consider first the (blue) tracks computed with standard opacities, and relatively low metallicity, \(Z=0.001\). In the \(9\,M_{\odot}\) sequence helium is ignited at the center while the star is still a blue giant, with an effective temperature of \(\sim 13,000\) K, when it begins slowly contracting and spends all the core helium-burning phase in the blue. It is not before central helium exhaustion and helium-shell ignition (corresponding to the prominent loop in the track) that the fast excursion to the red begins, with the onset of the thermal instability signalled
by the luminosity drop past the loop. In practice, it is only after central helium exhaustion that the luminosity grows high enough to trigger the envelope instability, whereas past hydrogen exhaustion the luminosity had not reached such threshold, hence failed to trigger the instability. This illustrates the point made before that what matters to the envelope is only the **luminosity** at its base, irrespective of the structure inside the outer(most) shell, rather than unspecified "developments in the deepest and densest interiors of the stars", as advocated by Faulkner. In other words, there is no _subterranean_ communication between the core and the envelope.
The case of the \(7\,M_{\odot}\) track is quite similar, with the main difference being that helium ignites while the star has reached the slightly cooler temperature of \(\sim 9,000\) K. In the case of the \(5\,M_{\odot}\) star, instead, the thermal instability clearly erupts and the star inflates to red giant dimensions, though an RGB phase is promptly aborted by helium ignition. The \(3\,M_{\odot}\) sequence is very similar to the case of more metal-rich stars, with inflation to the RGB and deflation from it that gives rise to the (first) blue loop. (A hint of the second blue loop is barely visible shortly after the arrival on the RGB.) The \(2\,M_{\odot}\) star develops the thermal instability and its helium core becomes electron degenerate, which allows for the extended rise in luminosity terminated by the helium flash. The extended RGB is even more prominent in the \(1\,M_{\odot}\) case, where no thermal instability erupts (see next section).
Turning to the red sequences, for which the opacity has been artificially restricted to pure electron scattering, one can notice the following. First, all tracks are systematically brighter and hotter compared to those with standard opacity, as expected. Past central hydrogen exhaustion, the stellar luminosity steadily increases, no thermal instability sets in, and helium is ignited while the star is still at a high effective temperature, which decreases with decreasing stellar mass. Fast excursions to low effective temperatures start only after helium-shell ignition, corresponding to the (second) blue loops. Moreover, no envelope convection sets in (the opacity is too low) and the tracks run to very low effective temperatures. Being now globally hotter, the \(2\,M_{\odot}\) star does not develop an electron-degenerate helium core, and helium is ignited under non-degenerate conditions. This pure Thomson-scattering experiment shows that stellar models can be constructed whereby the models reach large dimensions even in the absence of a thermal runaway instability, provided they become bright enough. Yet, this happens only after helium exhaustion in the core, but not after central hydrogen exhaustion. However, this is not what real stars do, as their opacity is not limited to electron scattering. The experiment shows that (on the computer) a giant configuration can always be achieved, provided that enough luminosity is _pumped_ into the envelope. Incidentally, it also shows that a proper RGB is produced only if realistic opacities are used.
## 7 Stars that become red giants without ever breaking their thermal equilibrium
One objection heard frequently was: _but low mass stars do become red giants and yet they do not develop a thermal instability._ True! Suffice to look at the colour-magnitude diagrams of galactic globular clusters, where main sequence, turnoff and subgiant branch join smoothly to the RGB. Most recently, Miller Bertolami (2022) states: "the fact that low mass red giants evolve in a nuclear timescale and develop the most extreme case of giantness, clearly show that thermal instabilities in the envelope are not what pushes stars into red giant dimensions." True, for low-mass stars, but not true for intermediate-mass ones, just more massive than, say \(\sim 1.1-1.2\,M_{\odot}\), which instead do _become_ red giants as a result of the thermal instability.
This point was addressed in our 1992 paper (in Section 5.2) where we said: "In general, the violence of the runaway decreases with decreasing mass...It does so to the extent that for masses below \(\sim 1\,M_{\odot}\) (the actual value depends on metallicity) the drop in surface luminosity... vanishes, and so does the runaway expansion itself. Low mass stars become red giant without ever departing from TE... This follows from the fact that low-mass stars begin their evolution already close to the Hayashi line and their envelope becomes convective _before_ the thermal instability has a chance to take place: the early establishment of convection suppresses the thermal instability of the envelope."
So, why do they _become_ red giants? The reason was already mentioned in Section 3, when saying: "In general, when the core luminosity increases the star expands, when it decreases the star contracts." And the core luminosity does indeed steadily increase and does so by about three orders of magnitude (see Figure 1): as the shell around the electron-degenerate helium core burns through, the core mass increases and core radius decreases (this is what happens to degenerate cores and white dwarfs alike). Hence, gravity, pressure, density and temperature in the shell all increase and so does the nuclear energy generated there. To accommodate the increasing luminosity from the core, and radiate it away, the envelope is forced to expand. Contrary to the case of stars experiencing the thermal instability, the core now strongly holds the _lead_ all the way through -from the MS to the RGB tip- and the envelope _lags_ behind. Again, the key factor for the expansion is the luminosity and its steady increase, something that is pathetically absent in the \(UV\) homology invariants.
Figure 1: Evolutionary sequences for the indicated masses, from the pre-MS all the way to past helium exhaustion at the center and helium-shell burning. In blue are the tracks for standard opacities and metallicity \(Z=0.001\). In red are the sequences that assume pure electron (Thomson) scattering as the only source of opacity. Courtesy of Santi Cassisi.
## 8 Summary and Conclusions
In this paper we have quoted in full the specific criticisms that have been leveled at our answer to the question of "why stars _become_ red giants?" It can be appreciated that these dismissals are embarrassingly shallow, being typically limited to just a few, sometimes even cryptic, words. Several authors admit that a thermal instability develops in the envelope, but just state that it would not be responsible for the runaway expansion. They do not explain why this would not be the case, nor what other physical process would drive the expansion instead. Another author admits that stars sweeping to become red giants are indeed in _thermal imbalance_, but claims that this would not be caused by a thermal instability of the envelope, without suggesting what other physical process would trigger the thermal imbalance itself. Faulkner (1997), at the beginning of his one-page article, says: "Thermal imbalance or even thermal instabilities may in some cases describe _how_, but they do not tell us _why_ stars expand to giant dimensions." As far as physics is concerned, I don't see much difference between the _how_ and the _why_, and Faulkner did not try to explain what the difference would be. In any event, I am satisfied for having proposed how the expansion takes place, and if others believe that a different, whimsical _why_ exists, and I don't think it does, then I am happy to pass the hand to \(UV\)-plane explorers. Ultimately, distinguishing evolutionary phases in thermal equilibrium from those that are thermally unstable is critical to properly understand stellar evolution and so the shapes of evolutionary sequences in the HR diagram.
## Acknowledgments
I would like to thank Santi Cassisi for his critical reading of the manuscript and for constructive comments. I also thank him for having computed on my request the evolutionary tracks shown in Figure 1.
## Data Availability
No new data were generated or analyzed in support of this research.
|
2305.11972
|
Parity anomaly with impurities and the Pauli--Villars subtraction
|
We calculate the anomalous part of the polarization tensor of Dirac fermions
in $2+1$ dimensions in the presence of impurities described by the scattering
rate $\Gamma$ for arbitrary external frequency and momenta. We consider two
different versions of the Pauli--Villars subtractions and discuss their
physical consequences.
|
Ozório Holanda, René Meyer, Dmitri Vassilevich
|
2023-05-19T19:43:25Z
|
http://arxiv.org/abs/2305.11972v1
|
# Parity anomaly with impurities and the Pauli-Villars subtraction
###### Abstract
We calculate the anomalous part of the polarization tensor of Dirac fermions in \(2+1\) dimensions in the presence of impurities described by the scattering rate \(\Gamma\) for arbitrary external frequency and momenta. We consider two different versions of the Pauli-Villars subtractions and discuss their physical consequences.
## I Introduction
Due to the parity anomaly [1; 2] (see also [3]), the one-loop effective action for Dirac fermions in \(2+1\) dimensions contains a parity violating Chern-Simons part. From a phenomenological point of view, this means the appearance of a quantum anomalous Hall effect, i.e., of a Hall-type conductivity in the absence of an external magnetic field. The momentum dependence of the anomalous part of the polarization tensor was analyzed in [4]. These papers were followed by a very interesting development in quantum field theory. To learn about general aspects of the Chern-Simons theory the reader may consult [5].
An intriguing feature of the parity anomaly is that it leads to a Chern-Simons term with the weight \(\pm 1/2\) (or, equally, to the Hall conductivity being \(\pm 1/2\) of the Hall quantum). Such a term is not invariant under large gauge transformations, which was a source of confusion for a long period. This apparent contradiction was finally resolved in [6; 7], where it was demonstrated that the Ward identities corresponding to large gauge transformations contain nonperturbative contributions capable of restoring gauge invariance of the effective action.
Theoretical investigations of quantum anomalies are highly relevant in condensed matter systems, see [8]. For the benefit of the typical high energy physics reader, we here give an overview of the main developments in this direction: Quantum anomalies are tied to relativistic fermions, which arise in topologically protected, approximately relativistic band crossings such as topological insulators with 2+1-dimensional Dirac surface states [9; 10; 11; 12; 13; 14; 15; 16; 17; 18], or Weyl and Dirac [19; 20; 21; 22; 23; 24] semimetals with 3+1-dimensional Weyl fermions as band crossings. In particular, the parity anomaly directly contributes to the quantum anomalous Hall effect, a Hall effect in the absence of external magnetic fields, in two-dimensional quantum anomalous Hall (QAH) insulators such as (Hg,Mn)Te quantum wells [25; 26] or magnetically doped (Bi,Sb)Te thin films [27; 28]. The DC quantum anomalous Hall conductivity has a quantized part that is directly induced by the parity anomaly, i.e. the coefficient of the Chern-Simons term in the low energy effective action. In real-world condensed matter systems such as QAH insulators, the band structure can contain UV relevant terms [10] which serve as a UV regulator and contribute to the quantized as well as non-quantized parts of the quantum anomalous Hall conductivity [29]. They preserve large gauge invariance and break parity, hence contributing as a UV regulator to the parity anomaly [29]. From the
point of view of the band structure, the quantized part of the anomalous Hall conductivity can be related to the presence of momentum space Berry curvature in the wave functions at the Dirac band touching points [30], or equivalently to a quantized winding number in the Brillouin zone [31]. There are other non-quantized, \((T,\mu)\)-dependent contributions to the anomalous Hall conductivity as well [32]. Experimental signatures of the parity anomaly in 2+1-dimensional Dirac materials have been discussed in e.g. [33; 34].
In condensed matter physics, there are several ways to describe impurities depending on the properties of a particular material [35]. The usual assumption is that impurities provide a randomly distributed non-periodic short-range scattering potential, on which the conduction electrons in a solid scatter in an energy and charge conserving way, but lose momentum on these impurities. For small impurity densities, the momentum-relaxing effect of impurity scattering, together with the details of the potential, can be summarized in a momentum relaxation term on the right-hand side of the translational Ward identity. An analogous relaxation time approximation is possible for weak breaking of translational symmetry in AdS/CFT models of strongly correlated quantum matter, for a recent review cf. [36; 37]. In a particular limit, the AdS/CFT correspondence [38; 39; 40] is a duality between strongly interacting quantum systems on the one side, and semiclassical gravity in one additional dimension on the other side (for a review of the correspondence, cf. e.g. [41]). In AdS/CFT, breaking of translational symmetry can be implemented in several ways [42; 43; 44; 45; 46; 47; 48; 49]. For a small induced momentum relaxation rate these models are all equivalent to a theory with a massive graviton [50], which is responsible for the relaxation term on the right-hand side of the translational Ward identity.
Long-range charged impurities or long-range disorder potentials, such as the ones generated by the charge puddles in graphene, need a different treatment [35], by introducing a scattering rate \(\Gamma\) which enters the propagator of quasiparticles through the substitution of the temporal momentum \(p_{0}\to p_{0}+\mathrm{i}\Gamma\mathrm{sgn}(p_{0})\). This approach to the impurities was used in calculations of the anti-symmetric part of the polarization tensor in graphene in the presence of an external magnetic field in e.g. [51; 52; 53; 54]. The computations of [54] are in very good agreement with the measurements [55] of giant Faraday rotation in graphene. Note that in these calculations an antisymmetric part of the polarization tensor was caused by the presence of an external magnetic field. The anomalous Hall conductivity of graphene cancels between generations of fermions. This explains why the anomalous Hall contributions
with impurities described by the scattering rate \(\Gamma\) were neglected at that time. However, new applications, including Hall conductivities of surface states of topological materials, put the problem forward again.
The purpose of this work is to close an important gap in the literature by computing the anomalous part of the polarization tensor for fermions in \(2+1\) dimensions in the presence of impurities described by the scattering rate \(\Gamma\). We do not relate this computation to any particular material. Our main attention is on the Quantum Field Theory aspects. In particular, we study the Pauli-Villars subtraction scheme and suggest two versions of this scheme, leading to qualitatively different dependence of the parity anomaly on \(\Gamma\). The first scheme consists in subtracting the contribution of a regulator field with mass \(M\) and subsequently taking the limit \(|M|\to\infty\). In the second scheme, we also take the limit \(\Gamma_{R}\to\infty\), where \(\Gamma_{R}\) is the impurity parameter for the regulator field, keeping the ratio \(\Gamma_{R}/M\) fixed. From the Quantum Field Theory perspective, the difference between the two schemes is merely a renormalization ambiguity. We perform the computations for arbitrary values of external frequency and momenta, which is not common in the current literature, although it is important, for example, for the evaluation of the Casimir force.
This paper is organized as follows. In the next section, we compute the unregularized anomalous polarization tensor. Two possible PV subtraction schemes are analyzed in Section III. The results of this work are briefly discussed in Section IV.
## II Polarization tensor in presence of impurities
We start with the action
\[S=\int d^{3}x\bar{\Psi}\not{D}\Psi, \tag{1}\]
where \(\not{D}={\rm i}\tilde{\gamma}^{\mu}(\partial_{\mu}+ieA_{\mu})-m\). A twiddle over a 3-vector means that the spatial components are rescaled with the Fermi velocity. In particular,
\[\tilde{\gamma}^{0}=\gamma^{0},\ \ \tilde{\gamma}^{i}=v_{F}\gamma^{i} \tag{2}\]
The Greek letters label the spacetime coordinates, \(\mu=0,1,2\), while the Latin letters denote the spatial coordinates, \(i,j=1,2\). We use the metric \(g^{\mu\nu}={\rm diag}(1,-1,-1)\). We fix \({\rm tr}[\gamma^{\mu}\gamma^{\nu}\gamma^{\alpha}]=2{\rm i}\varepsilon^{\mu\nu\alpha}\). Here, \(\varepsilon^{\mu\nu\alpha}\) is the Levi-Civita tensor, and \({\rm i}:=\sqrt{-1}\).
The one-loop effective action due to quantum fermions in the second order of electromagnetic field \(A\) reads
\[S_{\rm eff}=\frac{1}{2}\int\frac{d^{3}k}{(2\pi)^{3}}A_{\mu}(-k)\Pi^{\mu\nu}(k)A_ {\nu}(k), \tag{3}\]
where the polarization tensor is given by
\[\Pi^{\mu\nu}(k,m)={\rm i}e^{2}\int\frac{d^{3}p}{(2\pi)^{3}}{\rm tr}[\tilde{ \gamma}^{\mu}G_{f}(p,m)\tilde{\gamma}^{\nu}G_{f}(p+k,m)] \tag{4}\]
with the fermion propagator
\[G_{f}(p,m)=\frac{1}{\not{p}-m}=\frac{(\not{p}+m)}{\tilde{p}^{2}-m^{2}}, \tag{5}\]
where \(\not{p}:=\tilde{\gamma}^{\mu}p_{\mu}\). In the presence of impurities, the temporal momentum has to be replaced as
\[p_{0}\rightarrow\hat{p}_{0}=p_{0}+{\rm i}\Gamma\,{\rm sgn}\,p_{0}. \tag{6}\]
Here \(\Gamma\) is an impurity scattering rate, \(\Gamma>0\).
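As a quick numerical aside (not part of the derivation, and with parameter values chosen by us only for illustration), the effect of the substitution (6) on the pole structure can be visualized by evaluating \(-\pi^{-1}\,{\rm Im}\,(\hat{p}^{2}-m^{2})^{-1}\) at fixed spatial momentum: the sharp quasiparticle poles at \(p_{0}=\pm\sqrt{\vec{p}^{\,2}+m^{2}}\) present for \(\Gamma=0\) are smeared into peaks whose width is set by \(\Gamma\).

```python
import numpy as np

def spectral_weight(p0, p2, m, gamma):
    # -Im[1/(p_hat^2 - m^2)]/pi with the substitution (6),
    # p0 -> p0 + i*Gamma*sgn(p0); p2 stands for |p_vec|^2.
    p0_hat = p0 + 1j * gamma * np.sign(p0)
    return -np.imag(1.0 / (p0_hat**2 - p2 - m**2)) / np.pi

p2, m = 0.25, 1.0                       # illustrative values (our choice)
E = np.sqrt(p2 + m**2)                  # quasiparticle energy
for gamma in (0.05, 0.2, 1.0):
    on_peak = spectral_weight(E, p2, m, gamma)
    off_peak = spectral_weight(2.0 * E, p2, m, gamma)
    print(f"Gamma = {gamma:4.2f}:  weight at p0 = E: {on_peak:8.3f},  at p0 = 2E: {off_peak:6.3f}")
# Small Gamma: the weight is sharply concentrated at |p0| = E (the poles of Eq. (5));
# increasing Gamma broadens the peaks, i.e. the impurities give the quasiparticles
# a finite lifetime.
```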
Note that the replacement (6) done in the propagator alone breaks gauge invariance. Indeed, such a propagator can be formally obtained from a Dirac action containing an additional term with \({\rm sgn}(-{\rm i}\partial_{0})\). To ensure gauge invariance, any derivative has to be accompanied by an electromagnetic potential. Thus, one expects something like \({\rm sgn}(-{\rm i}(\partial_{0}+eA_{0}))\). Giving a precise meaning to such a term is not easy, but obviously it should modify the couplings of fermions to \(A_{0}\) and lead to new vertices in the Feynman rules involving \(A_{0}\). To avoid complications, we will consider only the diagrams with external legs corresponding to the spatial components \(A_{j}\), i.e., the \(\Pi^{ij}\) part of the polarization tensor. This will be enough for our purposes since the full polarization tensor can be recovered through the transversality condition. Thus, we will evaluate
\[\Pi^{ij}(k)={\rm i}e^{2}\int\frac{d^{3}p}{(2\pi)^{3}}{\rm tr}[\tilde{\gamma}^{ i}G_{f}(\widehat{p},m)\tilde{\gamma}^{j}G_{f}(\widehat{p+\tilde{k}},m)], \tag{7}\]
where \(\widehat{p}^{\,\mu}:=(p_{0}+{\rm i}\Gamma\,{\rm sgn}(p_{0}),\vec{p})\).
To analyze the dependence of the polarization tensor on the Fermi velocity one can use the following simple trick [56]. Let us change the integration variable \(p\rightarrow\tilde{p}\) in (7). The Jacobian factor \(v_{F}^{-2}\) cancels the factors of \(v_{F}\) in \(\tilde{\gamma}^{i}\) and \(\tilde{\gamma}^{j}\), so that one obtains
\[\Pi^{ij}(v_{F};k)=\Pi^{ij}(v_{F}=1,\tilde{k}) \tag{8}\]
Thus, from now on we set
\[v_{F}=1. \tag{9}\]
Thus we have
\[\Pi^{ij}(k,m,\Gamma)={\rm i}e^{2}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{{\rm tr}[ \gamma^{i}(\widehat{\not{p}}+m)\gamma^{j}(\widehat{\not{p}+\not{k}}+m)]}{( \widehat{p}^{2}-m^{2})((\widehat{p+\not{k}})^{2}-m^{2})}. \tag{10}\]
We are interested in the anomalous part \(\Pi_{\rm odd}\) of the polarization tensor, which changes sign under the change of orientation of spacetime. This is the part which is antisymmetric in the indices \(i,j\) and proportional to \(\varepsilon^{ij}\equiv\varepsilon^{ij0}\). After computing the traces over spinor indices and selecting the relevant tensor structures we obtain
\[\Pi^{ij}_{\rm odd}(k,m,\Gamma)=\varepsilon^{ij}k_{0}\left[\zeta(k,m,\Gamma)+ \chi(k,m,\Gamma)\right], \tag{11}\]
where
\[\zeta(k,m,\Gamma) = -2me^{2}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{((p_{0}+{\rm i} \Gamma\,{\rm sgn}(p_{0}))^{\,2}-\vec{p}^{\,2}-m^{2})} \tag{12}\] \[\times \frac{1}{((p_{0}+k_{0}+{\rm i}\Gamma\,{\rm sgn}(p_{0}+k_{0}))^{2 }-(\vec{p}+\vec{k})^{2}-m^{2})}\]
and
\[\chi(k,m,\Gamma) = -2{\rm i}me^{2}\frac{\Gamma}{k_{0}}\int\frac{d^{3}p}{(2\pi)^{3}} \frac{{\rm sgn}(p_{0}+k_{0})-{\rm sgn}(p_{0})}{((p_{0}+{\rm i}\Gamma\,{\rm sgn }(p_{0}))^{\,2}-\vec{p}^{\,2}-m^{2})} \tag{13}\] \[\times\frac{1}{(p_{0}+k_{0}+{\rm i}\Gamma\,{\rm sgn}(p_{0}+k_{0}) )^{2}-(\vec{p}+\vec{k})^{2}-m^{2}}\,.\]
We use the Feynman formula
\[\frac{1}{AB}=\int_{0}^{1}dx\frac{1}{(Ax+(1-x)B)^{2}}. \tag{14}\]
Quite remarkably, the expression under the integral above has no poles in the whole integration region despite the presence of \(\Gamma\). We thus have
\[\zeta(k,m,\Gamma)=-2me^{2}\int_{0}^{1}dx\int\frac{dp_{0}}{2\pi}\int\frac{d^{2 }\vec{l}}{(2\pi)^{2}}\frac{1}{(\vec{l}^{2}-M)^{2}} \tag{15}\]
and
\[\chi(k,m,\Gamma)=-2ime^{2}\frac{\Gamma}{k_{0}}\int_{0}^{1}dx\int\frac{dp_{0}} {2\pi}({\rm sgn}(p_{0}+k_{0})-{\rm sgn}(p_{0}))\int\frac{d^{2}\vec{l}}{(2\pi) ^{2}}\frac{1}{(\vec{l}^{2}-M)^{2}}, \tag{16}\]
with \(\vec{l}=\vec{p}+\vec{k}x\) and \(M=(p_{0}+{\rm i}\Gamma\,{\rm sgn}(p_{0}))^{2}(1-x)+x(p_{0}+k_{0}+{\rm i}\Gamma {\rm sgn}(p_{0}+k_{0}))^{2}-\vec{k}^{2}x(1-x)-m^{2}\).
After performing the \(\vec{l}\) integration we obtain
\[\zeta(k,m,\Gamma) = \frac{me^{2}}{4\pi^{2}}\,\int_{0}^{1}dx\int dp_{0}\frac{1}{M}, \tag{17}\] \[\chi(k,m,\Gamma) = \frac{\mathrm{i}me^{2}\Gamma}{4\pi^{2}k_{0}}\,\int_{0}^{1}dx\int dp _{0}\frac{\mathrm{sgn}(p_{0}+k_{0})-\mathrm{sgn}(p_{0})}{M}. \tag{18}\]
In the presence of \(\Gamma\), the \(p_{0}\) integral cannot be done by computing the residues and requires somewhat more attention. In the expression (17), one has to divide the integration region into three intervals depending on the signs of \(p_{0}\) and \(p_{0}+k_{0}\). In (18) only one of these intervals contributes. After performing the integrations, one obtains
\[\zeta(k,m,\Gamma) = \frac{1}{4\pi^{2}}me^{2}\int_{0}^{1}dx\left[\frac{\pi-2\arctan \left(\frac{k_{0}x+\mathrm{i}\Gamma}{\sqrt{-m^{2}+k^{2}x(1-x)}}\right)}{\sqrt {-m^{2}+k^{2}x(1-x)}}\right. \tag{19}\] \[+ \left.\frac{2\arctan\left(\frac{(k_{0}+2\mathrm{i}\Gamma)x-i \Gamma}{\sqrt{-m^{2}+((k_{0}+2\mathrm{i}\Gamma)^{2}-\vec{k}^{2})x(1-x)}} \right)}{\sqrt{-m^{2}+((k_{0}+2\mathrm{i}\Gamma)^{2}-\vec{k}^{2})x(1-x)}}\right]\]
and
\[\chi(k,m,\Gamma)=\frac{\mathrm{i}}{4\pi^{2}}me^{2}\frac{\Gamma}{k_{0}}\int_{0 }^{1}dx\frac{4\arctan\left(\frac{(k_{0}+2\mathrm{i}\Gamma)x-\mathrm{i}\Gamma} {\sqrt{-m^{2}+((k_{0}+2\mathrm{i}\Gamma)^{2}-\vec{k}^{2})x(1-x)}}\right)}{ \sqrt{-m^{2}+((k_{0}+2\mathrm{i}\Gamma)^{2}-\vec{k}^{2})x(1-x)}}. \tag{20}\]
### Limits of the polarization tensor
Before going on with the renormalization, let us consider some limits of the unrenormalized polarization tensor. In the \(m\to 0\) limit with all other parameters staying finite one immediately obtains
\[\Pi^{ij}_{\mathrm{odd}}(k,0,\Gamma)=0. \tag{21}\]
Another important limit is \(\Gamma\to 0\),
\[\Pi^{ij}_{\mathrm{odd}}(k,m,0)=-\frac{\mathrm{i}e^{2}}{4\pi}\varepsilon^{ij} k_{0}\frac{2m}{|k|}\mathrm{arctanh}\left(\frac{|k|}{2|m|}\right), \tag{22}\]
which reproduces the known result [4]. Do not confuse arctanh in this formula with arctan in Eq. (19).
In the limit when both \(m\) and \(\Gamma\) become large but their ratio is kept constant, we have
\[\lim_{\lambda\rightarrow\infty}\Pi^{ij}_{\rm odd}(k,\lambda m, \lambda\Gamma) = \frac{1}{4\pi^{2}}me^{2}\varepsilon^{ij}k_{0}\left(\frac{-{\rm i} \pi+2{\rm i}\arctan\left(\frac{\Gamma}{|m|}\right)}{|m|}\right) \tag{23}\] \[- \frac{1}{4\pi^{2}}{\rm i}\Gamma me^{2}\varepsilon^{ij}k_{0}\left( \frac{2}{\Gamma^{2}+m^{2}}\right).\]
(This formula is easier to obtain from (17) and (18) where the integration over \(p_{0}\) has not been done yet.)
The same formula (23) describes the limit when both \(k_{0}\) and \(|\vec{k}|\) are small as compared to \(m\) and \(\Gamma\). This formula also allows one to obtain the limits
\[\lim_{|m|\rightarrow\infty}\Pi^{ij}_{\rm odd}(k,m,\Gamma)=-\frac{{\rm i}e^{2} }{4\pi}\,\frac{m}{|m|}\varepsilon^{ij}k_{0} \tag{24}\]
and
\[\lim_{\Gamma\rightarrow\infty}\Pi^{ij}_{\rm odd}(k,m,\Gamma)=0. \tag{25}\]
To recover the dependence of the polarization tensor on the Fermi velocity it is sufficient to replace \(k\rightarrow\tilde{k}=(k_{0},v_{F}\vec{k})\) in the formulas given above. The full anomalous polarization tensor may be obtained by solving the conservation condition \(k_{\mu}\Pi^{\mu\nu}_{\rm odd}=0\). One obtains \(\Pi^{0j}_{\rm odd}=-(k_{i}/k_{0})\Pi^{ij}_{\rm odd}\) and \(\Pi^{00}_{\rm odd}=0\).
## III Pauli-Villars subtractions
As has been noted already by Redlich [1], although the parity-odd part of the polarization tensor is non-divergent, one has to apply a Pauli-Villars subtraction to get a correct result. The usual prescription consists of subtracting from the non-renormalized polarization tensor the contribution of a fermion field having exactly the same parameters except for the mass \(M\), and taking the limit \(|M|\rightarrow\infty\) at the end of the calculations. We additionally assume that \(M\) has the same sign as \(m\), so that to pass from \(m\) to \(M\) one does not need to cross the gapless phase. This assumption is not essential. The opposite limit can be analysed along the same lines. We will call this scheme PV1 (since there will also be a PV2). Basically, this prescription boils down to subtracting (24) from the polarization tensor.
For further discussion, it is convenient to introduce a quantity \(\sigma\),
\[\Re\left[\frac{\Pi^{ij}_{\rm odd}}{{\rm i}k_{0}}\right]=\varepsilon^{ij}\, \frac{e^{2}}{2\pi}\,\sigma \tag{26}\]
which is nothing else than the anti-symmetric (Hall) conductivity measured in the units of Hall quantum \(e^{2}/(2\pi)=e^{2}/h\). The imaginary part of conductivity is not affected by the Pauli-Villars subtraction and thus will not be considered here.
We immediately obtain the zero-gap result
\[\sigma_{\rm PV1}(k,0,\Gamma)=\frac{1}{2}\,\frac{m}{|m|}\,, \tag{27}\]
which comes exclusively from the PV regulator field1, does not depend on \(\Gamma\), and represents the classical value of parity anomaly. Also, the limit \(\Gamma\to 0\) reproduces a known result,
Footnote 1: The sign on the right hand side is a consequence of our assumption \({\rm sgn}\,M={\rm sgn}\,m\). In general case, the sign factor in (27) is given by \({\rm sgn}\,M\).
\[\sigma_{\rm PV1}(k,m,0)=\frac{1}{2}\left[\frac{m}{|m|}-\frac{2m}{|k|}{\rm arctanh }\left(\frac{|k|}{2|m|}\right)\right]\,. \tag{28}\]
We also have
\[\lim_{\Gamma\to\infty}\sigma_{\rm PV1}(k,m,\Gamma)=\frac{1}{2}\,\frac{m}{|m|}\,. \tag{29}\]
In the scheme PV1, the mass of the regulator field goes to infinity while the parameter \(\Gamma\) which describes the impurities remains fixed. Thus, relative to the mass, the impurities become negligible. Since \(\Gamma\) is a phenomenological parameter, one can also consider other prescriptions for the behavior of the impurity parameter \(\Gamma_{R}\) for the regulator field. A reasonable choice seems to be to fix the ratio \(\Gamma_{R}/M\) while taking the limit \(\Gamma_{R},\ M\to\infty\). In other words, we take the limit \(M\to\infty\) keeping all _dimensionless_ parameters like \(e\) and \(\Gamma_{R}/M\) fixed. We call this scheme PV2. In this scheme, we need to subtract the expression (23) from the unrenormalized polarization tensor. Let us discuss the physical consequences of the PV2 subtraction.
If \(m\neq 0\), there are simple analytic formulas for \(k=0\) which we present here for the sake of completeness.
\[\sigma(0,m,\Gamma)=\frac{1}{2\pi}\left(-\frac{\pi m}{|m|}+\frac{2m\arctan(\Gamma/|m|)}{|m|}-\frac{2\Gamma m}{\Gamma^{2}+m^{2}}\right). \tag{30}\]
In the PV1 scheme, one has
\[\sigma_{\rm PV1}(0,m,\Gamma)=\frac{1}{\pi}\left(\frac{m\arctan(\Gamma/|m|)}{|m|}-\frac{\Gamma m}{\Gamma^{2}+m^{2}}\right). \tag{31}\]
In the PV2 scheme, the conductivity vanishes identically in this limit, \(\sigma_{\rm PV2}(0,m,\Gamma)=0\), which also happens at \(\Gamma=0\) in any scheme in the absence of a chemical potential. (Note that the limits \(k\to 0\) and \(m\to 0\) do not commute.) Although there are the analytic formulas (30) and (31), we present the plots in Fig. 1 for convenience.
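As a cross-check of the limiting behavior, the zero-momentum expressions can be evaluated numerically. The following is a minimal Python sketch of our own (it assumes the forms of Eqs. (30) and (31) as written above, with \(\sigma\) in units of the Hall quantum \(e^{2}/2\pi\)); it reproduces \(\sigma_{\rm PV1}=0\) at \(\Gamma=0\) and the large-\(\Gamma\) limits (25) and (29).

```python
import numpy as np

def sigma_unsub(m, gamma):
    # Unsubtracted anomalous Hall conductivity at k = 0, Eq. (30),
    # in units of the Hall quantum e^2/(2*pi).
    return (1.0 / (2.0 * np.pi)) * (
        -np.pi * np.sign(m)
        + 2.0 * m * np.arctan(gamma / abs(m)) / abs(m)
        - 2.0 * gamma * m / (gamma**2 + m**2)
    )

def sigma_pv1(m, gamma):
    # PV1 subtracts the regulator contribution -sgn(m)/2 (Eq. (24)),
    # which gives Eq. (31).
    return sigma_unsub(m, gamma) + 0.5 * np.sign(m)

m = 1.0
for g in (0.0, 0.5, 1.0, 5.0, 50.0):
    print(f"Gamma/|m| = {g:5.1f}:  sigma = {sigma_unsub(m, g):+.4f},"
          f"  sigma_PV1 = {sigma_pv1(m, g):+.4f}")
# Gamma -> infinity: sigma -> 0 (Eq. (25)) and sigma_PV1 -> sgn(m)/2 (Eq. (29));
# at Gamma = 0, sigma_PV1 vanishes while sigma equals -sgn(m)/2.
```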
Essential differences between the two schemes appear, as expected, when \(\Gamma\) is large. In the infinite \(\Gamma\) limit,
\[\lim_{\Gamma\to\infty}\sigma_{\rm PV2}=0 \tag{32}\]
in contrast to (29). Already at a finite \(\Gamma\) the differences are significant. For a finite \(\Gamma\) and \(m=0\), one has
\[\sigma_{\rm PV2}(k,0,\Gamma)=0, \tag{33}\]
while in PV1 this value is nonzero, see (27).
For \(k_{0}/|m|=1\) and \(k_{0}/|m|=10\) the conductivity \(\sigma\) is depicted in Fig. 2 and Fig. 3, respectively. In both cases \(\vec{k}=0\). Note that large values of \(\Gamma/|m|\) do not necessarily mean that the impurities are strong. Equally, the mass gap may be small. In general, in the scheme
Figure 1: The anomalous conductivity \(\sigma\) at \(\vec{k}=0\), \(k_{0}=0\) as a function of \(\Gamma/|m|\) without subtraction (dashed line) and with PV1 subtraction (dashed-dotted line). Since \(\sigma_{\rm PV2}(0,m,\Gamma)=0\) the corresponding plot is not included.
PV2, the impurities damp the anomalous Hall conductivity, while in PV1 they do not. For a moderate frequency, see Fig. 2, \(\sigma_{\rm PV2}\) is close to zero for all values of the parameter \(\Gamma\).
In \(2+1\) dimensions the structure of ultraviolet divergences is quite simple, so that just a single regulator field is required. The only essential feature is that one has to take the limit of an infinite mass gap of the regulator field. From this point of view, there are good reasons to expect that both subtraction schemes are internally consistent and remove all ultraviolet divergences. These arguments are however not watertight, and we shall pay more attention to the QFT aspects of the Pauli-Villars scheme in some future work. Note that there are examples where modifications of a model impose severe restrictions on the PV subtractions. (See e.g. the paper [57] where the PV scheme in 4D QED with boundaries was analyzed.)
At any rate, the final word in choosing between the two subtraction schemes should belong to experiment.
Figure 2: The anomalous conductivity \(\sigma\) as a function of \(\Gamma/|m|\) for \(\vec{k}=0\), \(k_{0}/|m|=1\). The dashed line corresponds to the conductivity without subtraction, while the dashed-dotted and solid lines correspond to the PV1 and PV2 schemes, respectively.
## IV Discussion
In this paper, we have studied with Quantum Field Theory methods the anomalous part of the polarization tensor of a \(2+1\)-dimensional fermion interacting with impurities described by a scattering rate \(\Gamma\), for arbitrary external frequency and momenta. We used the PV scheme and argued that there are two natural subtractions. In one of them, which we called PV1, the contribution of a massive regulator field is subtracted in the limit \(|M|\rightarrow\infty\). This is just the standard PV subtraction known from textbooks. In the other scheme, we treat \(\Gamma/m\) as a dimensionless parameter similar to the electric charge. With such an interpretation, it is natural to assume that the regulator field sees the same "impurity charge" \(\Gamma_{R}/M=\Gamma/m\). This boils down to a subtraction in the double limit \(|M|,\Gamma_{R}\rightarrow\infty\) while keeping the ratio \(\Gamma_{R}/M\) fixed. In this scheme, called PV2 throughout this work, the dependence of the Hall conductivity \(\sigma\) on \(\Gamma\) differs crucially from the predictions of PV1. In PV1, the
Figure 3: The anomalous conductivity \(\sigma\) as a function of \(\Gamma/|m|\) for \(\vec{k}=0\), \(k_{0}/|m|=10\). The dashed line corresponds to the conductivity without subtraction, while the dashed-dotted and solid lines correspond to the PV1 and PV2 schemes, respectively.
conductivity is enhanced by the impurities, while in PV2 the impurities tend to diminish \(\sigma\). The last word in choosing between PV1 and PV2 should belong to experiment. However, further consistency checks are also needed. An important lesson from the papers [6; 7] is that large gauge invariance cannot be used to check perturbative calculations, since the corresponding Ward identities contain important nonperturbative contributions.
There are alternative methods of calculation of the parity anomaly. One of them relies on the \(\zeta\)-function regularization. It allows one to evaluate the anomaly for massless [58] and massive [6; 7; 59] fermions, in the presence of an external magnetic field [60], and even in the presence of boundaries [61]. The results obtained with this method are consistent with the Pauli-Villars scheme. Unfortunately, there is no generalization of the \(\zeta\)-function regularization in the presence of impurities. For completeness, we would like to mention a rather extreme proposal [62] that the parity anomaly is merely a counterterm which is needed to restore parity.
The Casimir force between surfaces which exhibit a Hall-type conductivity may become repulsive (for an overview of this effect see [63; 64] and a recent paper [65]). The Casimir interaction is an integral effect. That is, all frequencies and momenta contribute to the force. The study of this effect was one of the main motivations for the calculation reported above. At present, we may suggest that the repulsion will most probably be damped by impurities in the PV2 scheme and enhanced in PV1.
###### Acknowledgements.
One of us (D.V.) is grateful to Ignat Fialkovsky for previous collaboration and discussions on impurities. The work of D.V. was supported in parts by the Sao Paulo Research Foundation (FAPESP), grant 2021/10128-0, and by the National Council for Scientific and Technological Development (CNPq), grant 304758/2022-1. O.H. acknowledges support by the Sao Paulo Research Foundation (FAPESP), by the grant 2019/26291-8. The work of R. M. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project-ID 258499086--SFB 1170 ToCoTronics and through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - ct.qmat
Project-ID 390858490--EXC 2147.
|
2310.12015
|
Implication of the period-magnitude relation for massive AGB stars and
its astronomical applications
|
We present astrometric very long baseline interferometry (VLBI) studies of
AGB stars. To understand the properties and evolution of AGB stars, distances
are an important parameter. The distribution and kinematics of their
circumstellar matter are also revealed with the VLBI method. We used the VERA
array to observe 22\,GHz H$_2$O masers in various subclasses of AGB stars.
Parallaxes of the three OH/IR stars NSV17351, OH39.7$+$1.5, IRC$-$30363, and
the Mira-type variable star AW~Tau were newly obtained. We present the
circumstellar distribution and kinematics of H$_2$O masers around NSV17351. The
absolute magnitudes in mid-infrared bands of OH/IR stars with very long
pulsation periods were investigated and a period-magnitude relation in the WISE
W3 band, $M_{\mathrm{W3}} = (-7.21\pm1.18)\log P + (9.25\pm3.09)$, was found
for the Galactic AGB stars. The VLBI is still a powerful tool for parallax
measurements of the Galactic AGB stars surrounded by thick dust shells.
|
Akiharu Nakagawa, Tomoharu Kurayama, Hiroshi Sudou, Gabor Orosz
|
2023-10-18T14:45:47Z
|
http://arxiv.org/abs/2310.12015v1
|
###### Abstract
We present astrometric very long baseline interferometry (VLBI) studies of AGB stars. To understand the properties and evolution of AGB stars, distances are an important parameter. The distribution and kinematics of their circumstellar matter are also revealed with the VLBI method. We used the VERA array to observe 22 GHz H\({}_{2}\)O masers in various subclasses of AGB stars. Parallaxes of the three OH/IR stars NSV17351, OH39.7\(+\)1.5, IRC\(-\)30363, and the Mira-type variable star AW Tau were newly obtained. We present the circumstellar distribution and kinematics of H\({}_{2}\)O masers around NSV17351. The absolute magnitudes in mid-infrared bands of OH/IR stars with very long pulsation periods were investigated and a period-magnitude relation in the WISE W3 band, \(M_{\rm W3}=(-7.21\pm 1.18)\,\log P+(9.25\pm 3.09)\), was found for the Galactic AGB stars. The VLBI is still a powerful tool for parallax measurements of the Galactic AGB stars surrounded by thick dust shells.
Keywords: VLBI, Astrometry, Masers, AGB stars, OH/IR stars

IAU Centenary Symposium

Implication of the period-magnitude relation for massive AGB stars and its astronomical applications

Akiharu Nakagawa\({}^{1}\), Tomoharu Kurayama\({}^{2}\), Hiroshi Sudou\({}^{3}\), and Gabor Orosz\({}^{4}\)
## 1 Introduction
### Evolution of AGB stars
Asymptotic giant branch (AGB) stars are known to be the final stage of the evolution of stars with an initial mass of 0.8-10 \(M_{\odot}\) (e.g., Karakas and Lattanzio, 2014). Considering the shape of the initial mass function (IMF), a significant portion of stars will spend a period of their lifetime as AGB stars. They are surrounded by thick circumstellar dust shells and frequently present stellar pulsations. Since AGB stars return various elements into interstellar space by stellar winds, they are important objects that contribute to the chemical composition of the universe and galaxy (e.g., Hofner and Olofsson, 2018). AGB stars exhibit a wide range of pulsation periods, with the shortest periods being around 100 days and occasionally reaching 3000 days (e.g., Habing, 1996). Hence, they are often referred to as long-period variable stars (LPVs). A more detailed picture of the evolutionary process along the AGB phase reveals several stages, depending on how far the star has evolved.
During the early AGB phase, stars have relatively thin dust layers, making them observable in both optical and infrared bands. The Mira-type variables, visible in both optical and infrared bands, are a well-known class of AGB stars. As they progress, strong absorption caused by thick circumstellar dust shells makes them faint and difficult to detect in optical bands (e.g., Habing, 1996; Kamizuka et al., 2020). On the other hand, they become brighter in the infrared bands as a result of re-radiation from the outer dust layer. At this stage, many objects also present OH maser emission from their outermost layer; therefore, they are referred to as OH/IR stars. The OH/IR stars, a sub-class of AGB stars, are considered to be in the late stage of the AGB phases before progressing to become planetary nebulae (te Lintel Hekkert et al., 1991). Additionally, there are other sub-classes that exist in the late stages of the AGB phase. Massive stars with initial masses ranging from 7 to 10 \(M_{\odot}\) are known to go through a
phase referred to as the super AGB phase, characterized by high luminosity and long pulsation periods (Karambelkar et al., 2019). OH/IR stars with intermediate mass and large mass loss rates (\(>\)10\({}^{-4}\) \(M_{\odot}\)/yr) are classified as extreme-OH/IR stars (Justtanont et al., 2015).
Mira-type variables, surrounded by thin dust shells, represent the early stage of the AGB phase and are often associated with SiO and H\({}_{2}\)O masers. Following the Mira phase, H\({}_{2}\)O molecules are transported to the outer side of the circumstellar envelope and are photodissociated, resulting in the production of OH maser emissions (e.g., Goldman et al., 2017). Due to an excess of infrared emission from the circumstellar dust shell and the presence of OH maser emission, they will be recognized as OH/IR stars (e.g., Nyman et al., 1998). As they progress towards the post-AGB phase, the amplitude of their stellar pulsation gradually decreases, eventually leading to a phase known as non-variable OH/IR stars (e.g., Habing, 1996; Kamizuka et al., 2020). Subsequently, they move into the post-AGB phase. To understand the sequential evolution from early to late stages in the AGB phases, it is crucial to conduct studies on AGB stars with various properties. Target sources of our very long baseline interferometry (VLBI) studies cover a variety of masses and pulsation periods. This also means that the target sources that we are studying are representative of a wide range of ages.
### Masses of AGB stars and pulsation periods
There is a correlation between pulsation periods and masses in AGB stars. AGB stars with longer pulsation periods are generally considered to be more massive than those with shorter pulsation periods. For instance, Feast (2009) suggests that AGB stars with pulsation periods of 1000 days have masses ranging from 3 \(M_{\odot}\) to 4 \(M_{\odot}\). The pulsation periods and mean densities of pulsating stars are known to be coupled (e.g., Cox, 1980), and Takeuti et al. (2013) also reported a correlation between masses and pulsation periods of AGB stars. Therefore, to study AGB stars with various masses, observations of AGB stars exhibiting a wide range of pulsation periods are necessary.
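To make the period-mean-density coupling concrete, one may use the classical pulsation relation \(P\sqrt{\rho/\rho_{\odot}}\simeq Q\). The sketch below is purely illustrative: the pulsation constant \(Q\) and the stellar masses and radii are round numbers assumed by us, not values taken from the works cited above.

```python
import numpy as np

Q_DAYS = 0.1  # assumed, order-of-magnitude pulsation constant for fundamental-mode LPVs

def pulsation_period(mass_msun, radius_rsun, q_days=Q_DAYS):
    # P = Q * (rho/rho_sun)^(-1/2), with rho/rho_sun = (M/Msun) * (R/Rsun)^(-3)
    rho_rel = mass_msun / radius_rsun**3
    return q_days / np.sqrt(rho_rel)

# Larger (and here also more massive) model stars have lower mean densities,
# hence longer periods -- the qualitative trend discussed in the text:
for mass, radius in [(1.0, 250.0), (3.0, 450.0), (6.0, 700.0)]:
    p = pulsation_period(mass, radius)
    print(f"M = {mass:3.1f} Msun, R = {radius:5.0f} Rsun  ->  P ~ {p:4.0f} d")
```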
In our previous VLBI observations conducted with the VLBI Exploration of Radio Astrometry (VERA) array from 2003 to 2017, we have carried out many observations towards dozens of AGB stars. Most of the target sources are classified as Mira-type variables and SR variables, with pulsation periods typically shorter than 400 days. Using H\({}_{2}\)O masers at 22 GHz, we have investigated the structures and kinematics of circumstellar matter around Mira and SR variables. The study of the SR variable S Crt by Nakagawa et al. (2008) was the initial outcome of our VLBI observation program, revealing the parallax and anisotropic circumstellar distribution of H\({}_{2}\)O masers, as well as detailed investigations of other Mira-type variables. Subsequent studies on the Mira-type variables, such as SY Scl by Nyu et al. (2011), R UMa by Nakagawa et al. (2018), and BX Cam by Matsuno et al. (2020), have also been produced from our VLBI observations. However, to date, there have been a limited number of VLBI studies focusing on OH/IR stars. Notable examples are the research on U Her by Vlemmings and van Langevelde (2007) and WX Psc and OH138.0\(+\)7.2 by Orosz et al. (2017), which represent valuable contributions to the OH/IR star studies using the VLBI method.
To obtain a diverse sample of AGB stars covering various evolutionary phases and a range of masses in our observations, it is crucial to include a wide range of pulsation periods. Since 2017, we have expanded our VLBI observations to include OH/IR stars with longer pulsation periods. The primary objective of our long-term VLBI study is to obtain astrometric and physical properties of OH/IR stars, including distance determination by parallax measurements, proper motion analysis, internal maser motion, luminosity estimation, mass loss rate analysis, and more. We think that comparing these properties between OH/IR stars and Mira-type variables can provide insights into their evolutionary relationship. In this proceedings paper, we present the current status and results from our astrometric VLBI observations. In particular, we
will discuss the latest research results on the OH/IR star NSV17351 and present preliminary parallax values for other OH/IR stars.
### Finding a new period-magnitude relation of Galactic AGB stars
It is known that there is a relation between K-band apparent magnitudes (\(m_{\rm K}\)) and the logarithm of pulsation periods (\(\log P\)) of Mira-type variables in the Large Magellanic Cloud (LMC) (e.g., Wood et al., 1999; Ita et al., 2004). By utilizing the known distance to the LMC, this relation can be converted into a relation between absolute magnitude in the K band (\(M_{\rm K}\)) and \(\log P\), thus it can be used as a distance estimator for variable stars. However, considering the difference of metallicity between the LMC and our Galaxy, it is crucial to establish a period-magnitude relation using sources in our galaxy. To convert apparent magnitudes to absolute magnitudes, parallax distances of the Galactic variable stars are required. Over the past decade, we have conducted astrometric VLBI observations for Mira and SR variables, and reported their period-magnitude relation as \(M_{\rm K}=-3.52\log P+(1.09\pm 0.14)\)(Nakagawa et al., 2016). As mentioned in the previous subsection, our previous observations had a limited period coverage of shorter than approximately 400 days.
Compared to typical Mira-type variables, OH/IR stars tend to exhibit longer pulsation periods, sometimes exceeding 1000 days. Since the OH/IR stars are surrounded by thick circumstellar dust shells, there is a large amount of extinction in optical bands. Sometimes the extinction effects extend into the near-infrared region, including the K band. We think this effect leads to a scatter in the estimated K-band absolute magnitudes (Nakagawa et al., 2018). Conversely, the extinction diminishes at longer wavelengths, and the re-radiation effect from the dust shell becomes more dominant. To mitigate the impact of circumstellar extinction, we aim to validate the existence of the period-magnitude relation in the mid-infrared region. We plan to utilize data from the Wide-field Infrared Survey Explorer (WISE)1, especially in the W3 band (\(\lambda=12\)\(\mu\)m). If a period-magnitude relation is confirmed in the mid-infrared bands, it can serve as a new distance estimator for sources with pulsation periods longer than those typically observed in the Mira-type variables. With our new target sources, given in Section 2.4, we will explore this relation in the longer pulsation period range.
Footnote 1: [http://wise.ssl.berkeley.edu/index.html](http://wise.ssl.berkeley.edu/index.html)
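As an illustration of how such a relation would be used as a distance estimator, the hypothetical sketch below combines the W3-band calibration quoted in the abstract, \(M_{\rm W3}=-7.21\log P+9.25\), with an apparent magnitude invented for the example; none of the numerical inputs refer to a real star.

```python
import numpy as np

def abs_mag_w3(period_days, a=-7.21, b=9.25):
    # Period-magnitude relation M_W3 = a*log10(P) + b (coefficients from the abstract)
    return a * np.log10(period_days) + b

def distance_pc(apparent_mag, absolute_mag):
    # Invert the distance modulus m - M = 5*log10(d / 10 pc)
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Hypothetical OH/IR star: P = 1200 d and m_W3 = 1.5 mag (illustrative values only)
period, m_w3 = 1200.0, 1.5
M_w3 = abs_mag_w3(period)
print(f"M_W3 = {M_w3:.2f} mag  ->  d = {distance_pc(m_w3, M_w3) / 1e3:.1f} kpc")
```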
### Astrometry of OH/IR stars to study the Galactic dynamics
In recent studies of spiral arms in disk galaxies, there has been a long-standing question about how spiral arms are formed and maintained. The quasi-stationary density wave theory (e.g., Lin and Shu, 1964) and the dynamic spiral theory (e.g., Sellwood and Carlberg, 1984; Baba, 2015) are two major theories under discussion. The studies of the Galactic spiral arms based on three-dimensional \(N\)-body simulations support a picture of non-steady spiral arms (Baba et al., 2013). The spiral arms do not show rigid rotating patterns but rather differentially rotating dynamic patterns. In the dynamic spiral theory, the amplitudes, pitch angles, and pattern speeds of the spiral arms are not constant, but change within a time span of 1-2 rotation periods at each radius (Baba, 2015). Characteristic behavior predicted from recent studies is the bifurcating or merging of the Galactic spiral arms on a time scale of \(\sim 10^{8}\) yr (e.g., Baba, 2015). In the Milky Way galaxy, rotation periods at the position of the Sun also correspond to the time scale of \(\sim\)\(10^{8}\) years.
The OH/IR stars with the longer periods are assumed to have larger masses, i.e. variable stars with \(P\simeq 1000\) days have initial masses of 3 to 4 \(M_{\odot}\)(Feast, 2009). Assuming this mass range and the relation between the main sequence lifetime \(\tau_{\rm MS}\) and mass given in Sparke and Gallagher (2000), we obtained \(\tau_{\rm MS}\) of 1.6\(\times 10^{8}\) to 3.5\(\times 10^{8}\) years. A recent study by Nikzat et al. (2022) also supports this estimation. Now, we find that the age of OH/IR stars with very
long pulsation period (\(P\gtrsim 1000\) days) is similar to the characteristic time scale of \(\sim 10^{8}\) yr in the dynamic spiral theory. This consideration also implies that the age of OH/IR stars with pulsation periods longer than \(\sim\)1000 days is \(\sim\)10\({}^{8}\) years, which is two orders of magnitude larger than the typical age of high-mass star-forming regions (SFRs) associated with spiral arms.
In the last 20 years, VLBI astrometry has measured more than two hundred parallaxes of SFRs (e.g., Burns et al., 2016; Motogi et al., 2016; Reid et al., 2019; VERA Collaboration et al., 2020) and evolved stars (e.g., Kamezaki et al., 2016; Nakagawa et al., 2016, 2018, 2019; Sudou et al., 2019). However, the ages of almost all VLBI targets fall mainly into two time scales, on the order of \(10^{6}\) years for the SFRs and \(10^{9}\) years for the Mira-type variable stars. To fully understand the mechanism of spiral arm formation, observations of sources of different ages are now needed, and the very long-period OH/IR stars with estimated ages of \(\sim\)10\({}^{8}\) years can be good probes to fill the time scale gap between \(10^{6}\) years and \(10^{9}\) years (Table 1). Astrometric VLBI is a promising tool to determine the three-dimensional positions and kinematics of the OH/IR stars. The OH/IR stars that we have selected, which have pulsation periods \(P\gtrsim 1000\) days, can contribute to this study.
### Why are VLBI observations important?
Annual parallaxes can be used to derive distances of celestial sources without making any assumptions about their chemical and/or physical properties. Recently, _Gaia_ Data Release 3 (DR3; Gaia Collaboration et al., 2023)1 has provided a huge amount of astrometric measurements. Most of the VLBI parallax measurements made so far have been towards SFRs. They are very close to the Galactic plane and are deeply obscured by dense dust and molecular clouds. As a result, the optical emission from the stars is intercepted by their surroundings and interstellar matter. For this reason, it is relatively difficult to find counterparts to SFRs in _Gaia_ catalogs. Compared to the SFRs, it is easier to identify AGB stars in _Gaia_ catalogs because there are many AGB stars distributed at high Galactic latitude. Because they are distributed far from the Galactic plane, they are not as heavily obscured by interstellar dust or molecular clouds. Mira-type variables, which are thought to be in the early stages of AGB phases, are bright in both optical and infrared bands. Parallaxes for a large number of Mira-type variables are available in _Gaia_ catalogs. Since the AGB stars have both _Gaia_ and VLBI parallax values, they are suitable sources for verifying _Gaia_ and the VLBI parallax measurements.
Footnote 1: [https://www.cosmos.esa.int/web/gaia/data-release-3](https://www.cosmos.esa.int/web/gaia/data-release-3)
After the release of _Gaia_ Data Release 2 (DR2), Chiavassa et al. (2018) conducted a study using three-dimensional radiative hydrodynamics simulations of convection to explore the impact of convection-related surface structures in AGB stars on their photometric variability. They extracted parallax errors in DR2 for SR variables in the solar neighbourhood and compared them with synthetic predictions of photocenter displacements. As a result, they reported that the position of the photocenter has a temporal excursion between 0.077 - 0.198 au (5 to
Table 1: Models and observations for the study of the Galactic dynamics.

| Time scale | Phenomena and model | Target source | VLBI Obs. |
| --- | --- | --- | --- |
| \(\sim 10^{6}\) yr | Spiral arm | SFRs, giants | Well studied |
| \(\sim 10^{8}\) yr | Bifurcating/merging arm | Heavy OH/IR stars | _Few cases_ |
| \(\sim 10^{9}\) yr | Thick disk stars | Miras | Well studied |
11% of the corresponding stellar radius), depending on the simulation considered. Since the distances of the sources in our VLBI studies are of the order of a hundred pc to a few kpc, the angular size of the excursion in Chiavassa et al. (2018) can be expected to be 0.1 - 1 mas. In addition, the time variation of the surface brightness degrades the accuracy of parallax measurements on optical images. Therefore, in DR2, and even in its updated version DR3, _Gaia_'s parallax measurements of AGB stars can be expected to suffer from this effect.
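The quoted angular scale follows from the small-angle relation \(\theta[{\rm arcsec}]=a[{\rm au}]/d[{\rm pc}]\); the following few lines simply reproduce that arithmetic for the excursion amplitudes of Chiavassa et al. (2018) and distances typical of our targets.

```python
# theta["] = a[au] / d[pc]; multiply by 1000 to get milliarcseconds
for d_pc in (100.0, 1000.0, 2000.0):
    for a_au in (0.077, 0.198):
        theta_mas = a_au / d_pc * 1.0e3
        print(f"a = {a_au:5.3f} au at d = {d_pc:6.0f} pc  ->  {theta_mas:5.2f} mas")
```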
If the central star is surrounded by thick dust layers in the late stage of the AGB phase, the source becomes very faint in the optical bands and cannot be observed with _Gaia_. For example, an OH/IR star, OH127.8+0.0, is known to have a thick circumstellar dust shell, a high mass loss rate (Kemper et al., 2002), and a long pulsation period of 1994 days (VSX+; Watson et al., 2006). But we cannot find it in the DR3 catalog. OH127.8+0.0 is very bright in the infrared, but faint in the optical bands due to the strong influence of circumstellar extinction by the dust layer. The VLBI method is still a very effective and promising tool for making parallax measurements of this kind of stars.
## 2 Observation
### Single dish monitoring of H\({}_{2}\)O and SiO masers
Using the 20 m aperture telescope at the VERA Iriki station, we have been observing H\({}_{2}\)O and SiO maser emissions to obtain their spectra and time variability. Since the pulsation periods of a large number of dusty OH/IR stars are not found in the literature or in databases, we have to determine the pulsation period ourselves from single-dish observations.
The integration time of our single-dish observations is 10 to 40 minutes, to reduce the noise level (antenna temperature in K) in each observation to less than 0.05 K. The time interval of single-dish observations is approximately one month. The conversion factor from antenna temperature to flux density is 19.6 Jy K\({}^{-1}\). The acquired signal with a bandwidth of 32 MHz is split into 1024 spectral channels with a frequency resolution of 31.25 kHz, which corresponds to a velocity resolution of 0.42 km s\({}^{-1}\) at 22 GHz and 0.21 km s\({}^{-1}\) at 43 GHz. We used a signal-to-noise ratio (S/N) of 3 to 5 as a detection criterion in our single-dish observations.
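The quoted velocity resolutions follow from \(\Delta v=c\,\Delta\nu/\nu\); a short check (the rest frequencies below are the standard 22 GHz H\({}_{2}\)O and 43 GHz SiO \(v=1\), \(J=1\)-\(0\) maser values):

```python
C_KMS = 2.99792458e5                    # speed of light [km/s]
DNU_KHZ = 31.25                         # channel width [kHz]
for nu_ghz in (22.235, 43.122):         # H2O and SiO maser rest frequencies [GHz]
    dv = C_KMS * (DNU_KHZ * 1e3) / (nu_ghz * 1e9)
    print(f"{nu_ghz:.3f} GHz: {dv:.2f} km/s per {DNU_KHZ} kHz channel")
```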
To measure the overall activity of the circumstellar masers, we use the integrated intensities \(I\) in units of K km s\({}^{-1}\), obtained by integrating over all maser components in the detected velocity range. We use the value \(I\) to estimate the pulsation period. Antenna temperatures have relative uncertainties of 5-20%, and we have uniformly applied uncertainties of 10% to all the integrated intensities. A simple sinusoidal function \(I_{\rm model}\), defined as
\[I_{\rm model}=\Delta I\sin\left[\frac{2\pi(t+\theta)}{P}\right]+I_{0}, \tag{1}\]
is used to estimate the pulsation period. \(\Delta I\) is the amplitude of the variation, \(t\) is the time, \(\theta\) is a zero-phase time lag, \(P\) is the period of the variation, and \(I_{0}\) is the average.
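As a schematic illustration of this procedure (not part of the actual reduction pipeline), the short Python sketch below fits Eq. (1) to a series of integrated intensities with a standard least-squares routine; the epochs, intensities, and initial guesses are placeholders, and in practice the trial period in `p0` would be scanned over a grid.

```python
import numpy as np
from scipy.optimize import curve_fit

def intensity_model(t, dI, theta, P, I0):
    """Sinusoidal model of Eq. (1) for the integrated maser intensity."""
    return dI * np.sin(2.0 * np.pi * (t + theta) / P) + I0

# Placeholder monitoring data: roughly monthly epochs [MJD] and intensities
t_mjd = np.arange(58000.0, 60500.0, 30.0)
I_obs = intensity_model(t_mjd, 2.0, 0.0, 1100.0, 5.0)   # synthetic example
I_err = 0.1 * I_obs                                      # uniform 10% errors

# curve_fit needs a sensible trial period; in practice p0 is scanned over a
# grid of periods and the solution with the lowest chi-squared is kept.
p0 = [1.5, 100.0, 1000.0, 4.0]
popt, pcov = curve_fit(intensity_model, t_mjd, I_obs, p0=p0,
                       sigma=I_err, absolute_sigma=True)
P_fit, P_err = popt[2], np.sqrt(pcov[2, 2])
print(f"Pulsation period P = {P_fit:.0f} +/- {P_err:.0f} days")
```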
### VLBI observations
For parallax measurements and mapping of circumstellar masers, we carry out continuous VLBI observations. VERA, operated by the National Astronomical Observatory of Japan (NAOJ), has been used to observe 22 GHz H\({}_{2}\)O and 43 GHz SiO maser emission from Mira-type variables and OH/IR stars. The VERA array consists of four 20-metre aperture radio telescopes at Mizusawa, Iriki, Ogasawara, and Ishigaki-jima (Figure 1). The array, including the longest baseline length of 2270 km between Ishigaki-jima and Mizusawa, gives a typical synthesised beam size of \(\sim\)1.2 mas and \(\sim\)0.6 mas at 22 GHz and 43 GHz, respectively. Each antenna of the VERA array is equipped with a dual beam system (e.g., Kawaguchi et al., 2000),
which allows us to simultaneously observe a target maser source and an extragalactic continuum source at a separation angle between 0.3 \({}^{\circ}\) and 2.2 \({}^{\circ}\). The extragalactic sources are used as a position reference. Using the dual beam system, we can calibrate short-term tropospheric fluctuations using the phase referencing technique (Honma et al., 2008). The relative position of the target maser spots with respect to the position reference source can then be determined with an accuracy of better than 0.1 mas. By tracking the celestial motions of the maser spots, the annual parallax of the target is derived.
The signals of left-handed circular polarization from the target and position reference source are acquired with a total data acquisition rate of 1 giga-bit per second (Gbps). This gives a total bandwidth of 256 MHz. The data are recorded on the hard disks of the "OCTADISK" system (Oyama et al., 2016). This total bandwidth is divided into 16 intermediate frequency (IF) channels. Each IF channel has a width of 16 MHz. One IF channel (16 MHz) is allocated to the maser source and the remaining 15 IF channels (16 MHz \(\times\) 15 = 240 MHz) are allocated to the position reference sources. Correlation processing was performed using the Mizusawa software correlator at the Mizusawa VLBI observatory, NAOJ. In the final output of the correlator, the 16 MHz bandwidth data of the H\({}_{2}\)O or SiO masers were divided into 512 channels with a frequency resolution of 31.25 kHz. This corresponds to a velocity resolution of 0.42 km s\({}^{-1}\) at 22 GHz and 0.21 km s\({}^{-1}\) at 43 GHz.
For reduction of the VLBI data we use the Astronomical Image Processing System2 (AIPS; Fomalont, 1981) developed by the National Radio Astronomy Observatory (NRAO). A detailed description of the phase referencing analysis is given in Nakagawa et al. (2023).
Footnote 2: [http://www.aips.nrao.edu/index.shtml](http://www.aips.nrao.edu/index.shtml)
### Phase referencing with the VERA dual beam system
We will give a simplified explanation of the phase referencing method using the dual beam system equipped with the VERA array. Letting the phases obtained by a VLBI observation towards the target maser source A and the reference continuum source B be \(\phi^{\rm A}_{\rm obs}\) and \(\phi^{\rm B}_{\rm obs}\), respectively, we can express the phases as follows:
\[\phi^{\rm A}_{\rm obs}=\phi^{\rm A}_{\rm pos}+\phi^{\rm A}_{\rm struc}+\phi^{ \rm A}_{\rm atm}+\phi^{\rm A}_{\rm inst}+\phi^{\rm A}_{\rm clock} \tag{2}\]
Figure 1: Locations of the 4 antennas of the VERA array, which consists of four 20-metre aperture antennas spread across Japan, with the longest baseline length of 2270 km between Ishigaki-jima and Mizusawa.
\[\phi^{\rm B}_{\rm obs}=\phi^{\rm B}_{\rm pos}+\phi^{\rm B}_{\rm struc}+\phi^{\rm B}_ {\rm atm}+\phi^{\rm B}_{\rm inst}+\phi^{\rm B}_{\rm clock}, \tag{3}\]
where the terms on the right-hand side are the residual phase due to the sky-plane position of the source (\(\phi_{\rm pos}\)), the residual phase due to the source structure (\(\phi_{\rm struc}\)), the residual phase due to the unpredictable atmospheric path length (\(\phi_{\rm atm}\)), the residual phase due to the difference between the two signal paths (\(\phi_{\rm inst}\)), and the residual phase due to the clock offset (\(\phi_{\rm clock}\)) (see, e.g., Thompson et al., 2001). Since the two sources A and B have a small separation angle on the sky plane, \(\phi^{\rm A}_{\rm atm}\) and \(\phi^{\rm B}_{\rm atm}\) are considered to be equal. The clock offsets \(\phi^{\rm A}_{\rm clock}\) and \(\phi^{\rm B}_{\rm clock}\) can be eliminated in the ordinary reduction process. However, since the signal paths for sources A and B are independent in the VERA receiver system, \(\phi^{\rm A}_{\rm inst}\) and \(\phi^{\rm B}_{\rm inst}\) are not the same,
\[\phi^{\rm A}_{\rm inst}\neq\phi^{\rm B}_{\rm inst}. \tag{4}\]
By taking the difference between the phases from sources A and B, and neglecting the source-structure terms for the moment, we obtain
\[\phi^{\rm A}_{\rm obs}-\phi^{\rm B}_{\rm obs}=\phi^{\rm A}_{\rm pos}-\phi^{ \rm B}_{\rm pos}+\phi^{\rm A}_{\rm inst}-\phi^{\rm B}_{\rm inst}. \tag{5}\]
Here we use the calibration information obtained with the dual beam system. Artificial noise sources are installed on the surface of each VERA antenna, and the same noise is injected into the two receivers along with the signals from the celestial sources (Honma et al., 2008). By cross-correlating the noise signals recorded through the two beams, the phase error caused by the difference in path length, \(\phi^{\rm A}_{\rm inst}-\phi^{\rm B}_{\rm inst}\), can be obtained. The phase \(\phi^{\rm B}_{\rm pos}\) is assumed to be zero for a reference source with point-like structure. Even if source B has some structure, its contribution can be estimated and eliminated in the self-calibration reduction process. Finally, we obtain
\[\phi^{\rm A}_{\rm obs}-\phi^{\rm B}_{\rm obs}=\phi^{\rm A}_{\rm pos}. \tag{6}\]
This gives the phase information that reflects the celestial position of the target maser source A. By performing VLBI observations for 1.5 to 2 years at intervals of about one month, we can track the position of the target maser source A and derive its parallax and proper motions.
### Target sources
Between 2003 and 2017, we carried out many VLBI observations of dozens of AGB stars using the VERA array. The main targets of these previous studies were Mira-type and SR variables, which in most cases have pulsation periods shorter than 400 days (Nakagawa et al., 2018). From 2017, we started VLBI observations of OH/IR stars and Mira-type variables with pulsation periods longer than those in the previous studies. The OH/IR stars among them are listed in Table 2. The first four columns show the sources observed between 2017 and 2022. Although it has been difficult to observe all of these sources successfully, data acquisition for some stars has been completed and data reduction is currently underway. In this proceedings paper, we present some parallax measurements as preliminary results. In early 2023 we proposed VLBI observations of new sources, which are presented in the last four columns of Table 2. The parallaxes from DR3 (\(\Pi_{\rm DR3}\)) and their relative errors are also shown. For sources without a parallax value in DR3, we indicate "n/a" in the "\(\Pi_{\rm DR3}\)" and "Err." columns. For some sources (RAFGL 5201, RAFGL 2445, and OH 358.23\(+\)0.11) the parallaxes are negative, and several sources have relative errors greater than 100% (NSV17351, RAFGL 5201, and OH 358.23\(+\)0.11). This table shows that it is very difficult for _Gaia_ to determine accurate parallaxes of OH/IR stars, and the VLBI method therefore remains important for parallax measurements of these dust-obscured stars. Pulsation periods are also given in columns 4 and 8. We have selected sources that cover a wide range in pulsation
periods. Sources at low Galactic latitudes with longer pulsation periods can be expected to be young and more massive than typical Mira-type variables.
## 3 Results and discussion
### Parallaxes from VLBI and Gaia
We now compare parallax measurements of AGB stars from VLBI and _Gaia_. Table 3 lists 44 Galactic LPVs whose parallaxes have been determined with astrometric VLBI, in order of right ascension (RA). The third, fourth, and fifth columns give the parallaxes determined from astrometric VLBI, DR2, and DR3 as \(\Pi_{\rm VLBI}\), \(\Pi_{\rm DR2}\), and \(\Pi_{\rm DR3}\), respectively. Although the original parallaxes and their formal errors in DR2/DR3 are quoted with more digits, we present the values to 0.001 mas. The species of the maser molecules observed in each VLBI observation are given in the sixth column. The last column gives the references for the VLBI parallaxes as abbreviations, whose meaning is explained in the table footnote. Since the VLBI parallax of R Aqr has been published in two independent studies by Kamohara et al. (2010) and Min et al. (2014), this source appears twice in the table. For the two sources WX Psc and OH138.0\(+\)7.2 we could not find parallaxes in either DR2 or DR3. For W Leo and Y Lib we could not find parallaxes in DR2, but found them in DR3, presumably because the data quality improved from DR2 to DR3. In the case of W Hya, the annual parallax is listed in DR2 but not in DR3. The parallax of S Ser is negative in DR2.
In Figure 2 we present the parallaxes of AGB stars determined from VLBI (\(\Pi_{\rm VLBI}\)) and DR2/DR3 (\(\Pi_{\rm Gaia\,DR2}\), \(\Pi_{\rm Gaia\,DR3}\)). A total of 41 sources are included in the figure. The horizontal and vertical axes represent the parallax values on logarithmic scales. Open circles represent the comparison between the parallaxes from the VLBI and DR2. Filled circles represent the comparison between the VLBI and DR3. A dotted line shows a relation of the form \(\Pi_{\rm VLBI}\!=\!\Pi_{\rm Gaia\,DR2/DR3}\). In Figure 2, source names are added near the comparison data between the VLBI and DR3 (filled circles). In the case of W Hya, we do not have a DR3 parallax, so the name is found near the open circle. The dispersion of the filled circles from the \(\Pi_{\rm VLBI}\!=\!\Pi_{\rm Gaia\,DR2/DR3}\) relation is significantly smaller than that of the open circles. This suggests that, for many sources, DR3 parallax measurements are closer to those carried out
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Source name & \(\Pi_{\rm DR3}\) & Err. & Period & Source name & \(\Pi_{\rm DR3}\) & Err. & Period \\ (2017–2022)\({}^{\dagger}\) & [mas] & [\%] & [days] & (2023–)\({}^{\dagger}\) & [mas] & [\%] & [days] \\ \hline NSV17351 & 0.088\(\pm\)0.147 & 166 & 1122 & V697 Her & 1.029\(\pm\)0.129 & 13 & 497 \\ OH 127.8–0.0 & n/a & n/a & 1994 & NSV 23099 & 0.209\(\pm\)0.102 & 49 & 431 \\ NSV 25875 & n/a & n/a & 1535 & OH 358.667\(-\)0.044 & 0.207\(\pm\)0.142 & 69 & 300 \\ RAFGL 5201 & \(-\)0.131\(\pm\)0.253 & \(-\)194 & 600 & OH 358.23\(+\)0.11 & \(-\)0.061\(\pm\)0.190 & \(-\)313 & 704 \\ OH 83.4–0.9 & 0.836\(\pm\)0.556 & 66 & 1428 & OH 0.66\(-\)0.07 & n/a & n/a & n/a \\ OH 141.7+3.5 & n/a & n/a & 1750 & IRAS 18039\(-\)1903 & n/a & n/a & n/a \\ CU Cep & 0.231\(\pm\)0.057 & 25 & 700 & OH 9.097\(-\)0.392 & 0.261\(\pm\)0.232 & 89 & 634 \\ RAFGL 2445 & \(-\)1.548\(\pm\)0.369 & \(-\)24 & n/a & RAFGL 1686 & 1.053\(\pm\)0.359 & 34 & 500 \\ OH 39.7+1.5 & n/a & n/a & 1260 & IRAS 18176\(-\)1848 & 2.404\(\pm\)0.618 & 26 & n/a \\ OH 26.5+0.6 & n/a & n/a & 1589 & OH 44.8\(-\)2.3 & 0.918\(\pm\)0.631 & 69 & n/a \\ OH 42.3–0.1 & n/a & n/a & 1650 & & & & \\ IRC\(-\)30363 & 0.241\(\pm\)0.130 & 54 & 720 & & & & \\ IRC\(+\)10322 & 0.553\(\pm\)0.183 & 33 & 570 & & & & \\ IRC\(+\)10451 & 0.818\(\pm\)0.196 & 24 & 730 & & & & \\ OH 26.2\(-\)0.6 & n/a & n/a & 1330 & & & & \\ OH 51.8\(-\)0.1 & n/a & n/a & 1270 & & & & \\ OH 358.16+0.49 & n/a & n/a & 1507 & & & & \\ V1018 Sco & n/a & n/a & n/a & & & & \\ \hline \end{tabular} \(\dagger\) Duration of our VLBI observations using VERA.
\end{table}
Table 2: DR3 parallaxes of our VLBI target sources
with the VLBI than are those obtained with DR2. However, for some sources there are still large discrepancies between the VLBI and DR3 parallaxes. For three sources, NSV17351, OH231.8+4.2, and VX Sgr, the DR3 measurements are smaller than the VLBI measurements; these are two OH/IR stars (NSV17351 and OH231.8+4.2) and a red supergiant (VX Sgr). Although we are also interested in many other OH/IR stars in Table 2, they cannot be shown in Figure 2 because _Gaia_ and/or VLBI parallaxes are not available for many dust-obscured OH/IR stars.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Source & Var. & \(\Pi_{\rm VLBI}\) & \(\Pi_{\rm DR2}\) & \(\Pi_{\rm DR3}\) & Maser & Ref.\({}^{\dagger}\) \\ & type & [mas] & [mas] & [mas] & & II\({}_{\rm VLBI}\) \\ \hline SY Scl & Mira & 0.75\(\pm\)0.03 & 0.675\(\pm\)0.227 & 0.525\(\pm\)0.122 & H\({}_{2}\)O & nyu11 \\ WX Psc & OH/IR & 5.3\({}^{b}\) & n/a & n/a & OH & oro17 \\ S Per & SRC & 0.413\(\pm\)0.017 & 0.222\(\pm\)0.121 & \(-\)0.503\(\pm\)0.081 & H\({}_{2}\)O & asa10 \\ OH138.0+7.2 & OH/IR & 0.52\(\pm\)0.09 & n/a & n/a & OH & oro17 \\ V637 Per & Mira & 0.94\(\pm\)0.02 & 1.846\(\pm\)0.152 & 0.845\(\pm\)0.097 & H\({}_{2}\)O & ver20 \\ BX Eri & Mira & 2.116\(\pm\)0.105 & 2.477\(\pm\)0.110 & 2.349\(\pm\)0.063 & H\({}_{2}\)O & ver20 \\ T Lep & Mira & 3.06\(\pm\)0.04 & 2.958\(\pm\)0.189 & 3.086\(\pm\)0.103 & H\({}_{2}\)O & ask14 \\ BW Cam & Mira & 0.749\(\pm\)0.189 & 1.187\(\pm\)0.214 & 0.956\(\pm\)0.105 & H\({}_{2}\)O & ver20 \\ RW Lep & Mira & 1.62\(\pm\)0.16 & 2.355\(\pm\)0.134 & 2.539\(\pm\)0.075 & H\({}_{2}\)O & kam14 \\ BX Cam & Mira & 1.73\(\pm\)0.03 & 4.134\(\pm\)0.255 & 1.764\(\pm\)0.101 & H\({}_{2}\)O & am20 \\ U Lyn & Mira & 1.27\(\pm\)0.06 & 0.580\(\pm\)0.224 & 1.014\(\pm\)0.083 & H\({}_{2}\)O & kam16a \\ NSV17351 & Mira & 0.247\(\pm\)0.035 & 0.353\(\pm\)0.228 & 0.088\(\pm\)0.147 & H\({}_{2}\)O & mk23a \\ VY CMa & SRc & 0.88\(\pm\)0.08 & \(-\)5.917\(\pm\)0.825 & 0.419\(\pm\)0.408 & H\({}_{2}\)O & ch08 \\ OZ Gem & Mira & 0.806\(\pm\)0.039 & \(-\)0.961\(\pm\)0.456 & 0.458\(\pm\)0.325 & H\({}_{2}\)O & ur20 \\ OH231.8+4.2 & OH/IR & 0.61\(\pm\)0.03 & 0.096\(\pm\)0.182 & 0.030\(\pm\)0.160 & H\({}_{2}\)O & nak23b \\ HU Pup & Mira & 0.308\(\pm\)0.042 & 0.182\(\pm\)0.057 & 0.294\(\pm\)0.030 & H\({}_{2}\)O & ver20 \\ R Cnc & Mira & 3.84\(\pm\)0.29 & 4.435\(\pm\)0.549 & 3.938\(\pm\)0.179 & H\({}_{2}\)O & ver20 \\ X Hya & Mira & 2.07\(\pm\)0.05 & 1.891\(\pm\)0.276 & 2.531\(\pm\)0.111 & H\({}_{2}\)O & ver20 \\ R UMa & Mira & 1.97\(\pm\)0.05 & 2.045\(\pm\)0.202 & 1.747\(\pm\)0.086 & H\({}_{2}\)O & nak16 \\ W Leo & Mira & 1.03\(\pm\)0.02 & n/a & 0.878\(\pm\)0.108 & H\({}_{2}\)O & ver20 \\ HS UMa & Mira & 2.816\(\pm\)0.095 & 3.215\(\pm\)0.144 & 3.202\(\pm\)0.101 & H\({}_{2}\)O & ver20 \\ S Crt & SRb & 2.33\(\pm\)0.13 & 2.646\(\pm\)0.146 & 0.061\(\pm\)0.097 & H\({}_{2}\)O & nak08 \\ T UMa & Mira & 0.96\(\pm\)0.15 & 0.748\(\pm\)0.105 & 0.989\(\pm\)0.065 & H\({}_{2}\)O & nak18 \\ U CVn & Mira & 0.911\(\pm\)0.031 & 0.921\(\pm\)0.167 & 0.563\(\pm\)0.077 & H\({}_{2}\)O & ver20 \\ RT Vir & SRb & 4.417\(\pm\)0.134 & 2.050\(\pm\)0.291 & 4.137\(\pm\)0.227 & H\({}_{2}\)O & zha17 \\ R Hya & Mira & 7.93\(\pm\)0.18 & 4.468\(\pm\)0.394 & 6.736\(\pm\)0.464 & H\({}_{2}\)O & ver20 \\ W Hya & Mira & 10.18\(\pm\)2.36 & 6.091\(\pm\)0.816 & n/a & OH & velo3 \\ RX Boo & SRb & 7.31\(\pm\)0.50 & 7.829\(\pm\)0.300 & 6.424\(\pm\)0.231 & H\({}_{2}\)O & kam12 \\ FV Boo & Mira & 0.97\(\pm\)0.06 & 0.573\(\pm\)0.181 & 1.014\(\pm\)0.091 & H\({}_{2}\)O & kam16b \\ Y Lib & Mira & 0.855\(\pm\)0.050 & n/a & 0.832\(\pm\)0.083 & H\({}_{2}\)O & chi19 \\ S CrB & Mira & 2.39\(\pm\)0.17 & 2.322\(\pm\)0.285 & 2.596\(\pm\)0.114 & OH & velo7 \\ S Ser & Mira & 1.25\(\pm\)0.04 & \(-\)0.512\(\pm\)0.317 & 0.768\(\pm\)0.129 & H\({}_{2}\)O & ver20 \\ U Her & Mira & 3.76\(\pm\)0.27 & 1.749\(\pm\)0.149 & 2.357\(\pm\)0.077 & OH & velo7 \\ VX Sgr & SRC & 0.64\(\pm\)0.04 & 0.787\(\pm\)0.229 & 0.050\(\pm\)0.187 & H\({}_{2}\)O & xu18 \\ RR Aql & Mira & 1.58\(\pm\)0.40 & 3.146\(\pm\)0.298 & 1.953\(\pm\)0.113 & OH & velo7 \\ SY Aql & Mira & 1.10\(\pm\)0.07 & 
3.433\(\pm\)0.206 & 1.067\(\pm\)0.091 & H\({}_{2}\)O & ver20 \\ NML Cyg & SRC & 0.62\(\pm\)0.047 & 1.526\(\pm\)0.568 & 0.528\(\pm\)0.348 & H\({}_{2}\)O & zha12 \\ UX Cyg & Mira & 0.54\(\pm\)0.06 & 0.176\(\pm\)0.167 & 0.701\(\pm\)0.094 & H\({}_{2}\)O & kur05 \\ SV Peg & SRb & 3.00\(\pm\)0.06 & 1.124\(\pm\)0.283 & 2.586\(\pm\)0.170 & H\({}_{2}\)O & sud19 \\ IRAS22480\(+\)60028 & SRc & 0.400\(\pm\)0.025 & 0.479\(\pm\)0.078 & 0.363\(\pm\)0.029 & H\({}_{2}\)O & ima12 \\ R Peg & Mira & 2.76\(\pm\)0.28 & 2.830\(\pm\)0.254 & 2.629\(\pm\)0.11
### Parallax and error
We will consider the parallaxes and their errors. In Figure 3 we have presented the parallaxes and their formal errors obtained from the VLBI (filled circles), DR2 (open triangles), and DR3
Figure 2: Annual parallaxes of 41 LPVs determined from the VLBI (horizontal axis) and DR2/DR3 (vertical axis) on a logarithmic scale. Open circles correspond to the comparison between the VLBI and DR2, whereas filled circles correspond to the comparison between the VLBI and DR3. Error bars are also shown on both axes.
(grey squares). From this figure we can see a positive correlation between the parallaxes and the errors for all data sets from the VLBI and DR2/DR3. We fitted the distribution to a linear function using a least-squares analysis and obtained three lines as follows,
\[\log\,\sigma_{\Pi_{\rm VLBI}} = (0.96\pm 0.13)\,\log\,\Pi_{\rm VLBI}-1.20\pm 0.06, \tag{7}\] \[\log\,\sigma_{\Pi_{\rm Gaia\,DR2}} = (0.29\pm 0.07)\,\log\,\Pi_{\rm Gaia\,DR2}-0.72\pm 0.04, \tag{8}\] \[\log\,\sigma_{\Pi_{\rm Gaia\,DR3}} = (0.53\pm 0.09)\,\log\,\Pi_{\rm Gaia\,DR3}-1.07\pm 0.03. \tag{9}\]
It should be noted that five data points from DR3, presented with open squares, were excluded from this fit because they lie outside the main group of data. Equations (7) to (9) give the relationships between parallax and error for the VLBI (solid line), DR2 (one-dotted chain line), and DR3 (dotted line), respectively. Equations (7) and (8) intersect at \(\log\,\Pi=0.716\), corresponding to a distance of 192 pc, and equations (7) and (9) intersect at \(\log\,\Pi=0.302\), corresponding to a distance of 499 pc. Using the error ratio \(\sigma_{\Pi}/\Pi\) as an indicator of the quality of a parallax measurement, VLBI can deliver better distance estimates than DR3 for LPVs further than \(\sim 499\) pc. The VLBI and _Gaia_ are thus complementary for distance measurements of dusty AGB stars, with a distance of \(\sim\)500 pc marking the approximate boundary of validity between them.
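The crossover distances quoted above follow directly from equating the fitted relations; the following short sketch, which uses only the coefficients of Eqs. (7)-(9), reproduces the two intersection points.

```python
def crossover(m1, c1, m2, c2):
    """Intersection of two relations log(sigma) = m log(Pi) + c.

    Returns log(Pi) at the intersection and the corresponding distance in pc
    (for a parallax Pi in mas)."""
    log_pi = (c2 - c1) / (m1 - m2)
    return log_pi, 1000.0 / 10.0**log_pi

vlbi = (0.96, -1.20)   # Eq. (7)
dr2 = (0.29, -0.72)    # Eq. (8)
dr3 = (0.53, -1.07)    # Eq. (9)

print(crossover(*vlbi, *dr2))   # ~ (0.716, 192 pc)
print(crossover(*vlbi, *dr3))   # ~ (0.302, 499 pc)
```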
Figure 3: Parallaxes and their formal errors obtained with the VLBI (filled circles), DR2 (triangles), and DR3 (squares). Three lines represent the relationships between parallax and errors for the VLBI (solid line; Eq. 7), DR2 (one-dotted chain line; Eq. 8), and DR3 (dotted line; Eq. 9). The points of intersection are \(\log\,\Pi=0.716\) for equations (7) and (8), and \(\log\,\Pi=0.302\) for equations (7) and (9).
### Results for individual sources
#### 3.3.1 Pulsation period of an OH/IR star NSV17351
NSV17351 is classified as an OH/IR star (Le Squeren et al., 1979). Long-term monitoring in our single-dish program shows that this source is variable. Although we searched online databases and the literature for its pulsation period, we could not find it, so we determined the period of NSV17351 from our single-dish monitoring of the H\({}_{2}\)O maser at 22 GHz. From a least-squares analysis assuming the simple sinusoidal function presented in Section 2.1, the pulsation period of NSV17351 was determined to be 1122\(\pm\)24 days (Nakagawa et al., 2023). The model fit is shown with a solid curve in the left panel of Figure 4. As there is no prior information on this period in the literature or online databases, this is the first determination of its periodicity. Given its very long period, we consider NSV17351 a candidate extreme OH/IR star.
In the right panel of Figure 4, we superimpose the H\({}_{2}\)O maser spectrum obtained on 22 April 2018 (solid line) and a 1612 MHz OH maser spectrum obtained in February 1978 (dotted line). The cut-off velocity of the blue-shifted component is exactly the same in both spectra (38 km s\({}^{-1}\) to 40 km s\({}^{-1}\)). Since OH molecules are thought to be supplied by photodissociation of H\({}_{2}\)O molecules carried to the outer part of the circumstellar envelope, this indicates that the H\({}_{2}\)O molecules have been transported to the outermost region and that the H\({}_{2}\)O gas has been accelerated to the terminal velocity.
#### 3.3.2 Parallax of NSV17351
To estimate the annual parallax, we track the positions of 22 GHz H\({}_{2}\)O maser spots obtained from multiple VLBI images. In Figure 5 we show examples of maser spot images in the same velocity channel. Since the shape of the spot changes gradually with time, we carefully examined the maser structure, its time variation and continuity. In this velocity channel, we concluded that the southern components in the maps of Figure 5 (b) and (c) are identical to the peak in the map of Figure 5 (a). Using the 2018-2019 VERA observations of the H\({}_{2}\)O maser at 22 GHz, we derived a parallax of 0.247\(\pm\)0.035 mas for NSV17351, corresponding to a distance of 4.05\(\pm\)0.59 kpc. Figure 6 shows the position offsets after removal of proper motions and the fitted parallax along the RA (top) axis and DEC (bottom) axes. The observed data are indicated as filled circles, with their grey scales representing the local standard of rest
Figure 4: (Left): Time variation of the integrated H\({}_{2}\)O maser intensities of NSV17351. Filled circles represent successful detections. In the case of non-detections, open circles with downward arrows represent the upper limits of detection. The solid line is the model indicating a pulsation period of 1122\(\pm\)24 days. (Right): Superpositions of the H\({}_{2}\)O maser (solid line) and OH maser (dotted line) of NSV17351 obtained in 2018 and 1978, respectively. The cut-off velocity of the blue-shifted side appears to be exactly the same in both spectra.
(LSR) velocities of each maser spot. Error bars are 0.05 mas and 0.09 mas in RA and DEC, respectively. The solid curves are the best-fit models of the parallax. A systemic proper motion of (\(\mu_{\alpha}\) cos \(\delta\), \(\mu_{\delta}\)) = (-1.19 \(\pm\) 0.11, 1.30 \(\pm\) 0.19) mas yr\({}^{-1}\) was also obtained.
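For illustration, a minimal sketch of such an astrometric fit is given below; it solves for a common parallax and linear proper motion of a single maser spot by linear least squares, assuming that the parallax factors `f_ra` and `f_dec` at each epoch have been computed beforehand from a solar ephemeris. The actual analysis combines many spots, weights the data by their positional errors, and is described in Nakagawa et al. (2023).

```python
import numpy as np

def fit_parallax(t_yr, dra, ddec, f_ra, f_dec):
    """Linear least-squares fit of a parallax and proper motion.

    dra, ddec   : position offsets of one maser spot [mas] at epochs t_yr [yr]
    f_ra, f_dec : parallax factors at those epochs (from a solar ephemeris)

    Model: dra  = ra0  + mu_ra  * t + pi * f_ra
           ddec = dec0 + mu_dec * t + pi * f_dec
    with a single parallax pi common to both coordinates.
    """
    n = len(t_yr)
    zeros, ones = np.zeros(n), np.ones(n)
    # Design-matrix columns: [ra0, dec0, mu_ra, mu_dec, pi]
    A = np.vstack([
        np.column_stack([ones, zeros, t_yr, zeros, f_ra]),
        np.column_stack([zeros, ones, zeros, t_yr, f_dec]),
    ])
    y = np.concatenate([dra, ddec])
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    ra0, dec0, mu_ra, mu_dec, pi = params
    return pi, (mu_ra, mu_dec)
```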
#### 3.3.3 H\({}_{2}\)O maser distribution of NSV17351
The circumstellar distribution and motions of H\({}_{2}\)O masers in an 80 \(\times\) 120 au region around the OH/IR star NSV17351 are shown in the left panel of Figure 7. Since the angular proper motions of the maser spots obtained from our VLBI observations are measured with respect to a position reference source, we have to convert them to motions in the frame fixed to NSV17351. To perform this conversion, we take the average motion of all the maser spots and subtract it from the original proper motion of each spot; the detailed procedure is presented in Nakagawa et al. (2014). The estimated stellar position of NSV17351 is indicated by a cross symbol whose arm length represents the position error. The maser spots at different radial velocities are moving outwards from the expected position of the central star. On average, we derive a three-dimensional outward expansion velocity of the H\({}_{2}\)O masers of 15.7\(\pm\)3.3 km s\({}^{-1}\).
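The conversion from relative angular motions to linear velocities only requires the distance; a minimal sketch is given below, where the input proper motions are already expressed relative to the central star and the numerical values are illustrative only.

```python
import numpy as np

def spot_velocity_3d(mu_x_mas, mu_y_mas, v_lsr, v_sys, distance_kpc):
    """3D speed of a maser spot relative to the central star [km/s].

    mu_x_mas, mu_y_mas : proper motion relative to the star [mas/yr]
    v_lsr, v_sys       : spot and systemic LSR velocities [km/s]
    """
    # 1 mas/yr at 1 kpc corresponds to 4.74 km/s
    vx = 4.74 * mu_x_mas * distance_kpc
    vy = 4.74 * mu_y_mas * distance_kpc
    vz = v_lsr - v_sys
    return np.sqrt(vx**2 + vy**2 + vz**2)

# Illustrative numbers only: a spot moving ~0.7 mas/yr on the sky and
# 8 km/s along the line of sight at the 4.05 kpc distance of NSV17351
print(spot_velocity_3d(0.5, 0.5, 8.0, 0.0, 4.05))   # ~15.8 km/s
```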
We can see that the bluest maser spots overlap the estimated position of the central star. For OH masers in OH/IR stars, it is known that the most blue- and red-shifted maser spots are seen at the position of the central star in the plane of the sky. For example, Orosz et al. (2017) showed that the blue- and red-shifted OH masers coincide with the position where the central star is assumed to lie. Thus, as suggested by the left panel of Figure 7, we interpret the most blue-shifted maser spots as being superimposed on the position of the central star of NSV17351 along the line of sight. They can possibly be explained by the emission being excited along the line of sight towards the central star.
#### 3.3.4 Position of NSV17351 in our galaxy
From the systemic proper motion of NSV17351 derived from our VLBI observations, we can estimate a motion of the source in Galactocentric coordinates. In Nakagawa et al. (2023), we derived a three-dimensional position of NSV17351 as \((X,Y,Z)\) = (-2.83\(\pm\)0.12, 11.05\(\pm\)0.12, \(-\)0.09\(\pm\)0.01) kpc, where the origin of the coordinate system is the Galactic center. We confirm that the \(Z\) value of NSV17351, \(Z=-0.09\pm 0.01\) kpc, is within the \(Z\) range of
Figure 5: VLBI images of maser spots of NSV17351 at a \(V_{\rm LSR}\) of 39.15 km s\({}^{-1}\) detected on (a) 16 April 2018, (b) 1 November 2018 and (c) 12 March 2019 (Nakagawa et al., 2023). The synthesised beams are presented in the lower left of each map.
SFRs (i.e., \(-0.12<Z<0.11\) kpc). In the right panel of Figure 7 we show the position of NSV17351 with a filled circle; it is located slightly outside the Perseus arm. This location can be understood by considering its age. Feast (2009) reported that Mira-type variables with a period of 1000 days have an initial mass of 3 to 4 \(M_{\odot}\). Assuming this mass, we estimate the age of NSV17351 to be 1.6\(\times 10^{8}\) to 3.5\(\times 10^{8}\) years. This is up to two orders of magnitude greater than the typical age of the high-mass SFRs associated with spiral arms, so we may be observing a star that is leaving the arm in which it was born but is not yet fully dynamically relaxed. With a larger sample of OH/IR stars with very long pulsation periods, we can provide observational constraints for studies of the Galactic spiral arms (Nakagawa et al., 2023).
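For reference, the Galactocentric position can be computed from the Galactic coordinates and the VLBI distance as in the sketch below, which assumes the convention of the face-on view in the right panel of Figure 7 (Sun at \((X,Y)=(0,\,8.15)\) kpc); axis conventions differ between studies, so the signs must be checked against the adopted Galactic model.

```python
import numpy as np

def galactocentric_xyz(l_deg, b_deg, d_kpc, R0_kpc=8.15):
    """Galactocentric (X, Y, Z) with the Sun placed at (0, R0, 0).

    The convention follows the face-on view of Figure 7 (right); the sign of
    the X axis varies between studies and must be matched to the adopted model.
    """
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d_kpc * np.cos(b) * np.sin(l)
    y = R0_kpc - d_kpc * np.cos(b) * np.cos(l)
    z = d_kpc * np.sin(b)
    return x, y, z
```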
#### 3.3.5 Oh39.7+1.5, Irc\(-\)30363 (OH/IR star), and AW Tau (Mira)
In addition to NSV17351, we have observed H\({}_{2}\)O masers in two OH/IR stars, OH39.7+1.5 and IRC\(-\)30363, and a Mira-type variable, AW Tau, at 22 GHz.
For OH39.7+1.5, using two maser spots with radial velocities of 34.6 and 8.6 km s\({}^{-1}\), a preliminary parallax of 0.54\(\pm\)0.03 mas was obtained. This corresponds to a distance of 1.85\(\pm\)0.10 kpc. This parallax measurement is the only one available, as DR3 has no data for OH39.7+1.5. The averaged proper motion of the two maser spots is \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(-0.22\pm 0.13,-1.53\pm 0.13)\) mas yr\({}^{-1}\).
Figure 6: The annual parallax of NSV17351 along RA (top) and DEC (bottom) (Nakagawa et al., 2023). Results from ten observations are shown with filled circles. The grey scale indicates the LSR velocity \(V_{\rm LSR}\) of each maser spot. The solid curves are the best-fit models obtained from our fit, showing the 0.247 mas parallax.
For IRC\(-\)30363, using a maser spot with a radial velocity of 9.72 km s\({}^{-1}\), a parallax of 0.562\(\pm\)0.201 mas was obtained from our VLBI observations. This gives a distance of 1.78\(\pm\)0.73 kpc. In DR3, the parallax of IRC\(-\)30363 is reported to be 0.241\(\pm\)0.130 mas. They agree within their margins of error.
For the Mira-type variable AW Tau, a parallax of 0.449\(\pm\)0.032 mas was obtained using a maser spot at a radial velocity of \(-\)9.54 km s\({}^{-1}\). This corresponds to a distance of 2.23\(\pm\)0.16 kpc. In DR3, the parallax of AW Tau is reported to be 0.434\(\pm\)0.113 mas. For this source, the two measurements from the VLBI and DR3 are in very good agreement.
The pulsation periods of OH39.7\(+\)1.5, IRC\(-\)30363 (OH/IR star), and AW Tau are 1260, 720, and 672 days, respectively. All of these sources have longer pulsation periods than typical Mira-type variables; for example, a lack of Mira-type variables with periods of about 500 days or longer is reported by Habing (1996). We are currently working on a more detailed analysis of the sources with very long pulsation periods.
### Finding a new period-magnitude relation for OH/IR stars in mid-infrared
Based on the parallax measurements presented in Section 3.3.5, we estimated the absolute K-band magnitudes (\(M_{\rm K}\)) of three AGB stars, AW Tau, IRC\(-\)30363, and OH39.7\(+\)1.5. If we compare the three \(M_{\rm K}\) values with a period-\(M_{\rm K}\) diagram of the Galactic LPVs, we find that the \(M_{\rm K}\) value of OH39.7\(+\)1.5 is far below the expected one. About 6 years ago, we investigated the same problem. Using published literature results, we compiled \(\sim\)20 OH/IR stars with very long periods of more than 1000 days. The distances of some of the sources were determined using the "phase-lag method" (e.g., Engels et al., 2015; Etoka et al., 2018). For the other sources for which no distance estimate is available, Nakagawa et al. (2018) derived kinematic distances from their radial velocities. Using these distances, the K-band apparent magnitudes were converted to absolute magnitudes (\(M_{\rm K}\)) and presented on a period-\(M_{\rm K}\) diagram in Figure 1(b) of Nakagawa et al. (2018). As a result, the distribution of \(M_{\rm K}\) values of the OH/IR stars shows large scatter. We could not find a clear relationship between periods and
Figure 7: (Left): sky plane distribution and expansion motions of the H\({}_{2}\)O maser spots of NSV17351 (Nakagawa et al., 2023). Filled circles indicate maser spots and arrows indicate their internal motions. A cross symbol indicates the estimated position of the central star. (Right): Position of NSV17351 on the face-on view of the Milky Way. The Galactic center is at (0, 0) kpc and the Sun is indicated by the symbol (\(\odot\)) at (0, 8.15) kpc. The filled circle with an error bar indicates the position of NSV17351. Open circles indicate maser sources with Galactocentric distances \(>\) 7 kpc. Solid lines and grey regions indicate the centers of three spiral arms and their widths, respectively, reproduced from a study by Reid et al. (2019).
\(M_{\rm K}\) values. The OH/IR stars are thought to be surrounded by thick layers of circumstellar dust. The activity of the central star also affects the outer layers. In addition, the spatial structure of the dust layers surrounding the central star is unstable and anisotropic. We think that this is one reason for the large scatter seen in the K-band magnitudes. In the case of the recent result obtained for the OH/IR star OH39.7\(+\)1.5, if we assume a thick dust layer or a high mass loss rate, we can possibly explain this darkening by circumstellar absorption.
At longer wavelengths, re-radiation from the dust shell is known to be dominant. To minimise the circumstellar extinction, we have estimated the absolute magnitudes \(M_{\rm W3}\) in the mid-infrared using the W3 band data from the Wide-field Infrared Survey Explorer (WISE). The central wavelength of the WISE W3 band is 12 \(\mu\)m (Wright et al., 2010). In Table 4 we have compiled 36 Galactic LPVs whose parallaxes have been determined by VLBI observations. Some red giants and SR variables are also included in the table. Sources with pulsation periods shorter than 200 days have been excluded. Using the parallaxes from VLBI observations \(\Pi_{\rm VLBI}\) in the third column, we derived the absolute magnitude in the W3 band (\(M_{\rm W3}\)) and present it in the seventh column. In this table, \(\sigma M_{\rm W3}\) is the error of \(M_{\rm W3}\) estimated by taking into account the parallax error (\(\sigma\Pi_{\rm VLBI}\)) and the WISE measurement error. The pulsation period \(P\) and its logarithm \(\log P\) are also given. The \(M_{\rm W3}\) values of the latest VLBI observation sources AW Tau, IRC\(-\)30363, and OH39.7\(+\)1.5 were found to be \(-10.71\pm 0.21\), \(-12.35\pm 1.21\), and \(-12.90\pm 0.40\), respectively.
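The conversion from apparent magnitude and VLBI parallax to absolute magnitude, with the parallax error propagated as in the table, can be sketched as follows; the numerical values in the example are placeholders rather than entries of Table 4.

```python
import numpy as np

def absolute_magnitude(m_app, parallax_mas, sigma_parallax_mas, sigma_m=0.0):
    """M = m + 5 log10(parallax[mas]) - 10, with the parallax error propagated."""
    M = m_app + 5.0 * np.log10(parallax_mas) - 10.0
    sigma_M = np.sqrt(sigma_m**2 +
                      (5.0 / np.log(10.0) * sigma_parallax_mas / parallax_mas)**2)
    return M, sigma_M

# Generic example: apparent magnitude 1.0 and a 0.5 +/- 0.05 mas parallax
print(absolute_magnitude(1.0, 0.5, 0.05))   # -> (-10.5, ~0.22)
```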
In Figure 8, we present the pulsation periods and the W3-band absolute magnitudes (\(M_{\rm W3}\)) of the sources in Table 4. The Mira-type and SR variables are presented with open circles. The OH/IR stars are represented by filled circles. First, all 37 sources were used to derive the period-\(M_{\rm W3}\) relation. The derived relation is
\[M_{\rm W3}=(-7.21\pm 1.18)\,\log P+(9.25\pm 3.09), \tag{10}\]
which is indicated by a solid line in the figure. We then derived another relationship using only the 9 OH/IR stars, represented by filled circles, and obtained a relation of the form
\[M_{\rm W3}=(-4.20\pm 1.33)\,\log P+(0.01\pm 3.89), \tag{11}\]
which is shown with a dotted line. At this stage, it is difficult to say that there is an obvious period-\(M_{\rm W3}\) relation. However, if a clear relationship is confirmed, it can be used as a new distance estimator for the AGB sources along the Galactic plane, or for the sources deeply obscured by circumstellar dust. It is known that there is a deep silicate absorption feature at \(\lambda\simeq 8\) to 18 \(\mu\)m in the spectral energy distribution of AGB stars. So, ideally, it would be necessary to use bolometric absolute magnitudes \(M_{\rm bol}\) to avoid the effects of absorption. We have tried to do this, but the variance of the \(M_{\rm bol}\) values is greater than that in the W3 band, and so far our attempt has not been successful. For a more detailed investigation of the period-\(M_{\rm W3}\) relation, we are continuing astrometric VLBI observations of OH/IR stars to increase the number of sources covering a wide period range.
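Relations such as Eqs. (10) and (11) can be obtained with a weighted linear least-squares fit of \(M_{\rm W3}\) against \(\log P\); the sketch below uses only a few rows of Table 4 for illustration, so it does not reproduce the quoted coefficients, which require the full sample.

```python
import numpy as np
from scipy.optimize import curve_fit

def pl_relation(logP, slope, intercept):
    """Linear period-magnitude relation M_W3 = slope * logP + intercept."""
    return slope * logP + intercept

# A few rows of Table 4 for illustration (logP, M_W3, sigma_M_W3); the full
# sample is needed to reproduce Eqs. (10) and (11).
logP = np.array([2.61, 2.92, 3.04, 2.78, 3.10, 2.86])
M_W3 = np.array([-10.46, -13.92, -13.31, -11.27, -12.90, -12.35])
sigma_M = np.array([0.16, 0.50, 0.45, 0.36, 0.40, 1.21])

popt, pcov = curve_fit(pl_relation, logP, M_W3, sigma=sigma_M,
                       absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print(f"M_W3 = ({popt[0]:.2f} +/- {perr[0]:.2f}) logP "
      f"+ ({popt[1]:.2f} +/- {perr[1]:.2f})")
```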
## 4 Summary
We have been observing Mira-type variables and OH/IR stars with the astrometric VLBI method. The obtained distances and stellar parameters help us to understand the evolutionary relationship between the subclasses of the AGB phase. The VERA array was used for all VLBI observations in our study, targeting the H\({}_{2}\)O maser at 22 GHz and the SiO maser at 43 GHz. The phase referencing technique was used to accurately measure the distances of stars ranging from a few hundred pc to several kpc. Due to the properties of the circumstellar matter and its time variability, optical parallax measurements of stars surrounded by thick dust shells, such as those from _Gaia_, are sometimes very difficult. The VLBI and _Gaia_ are therefore complementary for distance measurements of AGB stars.
The results for NSV17351, AW Tau, IRC\(-\)30363, and OH39.7\(+\)1.5 are presented. The pulsation period of NSV17351 was determined from the time variability of the H\({}_{2}\)O maser. Parallax, circumstellar distribution of the masers and kinematics of NSV17351 were also presented. Absolute magnitudes in the near- and mid-infrared bands of OH/IR stars with very
\begin{table}
\begin{tabular}{l l c c c c c c} \hline Source & Var. & \(\Pi_{\rm VLBI}\) & \(\sigma\Pi_{\rm VLBI}\) & Period & \(\log P\) & \(M_{\rm W3}\) & \(\sigma M_{\rm W3}\) \\ & type & [mas] & [mas] & (\(P\)) [d] & & [mag] & [mag] \\ \hline SY Scl & Mira & 0.75 & 0.03 & 411 & 2.61 & \(-\)10.46 & 0.16 \\ OH127.8\(+\)0.0 & OH/IR & 0.22\({}^{\dagger}\) & 0.08 & 1591 & 3.20 & \(-\)14.32 & 1.23 \\ S Per & SRc & 0.413 & 0.017 & 822 & 2.92 & \(-\)13.92 & 0.50 \\ OH138.0\(+\)7.2 & OH/IR & 0.52\({}^{\dagger}\) & 0.09 & 1410 & 3.15 & \(-\)11.76 & 0.63 \\ R Tau & Mira & 2.04 & 0.05 & 321 & 2.51 & \(-\)9.09 & 0.40 \\ T Lep & Mira & 3.06 & 0.04 & 372 & 2.57 & \(-\)8.94 & 0.29 \\ BW Cam & Mira & 0.749 & 0.189 & 628 & 2.80 & \(-\)12.46 & 0.93 \\ AW Tau & Mira & 0.45 & 0.03 & 672 & 2.83 & \(-\)10.71 & 0.21 \\ RAFGL 5201 & OH/IR & 0.61\({}^{\dagger}\) & 0.04 & 600 & 2.78 & \(-\)11.26 & 0.47 \\ AP Lyn & Mira & 2.01 & 0.04 & 433 & 2.64 & \(-\)10.46 & 0.51 \\ U Lyn & Mira & 1.27 & 0.06 & 434 & 2.64 & \(-\)9.90 & 0.45 \\ NSV17351 & OHIR & 0.247 & 0.035 & 1100 & 3.04 & \(-\)13.31 & 0.45 \\ OZ Gem & Mira & 0.806 & 0.039 & 598 & 2.78 & \(-\)11.27 & 0.36 \\ OH231.8\(+\)4.2 & OH/IR & 0.61\({}^{\dagger}\) & 0.03 & 548 & 2.74 & \(-\)11.16 & 0.44 \\ R Cnc & Mira & 3.84 & 0.29 & 357 & 2.55 & \(-\)8.81 & 0.48 \\ X Hya & Mira & 2.07 & 0.05 & 300 & 2.48 & \(-\)9.04 & 0.32 \\ R UMa & Mira & 1.97 & 0.05 & 302 & 2.48 & \(-\)9.21 & 0.36 \\ W Leo & Mira & 1.03 & 0.02 & 392 & 2.59 & \(-\)9.99 & 0.28 \\ T UMa & Mira & 0.96 & 0.15 & 257 & 2.41 & \(-\)9.02 & 0.49 \\ U CVn & Mira & 0.911 & 0.031 & 346 & 2.54 & \(-\)9.97 & 0.19 \\ R Hya & Mira & 7.93 & 0.18 & 380 & 2.58 & \(-\)8.40 & 0.07 \\ FV Boo & Mira & 0.97 & 0.06 & 313 & 2.50 & \(-\)10.12 & 0.21 \\ Y Lib & Mira & 0.855 & 0.050 & 277 & 2.44 & \(-\)9.44 & 0.19 \\ S CrB & Mira & 2.39 & 0.17 & 360 & 2.56 & \(-\)9.68 & 0.47 \\ S Ser & Mira & 1.25 & 0.04 & 372 & 2.57 & \(-\)9.50 & 0.26 \\ U Her & Mira & 3.76 & 0.27 & 406 & 2.61 & \(-\)9.14 & 0.52 \\ IRC\(-\)30363 & OH/IR & 0.56\({}^{\dagger}\) & 0.2 & 720 & 2.86 & \(-\)12.35 & 1.21 \\ OH39.7\(+\)1.5 & OH/IR & 0.55\({}^{\dagger}\) & 0.03 & 1259 & 3.10 & \(-\)12.90 & 0.40 \\ RAFGL 2445 & OH/IR & 0.64\({}^{\dagger}\) & 0.01 & 626 & 2.80 & \(-\)12.11 & 0.36 \\ RR Aql & Mira & 1.58 & 0.4 & 396 & 2.60 & \(-\)10.94 & 0.93 \\ SY Aql & Mira & 1.10 & 0.07 & 356 & 2.55 & \(-\)9.64 & 0.24 \\ UX Cyg & Mira & 0.54 & 0.06 & 565 & 2.75 & \(-\)12.67 & 0.50 \\ NSV 25875 & OH/IR & 0.38\({}^{\dagger}\) & 0.13 & 1535 & 3.19 & \(-\)14.62 & 1.13 \\ R Peg & Mira & 2.76 & 0.28 & 378 & 2.58 & \(-\)8.94 & 0.45 \\ R Aqr & Mira & 4.7 & 0.8 & 390 & 2.59 & \(-\)9.51 & 0.53 \\ & Mira & 4.59 & 0.24 & 390 & 2.59 & \(-\)9.56 & 0.16 \\ PZ Cas & SRc & 0.356 & 0.026 & 925 & 2.97 & \(-\)14.10 & 0.50 \\ R Cas & Mira & 5.67 & 1.95 & 430 & 2.63 & \(-\)9.33 & 1.12 \\ \hline \end{tabular} \({}^{\dagger}\) The parallax values are preliminary results from our analysis and have not yet been published.
\end{table}
Table 4: WISE W3 band absolute magnitude
long pulsation periods were studied. Using the WISE W3 band data, a period-magnitude relation in the WISE W3 band \(M_{\rm W3}=(-7.21\pm 1.18)\log P+(9.25\pm 3.09)\) was found for the Galactic AGB stars. For further understanding, we need more detailed measurements of the circumstellar masers of AGB stars of different types, pulsation periods, and masses.
**Discussion**
**Question (Whitelock)**: These VLBI measurements are very important for deriving distances for the stars that _Gaia_ can't reach. Most of their luminosity will be in the dust and the amplitudes, even at long wavelengths, are very large, several magnitudes. You can only observe the O-rich AGB stars with H\({}_{2}\)O and SiO Masers. Metal-weak stars are C-rich and will not have these lines, so I wonder if you can do anything with the CO line from the C-rich stars.
**Answer**: Thank you for your comment, Patricia. One thing that I wanted to share at this conference was the importance of VLBI measurements of the parallaxes of dusty AGB stars, and I think we did that to some extent. The CO lines are usually observed in dusty AGB stars. The existence of CO masers has also been reported by Vlemmings et al. (2021). However, at present I think that astrometric VLBI observations of the CO maser are difficult due to the accuracy of the absolute position and the sensitivity. So I do not expect to be able to do the same studies with the CO masers any time soon. Regarding the CO line, if we can somehow select OH/IR stars with very long pulsation periods, they could be massive and young, and they are expected to show CO line emission. Since observing the CO line can give a better estimate of radial velocities than maser emissions, we can estimate their distances using the kinematic distance method and use them for the PL relation or to study the Galactic dynamics.
Figure 8: Absolute magnitudes of the WISE W3 band estimated from VLBI parallaxes. Filled circles and open circles represent OH/IR stars and Mira-type variables respectively. Two lines represent period-\(M_{\rm W3}\) relations in the mid-infrared. The solid line and dotted lines represent the relations of \(M_{\rm W3}=(-7.21\pm 1.18)\log P+(9.25\pm 3.09)\) and \(M_{\rm W3}=(-4.20\pm 1.33)\log P+(0.01\pm 3.89)\), respectively.
**Question (Jiang)**: It's very nice to see that the maser observation can be more accurate than _Gaia_ for the nearby dust-obscured stars. I just wonder how the other astrometric parameters, proper motions, measured using maser compare to _Gaia_?
**Answer**: Thank you for your comment, Biwei. VLBI measures the sky plane motion of each maser spot, which is distributed around the central AGB star. The maser spots show outward motion with respect to the central star. Thus, the proper motions of each maser spot obtained from VLBI observations are the sum of (1) the systemic proper motion of the central star and (2) the outward motion of the maser with respect to the central star. We average all the proper motions of the detected maser spots to estimate the systemic proper motion of the central star, which _Gaia_ can measure directly. The difference in proper motions between _Gaia_ and VLBI gives us information about the outward motion of the maser spots with respect to the central star. In Nakagawa et al. (2014), we presented the procedure to derive the systemic proper motion of the star from our VLBI measurements.
|
2304.01690
|
Quantum algorithms for charged particle track reconstruction in the LUXE
experiment
|
The LUXE experiment is a new experiment in planning in Hamburg, which will
study Quantum Electrodynamics at the strong-field frontier. LUXE intends to
measure the positron production rate in this unprecedented regime by using,
among others, a silicon tracking detector. The large number of expected
positrons traversing the sensitive detector layers results in an extremely
challenging combinatorial problem, which can become computationally expensive
for classical computers. This paper investigates the potential future use of
gate-based quantum computers for pattern recognition in track reconstruction.
Approaches based on a quadratic unconstrained binary optimisation and a quantum
graph neural network are investigated in classical simulations of quantum
devices and compared with a classical track reconstruction algorithm. In
addition, a proof-of-principle study is performed using quantum hardware.
|
Arianna Crippa, Lena Funcke, Tobias Hartung, Beate Heinemann, Karl Jansen, Annabel Kropf, Stefan Kühn, Federico Meloni, David Spataro, Cenk Tüysüz, Yee Chinn Yap
|
2023-04-04T10:40:11Z
|
http://arxiv.org/abs/2304.01690v1
|
# Quantum algorithms for charged particle track reconstruction in the LUXE experiment
###### Abstract
The LUXE experiment is a new experiment in planning in Hamburg, which will study Quantum Electrodynamics at the strong-field frontier. LUXE intends to measure the positron production rate in this unprecedented regime by using, among others, a silicon tracking detector. The large number of expected positrons traversing the sensitive detector layers results in an extremely challenging combinatorial problem, which can become computationally expensive for classical computers. This paper investigates the potential future use of gate-based quantum computers for pattern recognition in track reconstruction. Approaches based on a quadratic unconstrained binary optimisation and a quantum graph neural network are investigated in classical simulations of quantum devices and compared with a classical track reconstruction algorithm. In addition, a proof-of-principle study is performed using quantum hardware.
## 1 Introduction
The Laser Und XFEL Experiment (LUXE) [1] at DESY and the European XFEL (Eu.XFEL) aims at studying strong-field Quantum Electrodynamics (QED) processes in the interactions of a high-intensity optical laser and the 16.5 GeV electron beam of the Eu.XFEL (\(e^{-}\)-laser collisions), as well as with high-energy secondary photons. A strong background field is provided
by a Terawatt-scale laser pulse and enhanced by the Lorentz boost of the electrons, allowing LUXE to explore a previously uncharted intensity regime.
In this regime, one of the main goals of the LUXE experiment is to measure the positron rate as a function of the laser intensity parameter \(\xi\), defined as
\[\xi=\sqrt{4\pi\alpha}\ \frac{\epsilon_{L}}{\omega_{L}m_{e}}=\frac{m_{e}\epsilon_{L} }{\omega_{L}\epsilon_{cr}}, \tag{1}\]
where \(\alpha\) is the fine structure constant, \(\epsilon_{L}\) is the laser field strength, \(\omega_{L}\) is the frequency of the laser, \(m_{e}\) is the electron mass, and \(\epsilon_{cr}=1.32\times 10^{18}\) V/m is the critical field strength, also known as the Schwinger limit [2]. The measured positron rate will be compared to theoretical predictions. When considering electron-laser collisions, the dominant process is the non-linear Compton scattering [3, 4]. In non-linear Compton scattering, the incident electron absorbs multiple laser photons, emitting a Compton photon, which can then interact again with the laser field to produce electron-positron pairs [5, 6, 7]. The expected number of positrons per bunch crossing (BX) as a function of \(\xi\) spans over five orders of magnitude in the range shown in Figure 1.
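Restoring SI units, Eq. (1) is equivalent to \(\xi=eE_{L}/(m_{e}c\,\omega_{L})\); the short sketch below evaluates it for a given peak electric field, assuming for illustration a laser wavelength of 800 nm (not specified in the text above).

```python
import numpy as np
from scipy import constants as const

def xi_from_field(E_laser_Vm, wavelength_m=800e-9):
    """Laser intensity parameter xi = e E_L / (m_e c omega_L) of Eq. (1)."""
    omega = 2.0 * np.pi * const.c / wavelength_m
    return const.e * E_laser_Vm / (const.m_e * const.c * omega)

# Equivalently, xi = (m_e c^2 / hbar omega_L) * (E_L / epsilon_cr) with
# epsilon_cr = 1.32e18 V/m.  For E_L ~ 1e13 V/m and an 800 nm laser, xi ~ 2.5.
print(xi_from_field(1.0e13))
```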
The measurement of the positron rate will be performed by a dedicated set of detectors comprising a silicon pixel tracker and a calorimeter. The wide range of expected positron rates poses a significant challenge to event reconstruction, especially within the tracker, where the large number of energy deposits could lead to finding spurious tracks that do not correspond to a real particle. The most relevant tracking challenge for this work is to maintain a linear dependence of the number of reconstructed tracks as a function of the number of charged particles in the event up to very high particle multiplicities.
This work investigates the potential future use of gate-based quantum computers for pattern recognition in track reconstruction and compares the obtained performance to classical methods. Analogous studies have focused on track reconstruction in the proton-proton collision
Figure 1: Number of positrons per bunch crossing produced in \(e^{-}\)-laser collisions as a function of the laser field intensity parameter \(\xi\), for different values of the laser power. Based on Ref. [1], with additional simulated events.
environments of the Large Hadron Collider and its upgrades, by using quantum annealers [8, 9], quantum associative memories [10] or quantum graph neural networks [11]. A review of various quantum computing algorithms studied for charged particle tracking can be found in Ref. [12]. In this work, we present an update of our previous study of track reconstruction with quantum algorithms at LUXE [13, 14].
This paper is organised as follows. A brief characterisation of the current proposed detector layout and the data-taking environment are given in Section 2. The data sets used in this study are presented in Section 3, together with the dedicated simulation software. Section 4 presents the methodology used for the reconstruction of the simulated data. The results are discussed in Section 5, focusing first on classical simulations of quantum hardware and then presenting a set of studies performed on quantum hardware (ibm_nairobi). The summary and conclusion are given in Section 6, while an outlook on future developments and work is discussed in Section 7.
## 2 The LUXE experiment
This work focuses on the reconstruction of the electron-laser collisions. In this setup, the electron beam from the Eu.XFEL is guided to the interaction point (IP), where it collides with a laser beam. The experiment plans to start taking data with a 40 TW laser, which will later be upgraded to reach 350 TW. The electrons and positrons produced in the electron-laser interactions are deflected by a 0.95 T dipole magnet and then detected by a positron detection system, as shown in Figure 2. 1
Footnote 1: LUXE uses a right-handed coordinate system with its origin at the nominal interaction point and the \(z\)-axis along the beam line. The \(y\)-axis points upwards, and the \(x\)-axis points towards the positron detection system.
The outgoing positrons are detected using a silicon pixel tracking detector. The tracker consists of four layers, each comprising two \(\approx 27\) cm long staves placed next to each other, which overlap partially, as illustrated in the figure. The layers are spaced 10 cm away from each other along the beam axis. The average thickness of the staves is 0.357% of a radiation length. Each stave contains nine sensors, composed of \(512\times 1024\) pixels of size \(27\times 29\)\(\mu\)m\({}^{2}\). The pixel sensors have a detection efficiency above 99%, a noise hit rate much below \(10^{-5}\) and a spatial resolution of around 5 \(\mu\)m.
Figure 2: Schematic layout of the positron detection system in LUXE for the electron-laser setup. Adapted from Ref. [1]. The angle \(\theta\) represents the crossing angle of the Eu.XFEL and laser beams.
## 3 Simulated data
Monte Carlo simulated event samples are used to perform this study. The calculation for the electron-laser interaction processes was performed with the PTARMIGAN[15] Monte Carlo event generation software. The electron beam parameters were chosen as follows. The incoming electron energy \(\varepsilon_{e}\) is set to 16.5 GeV, the beam spot size to \(\sigma_{x}=\sigma_{y}=5\)\(\mu\)m, \(\sigma_{z}=24\)\(\mu\)m, and the normalised emittance to 1.4 mm\(\cdot\)mrad. The simulation of the laser assumes a 40 TW laser, an energy after compression of 1.2 J and a pulse length of 30 fs. The laser pulse is modelled as having a Gaussian profile both in the longitudinal and in the transverse direction. The laser spot waist, which for a Gaussian pulse corresponds to \(2\sigma\) in intensity, decreases with \(\xi\) and varies between 6 \(\mu\)m and 3 \(\mu\)m.
The particles produced in the electron-laser interactions are propagated through the dipole magnet and tracking detector using a custom fast simulation that was developed for this study. The fast simulation uses parameterised smearing functions to model the effects of multiple scattering and detector resolution. Furthermore, a simplified detector layout is considered. In this layout, the four detection layers are not split into two overlapping staves, but simply have a double length with no discontinuities.
To perform these studies, data sets corresponding to electron-laser interactions were generated with \(\xi\) values ranging from three to seven and a laser power of 40 TW. This corresponds to positron multiplicities ranging between \(1\times 10^{2}\) and \(7\times 10^{4}\). Figure 3 shows the resulting expected positron energy distribution for the three generated \(\xi\) values (left) and the number of hits/mm\({}^{2}\) in the first detector layer as a function of the \(x\) and \(y\) coordinates for \(\xi=7\) (right). The double-peaked structure visible in the \(xy\) plane reflects the initial positron momentum distribution along the \(y\)-axis at the interaction point.
## 4 Methodology
The starting points for the pattern recognition are either doublets or triplets, defined as sets of two or three hits in consecutive detector layers. A pre-selection is applied to the initial doublet or triplet candidates to reduce the combinatorics while keeping the efficiency as close as possible to 100% for the doublets and triplets matching a real positron. Doublets are formed first and are required to satisfy a pre-selection based on the ratio \(\delta x/x_{0}\), where \(\delta x\) is the difference of the \(x\) coordinates of the two hits composing the doublet, while \(x_{0}\) indicates the
Figure 3: Left: Positron energy distribution for different values of \(\xi\), normalised to unit area. Based on Ref. [1], using the data sets generated for this work. Right: Number of hits/mm\({}^{2}\) in the first detector layer as a function of the \(x\) and \(y\) coordinates for \(\xi=7\).
coordinate on the detector layer closest to the interaction point. A window of three standard deviations around the expected mean value of \(\delta x/x_{0}\) for true doublets, as determined in the simulation, is used for this selection. This requirement ensures that the particles come from the IP. Triplets are subsequently constructed by combining doublet candidates with a requirement on the maximum angle difference \(\delta\theta=\sqrt{\delta\theta_{xz}^{2}+\delta\theta_{yz}^{2}}\) of the doublet pairs. The maximum scattering threshold is chosen to be 1 mrad and was optimised taking into account multiple scattering with the detector material. Since triplets consist of three hits, they are formed either from the first to the third layer or from the second to the fourth layer.
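A simplified, brute-force version of this pre-selection is sketched below; the window parameters `mean_dx_x0` and `sigma_dx_x0` are taken from simulation, the helper names are illustrative, and the actual implementation is considerably more optimised.

```python
import numpy as np

def preselect_doublets(hits_a, hits_b, mean_dx_x0, sigma_dx_x0):
    """Keep doublets whose dx/x0 lies within 3 sigma of the simulated mean.

    hits_a, hits_b: (N, 3) arrays of (x, y, z) hits on two consecutive layers,
    with hits_a on the layer closer to the interaction point (x0 > 0 assumed).
    Returns index pairs (i, j) of the surviving doublets.
    """
    doublets = []
    for i, hit in enumerate(hits_a):
        dx_x0 = (hits_b[:, 0] - hit[0]) / hit[0]
        for j in np.where(np.abs(dx_x0 - mean_dx_x0) < 3.0 * sigma_dx_x0)[0]:
            doublets.append((i, int(j)))
    return doublets

def preselect_triplets(doublets_12, doublets_23, hits, max_dtheta=1.0e-3):
    """Combine doublets sharing their middle hit and keep triplets with
    delta-theta = sqrt(dtheta_xz^2 + dtheta_yz^2) below 1 mrad."""
    triplets = []
    for (i, j) in doublets_12:
        for (j2, k) in doublets_23:
            if j2 != j:
                continue
            a, b, c = hits[0][i], hits[1][j], hits[2][k]
            t_xz1 = np.arctan2(b[0] - a[0], b[2] - a[2])
            t_xz2 = np.arctan2(c[0] - b[0], c[2] - b[2])
            t_yz1 = np.arctan2(b[1] - a[1], b[2] - a[2])
            t_yz2 = np.arctan2(c[1] - b[1], c[2] - b[2])
            if np.hypot(t_xz2 - t_xz1, t_yz2 - t_yz1) < max_dtheta:
                triplets.append((i, j, k))
    return triplets
```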
Figure 4 shows the distributions of \(\delta x/x_{0}\) for doublets (left) and \(\delta\theta\) for pairs of doublets (right) originating from true positron tracks, shown separately for low-energy (\(E_{e^{+}}<3\) GeV) and high-energy positrons (\(E_{e^{+}}>3\) GeV), as well as the chosen thresholds. The distributions are obtained using \(\xi=7\), but are generally \(\xi\)-independent. The \(\delta x/x_{0}\) distribution shows a slight dependence on positron energy, while the triplet \(\delta\theta\) distribution demonstrates that the scattering is more pronounced for lower energy positrons. The resulting pre-selection efficiencies are shown in Figure 5 (left) for both doublet and triplet finding, in the case of electron-laser interaction for \(\xi=7\). The pre-selection requirements are found to be nearly fully efficient for the whole energy range, with a moderate efficiency loss, at the level of 16% for positron energies below 2 GeV, mostly due to multiple scattering with the detector material. Figure 5 (right) also shows the number of doublets and triplets passing the pre-selection criteria as a function of \(\xi\).
Three pattern recognition methods are employed and systematically compared to reconstruct tracks from the detector hits. The first method formulates the tracking problem as a quadratic unconstrained binary optimisation (QUBO), similar to the one used in Ref. [8], which is then processed with quantum algorithms. The second method uses a hybrid quantum-classical graph neural network approach [11], but is limited to specific scenarios compatible with the available devices. Finally, the results obtained with the quantum approaches are compared to an optimised classical approach based on a Kalman filter [16, 17], which is taken to be the reference for the state-of-the-art using no quantum computers.
Figure 4: Left: Distribution of doublet \(\delta x/x_{0}\) with red dashed lines indicating the range of the pre-selection. Right: Distribution of angle difference \(\delta\theta\) for the doublet pairs composing the triplets with a red dashed line indicating the upper limit allowed by the pre-selection.
### Quadratic unconstrained binary optimisation
In this approach, the pairs of triplet candidates that can be combined to form tracks are identified by solving a QUBO problem. The QUBO is expressed via the objective function
\[O=\sum_{i}^{N}\sum_{j<i}b_{ij}T_{i}T_{j}+\sum_{i=1}^{N}a_{i}T_{i}, \tag{2}\]
where \(T_{i}\) and \(T_{j}\) are triplets of hits and \(a_{i}\) and \(b_{ij}\) are real coefficients. The triplets \(T_{i}\) and \(T_{j}\) assume binary values. The solution of the QUBO determines whether each triplet is considered false and rejected, by being set to zero, or true and selected, by being set to one. The linear term of the QUBO weights the individual triplets by their quality, quantified by the coefficient \(a_{i}\). The \(a_{i}\) coefficient is set to the value of \(\delta\theta\) scaled to populate the \([-1;1]\) range. The quadratic term represents the interactions between triplet pairs, where the coefficient \(b_{ij}\) characterises their compatibility. The coefficient \(b_{ij}\) is computed from the doublets forming the two considered triplets. It is taken to be the norm of the sum of the standard deviations of the doublet angles in the \(xz\) and \(yz\) planes, translated and scaled to populate the \([-1;-0.9]\) range. If the two triplets are in conflict, the coefficient \(b_{ij}\) is set to one. If the triplets are not connected, it is set to zero.
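A schematic construction of these coefficients is sketched below; the function and argument names are illustrative, and only the target ranges quoted in the text are applied, not the exact translation and scaling used in the analysis.

```python
import numpy as np

def build_qubo(dtheta, connection_quality, conflicts):
    """Assemble schematic QUBO coefficients for Eq. (2).

    dtheta             : scattering angle of each triplet (linear term a_i)
    connection_quality : symmetric matrix of the combined angular spread of
                         compatible triplet pairs, np.nan where not connected
    conflicts          : iterable of (i, j) pairs sharing hits inconsistently
    """
    a = 2.0 * (dtheta - dtheta.min()) / (dtheta.max() - dtheta.min()) - 1.0
    b = np.zeros((len(dtheta), len(dtheta)))
    finite = np.isfinite(connection_quality)
    if finite.any():
        q = connection_quality
        qn = (q - np.nanmin(q)) / (np.nanmax(q) - np.nanmin(q))  # in [0, 1]
        b[finite] = -1.0 + 0.1 * qn[finite]                      # in [-1, -0.9]
    for (i, j) in conflicts:
        b[i, j] = b[j, i] = 1.0
    return a, b
```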
The QUBO in Eq. (2) can be mapped to an Ising Hamiltonian by mapping \(T_{i}\rightarrow(1+Z_{i})/2\), where \(Z_{i}\) is the third Pauli matrix. Minimising the QUBO is equivalent to finding the ground state of the Hamiltonian. The Variational Quantum Eigensolver (VQE) [18] method, a hybrid quantum-classical algorithm, was used to find the ground state. In this work, the data is processed using the VQE implementation available in the Qiskit [19] library. Most results rely on classical simulations of quantum circuits, where no sources of noise or decoherence are included, and a simple ansatz with \(R_{Y}\) gates and a linear CNOT entangler is chosen, as shown in Figure 6. An ansatz with CNOTs between all possible pairs and a single circuit repetition was found to lead to results compatible within statistical uncertainties, but was discarded for simplicity. The selected optimiser is the Nakanishi-Fujii-Todo (NFT) [20] algorithm. The ansatz and optimiser were selected as those leading to the highest track reconstruction efficiency in previous work [14].
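A minimal sketch of this mapping and of the ansatz, assuming a recent Qiskit installation, is given below; `a` and `b` are the QUBO coefficients of Eq. (2) (with `b` lower triangular), and the number of qubits and repetitions are illustrative.

```python
import numpy as np
from qiskit.circuit.library import TwoLocal
from qiskit.quantum_info import SparsePauliOp

def qubo_to_ising(a, b):
    """Map the QUBO of Eq. (2) to an Ising Hamiltonian via T_i -> (1 + Z_i)/2.

    a : linear coefficients; b : lower-triangular quadratic coefficients.
    Returns (operator, offset); the ground state of the operator (plus the
    constant offset) encodes the optimal selection, with Z_i = +1 <-> T_i = 1.
    """
    n = len(a)
    linear = np.asarray(a, dtype=float) / 2.0
    offset = float(np.sum(a)) / 2.0
    paulis, coeffs = [], []
    for i in range(n):
        for j in range(i):
            if b[i][j] == 0.0:
                continue
            offset += b[i][j] / 4.0
            linear[i] += b[i][j] / 4.0
            linear[j] += b[i][j] / 4.0
            label = ["I"] * n
            label[i], label[j] = "Z", "Z"
            paulis.append("".join(reversed(label)))   # qubit 0 is rightmost
            coeffs.append(b[i][j] / 4.0)
    for i in range(n):
        label = ["I"] * n
        label[i] = "Z"
        paulis.append("".join(reversed(label)))
        coeffs.append(linear[i])
    return SparsePauliOp(paulis, coeffs), offset

# R_Y rotation layers with a linear CNOT entangler, as in Figure 6
ansatz = TwoLocal(7, rotation_blocks="ry", entanglement_blocks="cx",
                  entanglement="linear", reps=1)
```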
The number of qubits required to represent the tracking problem as a QUBO is determined by the number of triplet candidates. Due to the limited number of qubits available on the current quantum devices, the QUBO in this work is partitioned into QUBOs of smaller size (referred
Figure 5: Left: Doublet and triplet-finding efficiency as a function of the positron true energy. The combined efficiency is also shown. Right: Doublet and triplet multiplicities as a function of \(\xi\) (lower \(x\)-axis), corresponding to the average number of positrons (upper \(x\)-axis).
to as sub-QUBOs) to be solved iteratively. For small enough sub-QUBO sizes, such as the size 7 used in this work, an exact solution using matrix diagonalisation is possible and is used as a benchmark.
Figure 7 summarises the QUBO solving process. At the beginning of the processing, all triplet candidates are set to 1. The splitting into sub-QUBOs is done by extracting the sub-QUBO matrices of the desired size, by picking triplets in order of their impact. The impact is defined as the change in the value of the objective function when \(T_{i}\to 1-T_{i}\). Each triplet is assigned an additional constant term representing the sum of all interactions with triplets outside of the sub-QUBO to retain sensitivity to the connections outside of each sub-QUBO when computing the value of the objective function. After the sub-QUBOs are solved, the solution is combined. These steps are repeated for a number of iterations. The triplets selected by the QUBO minimisation are retained and matched to form track candidates.
Alternative algorithms for finding the optimal QUBO solution, such as the Quantum Approximate Optimisation Algorithm (QAOA) [21], were briefly investi
Figure 6: Layout of the variational quantum circuit using the ansatz with \(R_{Y}\) gates and a linear CNOT entangling pattern. For simplicity, only four qubits are shown.
Figure 7: Diagram illustrating the QUBO solving procedure.
lead to significantly worse performance. A dedicated optimisation and characterisation of the results of such alternative algorithms is left to future work.
### Quantum graph neural network
This approach is based on a graph neural network (GNN) [22, 23] that consists of both classical neural network layers and quantum circuits. The graph is constructed from doublets, where the hits are nodes and the connections between the hits are edges. All nodes of consecutive layers are connected and only the ones that satisfy the pre-selection criteria are kept. The quantum graph neural network (QGNN) model follows the implementation of Ref. [11] and consists of three networks. First, the _InputNet_ takes the input node features, i.e. the three spatial coordinates, and produces hidden node features. For this purpose, a single fully connected neural network layer that has 10 neurons with a _tanh_ activation function is used. Second, the _EdgeNet_ takes all connected node pairs as input and produces a scalar edge feature for each of them using a _sigmoid_ activation function. This will later be the prediction score of the model for each doublet, as this model is essentially a segment classifier. _Circuit 10_ with two layers and 10 qubits is selected for this task based on previous work [11]. Each layer of this circuit uses \(R_{Y}\) gates and linear CNOT entanglers between all possible pairs of qubits. Third, the _NodeNet_ considers each node and its connecting nodes to update the hidden node features. The architecture of the _NodeNet_ is similar to _EdgeNet_, but it uses the _tanh_ activation function for the last layer, as the _NodeNet_ is an intermediate step, and _sigmoid_ activation functions are known to lead to vanishing gradients.
The quantum graph neural network (QGNN) model first starts with the _InputNet_. Then, the _EdgeNet_ and the _NodeNet_ are applied alternately four times to allow the node features to be updated using farther nodes, as determined in a scan of the optimal model parameters. At the end, the _EdgeNet_ is applied one last time to obtain the predictions for each doublet connection. Finally, the edges are discarded if the prediction value is less than a fixed threshold (chosen to be 0.5 in our simulations) and the rest are retained and used to form track candidates.
### Combinatorial Kalman filter
A tracking algorithm based on A Common Tracking Software (ACTS) toolkit [24] with the combinatorial Kalman Filter (CKF) technique for track finding and fitting is used as a benchmark. In this classical tracking method, track finding starts from seeds, which are the triplets formed from the first three detector layers. To avoid a combinatorial growth in the number of seeds at high particle density, further constraints are placed on seeds sharing the same hits by prioritising the better-aligned seeds. An initial estimate of track parameters is obtained from the seed and is used to predict the next hit and is updated progressively, with the measurement search performed at the same time as the fit.
### Final track selection
A final step in the track reconstruction is common to all considered methods. Track candidates are required to have four hits and, as explained in the previous subsections, can be found with the QUBO approach that combines triplets into quadruplets (see Section 4.1), by employing the QGNN approach that combines doublets into quadruplets (see Section 4.2), or by using the classical CKF method (see Section 4.3). After finding these track candidates, the final tracks now have to be selected among these candidates, using a final step explained in the following.
The track candidates are fitted to straight lines with the least-square method, as the particles propagate through the tracking detector in absence of a magnetic field. A track candidate is considered matched if it has at least three out of four hits matched to the same particle. Figure 8 (left) shows the duplication rate, i.e. the fraction of matched particles that are matched to more than one track candidate, as a function of \(\xi\). The substantially larger duplication rate
of the CKF technique is due to this method being a local approach with no knowledge of the overall BX, unlike the QUBO and QGNN-based approaches.
To resolve the overlaps between the track candidates and to reject fake tracks, an ambiguity resolution step is performed. The track candidates are scored based on the \(\chi^{2}\)/ndf of the track fit and the number of shared hits with other track candidates. The track candidates with the most shared hits are evaluated first. They are compared to the other track candidates sharing the same hits, and the ones with worse \(\chi^{2}\)/ndf of the track fit are rejected. The procedure is repeated until all remaining tracks have up to one shared hit. Figure 8 (right) shows the effect of the ambiguity resolution on matched and fake tracks for a QUBO solved using matrix diagonalisation in a BX with \(\xi=7\). This scenario was selected to show the effect for the highest particle multiplicity considered in this work.
## 5 Results
### Studies with classical hardware
The results presented in the following are obtained on classical hardware, including classical simulations of quantum hardware. A set of studies performed on quantum hardware (ibm_nairobi) will be presented in Section 5.2. The performance of various tracking methods is assessed using the efficiency and the fake rate as metrics, which are computed on the final set of tracks. The efficiency and fake rate are defined as
\[\text{Efficiency}=\frac{N_{\text{tracks}}^{\text{matched}}}{N_{\text{tracks}}^{ \text{generated}}}\qquad\text{and}\qquad\text{Fake rate}=\frac{N_{\text{tracks}}^{\text{fake}}}{N_{\text{tracks}}^{\text{reconstructed}}}\,. \tag{3}\]
Figure 9 shows the average track reconstruction efficiency (left) and fake rate (right) as a function of the laser field intensity parameter \(\xi\) for all tested approaches: QUBO-based tracking
Figure 8: Left: Duplication rate as a function of \(\xi\). The results from the exact matrix diagonalisation are shown as a line to help the comparison between the methods. The results based on hybrid quantum-classical methods rely on classical simulations of quantum devices. The decrease in duplication rate for CKF for \(\xi>5\) is due to the limit set on seeds with shared hits and an overall decrease in tracking efficiency in this scenario. See Figure 5 (right) for the number of positrons corresponding to each \(\xi\). Right: Distribution of the \(\chi^{2}\) divided by the number of degrees of freedom for fitted track candidates found using the QUBO approach with exact matrix diagonalisation, shown separately for matched (blue) and fake (red) track candidates. The dashed lines represent the track candidates from the QUBO solution, while the solid lines represent the selected tracks after the resolution of reconstruction ambiguities.
(both with the approximate solution obtained with VQE and the exact solution via matrix diagonalisation), QGNN-based tracking, and conventional CKF-based tracking.
The performance of CKF-based tracking is used as a state-of-the-art benchmark. The excellent performance of the classical method deteriorates with \(\xi\), because of the increasing hit density. The results using the exact matrix diagonalisation to solve the QUBO are well aligned with the CKF algorithm and achieve a higher efficiency by 1-2% for large values of \(\xi\) at the cost of an increase in the fake rate of approximately a factor of two. The rate of purely combinatorial tracks, i.e. tracks reconstructed from four hits belonging to four distinct positrons, accounts for about 50% of the total fake rate, independently of the reconstruction algorithm considered. The results for VQE are in excellent agreement, within the statistical uncertainties, with those from the matrix diagonalisation.
The results for the QGNN-based tracking are shown up to \(\xi=4\), above which simulating the quantum circuits becomes computationally prohibitive with the currently available resources. The reconstruction efficiency is found to be compatible with the other methods, with a substantially higher fake rate. Further work aimed at optimising the selection of the _EdgeNet_ predictions could mitigate this effect. The QGNN results were validated by implementing a classical GNN [22, 23] with the same architecture, but with 128 node hidden features, finding excellent agreement. For \(\xi=3\), two values of QGNN efficiency are shown. The empty triangle is the result based on 100 BXs, i.e. the same number of BX used to evaluate the performance of the CKF and QUBO-based methods, using 90% of the data for the training of the model and 10% for the inference. Because of the modest particle multiplicity expected at \(\xi=3\), the number of true tracks used in the QGNN training is too small to obtain an optimal result. The full triangles show the efficiency obtained with the QGNN training based on data generated with \(\xi=4\), which corresponds to a substantially larger set of true tracks, restoring a higher efficiency.
The dependency of the track reconstruction efficiency on the GNN-based approaches was further studied in \(e^{-}\)-laser collisions with \(\xi=3\), comparing the results obtained with the QGNN and with a classical GNN for different numbers of true tracks used in the training. The findings
Figure 9: Left: Track reconstruction efficiency as a function of the field intensity parameter \(\xi\). Right: Track fake rate as a function of \(\xi\). The results from the exact matrix diagonalisation are shown as a line to help the comparison between the methods. The empty triangles show the results of a QGNN training limited to 90 BXs. See Figure 5 (right) for the number of positrons corresponding to each \(\xi\). The results based on hybrid quantum-classical methods rely on classical simulations of quantum devices.
are presented in Figure 10. The efficiency results for the largest track multiplicity of both GNNs are obtained performing the training on events with a larger value of \(\xi=5\) for the classical GNN and \(\xi=4\) for the QGNN. All other data points are obtained by increasing the number of BXs considered at \(\xi=3\). While it is not expected for the QGNN and the classical GNN to perfectly overlap in performance because of the slightly different model architectures, the results show compatible trends when considering additional data for a fixed value of \(\xi\) and using models trained on BXs with larger \(\xi\).
Figure 11 shows the track reconstruction efficiency (left) and fake rate (right) as a function of the true positron energy for the case of \(\xi=5\), for the CKF and QUBO-based methods. The methods show similar behaviours, with a decrease in the region corresponding the highest detector occupancy. Because of effects coming from the propagation through the magnetic field and from the longitudinal size of the interaction region, the maximum occupancy shown in Figure 3, does not correspond to the maximum of the positron energy distribution. The reduced efficiency of the QUBO-based methods for positrons with an energy below 3 GeV is dominated by the pre-selection efficiency shown in Figure 5 (left).
The average energy resolution of the reconstructed tracks was also compared between the different methods. The track energy resolution was found to be 0.5% and independent of the reconstruction method within the statistical uncertainty of the analysed data set.
Figure 10: Track reconstruction efficiency as a function of the number of true tracks used in the training of a classical GNN (green squares) and the QGNN (brown triangles) for \(e^{-}\)-laser collisions with \(\xi=3\). The QGNN results rely on classical simulations of quantum hardware.
### Studies with quantum hardware
This section presents a detailed assessment of the performance of the VQE algorithm on QUBOs of size seven, chosen to be the same as the sub-QUBO size used for the results based on classically simulated VQE in Section 5.
A QUBO representing two nearby particles, leading to a total of seven triplets, was selected for this test. The VQE method was applied first in an exact classical simulation assuming an ideal quantum device with shot noise only, then in a classical simulation involving a noise model extracted from a snapshot of the measured noise of the ibm_nairobi device (fake_nairobi) and finally using real quantum hardware (ibm_nairobi).
For each of these scenarios, 512 circuit evaluations (shots) were considered. When performing the computations with fake_nairobi and ibm_nairobi, a measurement error mitigation based on the generation of a calibration matrix was used [25, 26]. The readout error probabilities were calibrated every 30 function evaluations of the optimiser.
Figure 12 shows the probabilities of the returned results for these three scenarios, where the correct binary solution 0001111 is also the most probable.
## 6 Conclusion
This work investigated the use of hybrid quantum-classical algorithms for particle track reconstruction. Focusing on a VQE approach for a QUBO formulation of track reconstruction and a QGNN approach, the performance of these hybrid quantum-classical methods was compared to results obtained from a state-of-the-art classical tracking method.
In order to produce these results, a standalone fast simulation of the LUXE tracking detector was put in place as well as a software framework to reconstruct tracks up to the maximum number of positrons expected during the data taking with a laser power of 40 TW.
The results were analysed in terms of reconstruction efficiency, fake rate and energy resolution. Hybrid quantum-classical algorithms were found to lead to competitive results when compared to classical algorithms. For large particle multiplicities, a QUBO approach based on VQE using a classical simulation of a quantum device was found to have moderately higher efficiency than classical tracking, but with a significant increase in the fake rate. It was not possible, due to limitations in the computing resources, to evaluate the performance of the approach based on QGNNs beyond a few thousand charged particles.
Figure 11: Left: Track reconstruction efficiency as a function of the positron true energy for \(\xi=5\). Right: Track fake rate as a function of the measured track energy for \(\xi=5\). The results based on hybrid quantum-classical methods rely on classical simulations of quantum devices. On average, about 10500 positrons are expected to be produced in a BX with \(\xi=5\).
## 7 Outlook
In this work, it was observed that the impact-based processing order leads to a significant fraction of trivially-solvable sub-QUBOs with no interacting triplets. Future work will be aimed at developing alternative algorithms for the sorting of the binary vector representing the triplet candidates and for the splitting of the problem into sub-QUBOs. To further reduce the computation time and the rate of fake tracks reconstructed with this method, future work will focus on optimising the scaling ranges for the \(a_{i}\) and \(b_{ij}\) coefficients.
While the initial study of the VQE performance on real quantum hardware (ibm_nairobi) yielded promising results, a more systematic study of hybrid quantum-classical algorithms using NISQ-era devices will be performed in future work.
Finally, the choice of the optimiser used for VQE has a significant impact on the probability to find the true minimum of the cost function, and a careful optimisation will be required when considering larger sub-QUBO sizes.
## Acknowledgments
The authors thank the LUXE collaboration for fostering this work. The work by B.H., A.K., F.M., D.S. and Y.Y. were in part funded by the Helmholtz Association - "Innopool Project LUXE-QED". A.C., K.J. and C.T. are supported in part by the Helmholtz Association - "Innopool Project Variational Quantum Computer Simulations (VQCS)". L.F. is partially supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C\({}^{2}\)QA) under contract number DE-SC0012704, by the DOE QuantiSED Consortium under subcontract number 675352, by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, [http://iaifi.org/](http://iaifi.org/)), and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under grant contract numbers DE-SC0011090 and DE-SC0021006. S.K. acknowledges financial support from the Cyprus Research and Innovation Foundation under project "Future
Figure 12: Distribution of the VQE results for a test QUBO composed of seven triplets. The blue bars indicate the results obtained from 512 shots on the ibm_nairobi quantum computer, compared with a realistic (green bars) and an ideal (blue bars) classical simulation of the same system. The results from the realistic classical simulation and from ibm_nairobi use a measurement error mitigation technique based on the generation of a calibration matrix [25, 26].
proofing Scientific Applications for the Supercomputers of Tomorrow (FAST)", contract no. COMPLEMENTARY/0916/0048. This work is supported with funds from the Ministry of Science, Research and Culture of the State of Brandenburg within the Centre for Quantum Technologies and Applications (CQTA). This work is funded within the framework of QUEST by the European Union's Horizon Europe Framework Programme (HORIZON) under the ERA Chair scheme with grant agreement No. 101087126. This work has benefited from computing services provided by the German National Analysis Facility (NAF).
|
2304.11483
|
The Logic of Prefixes and Suffixes is Elementary under Homogeneity
|
In this paper, we study the finite satisfiability problem for the logic BE
under the homogeneity assumption. BE is the cornerstone of Halpern and Shoham's
interval temporal logic, and features modal operators corresponding to the
prefix (a.k.a. "Begins") and suffix (a.k.a. "Ends") relations on intervals. In
terms of complexity, BE lies in between the "Chop logic C", whose
satisfiability problem is known to be non-elementary, and the PSPACE-complete
interval logic D of the sub-interval (a.k.a. "During") relation. BE was shown
to be EXPSPACE-hard, and the only known satisfiability procedure is primitive
recursive, but not elementary. Our contribution consists of tightening the
complexity bounds of the satisfiability problem for BE, by proving it to be
EXPSPACE-complete. We do so by devising an equi-satisfiable normal form with
boundedly many nested modalities. The normalization technique resembles Scott's
quantifier elimination, but it turns out to be much more involved due to the
limitations enforced by the homogeneity assumption.
|
Dario Della Monica, Angelo Montanari, Gabriele Puppis, Pietro Sala
|
2023-04-22T21:40:05Z
|
http://arxiv.org/abs/2304.11483v1
|
# The Logic of Prefixes and Suffixes
###### Abstract
In this paper, we study the finite satisfiability problem for the logic \(\mathsf{BE}\) under the homogeneity assumption. \(\mathsf{BE}\) is the cornerstone of Halpern and Shoham's interval temporal logic, and features modal operators corresponding to the prefix (a.k.a. "Begins") and suffix (a.k.a. "Ends") relations on intervals. In terms of complexity, \(\mathsf{BE}\) lies in between the "Chop" logic \(\mathsf{C}\), whose satisfiability problem is known to be non-elementary, and the \(\mathsf{PSpace}\)-complete interval logic \(\mathsf{D}\) of the sub-interval (a.k.a. "During") relation. \(\mathsf{BE}\) was shown to be \(\mathsf{ExpSpace}\)-hard, and the only known satisfiability procedure is primitive recursive, but not elementary. Our contribution consists of tightening the complexity bounds of the satisfiability problem for \(\mathsf{BE}\), by proving it to be \(\mathsf{ExpSpace}\)-complete. We do so by devising an equi-satisfiable normal form with boundedly many nested modalities. The normalization technique resembles Scott's quantifier elimination, but it turns out to be much more involved due to the limitations enforced by the homogeneity assumption.
## 1 Introduction
In this paper, we study the computational complexity of the satisfiability problem for the logic \(\mathsf{BE}\) of the prefix and suffix interval relations. The considered interpretation setting is the one with intervals over finite linear orders, under the homogeneity assumption (see below). The logic \(\mathsf{BE}\) is at the core of the galaxy of interval temporal logics [9] and has interesting connections with standard point-based temporal logics [2]. In general, formulas of interval temporal logics can express properties of _pairs_ of time points, rather than properties of single time points, and are evaluated as sets of such pairs, that is, as binary relations on points. They are very expressive in comparison to point-based ones, and it does not come as a surprise that, in general, there is no reduction of their satisfiability problem to satisfiability of classical monadic second-order logic.
The _logic_\(\mathsf{BE}\) has two (unary) modalities, \(\langle B\rangle\) and \(\langle E\rangle\), that quantify over prefixes and suffixes of the current interval, respectively. These modalities can be viewed as the logical counterparts of Allen's binary relations _Begins_ and _Ends_[1]. In particular, the logic \(\mathsf{BE}\) can be considered as a fragment of Halpern and Shoham's interval temporal logic [9], denoted \(\mathsf{HS}\), which features one modal operator for each of the twelve non-trivial Allen's relations.
The satisfiability problem for \(\mathsf{BE}\) turns out to be _undecidable_ over all relevant classes of interval structures [7, 10]. One can however escape this bleak landscape by constraining the semantics, in particular, the interpretation of the propositional letters. An interesting example is given by the _homogeneity_ assumption, according to which a propositional letter holds at an interval if and only if it holds at all of its points. In other words, according to the homogeneous semantics, the labelling of an arbitrary interval in a model is uniquely determined by those of the singleton intervals contained in it.
An advantage of the homogeneity assumption is that it makes it possible to define a natural interpretation of interval logics over Kripke structures. For example, this comes in handy when studying the _model-checking problem_, which is defined as the problem of deciding whether a given formula is valid over all (homogeneous) interval structures generated by a given Kripke structure. As such, the problem can be seen as a variant of the classical validity/satisfiability problem, and many decidability and complexity results can be transferred from one problem to another. In [12] it was shown that the model-checking and satisfiability problems for BE, and in fact for full HS logic, become decidable when one restricts to _homogeneous_ interval structures.
Despite its simple syntax and the homogeneity assumption, the logic BE turns out to be quite expressive and succinct. In [2], Bozzelli et al. have shown that, when interpreted over finite words, LTL (Linear Temporal Logic) and BE, under homogeneity, define the same class of star-free regular languages, but with the latter formalism being at least exponentially more succinct than the former. This is also reflected in the complexity of the satisfiability problem for BE, which was shown to be ExpSpace-hard [3, Theorem 3.1].1 On the other hand, the only known decision procedure [12] for satisfiability of BE formulas is basically the one for full HS, and is not elementary.
Footnote 1: In fact, the cited result focuses on the model-checking problem for BE, which takes as input, not only a formula, but also a Kripke structure. It happens that the Kripke structure used in the proof of the ExpSpace lowerbound generates every possible homogeneous interval structure, and hence the result can be immediately transferred to the validity/satisfiability problem for BE.
It is also worth contrasting the expressiveness and complexity of BE, under homogeneity, with those of two close relatives of it: the Chop logic C [16] and the logic D of the sub-interval relation [4]. The _logic_ C has a binary modality (\(C\)) that allows one to split the current interval in two parts and predicate separately on them. The _logic_ D has a unary modality (\(D\)) that allows one to predicate about sub-intervals of the current interval. It is easy to see that, in terms of expressiveness, BE lies in between D and C, in the sense that modality (\(D\)) can be defined in BE, i.e., \(\left\langle D\right\rangle\varphi\) is equivalent to \(\left\langle B\right\rangle\left\langle E\right\rangle\varphi\), and modalities (\(B\)) and \(\left\langle E\right\rangle\) can in turn be defined in C, e.g., \(\left\langle B\right\rangle\varphi\) is equivalent to \(\varphi\left\langle C\right\rangle\text{true}\). Under the homogeneity assumption, the satisfiability problem for C is non-elementarily decidable, precisely, tower-complete, in view of the existence of straightforward reductions to and from language-emptiness of generalized star-free regular expressions [13, 15], while the satisfiability problem for D was shown to be PSpace-complete by a suitable contraction method [4]. It is also worth pointing out that if the homogeneity assumption is removed, the satisfiability problem for D becomes undecidable [11].
Based on the observations above, it is crucial to close the gap between the complexity lowerbound and upperbound of the satisfiability problem for BE. Significant effort has been invested in recent years towards both raising the ExpSpace lowerbound, e.g. using variants of Stockmeyer's counters [15], and developing elementary satisfiability procedures. Despite these efforts, the complexity gap remained unchanged and proved to be an intriguing challenge. The special status of BE is witnessed by the fact that the many results about the complexity of the satisfiability and/or model-checking problems for proper fragments of HS, under the homogeneity assumption, concern logics that include neither modality (\(B\)) nor modality \(\left\langle E\right\rangle\) or feature only one of them (an up-to-date picture can be found in [5, 6]).
In this paper, we manage to prove that the satisfiability problem for the logic BE, under homogeneity, is elementarily decidable, and precisely ExpSpace-complete. This result is established using a rather unexpected normalization technique, which consists of transforming an arbitrary BE formula into an equi-satisfiable one with boundedly many nested modalities. Specifically, we will show that one can compute, in polynomial time, normalized formulas with nesting depth of modalities at most 4, and with at most 2 alternations between universal and existential modalities. The transformation of BE formulas into normalized ones can be also viewed as a quantifier elimination technique a-la Scott [14]. In this perspective, however, the transformation has to deal with an increased difficulty: due to the homogeneity assumption, the elements over which we predicate cannot be labelled in an arbitrary way. In view of this difficulty, it is quite surprising that an equi-satisfiable normalized formula can be computed in polynomial time from any given arbitrary BE formula.
The rest of the paper is organized as follows. In Section 2, we introduce the logic BE and we point out the relevant implications of the homogeneity assumption. In Section 3, we define the transformation of BE formulas into normalized ones. In Section 4, we derive an optimal satisfiability procedure and analyse its complexity. Conclusions provide an assessment of the work done and outline future research directions. For reader convenience, technical terms and notation in the electronic version of the paper are linked to their definitions, which can then be accessed with a mouse click.
## 2 Preliminaries
Let the _time domain_ be a finite prefix of the natural numbers \((N,<)\). Intervals over \(N\) are denoted by \([x,y]\), for \(x,y\in N\) and \(x\leq y\), and the set of all intervals over \(N\) is denoted \(\mathbb{I}(N)\). We let \(<_{B}\) (resp., \(<_{E}\)) be the proper _prefix_ (resp., _suffix_) relation on intervals, defined by \(J<_{B}I\) if and only if \(\min(I)=\min(J)\leq\max(J)<\max(I)\) (resp., \(J<_{E}I\) if and only if \(\min(I)<\min(J)\leq\max(J)=\max(I)\)).
Formulas of the _log_\(\mathsf{BE}\) are constructed starting from propositional letters belonging to a finite non-empty set \(\Sigma\), called _signature_, using classical Boolean connectives and modal operators. The latter operators are used to quantify over prefixes and suffixes of the current interval. Formally, \(\mathsf{BE}\) formulas satisfy the following grammar:
\[\varphi\ \ :=\ \ p\ \ (\mathrm{for}\ p\in\Sigma)\ \ |\ \neg\varphi\ \mid\ \varphi\vee\varphi\ \mid\ \langle B\rangle\,\varphi\ \mid\ \langle E\rangle\,\varphi.\]
Semantics is given in terms of an interval structure \(\mathcal{S}\) and one of its intervals \(I\). Formally, an _interval structure_ over a signature \(\Sigma\) is a pair \(\mathcal{S}=(\mathbb{I}(N),\sigma)\), where \(\sigma:\mathbb{I}(N)\to\wp(\Sigma)\) is a labelling of intervals by subsets of \(\Sigma\). Whether a \(\mathsf{BE}\) formula \(\varphi\)_holds at_ an interval \(I\) of \(\mathcal{S}\), denoted \(\mathcal{S},I\vDash\varphi\), is determined by the following rules:
* \(\mathcal{S},I\vDash p\) if \(p\in\sigma(I)\);
* \(\mathcal{S},I\vDash\neg\varphi\) if \(\mathcal{S},I\not\vDash\varphi\);
* \(\mathcal{S},I\vDash\varphi_{1}\vee\varphi_{2}\) if \(\mathcal{S},I\vDash\varphi_{1}\) or \(\mathcal{S},I\vDash\varphi_{2}\);
* \(\mathcal{S},I\vDash(B)\varphi\) if \(\mathcal{S},J\vDash\varphi\) for some \(J<_{B}I\);
* \(\mathcal{S},I\vDash(E)\varphi\) if \(\mathcal{S},J\vDash\varphi\) for some \(J<_{E}I\).
A formula is _valid_ if it holds at every interval of every interval structure; similarly, it is _satisfiable_ if it holds at some interval of some interval structure. Two formulas \(\varphi\) and \(\varphi^{\prime}\) are _equivalent_ if for every interval structure \(\mathcal{S}\) and every interval \(I\) in it, \(\mathcal{S},I\vDash\varphi\) if \(\mathcal{S},I\vDash\varphi^{\prime}\). They are _qui-satisfiable_ if either they are both satisfiable or none of them is. The notions of validity, satisfiability, and equivalence can be relativized to a specific class of interval structures (possibly even to a single interval structure). As an example, we say that a formula \(\varphi\) is _valid over_ a class \(\mathscr{C}\) of interval structures if \(\mathcal{S},I\vDash\varphi\) for all \(\mathcal{S}\in\mathscr{C}\) and all \(I\in\mathcal{S}\). In the particular case where \(\mathscr{C}\) contains a single interval structure \(\mathcal{S}\), we will say that a formula \(\varphi\) is _valid over_\(\mathcal{S}\) if \(\mathcal{S},I\vDash\varphi\) for all \(I\in\mathcal{S}\).
It is possible to add syntactic sugar to the logic \(\mathsf{BE}\). As an example, we will often use shorthands like \(\varphi_{1}\wedge\varphi_{2}=\neg(\neg\varphi_{1}\vee\neg\varphi_{2})\), false \(=p\wedge\neg p\) (for any \(p\in\Sigma\)), true \(=\neg\)false, and \([X]\varphi=\neg(X)\neg\varphi\), for \(X\in\{B,E\}\). Some other useful shorthands are \(\pi=[B]\)false, which constrains the interval where it is evaluated to be a singleton, and \([G]\varphi=\varphi\ \wedge\ [B]\,\varphi\ \wedge\ [E]\,\varphi\ \wedge\ [B]\,[E]\,\varphi\), which constrains all sub-intervals (including the current interval, its proper prefixes, and its proper suffixes) to satisfy \(\varphi\). The shorthands \(\pi\) and \([G]\) can be viewed as derived nullary and unary modal operators, respectively, and can be added as syntactic sugar to \(\mathsf{BE}\).
Homogeneity assumption.We recall from [10, 12] that the satisfiability problems for the logic \(\mathsf{BE}\) is undecidable, unless one restricts to homogeneous interval structures. An interval structure \(\mathcal{S}=(\mathbb{I}(N),\sigma)\) is _homogeneous_ if its labelling satisfies the condition \(\sigma(I)=\bigcap_{\mathsf{I}\in I}\sigma([x,x])\) for all \(I\in\mathbb{I}(N)\). Intuitively, this means that the labelling \(\sigma\) is uniquely determined by its restriction to singleton intervals. Let us take a closer look at the implications of homogeneity.
First of all, we have that every formula \(\langle B\rangle(p_{1}\wedge p_{2})\) is equivalent to \(\langle B\rangle\,p_{1}\wedge\langle B\rangle\,p_{2}\), and similarly for \((E)\). Note, however, that homogeneity does not imply similar properties for arbitrary formulas \(\varphi_{1},\varphi_{2}\) replacing the propositional letters \(p_{1},p_{2}\). As an example, the formulas \(\langle B\rangle(p\wedge\neg p)\) and \((\langle B\rangle\,p)\wedge(\langle B\rangle\neg p)\) are not equivalent.
Homogeneity can also be exploited to efficiently rewrite any \(\mathsf{BE}\) formula into an equivalent one where every occurrence of a propositional letter is conjoined with \(\pi\). Based on this observation, we introduce the following mild normal form:
**Definition 1**.: _A \(\mathsf{BE}\) formula \(\psi\) is in homogeneous normal form if every occurrence of a propositional letter \(p\) in \(\psi\) appears inside the subformula \(\pi\wedge p\)._
Basically, the homogeneous normal form restricts propositional letters to be only evaluated at singleton intervals. As an example, the formula \((\pi\wedge q)\vee\langle B\rangle(\pi\wedge\neg(\pi\wedge p))\) is in homogeneous normal form and holds at an interval \(I\) iff \(I\) consists of a single point labelled by \(q\) or the left endpoint of \(I\) is not labelled by \(p\).
**Proposition 2**.: _One can transform in linear time any formula \(\psi\) into one in homogeneous normal form that is equivalent to \(\psi\) when interpreted over homogeneous interval structures._
Proof.: It suffices to replace every occurrence of a propositional letter \(p\) in \(\psi\) by the formula \(\mathtt{everywhere}(p)=(\pi\wedge p)\,\vee\,\big{(}\,(B)(\pi\wedge p)\,\wedge\,(E) (\pi\wedge p)\,\wedge\,[B](\pi\vee(E)(\pi\wedge p))\big{)}\). The resulting formula is equivalent to \(\psi\) since, over homogeneous interval structures, \(p\) is equivalent to \(\mathtt{everywhere}(p)\).
We denote by \(\mathtt{BE}_{\pi}\) the fragment of logic \(\mathtt{BE}\) that contains only formulas in homogeneous normal form. _From this point forward, we will exclusively work with \(\mathtt{BE}_{\pi}\) formulas, with the understanding that this assumption may occasionally go unstated. Accordingly, we will treat (sub)formulas of the form \(\pi\wedge p\) as atomic._
## 3 A bounded-nesting normal form for \(\mathtt{BE}_{\pi}\)
In this section, we describe a transformation of arbitrary \(\mathtt{BE}_{\pi}\) formulas into equi-satisfiable ones with boundedly many nested modalities. The transformation is somehow reminiscent of the so-called Scott normal form for the two-variable fragment of first-order logic [14], since it results in a formula, over an extended set of propositional letters, that is satisfiable if and only if the original formula was. The increased difficulty here is that the valuation of the new propositional letters emerging from the transformation must satisfy the homogeneity assumption. This is to say that we cannot identify intervals satisfying a certain (sub)formula \(\varphi\) by labelling them with a fresh propositional letter \(q_{\varphi}\). Rather, we will identify these intervals by appropriately correlating fresh labels assigned to their endpoints. Our transformation will exploit in a crucial way the fact that, under homogeneity, valuations of formulas at two overlapping intervals have "less degrees of freedom" than valuations of the same formulas at disjoint intervals.
**Definition 3**.: _The modal depth (or simply depth) of a \(\mathtt{BE}_{\pi}\) formula is the maximum number of nested modal operators \(\langle B\rangle\) and \(\langle E\rangle\) in it, not counting those defining the operator \(\pi\). A \(\mathtt{BE}_{\pi}\) formula is in shallow normal form if it is of the form \(\psi\,\wedge\,[G]\,\xi\), where both \(\psi\) and \(\xi\) have depth at most \(2\)._
Concerning the above definition, we recall that \([G]\,\xi\) is a shorthand for \(\xi\,\wedge\,[B]\,\xi\,\wedge\,[E]\,\xi\,\wedge\,[B]\,[E]\,\xi\), so a formula in shallow normal form has depth at most \(4\). However, not all depth-\(4\) formulas are in shallow normal form.
**Theorem 4**.: _Given any \(\mathtt{BE}_{\pi}\) formula \(\psi\), one can compute in in polynomial time an equi-satisfiable formula \(\psi^{*}\) that is in shallow normal form._
To highlight one of the key ideas underlying the proof of the theorem, which we postpone to the next subsections, we give an example of normalization of a formula.
**Example 5**.: _Consider the formula \(\psi=\langle B\rangle\,\varphi\) over the signature \(\Sigma=\{p\}\), where \(\varphi=\langle B\rangle\,\langle E\rangle(\pi\wedge p)\). Figure 1 shows an example of an interval structure satisfying \(\psi\); in particular, it highlights intervals witnessing \(\varphi\) (in red) and \(\langle E\rangle(\pi\wedge p)\) (in blue). Note that \(\psi\) has depth \(3\) and is not in shallow normal form. To rewrite \(\psi\) into an equi-satisfiable formula in shallow normal form, we introduce a new propositional letter \(q\) with the purpose of marking the right endpoints of the intervals that satisfy \(\varphi\) and that are minimal w.r.t. the prefix relation (we call these intervals prefix-minimal, for short). Note that the right endpoints of these intervals are immediately to the right of the \(p\)-labelled points. We thus consider interval structures over the expanded signature \(\Sigma^{\prime}=\{p,q\}\) that make the following formula valid:_
\[\xi\,=\,\underbrace{\,\left(\neg\pi\,\wedge\,\neg\,\langle B\rangle\,\neg \pi\right)}_{\text{interval has exactly two points}}\quad\rightarrow\quad\underbrace{\,\left(\langle B \rangle(\pi\wedge p)\,\leftrightarrow\,\langle E\rangle(\pi\wedge q)\right)} _{\text{$q$ is to the right whenever $p$ is to the left}}\]
_We can verify that, over interval structures that make \(\xi\) valid, every prefix-minimal interval that satisfies \(\varphi\) also satisfies \(\varphi^{\prime}=\langle E\rangle(\neg\pi)\,\wedge\,\langle E\rangle(\pi \wedge q)\), and, conversely, every interval that satisfies \(\varphi^{\prime}\) also satisfies \(\varphi^{\prime}=\langle E\rangle(\neg\pi)\,\wedge\,\langle E\rangle(\pi \wedge q)\)._
Proof.: We first consider the case where \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). 
We have \(\varphi=\langle B\rangle\,\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\). We have \(\varphi=\langle B\rangle\varphi\). We have \(\varphi=\langle B\rangle\,\varphi\).
\(\varphi\). This implies that, again over interval structures that make \(\xi\) valid, the depth-\(3\) formula \(\psi=\left\langle B\right\rangle\varphi\) is equivalent to the depth-\(2\) formula \(\psi^{\prime}=\left\langle B\right\rangle\varphi^{\prime}\). Moreover, since the labelling of any interval structure over \(\Sigma=\left\{p\right\}\) can always be expanded with the fresh letter \(q\) so as to satisfy \(\left[G\right]\xi\), we conclude that \(\psi\) is equi-satisfiable as the formula \(\psi^{*}=\psi^{\prime}\,\wedge\,\left[G\right]\xi\). Since \(\xi\) has depth \(1\), \(\psi^{*}\) is also in shallow normal form._
The normalization procedure for an arbitrary formula \(\psi\) iterates a rewriting similar to the one presented in Example 5. More precisely, we start by replacing every outermost subformula of \(\psi\) of depth \(d>2\) and of the form \(\left\langle B\right\rangle\varphi\) (resp., \(\left\langle E\right\rangle\varphi\)) with an equi-satisfiable formula \(\left\langle B\right\rangle\varphi^{\prime}\) (resp., \(\left\langle E\right\rangle\varphi^{\prime}\)) of depth \(2\). This rewriting step extends the signature with new propositional letters, which are constrained while preserving equi-satisfiability using formulas similar to the \(\left[G\right]\xi\) of Example 5. Constraints will contain occurrences of the original subformula \(\varphi\), and thus need to be normalized in their turn in order to eventually obtain formulas of depth at most \(2\). More details and formal arguments about the normalization procedure of Theorem 4 will be provided in the next subsections.
We conclude this part by observing an immediate consequence of Theorem 4. We recall from [12] the existence of a rather simple, but non-elementary procedure for deciding satisfiability of a BE formula \(\psi\) under homogeneity. A close inspection to the description of this procedure shows that it has non-deterministic time complexity \(\mathcal{O}(\mathit{tow}(h,|\psi|))\), where \(\mathit{tow}(h,n)=2^{2^{{}^{\prime}}^{\prime}}\) is the tower of \(h\) exponents ending with \(n\) and \(h\) is the maximum number of nested modal operators in the input formula \(\psi\). As the shorthand \(\pi\) can be directly handled in constant time, the parameter \(h\) of the said complexity bound can be identified with our notion of modal depth for \(\mathsf{BE}_{\pi}\) formulas. In particular, when we consider a formula \(\psi\) in shallow normal form, the parameter \(h\) is at most \(4\). Together with Proposition 2 and Theorem 4, this gives a first rough complexity bound to the satisfiability problem for BE logic under the homogeneity assumption:
**Corollary 6**.: _The satisfiability problem for BE logic restricted to homogeneous interval structures is elementarily decidable, i.e., at least in \(4\mathrm{NExpTime}\)._
We shall provide later, in Section 4, a more careful complexity analysis, showing that the satisfiability problem for BE logic under homogeneity is actually ExpSpace-complete.
### Expanders
A first ingredient of the normalization procedure of \(\mathsf{BE}_{\pi}\) formulas is that of an expander. Intuitively, this is a formula that constrains new propositional letters on the basis of the old ones in an arbitrary (homogeneous) interval structure.
**Definition 7**.: _Let \(\Sigma\subseteq\Sigma^{\prime}\) be two signatures, and let \(\mathcal{S}=\left(\mathbb{I}(N),\sigma\right)\) and \(\mathcal{S}^{\prime}=\left(\mathbb{I}(N^{\prime}),\sigma^{\prime}\right)\) be interval structures over \(\Sigma\) and \(\Sigma^{\prime}\), respectively. We say that \(\mathcal{S}^{\prime}\) is an expansion of \(\mathcal{S}\) if \(N^{\prime}=N\) and \(\sigma^{\prime}(I)\cap\Sigma=\sigma(I)\) for all intervals \(I\in\mathbb{I}(N)\)._
_An expander from \(\Sigma\) to \(\Sigma^{\prime}\) is a \(\mathsf{BE}_{\pi}\) formula \(\xi\) over \(\Sigma^{\prime}\) such that, for every interval structure \(\mathcal{S}\) over \(\Sigma\), there is an expansion \(\mathcal{S}^{\prime}\) of \(\mathcal{S}\) over \(\Sigma^{\prime}\) that makes \(\xi\) valid._
We report below a simple lemma about expanders.
**Lemma 8**.: _If \(\xi\) is an expander from \(\Sigma\) to \(\Sigma^{\prime}\), \(\psi\) and \(\psi^{\prime}\) are formulas over the signatures \(\Sigma\) and \(\Sigma^{\prime}\), respectively, and \(\psi,\psi^{\prime}\) are equivalent over all interval structures where \(\xi\) is valid, then \(\psi\) and \(\psi^{\prime}\wedge\left[G\right]\xi\) are equi-satisfiable._
Proof.: Suppose that \(\psi^{\prime}\wedge\left[G\right]\xi\) is satisfied by an interval structure \(\mathcal{S}^{\prime}\) over \(\Sigma^{\prime}\). Because, \(\xi\) is valid over \(\mathcal{S}^{\prime}\), \(\psi\) is equivalent to \(\psi^{\prime}\) over \(\mathcal{S}^{\prime}\), and hence \(\mathcal{S}^{\prime}\) satisfies \(\psi\). Conversely, if \(\psi\) is satisfied by an interval structure \(\mathcal{S}\) over \(\Sigma\), then there is an expansion \(\mathcal{S}^{\prime}\) of \(\mathcal{S}\) that makes \(\xi\) valid. This implies that \(\psi\) and \(\psi^{\prime}\) are equivalent over \(\mathcal{S}^{\prime}\). Hence \(\mathcal{S}^{\prime}\) satisfies \(\psi^{\prime}\), and \(\psi^{\prime}\wedge\left[G\right]\xi\) as well.
### Minimal witnessing intervals
Recall that the normalization of a \(\mathsf{BE}_{\pi}\) formula replaces subformulas \(\left\langle B\right\rangle\varphi\) (resp., \(\left\langle E\right\rangle\varphi\)) of depth \(d>2\) with equivalent formulas \(\left\langle B\right\rangle\varphi^{\prime}\) (resp., \(\left\langle E\right\rangle\varphi^{\prime}\)) of depth \(2\). In this respect, a simple observation is that, in order to determine which intervals satisfy \(\left\langle B\right\rangle\varphi\) (resp., \(\left\langle E\right\rangle\varphi\)), one could look at intervals that satisfy \(\varphi\) and that are _minimal_ for the prefix (resp., suffix) relation.
**Definition 9**.: _Given a \(\mathsf{BE}_{\pi}\) formula \(\varphi\), an interval structure \(\mathcal{S}\), and an interval \(I\) in it, we say that \(I\) is prefix-minimal (resp., suffix-minimal) for \(\varphi\) if \(\mathcal{S},I\!=\!\varphi\) and \(\mathcal{S},J\!\neq\!\varphi\) for every \(J\!<_{B}I\) (resp., \(J\!<_{E}I\))._
We will see later that prefix/suffix-minimal intervals for \(\varphi\) can be unambiguously identified, once their endpoints are annotated with fresh propositional letters, using a formula \(\varphi^{\prime}\) of size proportional to that of \(\varphi\), but with depth just \(1\). A simplified account of this technique was already given in Example 5. Below, we discuss the approach under a more general perspective and highlight a potential issue with overlapping minimal witnesses.
**Example 10**.: _Suppose that \(\varphi\) is a formula of depth \(2\). We aim at replacing it with a formula \(\varphi^{\prime}\) of depth \(1\), so that \(\left(B\right)\varphi^{\prime}\) turns out to be equivalent to \(\left(B\right)\varphi\) in an appropriate expansion of the interval structure. As discussed earlier, a natural approach is to focus only on intervals that are prefix-minimal for \(\varphi\), and mark their endpoints with suitable fresh propositional letters. For example, two prefix-minimal intervals for \(\varphi\) are represented in Figure 2 by the red brackets. We mark their left and right endpoints with fresh propositional letters \(\ell\) and \(r\), respectively, and we assume that the interval structure is expanded so as to satisfy the intended use of \(\ell\) and \(r\). We then define \(\varphi^{\prime}=\left(\left(B\right)\!\left(\pi\wedge\ell\right)\wedge\left( E\right)\!\left(\pi\wedge r\right)\right)\vee\left(\left(\pi\wedge\ell\right)\wedge \left(\pi\wedge r\right)\right)\) and observe that every interval satisfying \(\left(B\right)\varphi\) must also satisfy \(\left(B\right)\varphi^{\prime}\). So one might be tempted to replace \(\varphi\) with \(\varphi^{\prime}\). Unfortunately, while \(\left(B\right)\varphi\) entails \(\left(B\right)\varphi^{\prime}\), the converse is not true, as the intersection of any two prefix-minimal intervals for \(\varphi\) does not always satisfy \(\varphi\) (see the blue bracket in Figure 2). In general, in order to mark the endpoints of minimal intervals without ambiguities, one could use different letters to mark the endpoints of any two overlapping intervals. More precisely, one should introduce as many copies of letters \(\ell,r\) as the maximum number of overlapping prefix-minimal intervals for \(\varphi\) that have different right endpoints._
### Encoding of minimal witnessing intervals
Example 10 brings up a third ingredient that is crucial for the normalization procedure, as it suggests that, in order to mark without ambiguities the endpoints of prefix-minimal (resp., suffix-minimal) intervals for a formula \(\varphi\), one must first bound the number of distinct right (resp., left) endpoints of overlapping intervals. A bound will be shown precisely in Corollary 14 below.
**Definition 11**.: _A set \(\mathcal{I}\) of intervals is an intersecting family if there is a point \(x\) that is contained in every interval of \(\mathcal{I}\)._
An example of an intersecting family of intervals is shown to the left of Figure 3.
Towards proving the desired bound, we shall first establish two auxiliary lemmas. The first lemma relates the maximum cardinality of a partially ordered set (e.g., an intersecting family of intervals, partially ordered by containment) to the maximum cardinality of its chains and anti-chains. Formally, a _chain_ of a partially ordered set is a subset of pairwise comparable elements. An _anti-chain_ is a subset of pairwise incomparable elements. The first lemma is in fact a rephrasing of Dilworth's theorem [8] (we give a proof here for self-containment):
**Lemma 12**.: _Let \(X\) be a partially ordered set and suppose that all its chains and anti-chains have cardinality at most \(n\). Then the cardinality of \(X\) is at most \(n^{2}\)._
Proof.: To begin with, notice that \(X\) is well-founded, due to the hypothesis that chains have cardinality at most \(n\). Define the partition \(Y_{1},Y_{2},\ldots\) of \(X\), where each \(Y_{i}\) contains all and only the _minimal_ elements
Figure 2: Overlapping prefix-minimal intervals for \(\varphi\), and their intersection.
of \(X\smallsetminus\bigcup_{j\in i}Y_{j}\) -- in particular, each \(Y_{i}\) is defined inductively on the basis of the previous sets \(Y_{1},\ldots,Y_{i-1}\). By construction, every subset \(Y_{i}\) is an anti-chain, and hence, by the hypotheses of the claim, it has cardinality at most \(n\).
Let us now bound by \(n\) the number of subsets of the partition. Towards a contradiction, assume that \(Y_{1},Y_{2},\ldots,Y_{n+1}\) belong to the partition of \(X\). By construction, for every \(1\prec i\leq n+1\) and every \(y\in Y_{i}\), there is \(y^{\prime}\in Y_{i-1}\) such that \(y^{\prime}\prec y\) (otherwise \(y\) should have been added to \(Y_{i-1}\)). Using this property and a simple induction, we can construct a chain of length \(n+1\): we start by taking an arbitrary \(y_{n+1}\in\Gamma_{n+1}\) and then we repeatedly use the property to prepend to a chain \(y_{i}<y_{i+1}<\cdots<y_{n+1}\), with \(i>1\), \(y_{i}\in Y_{i}\), \(y_{i+1}\in Y_{i+1}\),..., \(y_{n+1}\in Y_{n+1}\), a new element \(y_{i-1}<y_{i}\), with \(y_{i-1}\in Y_{i-1}\). Clearly, such a chain of length \(n+1\) leads to a contradiction, and hence the partition \(Y_{1},Y_{2},\ldots\) of \(X\) contains at most \(n\) elements. We conclude that \(|X|=\sum_{i}|Y_{i}|\leq n^{2}\).
Ultimately, we aim at applying Lemma 12 to bound the cardinality of every intersecting family of prefix-minimal (resp., suffix-minimal) intervals with pairwise distinct right (resp., left) endpoints, using the containment relation as partial order. To this end, it is crucial to bound the cardinalities of the chains and anti-chains of such an intersecting family. It will be also convenient to avoid singleton intervals when reasoning about intersecting families (note that there is at most one singleton interval in every intersecting family).
**Lemma 13**.: _Let \(\mathcal{S}\) be an interval structure, \(\varphi\) a \(\mathsf{BE}_{\pi}\) formula, \(\mathcal{I}\) an intersecting family of non-singleton prefix-minimal (resp., suffix-minimal) intervals for \(\varphi\), with pairwise distinct right (resp., left) endpoints, and \(\mathcal{I}^{\prime}\) a chain or an anti-chain of \(\mathcal{I}\), where the partial order is given by containment. We have that_
\[|\mathcal{I}^{\prime}|\;\leq\;2^{2|\varphi|}. \tag{1}\]
Proof.: We present the proof for an intersecting family of non-singleton _prefix-minimal_ intervals for \(\varphi\) (the case of suffix-minimal intervals uses symmetric arguments). Towards a contradiction, assume that there exist a \(\mathsf{BE}_{\pi}\) formula \(\varphi\), an intersecting family \(\mathcal{I}\) of non-singleton prefix-minimal intervals for \(\varphi\) with pairwise distinct right endpoints, and a subset \(\mathcal{I}^{\prime}\) of \(\mathcal{I}\) that is a chain or an anti-chain and that violates the bound (1), i.e., \(\mathcal{I}^{\prime}\) contains more than \(2^{2|\varphi|}\) intervals. We also assume, without loss of generality, that \(\varphi\) is a smallest formula witnessing this violation of the bound (later we will exploit this assumption when considering families of prefix-minimal intervals for subformulas of \(\varphi\)).
Let \(\boldsymbol{\partial}_{B}\,\varphi\) (resp., \(\boldsymbol{\partial}_{E}\,\varphi\)) be the set of formulas \(\alpha\) such that \(\langle B\rangle\,\alpha\) (resp., \(\langle E\rangle\,\alpha\)) is a subformula of \(\varphi\) with no other modal operator above it. For example, if \(\varphi=\langle B\rangle\,\alpha_{1}\,\wedge\,\langle B\rangle\,\langle B\rangle\,\alpha_{2}\,\wedge\,\langle E\rangle\,\langle B\rangle\,\alpha_{3}\), then \(\boldsymbol{\partial}_{B}\varphi=\{\alpha_{1},\langle B\rangle\,\alpha_{2}\}\) and \(\boldsymbol{\partial}_{E}\varphi=\{\langle B\rangle\,\alpha_{3}\}\). Note that \(|\varphi|\geq|\boldsymbol{\partial}_{B}\varphi|+|\boldsymbol{\partial}_{E}\varphi|\).
Define the \(\varphi\)_-profile_ of a non-singleton interval \(I\) as the pair \((B,E)\), where \(B\) (resp., \(E\)) is the set of formulas \(\alpha\in\boldsymbol{\partial}_{B}\varphi\) (resp., \(\alpha\in\boldsymbol{\partial}_{E}\varphi\)) that hold at prefixes (resp., suffixes) of \(I\). Note that any two non-singleton intervals with the same \(\varphi\)-profile either both satisfy \(\varphi\) or both satisfy \(\neg\varphi\); in particular, this holds thanks to the fact that \(\varphi\) is in homogeneous normal form.
We also observe that there are at most \(2^{|\boldsymbol{\partial}_{B}\varphi|+|\boldsymbol{\partial}_{E}\varphi|}\) distinct \(\varphi\)-profiles. Therefore, by our assumption on \(\mathcal{I}^{\prime}\), there are
\[n\;>\;2^{2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial} _{E}\varphi|}\]
intervals \(I_{1},\ldots,I_{n}\in\mathcal{I}^{\prime}\) with the same \(\varphi\)-profile. Without loss of generality, assume that the intervals \(I_{1},\ldots,I_{n}\) are listed based on the natural ordering of their right endpoints, that is, \(\max(I_{1})<\cdots<\max(I_{n})\). Depending on \(\mathcal{I}^{\prime}\) being a chain or an anti-chain, the left endpoints of these intervals are also ordered, in descending, resp., ascending order (see Figure 3).
For the rest of the proof, unless otherwise stated, \(i\) will denote a natural number from \(1\) to \(n-1\), and will be used in particular to index pairs of consecutive intervals, say \(I_{i}\) and \(I_{i+1}\). For every \(i\), let \(w_{i}<x_{i}\leq y_{i}<z_{i}\) be the four endpoints of \(I_{i}\) and \(I_{i+1}\). Further, let \(\text{left-}\Delta_{i}=[w_{i}+1,x_{i}]\) and right-\(\Delta_{i}=[y_{i},z_{i}-1]\) (these
Figure 3: From left to right: an intersecting family of intervals, a chain, and an anti-chain.
intervals are represented by the red dashed rectangles in Figure 3). Thanks to the fact that \(\mathcal{I}^{\prime}\) is a chain or an anti-chain, the left-\(\Delta_{i}\)'s and the right-\(\Delta_{i}\)'s are pairwise disjoint across all \(i\) (this property will be used later and is the main reason for restricting our attention to chains and anti-chains).
Given \(\alpha\in\boldsymbol{\partial}_{B}\varphi\) and \(1\leq i<n\), a _special \((B)\)-witness of \(\alpha\) at \(i\)_ (if it exists) is the prefix-minimal interval for \(\alpha\) that has the same left endpoint as \(I_{i+1}\) and whose right endpoint belongs to right-\(\Delta_{i}\). Figure 4 gives two examples of special \((B)\)-witnesses, represented by green brackets: one example is for the chain arrangement and the other is for the anti-chain arrangement. Symmetrically, given \(\alpha\in\boldsymbol{\partial}_{E}\varphi\) and \(1\leq i<n\), a _special \((E)\)-witness of \(\alpha\) at \(i\)_ (if it exists) is the suffix-minimal interval for \(\alpha\) that has the same right endpoint as \(I_{i}\) and whose left endpoint belongs to left-\(\Delta_{i}\). Special \((E)\)-witnesses are represented in Figure 4 by blue brackets.
Now, we tag an index \(1\leq i<n\) with a pair \((B,\alpha)\) (resp., \((E,\alpha)\)) whenever \(\alpha\in\boldsymbol{\partial}_{B}\varphi\) (resp., \(\alpha\in\boldsymbol{\partial}_{E}\varphi\)) and there is a special \((B)\)-witness (resp., \((E)\)-witness) of \(\alpha\) at \(i\). If there is no special witness for any \(\alpha\), then we tag \(i\) with the symbol \(\bot\). Let \(K_{i}=[\min(I_{i+1}),\max(I_{i})]\) and observe that \(K_{i}\) is a _proper prefix_ of \(I_{i+1}\). We will prove that, for some index \(i\), the interval \(K_{i}\) satisfies \(\varphi\), thus contradicting prefix-minimality of \(I_{i+1}\). Towards this, it will be sufficient to find an index \(i\) tagged with \(\bot\). Indeed, if this happens, then we claim that
**Claim 13.1**.: _Every formula \(\alpha\in\boldsymbol{\partial}_{B}\varphi\) (resp., \(\alpha\in\boldsymbol{\partial}_{E}\varphi\)) that holds at a prefix (resp., suffix) of \(I_{i}\) also holds at a prefix (resp., suffix) of \(K_{i}\), and vice versa._
The above claim would then imply that the \(\varphi\)-profile of \(K_{i}\) coincides with that of \(I_{i}\), and hence \(K_{i}\vDash\varphi\).
Proof of the claim.: Assume that index \(i\) is tagged with \(\bot\). Consider some \(\alpha\in\boldsymbol{\partial}_{B}\varphi\). If \(\alpha\) holds at a prefix of \(I_{i}\), then \(\alpha\) holds at some prefix of \(I_{i+1}\) as well, because \(I_{i}\) and \(I_{i+1}\) have the same \(\varphi\)-profile. Let \(J\) be the smallest prefix of \(I_{i+1}\) that satisfies \(\alpha\). Due to \(i\) being tagged with \(\bot\), we have that \(\max(J)<\max(I_{i})=\max(K_{i})\), meaning that \(J\) is also a prefix of \(K_{i}\). Conversely, if \(\alpha\) holds at a prefix of \(K_{i}\), then it trivially holds at a prefix of \(I_{i+1}\) as well, and thus it holds at a prefix of \(I_{i}\), too, because \(I_{i}\) and \(I_{i+1}\) have the same \(\varphi\)-profile. Next, consider some \(\alpha\in\boldsymbol{\partial}_{E}\varphi\). If \(\alpha\) holds at some suffix of \(I_{i}\), then let \(J\) be the smallest suffix of \(I_{i}\) that satisfies \(\alpha\). Due to \(i\) being tagged with \(\bot\), we have that \(\min(J)>\min(I_{i+1})=\min(K_{i})\), meaning that \(J\) is also a suffix of \(K_{i}\). Conversely, assume that \(\alpha\) holds at some suffix of \(K_{i}\) and let \(J\) be the smallest suffix of \(K_{i}\) that satisfies \(\alpha\). Once again, since \(i\) is tagged with \(\bot\), we have that \(\min(J)>\min(I_{i})\), meaning that \(J\) is a suffix of \(I_{i}\), too.
It remains to prove that at least one index \(i\) is tagged with \(\bot\). For this, we bound the number of indices tagged with pairs of the form \((X,\alpha)\), with \(X=B\) (resp., \(X=E\)) and \(\alpha\in\boldsymbol{\partial}_{X}\varphi\). By construction, for each tag \((X,\alpha)\), the special \((X)\)-witnesses of \(\alpha\) form an intersecting (anti-)chain \(\mathcal{I}_{X,\alpha}\) of prefix-minimal (resp., suffix-minimal) intervals for \(\alpha\). Moreover, we know that:
* All intervals in \(\mathcal{I}_{X,\alpha}\) are non-singleton. This is because the only scenario where a _singleton_ special \((X)\)-witness arises is when \(\mathcal{I}^{\prime}\) is an anti-chain, \(n=2\), and \(\max(I_{1})=\min(I_{2})\). This scenario is however excluded by the fact that \(n>2^{2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial}_{E} \varphi|}\geq 2\).
* The intervals in \(\mathcal{I}_{X,\alpha}\) have pairwise distinct right (resp., left) endpoints. This is because those endpoints belong to the intervals right-\(\Delta_{i}\) (resp., left-\(\Delta_{i}\)), which are pairwise disjoint across all \(i\)'s.
* The cardinality of each (anti-)chain \(\mathcal{I}_{X,\alpha}\) is at most \(2^{2|\alpha|}\). This is thanks to the previous properties and because \(\alpha\) is a proper subformula of \(\varphi\), which was assumed to be a smallest formula violating the bound (1).
Figure 4: Special witnesses in a chain (left) and in an anti-chain (right).
In view of the last property, we derive that the number of indices that are _not_ tagged with \(\bot\) is
\[n^{\prime}\;\leq\;\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2^{2|\alpha|}\;+\;\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2^{2|\alpha|}\;\leq\;2^{\,\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2|\alpha|\,+\,\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2|\alpha|}\;,\]
where the last inequality follows from bounding the sums from above by products. Next, recall that \(n>2^{2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial}_{E} \varphi|}\), and hence the number of indices \(1\leq i<n\) that are tagged with \(\bot\) is
\[n-1-n^{\prime}\;\geq\;2^{2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial}_{E}\varphi|}\;-\;2^{\,\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2|\alpha|\,+\,\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2|\alpha|}\;.\]
We prove that the right-hand side is always positive by showing that \(2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial}_{E}\varphi|>\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2|\alpha|+\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2|\alpha|\). We distinguish two cases, depending on whether or not \(\varphi\) contains modal operators. If \(\varphi\) contains no modal operators, then \(2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial}_{E}\varphi|=2|\varphi|>0=\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2|\alpha|+\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2|\alpha|\). Otherwise, if \(\varphi\) contains at least one modal operator, then we observe that _(i)_ the size of \(\varphi\) is at least the sum of the sizes of the subformulas \(\langle X\rangle\,\alpha\), for \(X\in\{B,E\}\) and \(\alpha\in\boldsymbol{\partial}_{X}\varphi\), which are \(|\langle X\rangle\,\alpha|=|\alpha|+1\), and _(ii)_ \(\sum_{\alpha\in\boldsymbol{\partial}_{X}\varphi}(|\alpha|+1)=\bigl(\sum_{\alpha\in\boldsymbol{\partial}_{X}\varphi}|\alpha|\bigr)+|\boldsymbol{\partial}_{X}\varphi|\), for \(X\in\{B,E\}\). From this we derive:
\[2|\varphi|-|\boldsymbol{\partial}_{B}\varphi|-|\boldsymbol{\partial}_{E}\varphi|\;\geq\;\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2|\alpha|\;+\;\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2|\alpha|\;+\;|\boldsymbol{\partial}_{B}\varphi|\;+\;|\boldsymbol{\partial}_{E}\varphi|\;>\;\sum_{\alpha\in\boldsymbol{\partial}_{B}\varphi}2|\alpha|\;+\;\sum_{\alpha\in\boldsymbol{\partial}_{E}\varphi}2|\alpha|\;.\]
We have just shown that at least one index \(i\) must be tagged with \(\bot\), which completes the proof of the lemma.
Putting together Lemmas 12 and 13, we obtain the desired bound for an arbitrary intersecting family of non-singleton prefix/suffix-minimal intervals for \(\varphi\):
**Corollary 14**.: _Let \(\mathcal{S}\) be an interval structure, \(\varphi\) a \(\mathsf{BE}_{\pi}\) formula, and \(\mathcal{I}\) an intersecting family of non-singleton prefix-minimal (resp., suffix-minimal) intervals for \(\varphi\). Then the number of distinct right (resp., left) endpoints of intervals of \(\mathcal{I}\) is at most \(2^{4|\varphi|}\)._
We conclude this part by showing how prefix-minimal intervals for \(\varphi\) can be characterized using fresh propositional letters and suitable formulas \(\operatorname{flat}(\varphi)\) and \(\operatorname{enc}(\varphi)\) (a similar corollary can be stated for suffix-minimal intervals).
**Corollary 15**.: _Consider a \(\mathsf{BE}_{\pi}\) formula \(\varphi\) over a signature \(\Sigma\) and let \(\Sigma^{\prime}=\Sigma\cup\{p_{1},\ldots,p_{m},\ell,r,s\}\), where \(p_{1},\ldots,p_{m},\ell,r,s\) are fresh propositional letters and \(m=4|\varphi|\) (this \(m\) is precisely the exponent appearing in the bound of Corollary 14). One can construct \(\mathsf{BE}_{\pi}\) formulas2 \(\operatorname{flat}(\varphi)\) and \(\operatorname{enc}(\varphi)\) over \(\Sigma^{\prime}\) such that \(\operatorname{enc}(\varphi)\) is an expander from \(\Sigma\) to \(\Sigma^{\prime}\) and, over every expanded interval structure that makes \(\operatorname{enc}(\varphi)\) valid, the prefix-minimal intervals for \(\varphi\) coincide with the prefix-minimal intervals for \(\operatorname{flat}(\varphi)\); in particular, the formulas \(\langle B\rangle\,\varphi\) and \(\langle B\rangle\operatorname{flat}(\varphi)\) are equivalent over such structures._
Footnote 2: Note that, despite the notation, the formula \(\operatorname{flat}(\varphi)\) only depends on the signature and the size of \(\varphi\), whereas \(\operatorname{enc}(\varphi)\) depends entirely on \(\varphi\).
Proof.: Let us first explain the intended use of the fresh propositional letters \(p_{1},\ldots,p_{m}\), \(\ell,r,s\). The letters \(p_{1},\ldots,p_{m}\) will annotate points of an interval structure with \(m\)-tuples of bits, thus enumerating an exponentially-large set (e.g., \(\{1,\ldots,2^{m}\}\)). More precisely, the left and right endpoints of every non-singleton prefix-minimal interval for \(\varphi\) will be identified by having labels \(\ell\) and \(r\), respectively, and the same \(m\)-tuple of bits -- this correlation between endpoints is checked by the first disjunct of \(\operatorname{flat}(\varphi)\). Singleton prefix-minimal intervals for \(\varphi\) will instead be identified by the special label \(s\) -- this is checked in the second disjunct of \(\operatorname{flat}(\varphi)\). Another important constraint is that every two intersecting intervals that are non-singleton, prefix-minimal for \(\varphi\), and such that neither is a suffix of the other, will have their endpoints marked by different \(m\)-tuples of bits.
Corollary 14 guarantees the existence of an annotation satisfying all the above constraints. Such an annotation is enforced precisely by the formula \(\operatorname{enc}(\varphi)\), which turns out to be an expander from \(\Sigma\) to \(\Sigma^{\prime}\) (namely, every interval structure over \(\Sigma\) admits an expansion over \(\Sigma^{\prime}\) that makes \(\operatorname{enc}(\varphi)\) valid). Moreover, if the annotation is correct, namely, if \(\operatorname{enc}(\varphi)\) is valid over an expanded interval structure, then every prefix-minimal interval for \(\varphi\) is also a prefix-minimal interval for \(\operatorname{flat}(\varphi)\), and vice versa. Note that there may still exist intervals that satisfy \(\varphi\) but not \(\operatorname{flat}(\varphi)\), or vice versa; however, those intervals will always contain proper prefixes that satisfy both \(\varphi\) and \(\operatorname{flat}(\varphi)\). Overall, this proves that the two formulas \(\langle B\rangle\,\varphi\) and \(\langle B\rangle\operatorname{flat}(\varphi)\) are equivalent over expanded interval structures that make \(\operatorname{enc}(\varphi)\) valid.
### Normalization procedure
We are now ready to describe the normalization procedure underlying Theorem 4. Let \(\psi\) be a \(\operatorname{\mathsf{BE}}_{\pi}\) formula. The normalization of \(\psi\) consists of repeatedly applying some rewriting steps that preserve satisfiability and progressively reduce the number of distinct subformulas of depth larger than \(2\), until a shallow normal form is eventually obtained.
Every rewriting step is applied to a formula of the form \(\psi_{i}\,\wedge\,[\![G]\!]\,\xi_{i}\) over a signature \(\Sigma_{i}\) (initially, \(\psi_{0}=\psi\), \(\xi_{0}=\operatorname{true}\), and \(\Sigma_{0}=\Sigma\)), and results in an equi-satisfiable formula \(\psi_{i+1}\,\wedge\,[\![G]\!]\,\xi_{i+1}\) over an extended signature \(\Sigma_{i+1}\). To perform the rewriting step, we must choose a subformula \(\langle X\rangle\,\varphi\) of \(\psi_{i}\,\wedge\,[\![G]\!]\,\xi_{i}\), for some \(X\in\{B,E\}\), that has depth \(d>2\) and that does not occur under the scope of any other modal operator, except possibly the operator \([\![G]\!]\) that has \(\xi_{i}\) as argument. We then use Corollary 15 to obtain an expander \(\operatorname{enc}(\varphi)\) from \(\Sigma_{i}\) to \(\Sigma_{i+1}\) and a formula \(\langle X\rangle\operatorname{flat}(\varphi)\) equivalent to \(\langle X\rangle\,\varphi\) over every interval structure that makes \(\operatorname{enc}(\varphi)\) valid. We then rewrite \(\psi_{i}\,\wedge\,[\![G]\!]\,\xi_{i}\) into the formula
\[\underbrace{\psi_{i}[\,\langle X\rangle\,\varphi\,/\,\langle X\rangle\operatorname{flat}(\varphi)\,]}_{\psi_{i+1}}\;\wedge\;[\![G]\!]\,\bigl(\underbrace{\xi_{i}[\,\langle X\rangle\,\varphi\,/\,\langle X\rangle\operatorname{flat}(\varphi)\,]\,\wedge\,\operatorname{enc}(\varphi)}_{\xi_{i+1}}\bigr)\;.\]
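As a toy illustration of a single rewriting step (treating the formulas \(\operatorname{flat}(\varphi)\) and \(\operatorname{enc}(\varphi)\) of Corollary 15 as black boxes; the instance is ours, not taken from the original presentation), take \(\psi=\langle B\rangle\,\varphi\) with \(\varphi=\langle B\rangle\,\langle E\rangle\,(\pi\wedge p)\). Starting from \(\psi_{0}\,\wedge\,[\![G]\!]\,\xi_{0}=\langle B\rangle\,\varphi\,\wedge\,[\![G]\!]\operatorname{true}\), the subformula \(\langle B\rangle\,\varphi\) has depth \(3>2\) and does not occur under any other modal operator, so one rewriting step with \(X=B\) produces \(\psi_{1}\,\wedge\,[\![G]\!]\,\xi_{1}\) with \(\psi_{1}=\langle B\rangle\operatorname{flat}(\varphi)\) and \(\xi_{1}=\operatorname{true}\,\wedge\,\operatorname{enc}(\varphi)\); further steps are applied in the same way until a shallow normal form is reached.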
## 4 Complexity of the satisfiability problem
In this section, we build up on the previous normalization result to prove a tight complexity bound:
**Theorem 16**.: _The satisfiability problem for \(\mathsf{BE}\) logic restricted to homogeneous interval structures is \(\textsc{ExpSpace}\)-complete._
An ExpSpace lower bound for \(\mathsf{BE}\) under homogeneity was already proven in [3], so we focus on the upper bound. In view of Proposition 2 and Theorem 4, given any \(\mathsf{BE}\) formula \(\psi\), one can compute in polynomial time a \(\mathsf{BE}_{\pi}\) formula \(\psi^{*}\) that is equi-satisfiable over homogeneous interval structures. Of course, this also means that \(\psi^{*}\) has size at most polynomial in \(|\psi|\). We argue below that one can test satisfiability of a formula in shallow normal form in exponential space with respect to the size of the formula itself. Together with the previous observations, this proves Theorem 16.
### Composition of logical types
We need to formalize a notion of logical type, similar to the notion of profile used in the proof of Lemma 13, that not only determines which formulas hold at a given interval, but also satisfies mild compositional properties, that is, under suitable conditions, one can compute the type of the sum of two adjacent intervals on the basis of the types of the original intervals. It will be convenient to define types separately for formulas of depth 0, 1, and 2 (there is no need to consider higher depths, as we assume to deal with formulas in shallow normal form). We will first present the rather simple definitions and properties of depth-0 and depth-1 types, and then focus on the more complex notion of depth-2 type.
_Depth-\(0\) and depth-\(1\) types._ We fix, once and for all, an interval structure \(\mathcal{S}=(\mathbb{I}(N),\sigma)\) and we assume that all formulas are over the signature \(\Sigma\) of \(\mathcal{S}\).
**Definition 17**.: The _depth-\(0\) type of an interval \(I\), denoted \(\mathrm{type}^{0}(I)\), is either the set \(\{\pi\}\cup\{p\in\Sigma:\mathcal{S},I\vDash\pi\wedge p\}\) or the empty set, depending on whether \(I\) is a singleton or not._
The _depth-\(1\) type of an interval \(I=[x,y]\) is the quadruple \(\mathrm{type}^{1}(I)=(S,T,B,E)\), where \(S\) is the symbol 1, 2, or 3, depending on whether \(I\) contains one point, two points, or more, \(T=\mathrm{type}^{0}(I)\), \(B=\mathrm{type}^{0}([x,x])\), and \(E=\mathrm{type}^{0}([y,y])\)._
It is easy to see that depth-0 (resp., depth-1) types of adjacent intervals can be composed to form the depth-0 (resp., depth-1) type of the sum of the two intervals. One can also verify that the depth-0 (resp., depth-1) type of an interval determines which formulas of depth 0 (resp., depth at most 1) hold at that interval. These simple results are formalized in the next two lemmas below.
**Lemma 18**.: _For both \(d=0\) and \(d=1\), there is a composition operator \(\cdot\) on depth-\(d\) types that is computable in polynomial time and such that, for all pairs of adjacent intervals \(I,J\), with \(\max(I)+1=\min(J)\), \(\mathrm{type}^{d}(I)\cdot\mathrm{type}^{d}(J)=\mathrm{type}^{d}(I\cup J)\)._
Proof.: The composition of depth-0 types is trivial: for every pair of depth-0 types \(T,T^{\prime}\), we simply let \(T\cdot T^{\prime}=\emptyset\). This is correct because the sum of two adjacent intervals always results in a non-singleton interval, whose depth-0 type is the empty set.
As for the composition of two depth-1 types, say \(\mathcal{T}=(S,T,B,E)\) and \(\mathcal{T}^{\prime}=(S^{\prime},T^{\prime},B^{\prime},E^{\prime})\), we let \(\mathcal{T}\cdot\mathcal{T}^{\prime}=(S^{\prime\prime},T\cdot T^{\prime},B,E^ {\prime})\), where \(S^{\prime\prime}\) is either 2 or 3 depending on whether \(S=S^{\prime}=1\) or not, and \(T\cdot T^{\prime}\) is the composition of the depth-0 types \(T\) and \(T^{\prime}\), as defined just above. It is immediate to check that if \(\mathcal{T}=\mathrm{type}^{1}([x,y])\) and \(\mathcal{T}^{\prime}=\mathrm{type}^{1}([y+1,z])\), then \(\mathrm{type}^{1}([x,z])=\mathcal{T}\cdot\mathcal{T}^{\prime}\).
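As a concrete reading of Definition 17 and Lemma 18, the following sketch (our own Python encoding, not the paper's; depth-\(0\) types are frozensets and depth-\(1\) types are tuples \((S,T,B,E)\)) implements the two composition operators and checks them on a toy labelling.

```python
# Illustrative sketch of the composition of depth-0 and depth-1 types (Lemma 18).

def compose0(t, t2):
    # the sum of two adjacent intervals is never a singleton
    return frozenset()

def compose1(T1, T2):
    (s, t, b, e), (s2, t2, b2, e2) = T1, T2
    s_new = 2 if (s == 1 and s2 == 1) else 3
    return (s_new, compose0(t, t2), b, e2)

def type0(point_labels, singleton):
    """Depth-0 type: {pi} plus the letters of the point if singleton, else empty."""
    return frozenset({'pi'} | set(point_labels)) if singleton else frozenset()

def type1(points):
    """Depth-1 type of an interval given the list of label sets of its points."""
    n = len(points)
    s = 1 if n == 1 else (2 if n == 2 else 3)
    return (s, type0(points[0], n == 1), type0(points[0], True), type0(points[-1], True))

# check compositionality on a toy labelling of four points
pts = [{'p'}, {'q'}, {'p', 'q'}, set()]
left, right, whole = type1(pts[:2]), type1(pts[2:]), type1(pts)
assert compose1(left, right) == whole
print(whole)
```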
**Lemma 19**.: _For both \(d=0\) and \(d=1\), for every \(\mathsf{BE}_{\pi}\) formula \(\varphi\) of depth at most \(d\), and for all intervals \(I,J\) such that \(\mathrm{type}^{d}(I)=\mathrm{type}^{d}(J)\), we have \(\mathcal{S},I\vDash\varphi\) iff \(\mathcal{S},J\vDash\varphi\). Moreover, whether \(\mathcal{S},I\vDash\varphi\) holds or not can be decided in polynomial time given \(\varphi\) and \(\mathrm{type}^{d}(I)\)._
Proof.: We first prove the claim for \(d=0\). For the case \(\varphi=\pi\), we have \(\mathcal{S},I\vDash\varphi\) if and only if \(I\) is a singleton, or, equally, \(\pi\in\mathrm{type}^{0}(I)\). The case \(\varphi=\pi\wedge p\) is trivial as well, as we have \(\mathcal{S},I\vDash\varphi\) if and only if \(p\in\mathrm{type}^{0}(I)\). It remains to consider the case where \(\varphi\) is a Boolean combination of the previous atomic formulas. In this case, we determine the evaluation of \(\varphi\) at \(I\) "homomorphically" on the basis of the evaluations of the atomic formulas.
Let us now prove the claim for \(d=1\). The interesting cases are when \(\varphi\) has depth 0 or it is of the form \(\langle B\rangle\,\alpha\) or \(\langle E\rangle\,\alpha\), with \(\alpha\) again of depth 0. Once the claim is proved for these cases, it can be
generalized to Boolean combinations of those formulas using the same arguments as before. Let \(I=[x,y]\) and \(\operatorname{type}^{1}(I)=(S,T,B,E)\), and recall that \(T=\operatorname{type}^{0}(I)\), \(B=\operatorname{type}^{0}([x,x])\), and \(E=\operatorname{type}^{0}([y,y])\).
If \(\varphi\) has depth \(0\), then we know that the component \(T\) (\(=\operatorname{type}^{0}(I)\)) already determines whether or not \(S,I\vDash\varphi\).
If \(\varphi=\langle B\rangle\,\alpha\), we further distinguish three subcases, depending on \(S\). If \(S=1\), then \(I\) is a singleton, it has no proper prefix, and hence \(\mathcal{S},I\nvDash\langle B\rangle\,\alpha\). If \(S=2\), then the only proper prefix of \(I\) is the singleton interval \([x,x]\), hence \(\mathcal{S},I\vDash\langle B\rangle\,\alpha\) iff \(\mathcal{S},[x,x]\vDash\alpha\). Since \(\alpha\) has depth \(0\), the latter condition can be decided using the type \(B=\operatorname{type}^{0}([x,x])\). If \(S=3\), then since \(\alpha\) is a Boolean combination of formulas of the form \(\pi\) or \(\pi\wedge p\), with \(p\in\Sigma\), it suffices to consider only two proper prefixes of \(I\): the singleton interval \(J_{0}=[x,x]\) and the interval \(J_{1}=[x,x+1]\). In particular, we have \(\mathcal{S},I\vDash\langle B\rangle\,\alpha\) if and only if \(\mathcal{S},J_{0}\vDash\alpha\) or \(\mathcal{S},J_{1}\vDash\alpha\). Again, the latter two conditions are determined by the depth-\(0\) types of \(J_{0}\) and \(J_{1}\), which are \(B\) and \(\varnothing\), respectively. This shows how to determine whether \(\mathcal{S},I\vDash\langle B\rangle\,\alpha\) using the type \(\operatorname{type}^{1}(I)=(S,T,B,E)\).
The remaining case is that of a formula \(\varphi=\langle E\rangle\,\alpha\), which can be handled by symmetric arguments, using the component \(E\) instead of \(B\).
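The case analysis of this proof translates directly into a small decision procedure. The sketch below (our own encoding of formulas as nested tuples; illustrative only, not the paper's code) evaluates a formula of depth at most \(1\) against a depth-\(1\) type \((S,T,B,E)\), following exactly the subcases discussed above.

```python
# Illustrative sketch of Lemma 19: deciding a formula of depth <= 1 from a
# depth-1 type (S, T, B, E).  Atoms: ('pi',) and ('prop', p) standing for pi & p.

def eval0(f, t0):
    """Evaluate a depth-0 formula against a depth-0 type (a frozenset)."""
    if f[0] == 'pi':   return 'pi' in t0
    if f[0] == 'prop': return f[1] in t0
    if f[0] == 'not':  return not eval0(f[1], t0)
    if f[0] == 'and':  return eval0(f[1], t0) and eval0(f[2], t0)
    raise ValueError(f[0])

def eval1(f, T1):
    s, t, b, e = T1
    if f[0] == 'B':                       # <B> alpha, alpha of depth 0
        if s == 1: return False           # a singleton has no proper prefix
        if s == 2: return eval0(f[1], b)  # only proper prefix is [x, x]
        return eval0(f[1], b) or eval0(f[1], frozenset())
    if f[0] == 'E':                       # <E> alpha, symmetric, uses E
        if s == 1: return False
        if s == 2: return eval0(f[1], e)
        return eval0(f[1], e) or eval0(f[1], frozenset())
    if f[0] == 'not': return not eval1(f[1], T1)
    if f[0] == 'and': return eval1(f[1], T1) and eval1(f[2], T1)
    return eval0(f, t)                    # depth-0 formulas use the T component

# <B>(pi & p) on an interval with 3+ points whose first point is labelled p
T = (3, frozenset(), frozenset({'pi', 'p'}), frozenset({'pi'}))
print(eval1(('B', ('prop', 'p')), T))     # True
```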
_Depth-\(2\) types._ We now introduce types for depth-\(2\) formulas. The machinery here is not as neat as one could hope, as there is a trade-off between the desired compositional properties and the number of possible depth-\(2\) types. As an example, full compositionality of types for depth-\(2\) formulas can only hold if we allow doubly exponentially many types with respect to the size of the underlying signature -- this can be shown formally using arguments based on communication complexity and the fact that a depth-\(2\) formula can describe a Stockmeyer counter of level \(2\) [15]. In order to ease compositionality while keeping the number of types as low as possible, we will parameterise depth-\(2\) types by a formula and some contexts.
We first discuss a couple of tentative definitions, with their drawbacks. Following the same principle used to define depth-\(1\) types, one may define the depth-\(2\) type of an interval \(I\) as \((\mathcal{T},\mathscr{B},\mathscr{E})\), where \(\mathcal{T}=\operatorname{type}^{1}(I)\), \(\mathscr{B}=\{\operatorname{type}^{1}(J)\,:\,J<_{B}I\}\), and \(\mathscr{E}=\{\operatorname{type}^{1}(J)\,:\,J<_{E}I\}\). This notion of depth-\(2\) type would be fully compositional and would determine the evaluation of every depth-\(2\) formula in homogeneous normal form (proofs omitted). Unfortunately, there could be doubly exponentially many such types with respect to the size of the signature, and this would not be compatible with the intended use that we will make in the satisfiability procedure. Another option would be to parameterise the depth-\(2\) type of \(I\) by a formula \(\varphi\) and define it as the triple \((\mathcal{T},\mathscr{B},\mathscr{E})\), where \(\mathcal{T}=\operatorname{type}^{1}(I)\) as before, and \(\mathscr{B}\) (resp., \(\mathscr{E}\)) is the set of subformulas \(\alpha\) of \(\varphi\) that hold at proper prefixes (resp., suffixes) of \(I\). Of course, the resulting type would determine the evaluation of \(\varphi\) at the interval \(I\). This second attempt would also generate at most exponentially many depth-2 types with respect to the size of \(\varphi\). On the other hand, the resulting types would not carry enough information to be composable, the reason being that it is not sufficient to know which depth-1 subformulas hold at two adjacent intervals in order to derive which depth-1 subformulas hold at the union interval. The appropriate notion of depth-2 type is somehow a blend of the two attempts that we have just discussed.
Let us now fix some other useful notation and terminology:
* Given a \(\mathsf{BE}_{\pi}\) formula \(\varphi\), we denote by \(\operatorname{Depth}^{\leq 1}(\varphi)\) the set of subformulas of \(\varphi\) of depth at most \(1\).
* Lemma 19 states that the depth-\(1\) type of an interval \(I\) effectively determines which formulas of depth at most \(1\) hold at \(I\). This motivates the following notation: given a depth-\(1\) type \(\mathcal{T}\) and a formula \(\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\), we write \(\mathcal{T}\vDash\alpha\) to state that \(\mathcal{S},I\vDash\alpha\) for some (or, equally, for every) interval \(I\) such that \(\operatorname{type}^{1}(I)=\mathcal{T}\) (this latter property can be tested efficiently given \(\mathcal{T}\) and \(\alpha\)).
* By Lemma 18, depth-\(1\) types are equipped with a composition operation \(\cdot\) that forms a semigroup structure. We complete the structure into a monoid by introducing the _dummy depth-\(1\) type \(\varepsilon\)_ and by assuming that \(\varepsilon\cdot\mathcal{T}=\mathcal{T}\cdot\varepsilon=\mathcal{T}\) for every depth-\(1\) type \(\mathcal{T}\).
**Definition 20**.: _Let \(\mathcal{L},\mathcal{R}\) be some (possibly dummy) depth-\(1\) types. The depth-\(2\)\(\varphi\)-type of an interval \(I\) with left and right contexts\(\mathcal{L},\mathcal{R}\) is the tuple \(\operatorname{type}^{2}_{\varphi,\mathcal{L},\mathcal{R}}(I)=(\mathcal{L}, \mathcal{R},\mathcal{T},\mathscr{B},\mathscr{E})\), where_
* \(\mathcal{T}=\operatorname{type}^{1}(I)\)_,_
* \(\mathscr{B}=\{\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\,:\,\exists J<_{B}I\;\; \;\mathcal{L}\cdot\operatorname{type}^{1}(J)\vDash\alpha\}\)_,_
* \(\mathscr{E}=\{\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\,:\,\exists J<_{E}I\;\;\operatorname{type}^{1}(J)\cdot\mathcal{R}\vDash\alpha\}\)_._
We give some intuition about the components of a depth-\(2\)\(\varphi\)-type (the reader can also refer to Figure 5). The component \(\mathcal{T}\) is nothing but the depth-\(1\) type of the reference interval \(I\), thus determining which formulas of depth at most \(1\) hold at \(I\). The components \(\mathcal{L}\) and \(\mathcal{R}\) represent the depth-\(1\) types of some intervals adjacent to \(I\), to the left and to the right respectively, and will be used as contexts for an operation of composition. The set \(\mathscr{B}\) represents which subformulas of \(\varphi\) of depth at most \(1\) hold at some intervals \(I^{\prime}\) that overlap \(I\) to the left (i.e., such that \(\min(I^{\prime})\leq\min(I)\leq\max(I^{\prime})<\max(I)\)), provided that the depth-\(1\) type of \(K=I^{\prime}\setminus I\) coincides with the left context \(\mathcal{L}\). The set \(\mathscr{E}\) provides similar information for the intervals \(I^{\prime}\) that overlap \(I\) to the right and such that \(\operatorname{type}^{1}(I^{\prime}\setminus I)=\mathcal{R}\). As a special case, we observe that when \(\mathcal{L}=\mathcal{R}=\varepsilon\), one could let \(I^{\prime}\) range over prefixes or suffixes of \(I\), thus determining which subformulas hold at prefixes and suffixes of the reference interval \(I\). In particular, this can be used to determine the evaluation of \(\varphi\) at \(I\), and generalizes the second tentative definition of type discussed earlier.
Below, we prove the analogues of Lemmas 18 and 19 for depth-\(2\) types.
**Lemma 21**.: _There is a composition operator \(\cdot\) on depth-\(2\)\(\varphi\)-types that is computable in polynomial time and such that, for all contexts \(\mathcal{L},\mathcal{L}^{\prime},\mathcal{R},\mathcal{R}^{\prime}\) and for all pairs of adjacent intervals \(I,I^{\prime}\), if \(\mathcal{L}\cdot\operatorname{type}^{1}(I)=\mathcal{L}^{\prime}\) and \(\operatorname{type}^{1}(I^{\prime})\cdot\mathcal{R}^{\prime}=\mathcal{R}\), then_
\[\operatorname{type}^{2}_{\varphi,\mathcal{L},\mathcal{R}}(I)\cdot \operatorname{type}^{2}_{\varphi,\mathcal{L}^{\prime},\mathcal{R}^{\prime}}(I ^{\prime})\ =\ \operatorname{type}^{2}_{\varphi,\mathcal{L},\mathcal{R}^{\prime}}(I \cup I^{\prime})\.\]
Proof.: For the sake of brevity, let \(\mathscr{T}=\operatorname{type}^{2}_{\varphi,\mathcal{L},\mathcal{R}}(I)\) and \(\mathscr{T}^{\prime}=\operatorname{type}^{2}_{\varphi,\mathcal{L}^{\prime}, \mathcal{R}^{\prime}}(I^{\prime})\), where \(\mathscr{T}=(\mathcal{L},\mathcal{R},\mathcal{T},\mathscr{B},\mathscr{E})\), \(\mathscr{T}^{\prime}=(\mathcal{L}^{\prime},\mathcal{R}^{\prime},\mathcal{T}^{ \prime},\mathscr{B}^{\prime},\mathscr{E}^{\prime})\), \(\mathcal{L}\cdot\mathcal{T}=\mathcal{L}^{\prime}\), and \(\mathcal{T}^{\prime}\cdot\mathcal{R}^{\prime}=\mathcal{R}\). We define the composition as
\[\mathscr{T}\cdot\mathscr{T}^{\prime}\ =\ (\mathcal{L},\,\mathcal{R}^{\prime},\, \mathcal{T}\cdot\mathcal{T}^{\prime},\,\mathscr{B}\cup\mathscr{B}^{\prime} \cup\mathscr{B}_{\star},\,\mathscr{E}\cup\mathscr{E}^{\prime}\cup\mathscr{E}_ {\star})\]
where
\[\mathscr{B}_{\star}\ =\ \{\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\,:\,\mathcal{L}^{\prime}\vDash\alpha\}\] \[\mathscr{E}_{\star}\ =\ \{\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\,:\,\mathcal{R}\vDash\alpha\}\]
(see Figure 6).
Note that, thanks to Lemma 19, the composition \(\mathscr{T}\cdot\mathscr{T}^{\prime}\) can be computed in polynomial time given the types \(\mathscr{T}\) and \(\mathscr{T}^{\prime}\).
Below, we prove that the defined composition \(\mathscr{T}\cdot\mathscr{T}^{\prime}\) is correct, namely, it coincides with \(\operatorname{type}^{2}_{\varphi,\mathcal{L},\mathcal{R}^{\prime}}(I\cup I^{ \prime})\). The latter type is of the form \((\mathcal{L},\mathcal{R}^{\prime},\mathcal{T}^{\prime\prime},\mathscr{B}^{ \prime\prime},\mathscr{E}^{\prime\prime})\), so the first two components of \(\mathscr{T}\cdot\mathscr{T}^{\prime}\) are clearly correct. It remains to prove that \(\mathcal{T}^{\prime\prime}=\mathcal{T}\cdot\mathcal{T}^{\prime}\), \(\mathscr{B}^{\prime\prime}=\mathscr{B}\cup\mathscr{B}^{\prime}\cup\mathscr{B}_ {\star}\), and \(\mathscr{E}^{\prime\prime}=\mathscr{E}\cup\mathscr{E}^{\prime}\cup\mathscr{E}_ {\star}\). By Lemma 18 we have \(\mathcal{T}^{\prime\prime}=\operatorname{type}^{1}(I\cup I^{\prime})= \operatorname{type}^{1}(I)\cdot\operatorname{type}^{1}(I^{\prime})=\mathcal{T }\cdot\mathcal{T}^{\prime}\). Moreover, by Definition 20, \(\mathscr{B}^{\prime\prime}\) contains the formulas \(\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\) that satisfy one of the following conditions:
1. \(I^{\prime\prime}\!\vDash\!\alpha\), for some interval \(I^{\prime\prime}\) that overlaps \(I\) to the left (i.e., \(\min(I^{\prime\prime})\leq\min(I)\leq\max(I^{\prime\prime})<\max(I)\)) and such that \(\operatorname{type}^{1}(I^{\prime\prime}\smallsetminus I)=\mathcal{L}\).
Figure 5: Components of a depth-\(2\) type. Figure 6: Composition of depth-\(2\) types.
Letting \(K=I^{\prime\prime}\setminus I\) and \(J=I^{\prime\prime}\cap I\) and using Lemma 19, this condition is equivalent to \[\operatorname{type}^{1}(I^{\prime\prime})\ =\ \operatorname{type}^{1}(K)\cdot \operatorname{type}^{1}(J)\ =\ \mathcal{L}\cdot\operatorname{type}^{1}(J)\ \vdash\ \alpha\] and hence to \(\alpha\in\mathscr{B}\).
2. \(I^{\prime\prime}\vDash\alpha\), for some interval \(I^{\prime\prime}\) that has \(I\) as a suffix (i.e., \(\min(I^{\prime\prime})\leq\min(I)\leq\max(I^{\prime\prime})=\max(I)\)) and such that \(\operatorname{type}^{1}(I^{\prime\prime}\setminus I)=\mathcal{L}\). Letting \(K=I^{\prime\prime}\setminus I\) and using Lemma 19, together with the assumptions about the contexts \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), this condition turns out to be equivalent to \[\operatorname{type}^{1}(I^{\prime\prime})\ =\ \operatorname{type}^{1}(K)\cdot \mathcal{T}\ =\ \mathcal{L}\cdot\mathcal{T}\ =\ \mathcal{L}^{\prime}\ \vdash\ \alpha\] and hence to \(\alpha\in\mathscr{B}_{\star}\).
3. \(I^{\prime\prime}\vDash\alpha\), for some interval \(I^{\prime\prime}\) that contains \(I\), overlaps \(I^{\prime}\) to the left (i.e., \(\min(I^{\prime\prime})\leq\min(I)\leq\max(I)<\max(I^{\prime\prime})<\max(I^{ \prime})\)), and such that \(\operatorname{type}^{1}(I^{\prime\prime}\setminus(I\cup I^{\prime}))=\mathcal{L}\). Letting \(K=I^{\prime\prime}\setminus(I\cup I^{\prime})\) and \(J=I^{\prime\prime}\cap I^{\prime}\), and using again Lemma 19 and the assumptions about the contexts \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), this condition turns out to be equivalent to \[\operatorname{type}^{1}(I^{\prime\prime})\ =\ \operatorname{type}^{1}(K)\cdot\mathcal{T}\cdot\operatorname{type}^{1}(J)\ =\ \mathcal{L}\cdot\mathcal{T}\cdot\operatorname{type}^{1}(J)\ =\ \mathcal{L}^{\prime}\cdot\operatorname{type}^{1}(J)\ \vdash\ \alpha\] and hence to \(\alpha\in\mathscr{B}^{\prime}\).
We have just shown that \(\mathscr{B}^{\prime\prime}=\mathscr{B}\cup\mathscr{B}^{\prime}\cup\mathscr{B}_ {\star}\). One proves \(\mathscr{E}^{\prime\prime}=\mathscr{E}\cup\mathscr{E}^{\prime}\cup\mathscr{E}_ {\star}\) using symmetric arguments.
**Lemma 22**.: _For all intervals \(I\) and \(J\) such that \(\operatorname{type}^{2}_{\varphi,\varepsilon,\varepsilon}(I)=\operatorname{ type}^{2}_{\varphi,\varepsilon,\varepsilon}(J)\) and for every \(\operatorname{\sf BE}_{\pi}\) formula \(\varphi\) of depth at most \(2\), we have \(\mathcal{S},I\vDash\varphi\) iff \(\mathcal{S},J\vDash\varphi\). Moreover, whether \(\mathcal{S},I\vDash\varphi\) holds can be decided in polynomial time from the given type \(\operatorname{type}^{2}_{\varphi,\varepsilon,\varepsilon}(I)\)._
Proof.: Let \(\varphi\) be a formula of depth at most \(2\) and let \(I\) be an interval with depth-\(2\) type \((\varepsilon,\varepsilon,\mathcal{T},\mathscr{B},\mathscr{E})\), where both left and right contexts are \(\varepsilon\).
If \(\varphi\) has depth smaller than \(2\), then by Lemma 19 the component \(\mathcal{T}=\operatorname{type}^{1}(I)\) already determines (effectively in polynomial time) whether \(\mathcal{S},I\vDash\varphi\).
Otherwise, if \(\varphi\) has depth \(2\) and is of the form \(\langle B\rangle\,\alpha\), then \(\mathcal{S},I\vDash\langle B\rangle\,\alpha\) iff there is a proper prefix \(J\) of \(I\) such that \(\mathcal{S},J\vDash\alpha\). Since \(\alpha\in\operatorname{Depth}^{\leq 1}(\varphi)\), the latter condition is equivalent to \(\varepsilon\cdot\operatorname{type}^{1}(J)\vDash\alpha\), and hence \(\mathcal{S},I\vDash\langle B\rangle\,\alpha\) iff \(\alpha\in\mathscr{B}\). The case of \(\varphi=\langle E\rangle\,\alpha\) is similar, but uses the component \(\mathscr{E}\).
Finally, Boolean combinations of the previous formulas are evaluated homomorphically.
### Satisfiability procedure
As a warm-up, let us first describe the satisfiability procedure for a formula of depth at most \(2\); later we will generalize this to a formula in shallow normal form.
Let us fix a \(\operatorname{\sf BE}_{\pi}\) formula \(\psi\) of depth at most \(2\). Deciding satisfiability of \(\psi\) can be done in polynomial space, by reducing to non-emptiness of a language recognized by a suitable finite state automaton. To formalize the construction of the automaton from the given formula \(\psi\), it is convenient to encode an interval structure \(\mathcal{S}=(\mathbb{I}(N),\sigma)\) over the signature \(\Sigma\) by the finite word \(w_{\mathcal{S}}=a_{0}\ldots a_{\max(N)}\) over the alphabet \(\mathfrak{p}(\Sigma)\), where \(a_{i}=\sigma(i)\) for all \(i\in N\) (recall that \(N\) is a finite prefix of the natural numbers).
**Lemma 23**.: _Given a \(\operatorname{\sf BE}_{\pi}\) formula \(\psi\) of depth at most \(2\), one can compute in polynomial space3 a finite state automaton \(\mathcal{A}_{\psi}\) that accepts all and only the encodings \(w_{\mathcal{S}}\) of the interval structures \(\mathcal{S}\) such that \(\mathcal{S},I\vDash\psi\), where \(I\) is the largest interval of \(\mathcal{S}\)._
Footnote 3: By computing an automaton in polynomial space we mean that its initial states, final states, and transitions can be enumerated in polynomial space. The enumeration procedures can be used within other algorithms of similar complexity, e.g., to test emptiness of the recognized language.
_Proof sketch._: The construction of \(\mathcal{A}_{\psi}\) is quite standard, as it is the cascade product of three automata:
1. a deterministic automaton that computes in its states the depth-\(1\) types of intervals corresponding to prefixes of the input,
2. a co-deterministic automaton that computes in its states the depth-\(1\) types of intervals corresponding to suffixes of the input,
3. a deterministic automaton that computes the depth-\(2\)\(\psi\)-type of prefixes of the input, with a constant dummy left context and right contexts given by the states of the previous automaton.
Transitions of these automata are defined using compositional properties of depth-\(1\) and depth-\(2\) types (Lemmas 18 and 21).
Below, we provide full details for the construction of \(\mathcal{A}_{\psi}\). Like we have done for depth-\(1\) types, we introduce _dummy depth-\(2\) types_ for abstracting an empty interval: these are tuples of the form \((\mathcal{L},\mathcal{R},\varepsilon,\varnothing,\varnothing)\), where \(\mathcal{L}\) and \(\mathcal{R}\) are left and right contexts and \(\varepsilon\) is the dummy depth-\(1\) type (of course, there is exactly one dummy depth-\(2\) type for each choice of the left and right contexts). As usual, a dummy type behaves as an identity w.r.t. composition with a depth-\(2\) type, provided the contexts are compatible. We shall also use a generalization of the relation \(\vdash\) that works with depth-\(2\) types. Precisely, given a depth-\(2\) type \(\mathscr{T}\), we write \(\mathscr{T}\vdash\psi\) whenever \(\mathcal{S},I\vdash\psi\) for some (or, equally, for every) interval \(I\) such that \(\operatorname{type}^{2}_{\psi,\varepsilon,\varepsilon}(I)=\mathscr{T}\). The automaton \(\mathcal{A}_{\psi}=(A,Q,I,F,T)\) is then defined as follows:
* the alphabet \(A\) consists of subsets of the signature \(\Sigma\);
* the state space \(Q\) consists of triples \(q=(\mathcal{L},\mathcal{R},\mathscr{T})\), where \(\mathcal{L},\mathcal{R}\) are depth-\(1\) types and \(\mathscr{T}\) is a depth-\(2\)\(\psi\)-type with \(\varepsilon\) as left context and \(\mathcal{R}\) as right context;
* the set \(I\) of initial states consists of triples \(q=(\mathcal{L},\mathcal{R},\mathscr{T})\), where \(\mathcal{L}=\varepsilon\) is the dummy depth-\(1\) type and \(\mathscr{T}=(\mathcal{L},\mathcal{R},\varepsilon,\varnothing,\varnothing)\) is a dummy depth-\(2\) type;
* the set \(F\) of final states consists of triples \(q=(\mathcal{L},\mathcal{R},\mathscr{T})\), with \(\mathcal{R}=\varepsilon\) and \(\mathscr{T}\vdash\psi\);
* the set \(T\) of transition rules consists of the triples \((q,a,q^{\prime})\), with \(q=(\mathcal{L},\mathcal{R},\mathscr{T})\), \(a\subseteq\Sigma\), and \(q^{\prime}=(\mathcal{L}^{\prime},\mathcal{R}^{\prime},\mathscr{T}^{\prime})\), such that \(\mathcal{L}^{\prime}=\mathcal{L}\cdot\operatorname{type}^{1}(I_{a})\), \(\mathcal{R}=\operatorname{type}^{1}(I_{a})\cdot\mathcal{R}^{\prime}\), and \(\mathscr{T}^{\prime}=\mathscr{T}\cdot\operatorname{type}^{2}_{\psi,\mathcal{L },\mathcal{R}^{\prime}}(I_{a})\), where \(I_{a}\) denotes the singleton interval labelled by the set \(a\) of propositional letters.
It is worth noting that the automaton \(\mathcal{A}_{\psi}\) is unambiguous, namely, it admits at most one successful run on each input.
We now claim that, on every input \(w_{\mathcal{S}}=a_{0}\ldots a_{n-1}\), the only possible runs of \(\mathcal{A}_{\psi}\) that start and end in arbitrary states (not necessarily initial or final ones) are of the form
\[q_{0}\xrightarrow{a_{0}}q_{1}\xrightarrow{a_{1}}\ldots\ldots\xrightarrow{a _{n-1}}q_{n}\]
with \(q_{i}=(\mathcal{L}_{i},\mathcal{R}_{i},\mathscr{T}_{i})\) such that, for all \(i=0,\ldots,n\),
1. \(\mathcal{L}_{i}=\mathcal{L}_{0}\cdot\operatorname{type}^{1}([0,i-1])\),
2. \(\mathcal{R}_{i}=\operatorname{type}^{1}([i,n-1])\cdot\mathcal{R}_{n}\),
3. \(\mathscr{T}_{i}=\mathscr{T}_{0}\cdot\operatorname{type}^{2}_{\psi,\varepsilon,\mathcal{R}_{i}}([0,i-1])\).
Each of the above properties can be verified using a simple induction, either from smaller to larger \(i\)'s or vice versa (we omit the tedious details).
From the properties stated in items 1., 2., 3. and the definitions of initial and final states, it immediately follows that \(\mathcal{A}_{\psi}\) admits a successful run on \(w_{\mathcal{S}}\) if and only if \(\mathcal{S},[0,n-1]\vdash\psi\).
Finally, as for the complexity of constructing \(\mathcal{A}_{\psi}\), we recall from Lemmas 18 and 21 that depth-\(1\) and depth-\(2\) types can be enumerated in polynomial space, and can be composed in polynomial time. This implies that the initial states and the transitions of \(\mathcal{A}_{\psi}\) can be enumerated in polynomial space. To enumerate the final states, it suffices to test properties like \(\mathscr{T}\vdash\psi\), for a given depth-\(2\) type \(\mathscr{T}\). This can be done in polynomial time thanks to Lemma 22.
The fact that the automaton \(\mathcal{A}_{\psi}\) above can be constructed from \(\psi\) in polynomial space implies that (non-)emptiness of the recognized language can also be decided in polynomial space w.r.t. \(|\psi|\). In turn, this shows that the satisfiability of a \(\mathsf{BE}_{\pi}\) formula \(\psi\) of depth at most \(2\) can be decided in polynomial space.
To conclude the proof of Theorem 16 it remains to reduce the satisfiability problem for a \(\mathsf{BE}_{\pi}\) formula \(\psi\) in shallow normal form to the non-emptiness problem of an automaton \(\mathcal{A}_{\psi}\) that is computable from \(\psi\) in exponential space. For this, it suffices to recall that \(\psi\) must be of the form \(\varphi\wedge[G]\,\xi\), where both \(\varphi\) and \(\xi\) are \(\mathsf{BE}_{\pi}\) formulas of depth at most \(2\). One uses Lemma 23 to construct the automata \(\mathcal{A}_{\varphi}\) and \(\mathcal{A}_{\neg\xi}\), whose
languages contain encodings of models of \(\varphi\) and \(\neg\xi\), respectively. From \(\mathcal{A}_{\neg\xi}\), one can efficiently construct an automaton \(\mathcal{A}_{\langle G\rangle\neg\xi}\) recognizing the language of words with infixes accepted by \(\mathcal{A}_{\neg\xi}\), thus encoding models of \(\langle G\rangle\neg\xi\) (\(=\neg[G]\,\xi\)). One then complements the latter automaton to obtain an automaton \(\mathcal{A}_{[G]\,\xi}\) accepting the encodings of models of \([G]\,\xi\). Note that the latter step can be performed in exponential space in the size of \(\xi\), by using an online version of the classical subset construction. Finally, one computes the product of the automata \(\mathcal{A}_{\varphi}\) and \(\mathcal{A}_{[G]\,\xi}\), so as to recognize the language of encodings of models of \(\psi=\varphi\,\wedge\,[G]\,\xi\). It follows that non-emptiness of the latter language can be decided in exponential space w.r.t. the size of the original formula \(\psi\).
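The last two steps (complementation by an online subset construction followed by a product and an emptiness test) follow a generic automata-theoretic pattern. The sketch below is a self-contained toy version of that pattern (generic NFAs over a small alphabet, our own encoding; it is not the paper's actual construction and omits the whole type machinery): it explores pairs consisting of a state of the first automaton and a subset of states of the second on the fly, and reports whether some word is accepted by the first but not by the second.

```python
# Generic sketch (not the paper's construction): is L(A) \ L(B) non-empty?
# The complement of B is handled by a subset construction built on the fly.

class NFA:
    def __init__(self, alphabet, init, final, delta):
        self.alphabet = alphabet      # iterable of letters
        self.init = set(init)         # initial states
        self.final = set(final)       # final states
        self.delta = delta            # dict: (state, letter) -> set of states

def nonempty_difference(A, B):
    """Search over pairs (q, S): q is a state of A, S is the set of states
    B can be in after reading the same word (online subset construction)."""
    start = [(q, frozenset(B.init)) for q in A.init]
    seen, stack = set(start), list(start)
    while stack:
        q, S = stack.pop()
        if q in A.final and not (S & B.final):
            return True               # some word is in L(A) but not in L(B)
        for a in A.alphabet:
            for q2 in A.delta.get((q, a), ()):
                S2 = frozenset(p2 for p in S for p2 in B.delta.get((p, a), ()))
                if (q2, S2) not in seen:
                    seen.add((q2, S2))
                    stack.append((q2, S2))
    return False

# Toy usage: A accepts words containing an 'a'; B accepts words of even length.
A = NFA('ab', {0}, {1}, {(0, 'a'): {1}, (0, 'b'): {0}, (1, 'a'): {1}, (1, 'b'): {1}})
B = NFA('ab', {0}, {0}, {(0, 'a'): {1}, (0, 'b'): {1}, (1, 'a'): {0}, (1, 'b'): {0}})
print(nonempty_difference(A, B))      # True, e.g. witnessed by the word "a"
```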
## 5 Conclusions
We have settled the question of whether the logic \(\mathsf{BE}\), interpreted over homogeneous interval structures, admits an elementary satisfiability problem. We have actually answered the question by giving an optimal ExpSpace decision procedure (ExpSpace-hardness was shown in [3]). As a by-product, we have also devised a normal form for \(\mathsf{BE}\) formulas that enforces a small bound on the number of nested modalities, while preserving satisfiability. Quite surprisingly, such a normal form can be computed in polynomial time from arbitrary \(\mathsf{BE}\) formulas, using a series of rewriting steps reminiscent of a quantifier elimination technique à la Scott.
As for future work, one could try to see whether similar techniques are applicable to extensions of \(\mathsf{BE}\) with modalities based on other Allen interval relations (e.g., overlap, meet, the inverses of the prefix and suffix relations, etc.).
|
2307.08482
|
Shape transitions of RBC under oscillatory flows in microchannels
|
We investigate the dynamics of the Red Blood Cell (RBC) in microfluidic
channels under oscillatory flows. The simulations employ a hybrid
continuum-particle approach, in which the cell membrane and cytosol fluid are
modeled using Dissipative Particle Dynamics (DPD) method, and the blood plasma
is modeled as an incompressible fluid via the Immersed Boundary Method (IBM).
The goal of this study is to understand the morphological modes of the RBC
under transient shear rates. Our simulations show good agreement with previous
experimental and computational works. Our findings demonstrate the ability to
control the transient dynamics of the RBC by adjusting the oscillatory waveform
at the microchannel inlet. These results suggest that oscillatory flows can be
used to manipulate cells, which may have implications for cell separation and
identification of pathological cells.
|
Lahcen Akerkouch, Trung Bao Le
|
2023-07-13T01:00:05Z
|
http://arxiv.org/abs/2307.08482v1
|
# Shape transitions of RBC under oscillatory flows in microchannels
###### Abstract
We investigate the dynamics of the Red Blood Cell (RBC) in microfluidic channels under oscillatory flows. The simulations employ a hybrid continuum-particle approach, in which the cell membrane and cytosol fluid are modeled using Dissipative Particle Dynamics (DPD) method, and the blood plasma is modeled as an incompressible fluid via the Immersed Boundary Method (IBM). The goal of this study is to understand the morphological modes of the RBC under transient shear rates. Our simulations show good agreement with previous experimental and computational works. Our findings demonstrate the ability to control the transient dynamics of the RBC by adjusting the oscillatory waveform at the microchannel inlet. These results suggest that oscillatory flows can be used to manipulate cells, which may have implications for cell separation and identification of pathological cells.
+
Footnote †: preprint: AIP/123-QED
## Introduction
Extensive research has been conducted in the last few decades on the morphological changes of Red Blood Cells (RBCs) in fluid flows, due to their importance in blood pathology [1; 2; 3]. It has been shown that the response of the RBC membrane to blood plasma dynamics can affect the overall patterns of microvascular blood flows [4; 5; 6]. Despite a substantial body of literature, the dynamics of RBCs remain a significant challenge to study due to the complexity of the various response modes, which result from the interaction of the suspended cellular membrane with the shear flow [7]. There are several factors that can affect the dynamics of RBCs, such as the stiffness of the membrane [8], the shear rate \(\left(\dot{\gamma}\right)\) [9; 10], and the viscosity contrast \(\left(\lambda\right)\) between the blood plasma and cytosol [10], among other factors. As a result, the RBC deformation process in shear flow is not well understood, especially under time-dependent shear rates [11; 12].
In free shear flows with constant shear rate \(\dot{\gamma}\), the shear strength [13] is the controlling parameter of the RBC dynamics. The shape of RBCs becomes increasingly complex (more lobes) as the shear rate increases. In the range of shear rates (\(\bar{\dot{\gamma}}\)) from 10 \(s^{-1}\) to 2,000 \(s^{-1}\), the dynamics of RBCs can be classified into three main regions [9]: (i) tumbling at weak shear rate \(\left(\bar{\dot{\gamma}}<10\ s^{-1}\right)\); (ii) circular/elliptical rims \(\left(10\ s^{-1}<\bar{\dot{\gamma}}<400\ s^{-1}\right)\); and (iii) multilobes (\(400\ s^{-1}<\bar{\dot{\gamma}}<2,000\ s^{-1}\)). In the tumbling region, the deformation is minimal and reversible, which allows the RBCs to maintain their biconcave discoid shape. As the shear rate increases to \(40\ s^{-1}\), the percentage of discocytes decreases and stomatocytes emerge. Rolling and tumbling stomatocytes [10] appear at \(\dot{\gamma}=150\ s^{-1}\) and \(250\ s^{-1}\), respectively. This pattern persists up until \(\bar{\dot{\gamma}}=400\ s^{-1}\), when the stomatocytes assume a shape with an elliptical rim. In the range \(400\ s^{-1}<\bar{\dot{\gamma}}<2,000\ s^{-1}\), RBCs with large lobes on their surface, which are referred to as trilobes or hexalobes, emerge.
Studies of RBC dynamics in microchannels have shown that the RBC can transition from its biconcave discoid shape to different morphologies [5; 14; 15] under specific com
binations (state diagram) of viscosity contrast, shear rate and channel confinement \((\chi)\) [6; 10; 15]. The state diagram has revealed two main categories of RBC morphological shapes: \((i)\) symmetrical; and \((ii)\) asymmetrical types [4; 16; 17]. The symmetrical type contains three modes [18]: \((a)\) bullet; \((b)\) croissant (in rectangular channels); and \((c)\) parachute (in circular channels) shapes, while the asymmetrical type includes [6; 19]: \((a)\) slipper; \((b)\) multilobes; \((c)\) trilobes; and \((d)\) hexalobes shapes. The shape transition in the symmetrical type has been shown to reach a stationary shape (either bullet, croissant, or parachute). However, it is still not fully clear whether the asymmetrical shapes are stable or just transient states [14; 20]. It has been shown that the shape transition in the symmetric type depends on the bulk flow velocity, the channel confinement, and the shear rate (the capillary number - \(Ca\)) [21]. In the asymmetric mode [19], the shape transition mostly depends on the flow lag, which is the difference between the translation velocity of the RBC and the velocity of the plasma. In brief, it is unclear how the asymmetrical shapes emerge from the biconcave discoid shape.
Two shapes are the most frequently observed: \((i)\) the croissant shape (symmetrical) [5]; and \((ii)\) the slipper shape (asymmetrical). In particular, the slipper shape is characterized by the tank-treading motion of the cell membrane, which is essentially a self-rotation of the membrane around its own center of mass during the RBC propagation [5; 22]. Experimental and computational studies have shown that these morphological shapes might result in distinct flow structures of blood plasma in the vicinity of the RBC [5]. For instance, there exists a closed vortex downstream of the RBC when the slipper shape emerges [23]. Such a vortex is absent during the croissant shape. To our knowledge, there has been no systematic effort in understanding the emergence of the extracellular flow patterns as the morphological shape of the RBC changes.
Recently, oscillatory flow (time-dependent shear rate) has been shown to be a promising technique for cell separation because cell deformation is irreversible under time-dependent shear rates [11; 24]. Furthermore, oscillatory flows have been utilized to sort RBCs based on their size and deformability [11; 12]. Oscillatory flows can reduce the required travel distance of cells because they induce the lateral migration of cells over a short axial distance. This feature simplifies the design of microfluidic channels and thus improves the cell separation process [25]. However, the process of morphological transition, as the RBC responds to the time-dependent shear rate during this lateral migration, remains unclear. Therefore, it is necessary to investigate this process in detail.
In this work, we utilized our hybrid continuum-particle simulation methodology [26] to study the response of the RBC to a time-dependent shear rate. Our paper is organized as follows. First, a brief description of the numerical methods for simulating the blood plasma and the RBC is presented. Second, the obtained RBC dynamics are validated with experimental data under: \((i)\) stretching force; \((ii)\) constant shear rates (croissant and slipper shapes); \((iii)\) oscillatory shear rates. Third, we perform a parametric study where the shear rate waveform, the peak flow rate, and the initial position of the RBC were varied to induce a host of RBC morphological changes. Finally, the relationships between the RBC's shape and the extracellular flow patterns are reported as a basis for cell manipulation in future applications.
## II Methodology
### The idealized shape of the RBC
The idealized shape of the RBC membrane is given by a set of points with coordinates \((x,y,z)\) in \(3D\) space with the analytical equation [27]:
\[z=\pm D_{0}\sqrt{1-\frac{4(x^{2}+y^{2})}{D_{0}^{2}}}\left[a_{0}+a_{1}\frac{x ^{2}+y^{2}}{D_{0}^{2}}+a_{2}\frac{(x^{2}+y^{2})^{2}}{D_{0}^{4}}\right], \tag{1}\]
where the parameters are chosen in this work as \(D_{0}=7.82\ \mu m\) (equilibrium diameter), \(a_{0}=0.00518\), \(a_{1}=2.0026\), and \(a_{2}=-4.491\). Note that the idealized shape will be used as
the initial shape of the RBC membrane only. The membrane mechanics that governs the cellular deformation under loadings will be described in the following sections.
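To make the construction of the initial membrane concrete, a minimal Python sketch of Equation (1) is given below. It is an illustration written for this text rather than part of the actual solver, and the sampling resolutions `n_r` and `n_theta` are arbitrary choices.

```python
import numpy as np

D0 = 7.82                       # equilibrium diameter [micrometres], from the text
a0, a1, a2 = 0.00518, 2.0026, -4.491

def rbc_surface(n_r=40, n_theta=80):
    """Sample the upper and lower sheets of the biconcave surface of Equation (1)."""
    rho = np.linspace(0.0, D0 / 2.0, n_r)          # radial coordinate sqrt(x^2 + y^2)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    R, T = np.meshgrid(rho, theta)
    x, y = R * np.cos(T), R * np.sin(T)
    r2 = (x**2 + y**2) / D0**2                     # (x^2 + y^2) / D0^2
    z = D0 * np.sqrt(np.maximum(1.0 - 4.0 * r2, 0.0)) * (a0 + a1 * r2 + a2 * r2**2)
    return x, y, z, -z                             # upper and lower sheets

x, y, z_up, z_lo = rbc_surface()
print("maximum half-thickness of the idealized cell [um]:", z_up.max())
```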
### RBC membrane model
As the idealized surface of the RBC membrane is known precisely according to Equation (1), a triangulation procedure is carried out to mimic the distribution of the spectrin links on the membrane as the edges of the triangular elements (links) [27]. A network of non-linear springs is generated for each edge to model the dynamics of the spectrin links [26; 27; 28]. At each vertex \(i\), the dynamics of the links are determined by the membrane force \(\mathbf{F}_{i}^{membrane}\), which is linked to the Helmholtz free energy \(V_{i}\) at the same vertex \(i\) through the following relationship:
\[\mathbf{F}_{i}^{membrane}=-\frac{\partial V_{i}}{\partial\mathbf{r_{i}}}, \tag{2}\]
where \(\mathbf{r_{i}}\) is the position vector of the vertex \(i\).
The potential \(V(\{\mathbf{r}_{i}\})\) incorporates the physical properties of the lipid bilayer: \((a)\) in-plane stretching; \((b)\) bending stiffness; \((c)\) area and volume conservation; and \((d)\) membrane viscosity
\[V(\{\mathbf{r}_{i}\})=V_{in-plane}+V_{bending}+V_{area}+V_{volume} \tag{3}\]
#### ii.2.1 Potential energy models
The in-plane free energy term \(V_{in-plane}\) includes the elastic energy stored in the membrane modeled using the nonlinear Wormlike Chain and power \((WLC-POW)\) spring model. Here, the \(WLC-POW\) potential is computed for each link \(j\) formed by two vertices as,
\[V_{in-plane}=\sum_{j\in 1\ldots N_{s}}\left[U_{WLC}(l_{j})+U_{POW}(l_{j})\right], \tag{4}\]
where \(N_{s}\) is the total number of links forming the triangulated mesh.
The attractive \(WLC\) potential \(U_{WLC}(l_{j})\) for an individual link \(j\) is expressed as:
\[U_{WLC}=\frac{k_{B}Tl_{max}}{4p}\frac{3x^{2}-2x^{3}}{1-x}, \tag{5}\]
where the value \(x=\frac{l_{j}}{l_{max}}\) represents the spring deformation, in which \(l_{j}\), \(l_{max}\), \(p\), \(k_{B}\), and \(T\) are the length of the spring \(j\), the maximum allowable length of the links, the persistence length, Boltzmann's constant, and the temperature, respectively.
The repulsive force, described by the energy potential \(U_{POW}(l_{j})\), takes the form of a power function \((POW)\). The separation distance \(l_{j}\) is a determining factor in the calculation of \(U_{POW}\), which is given by:
\[U_{POW}(l_{j})=\frac{k_{p}}{(m-1)l_{j}^{m-1}}\ \ \ m>0\ \mbox{and}\ m\neq 1, \tag{6}\]
where \(k_{p}\) is the \(POW\) force coefficient. The value \(m=2\) is used for the exponent [27].
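A small Python sketch of the WLC-POW link model of Equations (5)-(6) follows. It is illustrative only: the material parameters are placeholders (not the calibrated values of Tables 2-3), and the equilibrium condition used to set \(k_{p}\) is an assumption consistent with \(l_{max}=2.2\,l_{0}\) quoted in Section II.6.

```python
import numpy as np

# Placeholder material parameters (SI units); illustrative, not the calibrated values.
kBT, p, l_max, m = 4.28e-21, 1.0e-9, 2.0e-7, 2

def U_wlc(l):
    """Attractive WLC potential of Equation (5)."""
    x = l / l_max
    return kBT * l_max / (4.0 * p) * (3.0 * x**2 - 2.0 * x**3) / (1.0 - x)

def dU(f, l, h=1e-12):
    """Central-difference derivative dU/dl."""
    return (f(l + h) - f(l - h)) / (2.0 * h)

# Assumption: k_p is chosen so the WLC and POW contributions balance at the
# equilibrium length l0, i.e. dU_wlc/dl = k_p / l0^m (with m = 2).
l0 = l_max / 2.2
k_p = dU(U_wlc, l0) * l0**m

def U_pow(l):
    """Repulsive POW potential of Equation (6)."""
    return k_p / ((m - 1) * l**(m - 1))

def link_force(l):
    """Net link force from Equation (2): F = -d(U_WLC + U_POW)/dl."""
    return -(dU(U_wlc, l) + dU(U_pow, l))

print(link_force(l0), link_force(1.2 * l0))   # ~0 at l0, restoring (negative) beyond it
```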
The bending energy \(V_{bending}\) accounting for the membrane resistance to bending is defined as,
\[V_{bending}=\sum_{j\in 1\ldots N_{s}}k_{b}[1-cos(\theta_{j}-\theta_{0})], \tag{7}\]
where \(k_{b}\), \(\theta_{0}\), and \(\theta_{j}\) are the bending rigidity, the spontaneous angle, and the instantaneous angle between the normal vectors of two adjacent triangles sharing a common edge (link) \(j\), respectively.
The area and volume conservation constraints account for the incompressibility of the
lipid bilayer and the inner cytosol, respectively. They are defined as:
\[\begin{split} V_{area}&=\frac{k_{a}(A-A_{0}^{tot})^{2} }{2A_{0}^{tot}}+\sum_{k\in 1\ldots N_{t}}\frac{k_{d}(A_{k}-A_{0})}{2A_{0}},\\ V_{volume}&=\frac{k_{v}(V-V_{0}^{tot})^{2}}{2V_{0}^{ tot}},\end{split} \tag{8}\]
where \(N_{t}\), \(k_{a}\), \(k_{d}\), and \(k_{v}\) are the total number of triangles and the global-area, local-area, and volume constraint coefficients, respectively. \(A_{k}\) and \(A_{0}\) are the instantaneous area of the \(k^{th}\) triangle (element) and the initial value of the average area per element. \(A_{0}^{tot}\) and \(V_{0}^{tot}\) are the RBC's equilibrium total area and volume, respectively. \(A\) and \(V\) are the instantaneous total area and total volume of the RBC. The detailed procedure to evaluate the values of \(A\) and \(V\) for individual elements was reported in our previous work [26].
Equation (2) is used to calculate the precise nodal forces for each potential energy \(V\) in Equations (4) - (8)[26; 27]. The internal force \(\mathbf{F}_{i}^{membrane}\) contribution from \(i^{th}\) vertex can be computed by summing all the nodal forces as:
\[\mathbf{F}_{i}^{membrane}=\mathbf{F}_{i}^{WLC}+\mathbf{F}_{i}^{POW}+\mathbf{F} _{i}^{Bending}+\mathbf{F}_{i}^{Area_{g}}+\mathbf{F}_{i}^{Area_{loc}}+\mathbf{F }_{i}^{Volume}. \tag{9}\]
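The bending and global area/volume terms of Equations (7)-(8) can be evaluated per element pair as in the following sketch; the helper routines are hypothetical and only the global penalty terms of Equation (8) are included.

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of the triangle (p0, p1, p2)."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def bending_energy_pair(pi, pj, pk, pl, k_b, theta0):
    """Equation (7) for triangles (pi, pj, pk) and (pj, pi, pl) sharing edge pi-pj.
    The angle is taken here as the unsigned angle between the two normals."""
    n1 = triangle_normal(pi, pj, pk)
    n2 = triangle_normal(pj, pi, pl)
    theta = np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))
    return k_b * (1.0 - np.cos(theta - theta0))

def area_volume_energy(A, A0_tot, V, V0_tot, k_a, k_v):
    """Global area and volume penalties of Equation (8) (local-area sum omitted)."""
    return (k_a * (A - A0_tot)**2 / (2.0 * A0_tot)
            + k_v * (V - V0_tot)**2 / (2.0 * V0_tot))

# Toy usage: two coplanar triangles give zero bending energy for theta0 = 0.
pi, pj = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
pk, pl = np.array([0.5, 1.0, 0.0]), np.array([0.5, -1.0, 0.0])
print(bending_energy_pair(pi, pj, pk, pl, k_b=1.0, theta0=0.0))
```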
#### ii.1.2 Cellular membrane/cytoskeleton interaction
To account for the interactions between the cytoskeleton and the lipid bilayer, the bilayer-cytoskeletal interactions force \(\mathbf{F}^{E}\) was incorporated into the total RBC membrane forces [29]. In particular, \(\mathbf{F}^{E}\) is applied when the distance between two membrane triangles with opposite normal vectors is less than the minimal activation distance \(d_{a}=0.2\ \mu m\). The force \(\mathbf{F}^{E}\) is applied equally to all the vertices \((i=1,2\) and \(3)\) for each of the two elements. The bilayer-cytoskeletal interactions force is given by:
\[\mathbf{F}_{i}^{E}=k_{bs}\ \mathbf{n}, \tag{10}\]
where the bilayer-cytoskeletal stiffness \((k_{bs}=4.1124\ pN/\mu m)\) is assumed to be of the same order as that of the membrane spectrin network [29], and \(\mathbf{n}\) is the normal vector of the triangle.
### Modeling membrane-cytosol interactions
The interaction between the membrane and the cytosol is modeled using the Dissipative Particle Dynamics (DPD) method. DPD is a microscopic simulation technique widely used to model the flow of complex fluids, in which the flow is described as a group of clustered interacting particles moving as soft lumps of fluid according to the Lagrangian approach [27]. In this work, the cytosol within the RBC is modeled using a set of randomly distributed DPD particles (\(N_{f}\)) that fill the internal volume of the cell [27; 28].
Due to the different nature of the interactions, the composition of the total force \(\mathbf{F_{i}}\) of each particle depends on the nature of the particle \(i\) (either a membrane or a cytosol particle). In general, each DPD particle \(i\) interacts with surrounding particles \(j\) within a cutoff radius \(r_{c}\) through three pairwise additive forces: \((a)\) the conservative force \(\mathbf{F}_{ij}^{C}\); \((b)\) the dissipative force \(\mathbf{F}_{ij}^{D}\); and \((c)\) the random force \(\mathbf{F}_{ij}^{R}\). The relative position vector between the particles \(i\) and \(j\) and related terms are given by: \(\mathbf{r_{ij}}=\mathbf{r_{i}}-\mathbf{r_{j}}\), the distance \(r_{ij}=|\mathbf{r_{ij}}|\), and the unit vector \(\mathbf{\hat{r}}_{ij}=\frac{\mathbf{r_{ij}}}{r_{ij}}\). Also, \(\mathbf{v}_{ij}=\mathbf{v}_{i}-\mathbf{v}_{j}\) is the relative velocity between the particles \(i\) and \(j\) with velocities \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\).
For a DPD particle \(i\) of the cytosol fluid, the total force \(\mathbf{F_{i}}\) is:
\[\mathbf{F_{i}}=\sum_{j\neq i}\mathbf{F}_{ij}^{C}+\mathbf{F}_{ij}^{D}+\mathbf{ F}_{ij}^{R}. \tag{11}\]
For the membrane particles, the total force \(\mathbf{F}_{i}\) acting on each membrane particle is given by the sum of the membrane force \(\mathbf{F}_{i}^{membrane}\), the bilayer-cytoskeletal interactions force
and the contributing forces from the surrounding DPD fluid particles from the cytosol:
\[\mathbf{F_{i}}=\mathbf{F}_{i}^{membrane}+\mathbf{F}_{i}^{E}+\sum_{j\neq i}\mathbf{F}_{ij}^{C}+ \mathbf{F}_{ij}^{D}+\mathbf{F}_{ij}^{R}. \tag{12}\]
The mathematical formulation of the conservative force \(\mathbf{F}_{ij}^{C}\), the dissipative force \(\mathbf{F}_{ij}^{D}\), and the random force \(\mathbf{F}_{ij}^{R}\) for the membrane and the cytosol fluid particles are explained below.
#### ii.1.1 The conservative force
The conservative force \(\mathbf{F}_{ij}^{C}\) is given by :
\[\mathbf{F}_{ij}^{C} =F_{ij}^{C}(r_{ij})\mathbf{\hat{r}}_{ij},\] \[\mathbf{F}^{C}(r_{ij}) =\begin{cases}a_{ij}\left(1-\frac{r_{ij}}{r_{c}}\right)&\text{for }\ r_{ ij}\leq\ r_{c},\\ 0&\text{for }\ r_{ij}>\ r_{c},\end{cases} \tag{13}\]
where \(a_{ij}=20\) is the conservative force coefficient between particles \(i\) and \(j\). Note that the particles \(i\) and \(j\) can each be either a membrane or a cytosol fluid particle. Thus, there are two types of interactions: i) cytosol fluid/fluid; and ii) membrane/fluid particle interactions [27].
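As a minimal illustration (in dimensionless DPD units, with an assumed cutoff \(r_{c}=1\)), the conservative pair force of Equation (13) can be sketched as:

```python
import numpy as np

a_ij, r_c = 20.0, 1.0   # a_ij = 20 follows the text; r_c is an assumed DPD cutoff

def f_conservative(r_i, r_j):
    """Soft repulsive DPD force of Equation (13), directed along r_hat_ij."""
    r_ij = r_i - r_j
    r = np.linalg.norm(r_ij)
    if r > r_c or r == 0.0:
        return np.zeros(3)
    return a_ij * (1.0 - r / r_c) * (r_ij / r)

print(f_conservative(np.array([0.2, 0.0, 0.0]), np.zeros(3)))
```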
#### ii.1.2 The dissipative force
The dissipative force \(\mathbf{F}_{ij}^{D}\) for the membrane particles is computed as:
\[\mathbf{F}_{ij}^{D}=-\Gamma^{T}\mathbf{v}_{ij}-\Gamma^{C}\left(\mathbf{v}_{ij} \cdot\mathbf{\hat{r}}_{ij}\right)\mathbf{\hat{r}}_{ij}. \tag{14}\]
The membrane viscosity is a function of both dissipative parameters, \(\Gamma^{T}\) and \(\Gamma^{C}\). The superscripts \(T\) and \(C\) denote the translational and central components. Here, \(\Gamma^{T}\) is responsible for a large portion of the membrane viscosity in comparison to \(\Gamma^{C}\). In addition,
\(\Gamma^{C}\) is assumed to be equal to one third of \(\Gamma^{T}\) in Equation (14) [27]. Consequently, these parameters relate to the physical viscosity of the membrane \(\eta_{m}\) as:
\[\begin{cases}\eta_{m}=\sqrt{3}\Gamma^{T}+\dfrac{\sqrt{3}\Gamma^{C}}{4},\\ \quad\Gamma^{C}=\dfrac{\Gamma^{T}}{3}.\end{cases} \tag{15}\]
Hence, the dissipative force \(\mathbf{F}_{ij}^{D}\) of the membrane particles can be expressed as:
\[\mathbf{F}_{ij}^{D}=-\dfrac{12}{13\sqrt{3}}\eta_{m}\mathbf{v}_{ij}-\dfrac{4}{13 \sqrt{3}}\eta_{m}\left(\mathbf{v}_{ij}\cdot\mathbf{\hat{r}}_{ij}\right)\mathbf{ \hat{r}}_{ij}. \tag{16}\]
The dissipative force \(\mathbf{F}_{ij}^{D}\) for the cytosol fluid particles is defined as:
\[\mathbf{F}_{ij}^{D}=-\gamma\omega^{D}(r_{ij})(\mathbf{v}_{ij}\cdot\mathbf{ \hat{r}}_{ij})\mathbf{\hat{r}}_{ij}, \tag{17}\]
where \(\gamma\) is a constant coefficient defining the strength of the dissipative force. The weight functions \(\omega^{D}(r_{ij})\) and \(\omega^{R}(r_{ij})\) are given by:
\[\omega^{D}(r_{ij})=\left[\omega^{R}(r_{ij})\right]^{2}, \tag{18}\]
\[\omega^{R}(r_{ij})=\begin{cases}\left(1-\dfrac{r_{ij}}{r_{c}}\right)^{s}& \text{for }\ r_{ij}\leq\ r_{c},\\ 0&\text{for }\ r_{ij}>\ r_{c},\end{cases} \tag{19}\]
with \(s=1\) following the original DPD method [27]. However, other works revealed that decreasing this parameter to \(s=0.5\) or \(0.25\) increases the viscosity of the DPD fluid [30]. Here, the particle \(i\) represents a fluid particle, while the particle \(j\) can be a fluid or a membrane particle within the cut-off radius \(r_{c}\).
#### ii.2.3 The random force
Using the assumptions in Equation (15), the random force for membrane particles can be simplified as:
\[\mathbf{F}_{ij}^{R}=\sqrt{2k_{B}T}\left(2\sqrt{\frac{2\sqrt{3}}{13}\eta_{m}}\ dW_{ij}^{S}\right)\mathbf{\hat{r}}_{ij}, \tag{20}\]
where \(tr(d\mathbf{W}_{ij})\) is the trace of the random matrix of independent Wiener increments \(d\mathbf{W}_{ij}\), and \(d\overline{\mathbf{W}_{ij}^{S}}=d\mathbf{W}_{ij}^{S}-\frac{tr(d\mathbf{W}_{ij} ^{S})}{3}\) is the traceless symmetric part.
The random force \(\mathbf{F}_{ij}^{R}\) for the cytosol fluid is defined as:
\[\mathbf{F}_{ij}^{R}=\sigma\omega^{R}(r_{ij})\cdot\frac{\vartheta_{ij}}{\sqrt{ dt}}\cdot\mathbf{\hat{r}}_{ij},\;\;\;\sigma^{2}=2\gamma k_{B}T, \tag{21}\]
where \(\sigma\) is a constant coefficient defining the strength of the random force, \(dt\) is the physical time step, and \(\vartheta\) is a normally distributed random variable with zero mean and unit variance, with \(\vartheta_{ij}=\vartheta_{ji}\). Note that both particles \(i\) and \(j\) must be cytosol fluid particles.
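The dissipative and random pair forces of Equations (17)-(21) for the cytosol particles can be sketched as follows; all parameter values are illustrative DPD units, and the fluctuation-dissipation relation \(\sigma^{2}=2\gamma k_{B}T\) is enforced explicitly.

```python
import numpy as np

gamma, kBT, r_c, s, dt = 4.5, 1.0, 1.0, 1.0, 0.01   # illustrative DPD parameters
sigma = np.sqrt(2.0 * gamma * kBT)                  # sigma^2 = 2 * gamma * kB * T

def w_R(r):
    """Weight function of Equation (19); w_D = (w_R)^2 per Equation (18)."""
    return (1.0 - r / r_c)**s if r <= r_c else 0.0

def f_dissipative(r_ij, v_ij):
    """Dissipative pair force of Equation (17)."""
    r = np.linalg.norm(r_ij)
    e = r_ij / r
    return -gamma * w_R(r)**2 * np.dot(v_ij, e) * e

def f_random(r_ij, rng):
    """Random pair force of Equation (21)."""
    r = np.linalg.norm(r_ij)
    e = r_ij / r
    theta = rng.standard_normal()                   # zero mean, unit variance
    return sigma * w_R(r) * theta / np.sqrt(dt) * e

rng = np.random.default_rng(0)
r_ij, v_ij = np.array([0.3, 0.1, 0.0]), np.array([0.05, 0.0, 0.0])
print(f_dissipative(r_ij, v_ij), f_random(r_ij, rng))
```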
#### ii.2.4 Plasma and cytosol viscosity contrast
At physiological blood conditions, the viscosity ratio between the blood plasma and the RBC cytosol is equal to 5.0 (\(\lambda=\frac{\mu_{cytosol}}{\mu_{plasma}}=5.0\)) [31]. To ensure that this condition is met, the dynamic viscosity of the plasma is set to be \(\mu_{plasma}=1.2\ mPa\ s\). The viscosity condition is enforced on the cytosol fluid by calibrating the parameters of the dissipative and the random forces [32] (e.g., \(\gamma\) and \(\sigma\)). Specifically, the dynamic properties of the DPD particles of the cytosol fluid are given in dimensionless DPD units as [30]:
\[\begin{split}\text{mass diffusivity:}\quad D_{f}=\frac{45k_{B}T}{2 \pi\gamma\rho r_{c}^{3}},\\ \text{dynamics viscosity:}\quad\mu=\frac{\rho D_{f}}{2}+\frac{2 \pi\gamma\rho^{2}r_{c}^{5}}{1575},\end{split} \tag{22}\]
where \(\rho\) is the density. The DPD dimensionless parameters and physical units are linked [33] in order to compute the coefficients of the dissipative and random forces for the cytosol dynamic viscosity of \(\mu_{cytosol}=6\ mPa\ s\), based on the viscosity ratio \(\lambda=5\). The details of the conversion procedure are summarized in Table 1.
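Because Equation (22) expresses the viscosity as \(\mu=A/\gamma+B\gamma\), the calibration of \(\gamma\) (and hence \(\sigma\)) for a target cytosol viscosity reduces to solving a quadratic equation. The following sketch illustrates this inversion with placeholder DPD parameters; it is not the calibration procedure of Table 1 itself.

```python
import numpy as np

kBT, rho, r_c = 1.0, 3.0, 1.0   # illustrative dimensionless DPD parameters

def mu_dpd(gamma):
    """Dynamic viscosity of the DPD fluid from Equation (22)."""
    D_f = 45.0 * kBT / (2.0 * np.pi * gamma * rho * r_c**3)
    return rho * D_f / 2.0 + 2.0 * np.pi * gamma * rho**2 * r_c**5 / 1575.0

def calibrate_gamma(mu_target):
    """Invert Equation (22): mu = A/gamma + B*gamma, a quadratic in gamma."""
    A = 45.0 * kBT / (4.0 * np.pi * r_c**3)
    B = 2.0 * np.pi * rho**2 * r_c**5 / 1575.0
    disc = mu_target**2 - 4.0 * A * B
    if disc < 0.0:
        raise ValueError("target viscosity below the minimum reachable value")
    return (mu_target + np.sqrt(disc)) / (2.0 * B)   # dissipation-dominated root

gamma = calibrate_gamma(mu_target=5.0)
sigma = np.sqrt(2.0 * gamma * kBT)                   # fluctuation-dissipation relation
print(gamma, sigma, mu_dpd(gamma))                   # mu_dpd(gamma) recovers ~5.0
```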
### Scaling of model and physical units
One challenge in DPD modeling is establishing a relationship between the modeled quantities and the physical values [34]. Since this relationship is not explicit, it is necessary to use a scaling argument to recover this relationship [27; 35]. For each parameter, the superscripts \(M\) and \(P\) correspond to the model and physical units, respectively.
The length scale \(r^{M}\) is defined as:
\[r^{M}=\frac{D_{0}^{P}}{D_{0}^{M}}\ (m), \tag{23}\]
The energy per unit mass \(k_{B}T\) and the force \(N\) scaling values are given by:
\[\begin{split}(k_{B}T)^{M}&=\frac{Y^{P}}{Y^{M}} \left(\frac{D_{0}^{P}}{D_{0}^{M}}\right)^{2}(k_{B}T)^{P},\\ N^{M}&=\frac{Y^{P}}{Y^{M}}\frac{D_{0}^{P}}{D_{0}^{M }}N^{P},\end{split} \tag{24}\]
where \(Y\) is the membrane Young's modulus.
The timescale \(\tau\) is defined as follows:
\[\tau=\left(\frac{D_{0}^{P}}{D_{0}^{M}}\frac{\eta_{m}^{P}}{\eta_{m}^{M}}\frac{ Y^{M}}{Y^{P}}\right)^{\alpha}, \tag{25}\]
where \(\alpha=1\) is the scaling exponent.
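A compact sketch of the scaling relations of Equations (23)-(25) is given below; apart from \(D_{0}^{P}=7.82\ \mu m\), the numerical inputs are placeholders.

```python
# All numerical inputs in the example call are placeholders for illustration only.
def unit_scaling(D0_P, D0_M, Y_P, Y_M, etam_P, etam_M, alpha=1.0):
    r_M = D0_P / D0_M                                          # Equation (23), length scale
    kT_factor = (Y_P / Y_M) * (D0_P / D0_M)**2                 # (kBT)^M = kT_factor * (kBT)^P
    N_factor = (Y_P / Y_M) * (D0_P / D0_M)                     # N^M = N_factor * N^P
    tau = ((D0_P / D0_M) * (etam_P / etam_M) * (Y_M / Y_P))**alpha   # Equation (25), timescale
    return r_M, kT_factor, N_factor, tau

print(unit_scaling(D0_P=7.82e-6, D0_M=7.82, Y_P=1.0, Y_M=100.0, etam_P=1.0, etam_M=50.0))
```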
### Coarse-graining Procedure
A full-scale model of a RBC typically consists of millions of particles, which are required to accurately simulate protein dynamics [36]. However, it is not feasible to use
such a full-scale model in a Fluid-Structure Interaction (FSI) simulation due to the high computational cost. We followed the coarse-graining procedure of Pivkin et al. (2008) [28] to represent the RBC membrane by a smaller number of particles (coarse-grained model). This procedure does not allow a detailed simulation of separate proteins, but it is versatile enough to capture the overall dynamics of the RBC membrane. The parameters of the coarse-grained model (_c_) are computed from those of the fine-scaled model (_f_) by a scaling procedure. Examples of such parameters are given below.
Based on the equilibrium condition, Pivkin et al. [28] proposed a coarse-graining procedure based on the area/volume constraint for the spring equilibrium \(l_{0}\) and maximum \(l_{max}\) lengths as follows:
\[l_{0}^{c}=l_{0}^{f}\sqrt{\frac{N_{v}^{f}-2}{N_{v}^{c}-2}}\quad\text{and}\quad l _{max}^{c}=l_{max}^{f}\sqrt{\frac{N_{v}^{f}-2}{N_{v}^{c}-2}}, \tag{26}\]
where the roles of \(l_{0}\) and \(l_{max}\) are critical in determining the response of the WLC model, as seen in Equation (5), with \(l_{max}=2.2\,l_{0}\) in the fine-scaled model. Due to the scaling in Equation (26), the value of \(x_{0}=\frac{l_{0}}{l_{max}}=\frac{1}{2.2}\) does not change as the model is coarse-grained from the number of vertices \(N_{v}^{f}\) to \(N_{v}^{c}\).
Furthermore, as the number of vertices is reduced, the average angle between pairs of adjacent triangles increases. Therefore, the spontaneous angle \(\theta_{0}\) is adjusted accordingly in the coarse-grained model as:
\[\theta_{0}^{c}=\theta_{0}^{f}\frac{N_{v}^{f}}{N_{v}^{c}}\quad\text{with}\quad \theta_{0}^{f}=\arccos\left(\frac{\sqrt{3}(N_{v}^{f}-2)-5\pi}{\sqrt{3}(N_{v}^ {f}-2)-3\pi}\right). \tag{27}\]
To maintain the shear and area-compression moduli, the parameters \(p\) and \(k_{p}\) are adjusted as:
\[p^{c}=p^{f}\frac{l_{0}^{f}}{l_{0}^{c}}\quad\text{and}\quad k_{p}^{c}=k_{p}^{f} \left(\frac{l_{0}^{c}}{l_{0}^{f}}\right)^{m+1}. \tag{28}\]
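The coarse-graining relations of Equations (26)-(28) translate directly into a short routine; the fine-mesh values used in the example call are assumptions for illustration.

```python
import numpy as np

def coarse_grain(Nv_f, Nv_c, l0_f, lmax_f, p_f, kp_f, m=2):
    """Map fine-scale membrane parameters (f) to a coarser mesh (c), Eqs. (26)-(28)."""
    scale = np.sqrt((Nv_f - 2) / (Nv_c - 2))
    l0_c, lmax_c = l0_f * scale, lmax_f * scale                      # Equation (26)
    theta0_f = np.arccos((np.sqrt(3.0) * (Nv_f - 2) - 5.0 * np.pi)
                         / (np.sqrt(3.0) * (Nv_f - 2) - 3.0 * np.pi))
    theta0_c = theta0_f * Nv_f / Nv_c                                # Equation (27)
    p_c = p_f * l0_f / l0_c                                          # Equation (28)
    kp_c = kp_f * (l0_c / l0_f)**(m + 1)                             # Equation (28)
    return l0_c, lmax_c, theta0_c, p_c, kp_c

# Example call with assumed fine-mesh values; only Nv_c = 1000 is specified in the text.
print(coarse_grain(Nv_f=27344, Nv_c=1000, l0_f=7.5e-8, lmax_f=2.2 * 7.5e-8,
                   p_f=1.4e-9, kp_f=1.0e-26))
```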
### Time integration
In this work, we implemented the modified Velocity-Verlet algorithm [37], which consists of two primary steps. The first step involves determining the new position of the particle \(i\) (\(\mathbf{r}_{i}\)) while predicting the velocity (\(\mathbf{\bar{v}}_{i}\)), and the second step involves correcting the velocity by utilizing the computed force (\(\mathbf{F}_{i}\)) based on the predicted velocity and the new position as follows.
\[\mathbf{r}_{i}(t+dt) =\mathbf{r}_{i}(t)+dt\mathbf{v}_{i}(t)+\frac{1}{2}dt^{2}\mathbf{ F}_{i}(t), \tag{29}\] \[\mathbf{\bar{v}}_{i}(t+dt) =\mathbf{v}_{i}(t)+\Lambda dt\mathbf{F}_{i}(t),\] \[\mathbf{F}_{i}(t+dt) =\mathbf{F}_{i}(\mathbf{r}_{i}(t+dt),\mathbf{\bar{v}}_{i}(t+dt)),\] \[\mathbf{v}_{i}(t+dt) =\mathbf{v}_{i}(t)+\frac{1}{2}dt(\mathbf{F}_{i}(t)+\mathbf{F}_{ i}(t+dt)),\]
where \(\mathbf{\bar{v}}_{i}(t+dt)\) is the predicted velocity at time \(t+dt\) and \(\Lambda\) is the variable which accounts for the effects of the stochastic processes. The value of \(\Lambda\) is chosen to be the optimal value \(\Lambda=0.65\) [37].
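For clarity, the modified velocity-Verlet update of Equation (29) is sketched below (unit particle mass is assumed, so the force plays the role of the acceleration); the toy force used in the usage example is arbitrary.

```python
import numpy as np

LAMBDA = 0.65   # optimal value quoted in the text

def velocity_verlet_step(r, v, F, force, dt):
    """One step of the modified velocity-Verlet scheme of Equation (29)."""
    r_new = r + dt * v + 0.5 * dt**2 * F      # update positions
    v_pred = v + LAMBDA * dt * F              # predict velocities
    F_new = force(r_new, v_pred)              # evaluate forces at the new state
    v_new = v + 0.5 * dt * (F + F_new)        # correct velocities
    return r_new, v_new, F_new

# Toy usage: a single particle in a harmonic well with linear drag (placeholder force).
force = lambda r, v: -r - 0.1 * v
r, v = np.array([1.0, 0.0, 0.0]), np.zeros(3)
F = force(r, v)
for _ in range(1000):
    r, v, F = velocity_verlet_step(r, v, F, force, dt=0.01)
print(r, v)
```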
### Fluid-Structure Interaction simulation of RBC in flows
#### ii.1.1 Numerical methods
The blood plasma was considered as an incompressible Newtonian fluid modeled using the incompressible three-dimensional unsteady Navier-Stokes equations, with density \(\rho\) and kinematic viscosity \(\nu=\frac{\mu_{plasma}}{\rho}\). The governing equations (continuity and momentum) read in Cartesian tensor notation as follows (\(i=1,2,3\) and repeated indices imply
summation):
\[\frac{\partial u_{i}}{\partial x_{i}} = 0, \tag{30}\] \[\frac{\partial u_{i}}{\partial t}+\frac{\partial(u_{i}u_{j})}{ \partial x_{j}} = -\frac{\partial p}{\partial x_{i}}+\nu\frac{\partial^{2}u_{i}}{ \partial x_{j}\partial x_{j}}. \tag{31}\]
In the above equations, \(u_{i}\) is the \(i^{th}\) component of the velocity vector \(\mathbf{u}\); \(t\) is time; \(x_{i}\) is the \(i^{th}\) spatial coordinate; \(p\) is the pressure divided by \(\rho\). The characteristic velocity scale is chosen as \(U_{0}\). The length scale \(L_{s}\) is set to equal \(8~{}\mu m\) for all cases. Note that this length scale is chosen to reflect the diameter of the RBC at the equilibrium condition.
The fluid solver is based on the sharp-interface curvilinear-immersed boundary (CURVIB) method in a background curvilinear domain that contains the RBC model [38]. The CURVIB method used here has been applied and validated in various FSI problems across different biological engineering areas [39; 40; 41]. In our previous work [26], we utilized the capabilities of the CURVIB method to capture accurately the complex cellular dynamics of the RBC in fluid flows.
The dynamics of the RBC in flow is thus simulated with a hybrid continuum-particle approach since the Fluid-Structure Interaction (FSI) methodology involves the coupling of DPD methods and the solvers for Navier-Stokes equations. The details of the FSI procedures are reported in our previous works [26; 38].
### Computational setups
Fluid-Structure Interaction simulations are performed to determine the dynamics of the RBC in a confined micro-channel [5]. The computational domain is defined as a rectangular channel containing a single RBC, as illustrated in Figure 2a. The dimensions of the domain along \(x\), \(y\), and \(z\) are \(L_{x}\) (the length), \(L_{y}\) (the width), and \(L_{z}\) (the height), respectively. The computational domain is discretized as a structured grid of size \(N_{i}\times N_{j}\times N_{k}\) with the
spatial resolutions in the three directions \((i,j,k)\) being \(\Delta x\times\Delta y\times\Delta z\), respectively. The details of the channels used in the simulations are listed in Table 4.
The RBC is initially located, at \(t=0\), at an axial distance \(x_{0}\) from the inlet. The transverse location of the RBC is placed along the bisector of the first quadrant with a radial shift (\(r\)). Thus, the transverse coordinates of the RBC are \(y_{0}=r\) and \(z_{0}=r\), respectively, as shown in Figure 2b. With this configuration, the RBC confinement is defined as the ratio between the effective RBC diameter \(D_{r}=\sqrt{\frac{A_{0}^{tot}}{\pi}}\) and the domain height \(L_{z}\):
\[\chi=\frac{D_{r}}{L_{z}}. \tag{32}\]
The initial shape of the RBC is first set to be the idealized shape (Equation 1) for all simulation cases at the initial time \(t=0\). A short period of relaxation \(t_{relax}\) is allowed for the RBC under no external load (no flows) so that the internal forces of the RBC membrane balance. A uniform flow velocity \(U(t)\) is then applied at the channel inlet at \(t>t_{relax}\) to induce the RBC's deformation depending on the controlling strategy. The average shear rate across the channel height is defined as the ratio between the bulk velocity \(U(t)\) and the domain's height:
\[\overline{\dot{\gamma}}(t)=\frac{U(t)}{L_{z}} \tag{33}\]
#### iii.2.1 Constant shear rate condition (\(I_{0}\))
Following the experimental study of Guckenberger et al. [5] (Channel-1, Table 4), FSI simulations of a RBC in channel flow with a constant flow rate are carried out with \(x_{0}=22.5\ \mu m\). To highlight the constant flow rate, the notation \(I_{0}\) is introduced to emphasize this condition. As shown in Table 5, a constant inflow velocity \(U(t)=\psi_{0}\) is imposed at the inlet of the computational domain. Two values of \(\psi_{0}\) are considered: \((i)\) \(\psi_{0}=U_{3}=2\ mm/s\); and \((ii)\) \(\psi_{0}=U_{4}=6\ mm/s\). In these cases, two values of the radial shift are also investigated: \(r_{1}=0\) and \(r_{3}=0.7\ \mu m\). To simplify the discussions, the numerical values
for the bulk velocity \(\psi_{0}\) will not be explicitly referred to. Instead, only the acronyms (\(U_{3}\) and \(U_{4}\)) will be used for reasons that will be evident in the subsequent texts.
Using these notations, the FSI simulation cases are named using the convention for each type of inflow waveform (\(I\)), the bulk velocity (\(U\)), the radial shift (\(r\)), and the channel type, respectively. The first case (\(I_{0}U_{3}r_{1}\chi_{1}\)) is configured with (\(\psi_{0}=U_{3}=2\ mm/s\)) and \(r=r_{1}=0\ \mu m\). The second case (\(I_{0}U_{4}r_{3}\chi_{1}\)) is carried out with \(\psi_{0}=U_{4}=6\ mm/s\) and \(r=r_{3}=0.7\ \mu m\). Here, the Reynolds number is defined as \(R_{e}=\frac{UL_{s}}{\nu}\). The kinematic fluid viscosity of blood plasma is chosen as \(\nu=\frac{\mu_{plasma}}{\rho}=1.2\times 10^{-6}\ m^{2}/s\). The summary of the parameters for each simulation case is shown in Table 5.
First, \(t_{relax}=10\ ms\) and \(7.0\ ms\) are set for the \(I_{0}U_{3}r_{1}\chi_{1}\) (croissant) and \(I_{0}U_{4}r_{3}\chi_{1}\) (slipper) simulations, respectively. After the relaxation period, a linear ramping period is set for each simulation case: \(t_{ramp}=30\ ms\) and \(20\ ms\) for \(I_{0}U_{3}r_{1}\chi_{1}\) and \(I_{0}U_{4}r_{3}\chi_{1}\), respectively. During this ramping period, the bulk velocity \(U(t)\) is linearly increased. The value of \(U(t)\) reaches \(\psi_{0}\) at the end of the ramping period.
#### iii.2.2 Stepwise oscillatory flows (\(I_{s}\))
To further validate our FSI model in oscillatory flows, the propulsion of the RBC in square channels is investigated [11]. Two square channels (Channel-2 and Channel-3) with side lengths \(L_{z}=16\ \mu m\) and \(21\ \mu m\) are used for the simulations, resulting in confinements \(\chi_{2}=0.4\) and \(\chi_{3}=0.3\), respectively. The initial location of the RBC is on the channel axis (\(x_{0}=16\ \mu m\), \(r=r_{1}=0\)). The computational configuration including the grid spacing, RBC surface meshes, and boundary conditions are shown in the Figure 2a and Table 4. A stepwise asymmetric oscillatory waveform \(I_{s}\) is used with two phases: (\(i\)) forward \(T_{f}\); and (\(ii\)) backward \(T_{b}\) periods \(\left(\frac{T_{b}}{T_{f}}=4\right)\) as shown in Figure 3a. The velocities during the forward and backward phases are \(\psi_{f}\) and \(\psi_{b}\)\(\left(-\frac{\psi_{f}}{\psi_{b}}=4\right)\), respectively. The formula for
the waveform is defined as:
\[U(t)=\begin{cases}\psi_{f}&\text{for}\ \ 0\leq t\leq\ \frac{T}{5},\\ \psi_{b}&\text{for}\ \ \frac{T}{5}\leq t\leq T\end{cases} \tag{34}\]
Following this formula, the flow has a forward phase (\(\psi_{f}>0\)) and a backward phase (\(\psi_{b}<0\)). The maximum shear rate is defined as \(\overline{\dot{\gamma}}_{f}=\frac{\psi_{f}}{L_{z}}\).
The Capillary number \(Ca^{f}\) in the forward flow phase is given as:
\[Ca^{f}=\frac{4\psi_{f}L_{s}t_{R}}{L_{z}^{2}}, \tag{35}\]
where \(t_{R}=\frac{L_{s}\mu_{plasma}}{2\mu_{0}}\).
Six values of \(\psi_{f}\) are examined: \(\psi_{1}=1.05\), \(\psi_{2}=1.56\), \(\psi_{3}=2.1\), \(\psi_{4}=2.17\), \(\psi_{5}=3.25\), and \(\psi_{6}=4.35\) mm/s. Following the naming convention of the simulations, six cases are formed with the respective parameters: \(I_{s}\psi_{1}r_{1}\chi_{2}\); \(I_{s}\psi_{2}r_{1}\chi_{2}\); \(I_{s}\psi_{3}r_{1}\chi_{2}\); \(I_{s}\psi_{4}r_{1}\chi_{3}\); \(I_{s}\psi_{5}r_{1}\chi_{3}\); and \(I_{s}\psi_{6}r_{1}\chi_{3}\), as shown in Table 5. As the applied waveform is of a stepwise nature, no relaxation time was considered for these cases (\(t_{relax}=0\)).
Under these oscillatory conditions, the axial propulsion step (\(\Delta x_{c}\)) is recorded at the end of the forward time interval of the asymmetric oscillating flow (\(t=T_{f}=\frac{T}{5}\)), as a function of the forward (peak) capillary number \(Ca^{f}\) for the chosen shear rates [11]. Thus \(\Delta x_{c}\) is defined as the displacement of the RBC's centroid (\(C\)) at the end of the forward phase (\(t=\frac{T}{5}\)):
\[\Delta x_{c}=x_{c}(t=\frac{T}{5})-x_{c}(t=0) \tag{36}\]
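The stepwise waveform of Equation (34) and the forward Capillary number of Equation (35) can be evaluated as in the sketch below; the channel and fluid parameters are taken from the text where available, whereas \(\mu_{0}\) is a placeholder value.

```python
def U_stepwise(t, T, psi_f, psi_b):
    """Asymmetric stepwise waveform of Equation (34): forward for T/5, backward for 4T/5."""
    return psi_f if (t % T) <= T / 5.0 else psi_b

def Ca_forward(psi_f, L_s, L_z, mu_plasma, mu_0):
    """Forward Capillary number of Equation (35), with t_R as defined after Eq. (35)."""
    t_R = L_s * mu_plasma / (2.0 * mu_0)
    return 4.0 * psi_f * L_s * t_R / L_z**2

psi_f = 2.1e-3                     # forward velocity psi_3 = 2.1 mm/s (from the text)
psi_b = -psi_f / 4.0               # backward velocity, since -psi_f / psi_b = 4
print(U_stepwise(t=0.01, T=0.05, psi_f=psi_f, psi_b=psi_b),
      Ca_forward(psi_f, L_s=8e-6, L_z=16e-6, mu_plasma=1.2e-3, mu_0=4.0e-6))
```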
#### iii.2.3 Sinusoidal flow simulations
To study the effect of pulsatile flow on the propulsion and the cellular response (morphology changes) of the RBC, we considered a time-periodic flow \(U(t)\).
The flow time period consists of three separate phases: \((i)\) the forward \((T_{f})\); \((ii)\) the resting \((T_{r})\); and \((iii)\) the backward \((T_{b})\) periods, with \(T=T_{f}+T_{r}+T_{b}=50\ ms\ (f=20\ Hz)\). The asymmetry of the waveform is adjusted by changing the values of \(T_{f}\), \(T_{r}\), and \(T_{b}\). The formula for the waveform is:
\[U(t)=\begin{cases}A\sin(2\pi\frac{t}{T_{f}})&\text{for}\ \ 0\leq t\leq\ T_{f},\\ 0&\text{for}\ \ T_{f}\leq t\leq T_{f}+T_{r}\\ A\sin(2\pi\frac{(t-T_{f}-T_{r})}{T_{b}})&\text{for}\ \ T_{f}+T_{r}\leq t \leq T\end{cases} \tag{37}\]
The reversible waveform \((I_{1})\) is created with \(T_{f}=T_{b}\) (completely symmetric). The irreversible waveforms \((I_{2}\), \(I_{3}\), and \(I_{4})\) are formed by progressively reducing the period \(T_{b}\). Four distinct inflow types were generated with symmetric and asymmetric waveforms \((I_{1}\), \(I_{2}\), \(I_{3}\), and \(I_{4})\), as seen in Figure 4 and Tables 6 and 7. For each of these waveforms, three different velocity magnitudes (\(A=U_{1}\), \(U_{2}\), and \(U_{3}\)) were considered. Furthermore, three different radial shifts (\(r_{1}\), \(r_{2}\), and \(r_{3}\)) were chosen for the simulations. The combinatoric arrangements lead to a total of 36 distinct simulation cases with the notation \(I_{m}U_{n}r_{p}\chi_{1}\), with the corresponding values of the indices \(m=1,2,3,4\), \(n=1,2,3\), and \(p=1,2,3\). The outline of the simulation cases is shown in Table 8. In addition, the RBC shapes are recorded over a time period of two cycles \(2T\), as exemplified in Figure 3(b), in which the initial location of the RBC is set at \(x_{0}=22.5\ \mu m\). Due to the nature of the sinusoidal waveform applied, there was no relaxation time for any of these cases \((t_{relax}=0)\). In this case, the centroid's displacement is monitored continuously as a function of time:
\[\Delta x_{c}(t)=x_{c}(t)-x_{c}(t=0) \tag{38}\]
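A sketch of the three-phase waveform of Equation (37) is given below, implemented exactly as written; the phase durations used in the example are illustrative and do not reproduce the specific \(I_{1}\)-\(I_{4}\) values of Tables 6-7.

```python
import numpy as np

def U_sinusoidal(t, A, T_f, T_r, T_b):
    """Three-phase waveform of Equation (37): forward, resting, and backward periods."""
    T = T_f + T_r + T_b
    t = t % T                                   # periodic extension over the cycles
    if t <= T_f:
        return A * np.sin(2.0 * np.pi * t / T_f)
    if t <= T_f + T_r:
        return 0.0
    return A * np.sin(2.0 * np.pi * (t - T_f - T_r) / T_b)

# Example with an assumed symmetric cycle (T_f = T_b) and amplitude A = 1 mm/s.
t = np.linspace(0.0, 0.1, 11)
print([round(U_sinusoidal(ti, A=1.0e-3, T_f=0.02, T_r=0.01, T_b=0.02), 6) for ti in t])
```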
## III Results
### Model validation
#### iii.1.1 Coarse-graining validation
To first validate the coarse-graining procedure employed in our study, a stretching test is carried out, aiming to replicate the experimental test of Mills et al. (2004) [42]. In this experiment, two external forces \(\mathbf{F}_{stretch}\) with opposite directions are applied on both sides of the RBC. The magnitude of the force \(\mathbf{F}_{stretch}\) is increased in a stepwise manner from 0 to 200 \(pN\) (a total of 16 steps). The axial diameter \((D_{a})\) and transverse diameter \((D_{t})\) were measured at every step. \(D_{a}\) refers to the diameter in the direction of stretch, while \(D_{t}\) is the diameter measured in the direction orthogonal to the stretch. The definitions of \(D_{a}\) and \(D_{t}\) are shown in Figure 1a. The simulations were performed systematically with different RBC surface mesh resolutions by changing the number of vertices (\(N_{v}\)). The parameters describing the physical characteristics of the RBC are listed in Table 2. Following the coarse-graining procedure, the model parameters for the cell membrane, such as the equilibrium length, the persistence length, the spring stiffness, and the spontaneous angle, are computed for each value of \(N_{v}\), as in Table 3. The cytosol fluid is modeled by a set of particles \(N_{f}=100\), which are located within the interior volume of the cell membrane, as shown in Figure 1b.
The current RBC model accurately replicates the elastic response of the RBC under stretching forces, as revealed by the results shown in Figure 1. During membrane stretching under the external stretching force from 0 to 200 \(pN\), the dynamic response of the cytosol particles is visible, indicating the coupling between the membrane and the cytosol fluid. The shapes of the RBC under loading conditions agree with those from the experimental data of Mills et al. [42]. The computed values of the axial (\(D_{a}\)) and transverse (\(D_{t}\)) diameters
agree well with the experimental values, as seen in Figure 1a. In particular, the values of \(D_{a}\) and \(D_{t}\) are consistent across the different values of \(N_{v}\), which indicates a robust performance of the coarse-graining procedure. There is a disagreement between the simulated results and the experimental value of \(D_{t}\). Examining the shapes of the RBC in the simulations (Figure 1b), it is revealed that the RBC tends to rotate around the stretching direction. This rotation leads to the difference between the experimental and numerical results for \(D_{t}\). In brief, the mechanics of the RBC is well replicated by the computational model across different levels of coarse-graining. Thus, a value of \(N_{v}=1000\) is chosen to report the dynamics of the RBC in subsequent sections.
#### iii.2.2 Deformation of the RBC under constant shear rates \(\overline{\dot{\gamma}_{0}}\)
Under constant shear rate conditions (\(I_{0}\)), as described in section II.8.1, two distinct RBC shapes are observed: \((i)\) the croissant shape (\(I_{0}U_{3}r_{1}\chi_{1}\) - \(\overline{\dot{\gamma}_{0}}=200\ s^{-1}\)); and \((ii)\) the slipper shape (\(I_{0}U_{4}r_{3}\chi_{1}\) - \(\overline{\dot{\gamma}_{0}}=600\ s^{-1}\)), as shown in Figure 6.
Under low shear rate (\(I_{0}U_{3}r_{1}\chi_{1}\)), the RBC was initially placed along the centerline of the microchannel (discocyte shape). As the RBC interacts with the incoming flow, it deforms and eventually transitions to a croissant shape. The terminal shape (croissant) is attained as the RBC continues to propagate along the channel's symmetry axis, as shown in Figure 6a. Note that the croissant shape in this case is not fully axi-symmetric, as the RBC is immersed in a rectangular channel.
Under high shear rate (\(I_{0}U_{4}r_{3}\chi_{1}\)), the RBC transitions from the croissant shape to the slipper shape, as shown in Figure 6b, which exhibits a bistability mode with tank-treading behavior. Note that the RBC is placed at a radial shift \(r_{3}=0.7\ \mu m\); thus, the initial location of the RBC is not at the channel's symmetry axis. The tank-treading effect is a complex dynamics in which the RBC membrane propagates axially along the channel while it rotates around its own center of mass. This rotation of the membrane/cytoskeleton around
the cytoplasm is shown clearly in Figure 6b. A counter-clockwise rotation is observed as indicated by the locations of two membrane particles (Lagrangian points - \(V_{1}\) and \(V_{2}\)) at different time instances (\(t_{1}=22\ ms\) and \(t_{2}=25\ ms\)).
For both the croissant and slipper shapes, the shape transition from the initial shape (discocyte) to the terminal shape (either croissant or slipper) occurs within around \(30\ ms\). These transitions agree well with the corresponding experimental data of Guckenberger et al. (2018) [5], as well as with recent experiments on RBC transient dynamics [43; 44]. Furthermore, our shapes (croissant and slipper) for confined flow are in good agreement with the shape diagram produced by Agarwal et al. [45] for different Capillary numbers and confinements, as seen in Figure 5. In conclusion, our simulations are able to replicate the dynamics of the croissant and slipper shapes very well.
The extracellular patterns of the croissant and slipper shapes agree very well with the experimental data of Guckenberger et al. (2018) [5]. The extracellular flow pattern can be visualized by reconstructing the relative flow velocity field [23]. The relative velocity is defined as the difference between the flow velocity and the RBC's centroid velocity, as shown in Figure 7. In the croissant shape (\(I_{0}U_{3}r_{1}\chi_{1}\)), the velocity streamlines closely resemble an axi-symmetrical flow pattern (Figure 7a). The downstream side of the RBC membrane deforms significantly, whereas the upstream side barely changes, as depicted in Figure 7b. In the slipper shape (\(I_{0}U_{4}r_{3}\chi_{1}\)), there exists an asymmetrical vortical structure in the vicinity of the RBC membrane. As the slipper shape emerges, a fully closed vortex ring is created by a reversed flow region, which is close to the channel wall. In short, the emergence of the RBC shape dictates the extracellular flow pattern.
#### iv.2.3 Propulsion of RBC under stepwise oscillatory flows (\(I_{s}\))
Under stepwise flow waveform (\(I_{s}\)), our simulation results agree well with the propulsion step map (\(\Delta x_{c},Ca^{f}\)), which was developed by Schmidt et al. (2022) [11] for both
channels \(\chi_{2}=0.5\) and \(\chi_{3}=0.38\). In both cases, the propulsion step \((\Delta x_{c})\) was observed to monotonically increase with the values of \(Ca^{f}\). However, the \(\Delta x_{c}\) is higher in the lower confinement channel (\(\chi_{3}\)), which indicates the importance of channel confinement. In all simulation cases \((I_{s}\psi_{1}r_{1}\chi_{3})\), \((I_{s}\psi_{2}r_{1}\chi_{3})\), \((I_{s}\psi_{3}r_{1}\chi_{3})\), \((I_{s}\psi_{4}r_{1}\chi_{3})\), \((I_{s}\psi_{5}r_{1}\chi_{3})\) and \((I_{s}\psi_{6}r_{1}\chi_{3})\) the RBC transitioned from the discocyte to the biconcave shape during the forward phase (\(0<t<\frac{T}{5}\)) with all values of the peak forward flow (\(\psi_{f}=1.05~{}mm/s\) to \(\psi_{f}=4.34~{}mm/s\)) as shown in Figure 6c. Strikingly, the complex multilobe shape emerges during the backward phase \(T_{b}\). The elastic response of the RBC membrane to the oscillatory flow during the cycle \(T\) is depicted for the case \((I_{s}\psi_{6}r_{1}\chi_{3})\) in Figure 6c. The reversal of the flow direction during \(T_{b}\) results in membrane buckling and stretching, which give rise to the multilobe shape even if the RBC is placed initially at the channel center (\(r=r_{1}=0\)).
### The impact of oscillatory flows on RBC dynamics
#### iv.2.1 The emergence of RBC shapes
The oscillatory flow waveform (\(U(t)\)) further adds complexity to the membrane dynamics, as the shape of the RBC is highly sensitive to the extracellular flow condition. As a result of the pulsatile flow condition, the RBC shape continuously responds to the applied flow in the channel. Our simulations show that the RBC alternates its shape among the following types: 1) croissant; 2) slipper; 3) trilobes; 4) simple/complex/elongated multilobes; 5) rolling stomatocytes; 6) hexalobes; and 7) rolling discocyte, as shown in Figure 8 and Tables IX-XII. The emergence of each type is discussed as follows.
In all cases, the RBC evolves from the croissant (\(C\)) toward the slipper (\(S\)) mode during the forward phase (\(0<t<T_{f}\)) of the flow cycle (\(\frac{t}{T}\approx 0.25\)) as shown in Figure 8. Note that the transition to \(C\) or \(S\) mode from the biconcave shape is dependent on the value
of the radial shift (\(r\)). As shown in Tables 9-10, the \(S\) mode appears only when the RBC is initially placed away from the cross-sectional center (\(r>0\)). The RBC remains in \(C\) mode during the forward phase if it is initially placed at the cross-sectional center (\(r=0\)), regardless of the bulk flow waveform (\(I_{1}\) to \(I_{4}\)). In brief, the croissant and the slipper shapes exist during the forward phase, and their emergence depends on the initial off-centered location of the RBC (\(r\)).
The RBC transitions from the simple shapes (croissant and slipper) toward more complex shapes (trilobes, simple/complex/elongated multilobes, rolling stomatocytes, hexalobes, and rolling discocyte) later in the flow cycle during the resting/reverse periods (\(\frac{t}{T}>0.5\)). The shape transformation is initiated by the buckling of the RBC membrane, which takes place in the resting interval (\(T_{r}\)) phase of the flow (see Figure 4). As a result of the change in flow direction, the RBC experiences considerable stretching and compression, leading to significant alterations in its membrane shape.
#### iv.2.2 The impacts of the initial position \((r)\) and waveform (\(I\))
Our findings (Figure 8) clearly reveal that the initial position \((r)\) and the flow waveform \((I)\) play a critical role in the emergence of RBC shapes. Under both the symmetric and asymmetric waveforms, the RBC placed initially at the channel axis (\(r=r_{1}=0\)) transitions sequentially from the croissant shape toward the complex multilobe, multilobe, trilobe, rolling stomatocyte, elongated multilobe, and finally hexalobe shapes, as shown in Tables 9-10. When \(r>0\), the RBC remains mostly in the slipper shape during the forward phase (\(t<T_{f}\)) and transitions toward the elongated multilobe during the backward flow phase (\(T_{f}+T_{r}<t<T\)). Finally, the RBC becomes a rolling discocyte in the second cycle (\(t\approx 1.2T\)). In brief, the shape transition process is strongly sensitive to the initial placement of the RBC.
It is striking to observe the irreversible dynamics of the RBC. When subjected to the symmetric
waveform \((I_{1})\), the RBC is observed to be fully controlled by the pulsatile inflow. The RBC oscillates around its initial position with minimal propulsion. Although the inflow waveform is completely symmetrical (a sine function - \(I_{1}\)), the axial position of the RBC in Figure 9a (left column) shows a positive value of the displacement \(\Delta x_{c}\) at the end of the first (\(t=T\)) and second cycle (\(t=2T\)), even when there is no radial shift (\(r=0\)). Though small, this positive value of \(\Delta x_{c}\) indicates that the RBC does not go back exactly to its initial location, which is \(\Delta x_{c}=0\) at \(t=0\). At all values of the radial shift, \(r=0,0.4\), and \(0.7\ \mu m\), this irreversible dynamics is even more evident, as shown in the lateral displacements in Figures 9b-c. The magnitudes of \(\Delta y_{c}\) and \(\Delta z_{c}\) are comparable for all values of \(r\) during the cycles. For the case \(I_{1}U_{1}r_{1}\chi_{1}\) (\(r=0\)), the value of \(\Delta y_{c}\) reaches approximately \(0.16L_{s}\) at the end of the first cycle. For the cases \(I_{1}U_{1}r_{2}\chi_{1}\) and \(I_{1}U_{1}r_{3}\chi_{1}\), the values of \(\Delta y_{c}\) and \(\Delta z_{c}\) reach approximately \(0.25L_{s}\) at the end of the second cycle. In the vertical direction (\(z_{c}\)) in Figure 9c, the well-centered RBC (\(r=0\)) was influenced by the change of flow direction, which is depicted by the upward and downward trends in the first cycle. However, the cell followed a dominant upward trend during the entire second cycle, resulting in a lateral migration of around \(0.25L_{s}\). Therefore, there is significant lateral migration of the RBC during its propagation, regardless of its initial position, in the symmetrical waveform condition (\(I_{1}\)). In conclusion, a symmetrical flow waveform (\(I_{1}\)) results in minimal propulsion along the axial direction but significant lateral migration.
Under the asymmetric waveform \(I_{4}\), the RBC propels along the channel direction with a propulsion step of approximately \(2L_{s}\) in each cycle, as shown in Figure 9d. As the waveform becomes asymmetric with a longer forward phase, the RBC does not go back significantly during the reverse phase. Rather, it remains at the displacement value of \(\Delta x_{c}\approx 1.9L_{s}\) at the end of the first cycle. It continues to propel in the second cycle up to \(\Delta x_{c}\approx 4.0L_{s}\). Surprisingly, the lateral migration of the RBC (\(\Delta y_{c},\Delta z_{c}\)) is smaller compared with that in the symmetric case (\(I_{1}\)). The values of (\(\Delta y_{c},\Delta z_{c}\)) are within \(0.15L_{s}\) for all cases \(I_{4}U_{1}r_{1}\chi_{1}\), \(I_{4}U_{1}r_{2}\chi_{1}\), and \(I_{4}U_{1}r_{3}\chi_{1}\), as shown in Figures 9e and f. In brief, the RBC propels significantly along the axial direction under the impact of the asymmetrical flow waveform \(I_{4}\), but it does not migrate significantly in the lateral directions.
When the RBC is positioned at the centerline \((r=0)\) of the channel, it is observed to be fully controlled by the pulsatile inflow when subjected to the symmetric waveform (\(I_{1}\)), as shown in Figure 10a. In this case, the cell oscillates around its initial position with minimal propulsion. However, as the inflow profile transitions to the asymmetric waveforms (\(I_{2}\), \(I_{3}\), and \(I_{4}\)) with an increasing forward velocity time interval, the RBC gains more momentum and propels far away from its initial position, reaching a maximum propulsion step \(\Delta x_{c}\) of approximately \(4L_{s}\) at the end of the second cycle. In the lateral direction \((y_{c})\), \(\Delta y_{c}\) reached a value of approximately \(0.16L_{s}\) at the end of the first cycle when subjected to the symmetric waveform \((I_{1})\), as shown in Figure 10b, while for the cases \(I_{2}\), \(I_{3}\), and \(I_{4}\) the values of \(\Delta y_{c}\) were comparable at the end of the second cycle, especially as the waveform becomes predominantly asymmetric (\(I_{3}\) and \(I_{4}\)). Furthermore, in the vertical direction \((z_{c}\)), as seen in Figure 10c, the RBC follows a monotonically upward trend throughout the entire second cycle under the symmetric waveform. This results in a vertical propulsion \(\Delta z_{c}\) of approximately \(0.25L_{s}\). For all the asymmetric waveforms, a nearly identical upward trend is observed, leading to a vertical displacement \(\Delta z_{c}\) of about \(0.08L_{s}\) at the end of the first cycle; then, during the entire second cycle, the cell is observed to oscillate with a downward trend. In summary, the symmetric waveform leads to the maximum lateral and vertical propulsion, while the asymmetric waveforms result in the maximum axial propulsion.
For the off-centered case \((r=0.4\ \mu m)\), the axial migration of the RBC exhibited a behavior similar to the centered case, indicating that the initial position does not significantly affect the axial propulsion of the RBC. In the lateral direction, the RBC under \(I_{1}\) and \(I_{2}\) achieved a lateral propulsion of approximately \(0.16L_{s}\) (here \(L_{s}=8\ \mu m\)) at the end of the second cycle. While the centered case reached this value at the end of the first cycle, the off-centered initial placement resulted in a slower lateral propulsion due to the cell experiencing a gradient of velocity magnitude compared to the centered case. Additionally, \(I_{3}\) and \(I_{4}\) displayed nearly
identical profiles with a maximum propulsion of \(0.06L_{s}\). A similar pattern was observed in the vertical direction, where the RBC under \(I_{1}\) and \(I_{2}\) exhibited similar oscillation profiles, reaching a propulsion step of approximately \(0.14L_{s}\) at the end of both cycles. On the other hand, \(I_{3}\) and \(I_{4}\) displayed a nearly identical steady upward trend throughout the entire two cycles, resulting in a vertical propulsion of approximately \(0.04L_{s}\). To summarize, the differentiation observed between \(I_{1}\), \(I_{2}\) and \(I_{3}\), \(I_{4}\) implies that, when the RBC is off-centered, there is a maximum critical forward time interval that must be considered in order to attain the highest propulsion. Based on the findings of this study, to achieve maximum propulsion in all directions, the forward time interval \((T_{f})\) should be less than three times the backward time interval \((T_{b})\), expressed as \(\frac{T_{f}}{T_{b}}<3\).
#### iii.2.3 Extracellular flow dynamics at the vicinity of the RBC under oscillatory flows
The emergence of the RBC shape has a close relationship with the flow pattern of the surrounding fluid (extracellular flow). Under the impact of the channel confinement, the deformation of the RBC is well regulated by the flow waveform, which results in distinct extracellular flow patterns surrounding the RBC, as shown in Figures 8 and 11. To highlight the impact of the RBC motion, the flow pattern is visualized in the frame co-moving with the RBC's centroid (see section III.1.2). Thus, the flow streamlines are represented from the perspective of the RBC.
The case \((I_{1}U_{1}r_{1}\chi_{1})\) is selected to illustrate the evolution of the flow pattern as the RBC deforms from a relatively simple shape to a more complicated one, as depicted in Figure 11 (first row). This case is chosen because the temporal variation of the waveform is completely symmetrical (\(I_{1}\)). Moreover, the RBC is placed initially at the channel axis (\(r=r_{1}=0\)) with the lowest forward velocity \(\psi_{f}=U_{1}=1\ mm/s\). In the case \((I_{1}U_{1}r_{1}\chi_{1})\), Figure 11a reveals that the RBC has a multilobe shape at the end of the forward phase. The presence of the large lobes resulted in more convoluted streamline patterns during
the resting phase. As the RBC undergoes a morphological transition to a rolling stomatocyte at the end of the first cycle \((t=0.9T)\), the streamlines experienced changes (Figure 11b). However, when the RBC transformed into a rolling discocyte in the case \(I_{1}U_{1}r_{2}\chi_{1}\), shown in Figure 11c, the streamlines once again resembled the patterns observed for the croissant shape (constant shear rate case \(I_{0}U_{3}r_{1}\chi_{1}\) in Figure 7a).
The case \((I_{1}U_{3}r_{1}\chi_{1})\) is selected to further illustrate the impact of the peak forward flow \(\psi_{f}\). In this case, the peak velocity is increased to \(\psi_{f}=U_{3}=2\ mm/s\), while the other parameters are kept unchanged in comparison to \(I_{1}U_{1}r_{1}\chi_{1}\). Therefore, the most significant factor in the shape transition is the peak inlet velocity \(\psi_{f}=U_{3}=2\ mm/s\). The RBC transitions quickly to the croissant shape in Figure 11d \((t=0.28T)\). The flow patterns are similar to those observed under constant shear rate (see case \(I_{0}U_{3}r_{1}\chi_{1}\) in Figure 7a). During the rest period (\(T_{f}<t<T_{f}+T_{r}\)), the flow velocity surrounding the cell decreased notably and the complex multilobes shape emerges, as seen in Figure 11e. The flow pattern surrounding the RBC is perturbed minimally as its shape turns to a trilobe in Figure 11f. During the backward phase \((t=1.15T)\), the RBC becomes further elongated as its lobes are stretched further. Consequently, the flow patterns in the vicinity of the cell exhibit pronounced transience, as shown in Figure 11g. In brief, the peak velocity \(\psi_{f}\) can induce complex morphologies of the cell as well as of the associated surrounding fluid flows.
To highlight the impact of the initial location \(r\), the case \(I_{1}U_{3}r_{3}\chi_{1}\) was selected to visualize the flow patterns. As shown in Figure 11h, due to the off-centered initial location (\(r>0\)), the slipper shape emerges during the forward phase. A closed vortex ring is also observed downstream of the RBC as the flow velocity reaches its maximum magnitude in the forward phase. This phenomenon is similar to the one observed in the constant shear rate case (\(I_{0}U_{4}r_{3}\chi_{1}\) with \(U_{4}=6\ mm/s\)) in Figure 7b. This is remarkable since the peak flow is three times lower in this case (\(\psi_{f}=U_{3}=2\ mm/s\)).
Furthermore, the flow patterns corresponding to the hexalobes shape (observed only in the case \(I_{4}U_{2}r_{1}\chi_{1}\)) are shown in Figure 11i. During the resting period \((t=1.15T)\), the extracellular flow exhibits minimal disturbance around the hexalobes, as the RBC completes the transition within the rest period.
## IV Discussion
Due to the membrane flexibility, the RBC responds swiftly to the applied shear rate [9]. This characteristic can be exploited to understand the mechanical properties of the RBC membrane [44], and thus it has the potential to identify pathological changes of the RBC membrane [43]. However, the exact mechanisms of this response are not yet fully understood. In this work, we explore the impact of the unsteady shear rate on controlling cell deformation and migration in micro-channels.
Our numerical method is based on the concept of coupling continuum-particle methods [26], which allows the simulation of RBC dynamics under physiological conditions. Our numerical results showed excellent agreement with available _in vitro_ and computational studies, both in the cellular mechanics and in the extracellular flow pattern of the blood plasma [23; 5; 42]. While most previous studies [27; 28] have only focused on the impact of a constant shear rate on the dynamics of RBCs, our results show that the unsteady shear rate can induce complex RBC morphologies, as discussed below.
### The emergence of the croissant shape and the slipper shape under a constant shear rate \(\overline{\dot{\gamma}_{0}}\)
In micro-channel flows with constant shear rate (\(\overline{\dot{\gamma}_{0}}\)), three common dynamics of RBCs are frequently observed: (i) tumbling; (ii) croissant/parachute; and (iii) slipper shapes, as shown in Figure 5. In unconfined flows [13], the RBC dynamics depends only on the shear rate (\(\dot{\gamma}\) or the \(Ca\)) and the viscosity contrast (\(\lambda\)). However, the confinement of micro-channel
flows imposes an additional condition for the shape transition via the confinement ratio \(\chi\). As shown in Figure 5, the combination of \(Ca\) and \(\chi\) dictates whether the RBC takes the croissant or the slipper shape.
Recent works [5; 23] in rectangular microchannels, which are identical to our channels as shown in Figure 2 and Table 4, further suggest that the emergence of the RBC shape is also dependent on the radial shift (\(r\) - see Figure 2 for its definition). On one hand, the croissant shape dominates when the RBC is placed initially at the cross-sectional center with large confinement. In previous works [5; 23], the croissant shape emerged at low shear rate (\(\overline{\dot{\gamma}_{0}}<300\ s^{-1}\)) if the RBC was placed exactly at the channel's center (\(r=0\)). On the other hand, the slipper shape emerges whenever the RBC is not placed exactly at the centerline (\(r>0\)). The RBC was found to exhibit a (tank-treading) slipper shape at sufficiently high shear rate (\(\overline{\dot{\gamma}_{0}}\approx 500\ s^{-1}\)) and off-centered placement (\(r>0\)) [5; 23]. In cylindrical micro-channels [15], similar observations were confirmed, albeit at lower shear rates (\(0<\overline{\dot{\gamma}_{0}}<80\ s^{-1}\)). Therefore, the radial shift plays an important role in RBC dynamics.
Our results in Figure 5 confirm the croissant-to-slipper transition as the Capillary number (and thus \(\overline{\dot{\gamma}_{0}}\)) increases from 0.1 to 0.37 for a confinement of \(\chi=0.65\). The croissant shape emerges when the RBC is initially placed exactly at the channel centerline at sufficiently low shear rate (\(Ca=0.1\)). When the shear rate is increased to \(Ca=0.37\), the slipper shape emerges. Furthermore, our model is able to capture the intricate dynamics of the tank-treading motion, which is characterized by the rotation of the membrane at a shear rate of \(600\ s^{-1}\), as illustrated in Figure 6. Therefore, our results further confirm the importance of the radial shift.
### The impact of time-varying shear rate \(\bar{\gamma}(t)\) on RBC shape
When the inflow varies in a stepwise manner as seen in Figure 3, the shear rate changes as a function of time, \(\bar{\gamma}(t)\), with distinct forward (\(T_{f}\)) and backward (\(T_{b}\)) time phases. In all cases (\(I_{s}\psi_{1}r_{1}\chi_{2}\), \(I_{s}\psi_{2}r_{1}\chi_{2}\), \(I_{s}\psi_{3}r_{1}\chi_{2}\), \(I_{s}\psi_{4}r_{1}\chi_{3}\), \(I_{s}\psi_{5}r_{1}\chi_{3}\), and \(I_{s}\psi_{6}r_{1}\chi_{3}\)), the RBC is placed exactly on the channel axis (\(r=r_{1}=0\)). The RBC transitions from a discocyte shape toward the croissant shape during its propulsion, as shown in Figure 6. Although the backward phase induces buckling of the cellular membrane, the RBC shape remains symmetrical with respect to the channel axis (multilobes), as shown in Figure 6 at the end of \(T_{b}\). This is remarkable given that the maximum shear rate during the backward phase can be sufficiently large (\(\bar{\gamma}_{f}=207~{}s^{-1}\)). Comparing the cases \(I_{0}U_{3}r_{1}\chi_{1}\) and \(I_{0}U_{4}r_{3}\chi_{1}\) in Table VIII, our results suggest that the breaking of symmetry (croissant-to-slipper transition [43]) is observed only when a radial shift exists (\(r>0\)).
When applying the different sinusoidal waveforms (\(I_{1}\), \(I_{2}\), \(I_{3}\) and \(I_{4}\)) shown in Figure 4, our results show the ubiquitous presence of the croissant-to-slipper transition across all shear rates (\(\bar{\gamma}_{f}=100,150\), and \(200~{}s^{-1}\)). While the applied shear rate \(\bar{\gamma}(t)\) varies greatly over one cycle, the slipper shape appears (\(t\approx 0.3T\)) whenever the RBC is placed off the channel's axis (\(r>0\)), as shown in Tables IX-XII. Note that these waveforms differ in terms of the forward (\(T_{f}\)) and backward (\(T_{b}\)) phases, with the backward phase being the shortest in \(I_{4}\). This explains the emergence of the slipper shape even when the waveform is reversible (\(I_{1}\)): \(I_{1}U_{1}r_{2}\chi_{1}\), \(I_{1}U_{1}r_{3}\chi_{1}\), \(I_{1}U_{2}r_{2}\chi_{1}\), \(I_{1}U_{2}r_{3}\chi_{1}\), \(I_{1}U_{3}r_{2}\chi_{1}\), \(I_{1}U_{3}r_{3}\chi_{1}\). Hence, our results indicate that the initial position of the RBC in the flow plays an essential role in determining the RBC dynamics.
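As a point of reference, a minimal sketch of how such an asymmetric oscillatory shear-rate waveform can be parameterised is given below. The piecewise sinusoidal form, the shared peak magnitude and the specific values of \(T_{f}\) and \(T_{b}\) are illustrative assumptions and do not reproduce the exact inflow profiles of Figure 4.

```python
import numpy as np

def shear_rate_waveform(t, gamma_f, T_f, T_b):
    """Illustrative asymmetric sinusoidal shear-rate waveform gamma(t).

    gamma_f : peak shear rate of the forward phase [1/s]
    T_f, T_b: durations of the forward and backward phases [s]
    The cycle period is T = T_f + T_b; shortening T_b relative to T_f
    makes the waveform increasingly irreversible.
    """
    T = T_f + T_b
    tau = t % T
    if tau < T_f:  # forward phase
        return gamma_f * np.sin(np.pi * tau / T_f)
    # backward phase (reversed flow), assumed here to reach the same peak magnitude
    return -gamma_f * np.sin(np.pi * (tau - T_f) / T_b)

# Example: peak shear rate 150 1/s, backward phase half as long as the forward phase
times = np.linspace(0.0, 1.2, 600)
gamma = np.array([shear_rate_waveform(t, gamma_f=150.0, T_f=0.4, T_b=0.2) for t in times])
```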
Our observations in Figure 8 and Tables IX-XII suggest that the shape transitions under reversible waveforms are accomplished through consistent transient stretching and compression of the membrane. This occurs as the RBC experiences the forward and backward flow phases during the cycle. Moreover, the orientation of the RBC's symmetry axis continuously changes relative to the symmetry axis of the channel. This suggests that the RBC moves in different directions depending on the initial conditions (\(I\), \(\bar{\gamma}_{f}\) and \(r\)).
In particular, experiments and numerical simulations using shear flows showed that RBCs under weak shear rates (\(\bar{\gamma}<10\ s^{-1}\)) typically maintain their discocyte shape with an 80% probability [9]. However, as the shear rate gradually rises from \(10\ s^{-1}\) to \(400\ s^{-1}\), the likelihood of a discocyte shape decreases to 30%. The findings of Lanotte et al. [9] demonstrate that the presence of the discocyte shape is correlated with weak shear rates. Their results have been found to hold true even when considering different viscosity ratios, as evidenced by the work of Mauer et al. [10]. Our study consistently observed the discocyte shape during the second cycle, across all applied waveforms and shear rates (\(\bar{\gamma}_{f}=100\ s^{-1}\), \(150\ s^{-1}\), and \(200\ s^{-1}\)), when the initial positions were off-centered, as indicated in Tables 9-10.
Moreover, our findings in Figures 9 and 10 indicate that by the end of the first cycle the RBC underwent sufficient lateral propulsion in addition to the initial off-centered shift. This movement led the RBC to experience even lower shear rates closer to the channel's walls, facilitating the transition to the discocyte shape. However, under shear flow the stomatocyte shape was observed to dominate the RBC population (65%) when the shear rate is between \(10\ s^{-1}\) and \(400\ s^{-1}\) [9], while we observed the elliptical-rim-shaped stomatocyte only under the symmetric waveform \(I_{1}\) and centered initial placement \((r=r_{1}=0)\), subject to a shear rate of \(100\ s^{-1}\) \((I_{1}U_{1}r_{1}\chi_{1})\). These results strongly suggest that the impact of the waveform is significant in defining the morphology sequence the RBC can follow, even at low oscillatory shear rates.
Our results underscore the significant influence of the applied waveform in shaping the morphological response of the RBC. At high constant shear rates \((400\ s^{-1}<\bar{\gamma}_{0}<2,000\ s^{-1})\) [9], the polylobe shape emerges. This polylobe shape is characterized by a large number of lobes on the RBC surface, known as trilobes and hexalobes [9]. The appearance of these polylobes is attributed to the substantial membrane buckling caused by the reversal of the flow direction. In the current study, polylobes are also observed across all applied waveforms when the cell is placed initially on the channel axis (\(r=0\)), even at weak shear rates (\(\bar{\gamma}_{f}\leq 200\ s^{-1}\)), as in Figure 8 and Tables 9-10. For example, the trilobe shape is observed under the reversible waveform (\(I_{1}U_{2}r_{1}\chi_{1}\) and \(I_{1}U_{3}r_{1}\chi_{1}\)) and under the irreversible waveform (\(I_{3}U_{2}r_{1}\chi_{1}\)). Furthermore, the hexalobe shape only appears under the most reversible waveform (\(I_{4}U_{2}r_{1}\chi_{1}\)) with \(r=0\) and \(\bar{\gamma}_{f}=150\ s^{-1}\), as shown in Table 11. Surprisingly, we observed that the RBC can achieve this transition to polylobes over a short distance (approximately \(4.0\times L_{s}\) for \(I_{1}U_{2}r_{1}\chi_{1}\)), as shown in Figure 9(a).
The RBC shape can be further deformed into elongated shapes. Li et al. [46] demonstrated that as the shear rate increases, RBCs can undergo significant elongation and assume a more cylindrical shape. Our findings support this observation, as we also observed the elongated multilobe shape. Our results suggest that this shape is generally present regardless of the applied waveform, but it only manifests at higher shear rates. Specifically, we observed the elongated multilobe morphology for \(\bar{\gamma}_{f}\geq 150\ s^{-1}\) under the symmetric waveform and centered position, and for \(\bar{\gamma}_{f}=200\ s^{-1}\) for all asymmetric waveforms and initial positions.
### Controlling lateral migration of cells with oscillatory flows
Microfluidic devices are typically used to isolate and separate cells [47] using different techniques. While these devices are promising for many cell-sorting applications [48; 49], the main challenge is the difficulty in obtaining high throughput due to the required length of the microfluidic channels. Recent works have shown that varying the shear rate in time [11; 12] can reduce the required length based on the concept of velocity lift [50], which is the factor that drives the RBC's migration towards the center of the channel.
As the inertial effect is negligible at very low Reynolds numbers (\(Re<0.01\)), the flow is reversible for a rigid body. Thus, a rigid body will return to its initial position if the inflow conditions during the backward phase exactly reverse those of the forward phase. However, the RBC is not a rigid body and its membrane is highly flexible. Our results in Figures 9 and 10 for the symmetrical waveform (\(I_{1}\)) show that the RBC does not return to its initial position at the end of the cycle. There is an axial shift of the RBC from its original position (\(\Delta x_{c}\neq 0\)) at the end of the cycle. Moreover, the RBC migrates significantly in the lateral cross-section (\(\Delta y_{c}\gg 0\) and \(\Delta z_{c}\gg 0\)). Similar results were found experimentally [12] when the average positions of RBCs and stiff beads were compared in oscillatory flow. Thus, due to its soft nature, the RBC shows a significant net actuation in asymmetric oscillating flows. This differential response points to the potential of utilizing oscillatory flow to selectively separate cells based on their mechanical attributes, which could be used in biological and medical applications.
Our findings in Figure 7 show that the flow patterns are directly influenced by the dynamics of the RBC. Under steady-state flow, the extracellular flow dynamics near the RBC were observed to behave differently for the croissant and slipper shapes. In particular, the flow around the steady croissant shape was found to be similar to that of a rigid sphere [51], in which the flow streamlines move nearly symmetrically inwards toward and outwards from the cell on the upstream and downstream sides, respectively. In contrast, for the slipper shape a fully-closed vortex ring, better known as a "bolus", was observed downstream of the cell. Similar results were obtained using experimental Particle Tracking Velocimetry (PTV). Furthermore, our results in Figure 11 suggest that it is possible to control the extracellular flow pattern by adjusting the inflow waveform. The extracellular flow has been found to play an important role in drug-delivery strategies [23] due to its potential use for particle trapping. Therefore, our results suggest that controlling the inflow waveform, either by adjusting the peak flow \(\psi_{f}\) or the shape of the waveform (\(T_{f}\)), might lead to the desired effects in delivering small particles (e.g., therapeutic nano-particles) to the cells.
## Conclusion
The transient dynamics of Red Blood Cells (RBCs) in confined channels under oscillatory flows are investigated using our continuum-particle approach [26]. Our results reveal that the dynamics of RBCs are complex, with different shape modes beyond the usually observed croissant and slipper modes. The extracellular flow pattern around the RBC is found to depend on the RBC shape. Furthermore, our results suggest that the oscillatory flow can be used to control and manipulate the dynamics of the RBC by adapting an appropriate flow waveform. Our specific conclusions are:
* The RBC can transform into a variety of shapes such as multilobes, trilobes and hexalobes by varying the sinusoidal waveform even when it is subjected to a relatively weak flow shear rate (\(\overline{\gamma_{f}}\leq 200\ s^{-1}\)) and sufficient channel confinement \(\chi=0.65\).
* Simple shapes such as the croissant, slipper, and rolling discocyte appear under all waveforms. However, complex shapes such as the rolling stomatocyte, trilobes, and hexalobes appear only under specific conditions. The appearance of a specific shape depends on the inlet waveform \((I)\). In our study, the RBC transitions into 8 shapes under the reversible waveform \((I_{1})\), and into 5 shapes under the irreversible waveform \((I_{2})\). Therefore, it is possible to attain a certain shape using an appropriate waveform.
* Under the reversible flow waveform, the axial displacement of the RBC is rather minimal. However, the lateral displacements are significantly large. Under the irreversible flow waveform, the RBC experiences a large axial displacement but small lateral displacements.
* The maximum lateral displacement of the RBC during its propagation depends on the initial radial shift \((r)\). This maximum value is also dependent on the asymmetry of the flow waveform \((I)\).
* The extracellular flow surrounding the RBC depends on its morphological shape. The flow pattern is thus distinct and unique for each shape.
###### Acknowledgements.
This work is supported by the NSF grant number 1946202 ND-ACES and a start-up package of Trung Le from North Dakota State University. The authors acknowledge the use of computational resources at the Center for Computationally Assisted Science and Technology CCAST-NDSU, which is supported by the NSF MRI 2019077. The authors also received allocation CTS200012 from the Extreme Science and Engineering Discovery Environment (XSEDE). We acknowledge the financial support of NIH-2P20GM103442-19A1 to train undergraduate students in Biomedical Engineering.
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2310.02206
|
Chunking: Continual Learning is not just about Distribution Shift
|
Work on continual learning (CL) has thus far largely focused on the problems
arising from shifts in the data distribution. However, CL can be decomposed
into two sub-problems: (a) shifts in the data distribution, and (b) dealing
with the fact that the data is split into chunks and so only a part of the data
is available to be trained on at any point in time. In this work, we look at
the latter sub-problem, the chunking of data. We show that chunking is an
important part of CL, accounting for around half of the performance drop from
offline learning in our experiments. Furthermore, our results reveal that
current CL algorithms do not address the chunking sub-problem, only performing
as well as plain SGD training when there is no shift in the data distribution.
Therefore, we show that chunking is both an important and currently unaddressed
sub-problem and until it is addressed CL methods will be capped in performance.
Additionally, we analyse why performance drops when learning occurs on
identically distributed chunks of data, and find that forgetting, which is
often seen to be a problem due to distribution shift, still arises and is a
significant problem. We also show that performance on the chunking sub-problem
can be increased and that this performance transfers to the full CL setting,
where there is distribution shift. Hence, we argue that work on chunking can
help advance CL in general.
|
Thomas L. Lee, Amos Storkey
|
2023-10-03T17:04:33Z
|
http://arxiv.org/abs/2310.02206v2
|
# Chunking: Forgetting Matters in Continual Learning even without Changing Tasks
###### Abstract
Work on continual learning (CL) has largely focused on the problems arising from the dynamically-changing data distribution. However, CL can be decomposed into two sub-problems: (a) shifts in the data distribution, and (b) dealing with the fact that the data is split into chunks and so only a part of the data is available to be trained on at any point in time. In this work, we look at the latter sub-problem--the _chunking_ of data--and note that previous analysis of chunking in the CL literature is sparse. We show that chunking is an important part of CL, accounting for around half of the performance drop from offline learning in our experiments. Furthermore, our results reveal that current CL algorithms do not address the chunking sub-problem, only performing as well as plain SGD training when there is no shift in the data distribution. We analyse why performance drops when learning occurs on chunks of data, and find that forgetting, which is often seen to be a problem due to distribution shift, still arises and is a significant problem. Motivated by an analysis of the linear case, we show that per-chunk weight averaging improves performance in the chunking setting and that this performance transfers to the full CL setting. Hence, we argue that work on chunking can help advance CL in general.
## 1 Introduction
How should we update a neural network efficiently when we observe new data? This issue remains an open problem, and is one that the field of _Continual learning_ (CL) addresses. Many methods (Delange et al., 2021; Parisi et al., 2019; Wang et al., 2023) and setups (Hsu et al., 2018; Antoniou et al., 2020; van de Ven & Tolias, 2019) have been proposed in recent years. Specifically, CL studies settings where a learner sees a stream of chunks of data and where the data distribution for each chunk changes over time. This type of change in the data distribution is known as _task shift_ (Caccia et al., 2020). Performing continual learning is thwarted by a persistent problem: information learnt from previously seen data is forgotten when updating on a new chunk of data (Kirkpatrick et al., 2017). Tackling this problem is therefore one of the main focuses of CL research.
CL can be decomposed into two sub-problems: (a) learning with a changing data distribution, and (b) only having access to a single chunk of data for learning at any point in time, unable to ever re-access previous chunks. We call this latter sub-problem the _chunking problem_ and analyse it in this work. We show it is responsible for a significant part of the performance difference between CL and offline learning--learning with full access to all the data. Also, our experiments demonstrate that current methods for CL do not counter this sub-problem at all, performing comparably to plain SGD training in the task-shift-free _chunking setting_. Therefore, we suggest that chunking has been overlooked as a problem in CL and in this work we set out to address this imbalance by looking at it in more detail.
Our analysis of the chunking setting establishes a number of findings. First, we show that the size of each chunk has a significant impact on performance: learning on small chunks leads to much worse performance. Second, our experiments demonstrate that forgetting is the main reason for the performance drop in the chunking setting compared to offline learning. This casts doubt on the common sentiment that forgetting is caused mainly by task shift (Lee et al., 2021; Ramasesh
et al., 2020). Third, motivated by an analysis of the linear case in the chunking setting, we look at per-chunk weight averaging which improves performance in the chunking setting and reduces the amount of forgetting. We also show that this performance benefit transfers to the full CL setting, establishing that work on the chunking sub-problem has the potential to impact CL in general.
The main contributions of this work are:
* Reviving awareness that online training in neural networks is itself an issue, irrespective of task shift. In the context of CL, we formulate that as _the chunking problem_, and demonstrate it is the reason for a large part of the performance drop between offline learning and CL.
* Analysis of chunking, where we show among other things that forgetting is a key problem and that current CL methods do not improve performance in the chunking setting.
* Proposal of a simple method, per-chunk weight averaging, which improves performance under chunking significantly. Furthermore, this performance transfers to the full CL setting, demonstrating how work on chunking can help improve CL in general.
## 2 Preliminaries and Related Work
Continual Learning (CL) is a well-studied problem, with many different settings and methods being proposed (van de Ven & Tolias, 2019; Wu et al., 2022; Mirzadeh et al., 2020; Delange et al., 2021). We focus on classification problems. In this context, standard CL (sometimes called offline CL (Prabhu et al., 2020)) consists of a learner seeing a sequence of tasks. Each _task_ consists of a single chunk of data with all the training data from a subset of classes in the dataset (van de Ven & Tolias, 2019). A learner only views each task once and can only revisit data from previous tasks which it has stored in a limited memory buffer. For example, for CIFAR-10 (Krizhevsky, 2009) a learner might first see all the data for the airplane and ship classes, then see the data from the dog and cat classes and so on, seeing all data from two classes at a time until the learner has seen all the classes. In addition to standard CL, there is another common CL setting called online CL (Mai et al., 2021; Lee & Storkey, 2023). As shown in Figure 1, in online CL, instead of there being a one-to-one map between tasks and chunks, the data for a task is split into multiple smaller chunks, the size of mini-batches, and a learner only sees each chunk once and so cannot revisit previous chunks even though they are of the same task.
In this work we look at the chunking sub-problem of CL. This problem is closely related to online learning, without task shift (Hoi et al., 2021; Bottou & Le Cun, 2003). In both cases the data is observed in the form of a stationary data stream. However, in chunking the data is batched into chunks to match modern neural network learning processes. Straight online learning can be seen as a special case when each chunk consists of one data instance. Furthermore, we investigate the neural network case in contrast to much work in online learning which focuses on the linear case (Hoi et al., 2021). There is recent work on online learning of neural networks, for example Ash & Adams (2020); Caccia et al. (2022); and Sahoo et al. (2017). But, they do not link or compare their work to CL and often the settings and assumptions are quite different from CL. This is unlike our work which focuses on providing insight into CL, which to the best of our knowledge has not been looked at in detail before.
As part of our analysis of the chunking setting we observe that preventing forgetting is the main challenge. The problem of forgetting in online learning of neural networks (without task shift) has a long history. The term _catastrophic forgetting_ originates in work on Hopfield associative memories (Hopfield, 1982), where online addition of new data eventually results in the erasure of all stored memories, and minimising forgetting to increase storage was a goal (Storkey, 1997). The general problem of forgetting during learning was subsequently characterised by Grossberg (1988) as the _stability-plasticity dilemma_. In the following decade, the issue of forgetting in online learning of feedforward networks (de Angulo & Torras, 1995; Polikar et al., 2001) also received some attention. Despite not being a solved problem, it became less of a focus as non-neural approaches for machine learning came to the fore in the mid 1990s. With a resurgence of interest in neural networks, online learning was reconsidered in the form of CL (Delange et al., 2021; Ramasesh et al., 2020; Wang et al., 2023), but with a focus on more realistic settings that also involve task shift (Ramasesh et al., 2020; Lee et al., 2021). Because of this greater complexity, the component of forgetting due to incremental online learning or chunking has been comparatively understudied in the CL literature.
Yet decomposing a problem can aid its solution, and indeed, we show that chunking is responsible for a large part of the forgetting that happens in CL.
To improve performance in the chunking setting we look at using per-chunk weight averaging. There have been many weight averaging approaches proposed for offline learning (Izmailov et al., 2018; Tarvainen and Valpola, 2017). Instead, in this work we look at applying a per-chunk weight averaging approach to the online chunking setting, motivated by our analysis of the linear case.
## 3 The Chunking Setting
In the _chunking setting_, a learner sees a sequence of chunks \(C_{1},C_{2},\ldots,C_{N}\), and trains on one chunk of data at a time: chunks are not revisited. Each chunk of data consists of instance pairs \((x,y)\), with \(x\in X\) (e.g. images) and labels \(y\in Y\). The data in all chunks are drawn from the same distribution, so there is no distribution shift. Furthermore, in this paper, to control for class imbalance effects we consider a _balanced_ chunking setting (henceforth assumed); we constrain each chunk to have as close to the same number of instances for each class in \(Y\) as possible. In this way we ensure the results of our experiments are solely due to the effects of limited data availability through chunking and not due to class imbalance. We record results for the case where the chunks are class imbalanced in Appendix B and observe that for our experimental setup class imbalance does not have any significant effect.
In practice, to perform experiments in the chunking setting, consider a class-balanced training dataset of size \(M\) and a target chunk size \(S\). First, we randomly reorder the training data for each class, and then arrange all the data into a class-ordered list. Data is then sequentially allocated into \(\lfloor M/S\rfloor\) chunks by assigning each element of the list in turn to a chunk, in a cyclical fashion. So, the first data item goes into chunk 1, second into chunk 2 etc., up to an item into chunk \(\lfloor M/S\rfloor\), then the next into chunk 1 again and so on. Then we randomly permute the data within each chunk and randomly reorder the chunks themselves. To ensure chunks are fully balanced, in the experiments in this paper we choose chunk sizes so that all chunks are of equal size and contain the same number of data instances for each class. Finally, we reserve a portion of data from each class to form a test set which is used to evaluate the accuracy of a method.
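A minimal sketch of this balanced chunking procedure is given below; the function and variable names are ours, and the handling of the held-out test split is omitted.

```python
import random
from collections import defaultdict

def make_balanced_chunks(dataset, chunk_size, seed=0):
    """Split (x, y) pairs into class-balanced, identically distributed chunks.

    The data is grouped by class and shuffled within each class, arranged into a
    class-ordered list, and then dealt cyclically into floor(M / S) chunks so that
    every chunk receives as close to the same number of instances per class as
    possible. Finally, the data within each chunk and the chunk order are shuffled.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append((x, y))
    for items in by_class.values():
        rng.shuffle(items)

    # Class-ordered list: all shuffled items of the first class, then the next, ...
    ordered = [pair for label in sorted(by_class) for pair in by_class[label]]

    n_chunks = len(ordered) // chunk_size
    chunks = [[] for _ in range(n_chunks)]
    for i, pair in enumerate(ordered):  # deal each item to chunks in a cyclical fashion
        chunks[i % n_chunks].append(pair)

    for chunk in chunks:
        rng.shuffle(chunk)
    rng.shuffle(chunks)
    return chunks
```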
The only difference between the chunking setting and the full CL setting is the lack of task shift. We do not change tasks in the chunking setting and instead the stream consists of chunks from a single task which contains all the training data from all the classes, as shown in Figure 1. Therefore, the chunking setting provides a simple way to analyse and understand the problems caused by chunking itself. Also, performance in the chunking setting gives an upper bound to CL performance, and so without solving this setting CL will never be able to improve beyond current chunking performance.
## 4 Analysis of the Chunking Setting
To see how much chunking impacts the performance in CL, we look at its relative contribution to the performance drop of CL from offline learning, showing it plays a significant part. This was
Figure 1: Diagram showing the standard CL, Online CL and chunking settings, where \(T_{k}\) denotes a task, \(C_{t}\) denotes a chunk and the arrows indicate which chunk belongs to which task. The figure shows that the chunking setting is a reduced CL setting where the task does not change.
achieved by performing an experiment where we compare the performance of the state-of-the-art CL method DER++ (Buzzega et al., 2020) for both standard CL and the chunking setting to offline SGD training. For standard CL we look at class-incremental learning, which means that at test time the learner has to classify across all the classes (van de Ven and Tolias, 2019), like the chunking setting. For the experiments, we use a ResNet18 backbone and a 10 task/chunk split of CIFAR-100 (Krizhevsky, 2009) and Tiny ImageNet (Stanford, 2015)--the rest of the experimental details are given in Appendix A. The results are presented in Table 1 and show that the performance drop between offline learning and chunking is \(50.05\%\) and \(46.69\%\) of the full performance drop from offline learning to CL for CIFAR-100 and Tiny ImageNet, respectively. This indicates that a significant part of the performance drop of CL from offline learning is due to chunking and not due to the task changing. Also, in the real world it is often the case that the hard task shifts commonly used in continual learning do not happen (Bang et al., 2021, 2022) and instead there are smoother changes between tasks, which should reduce the effect of task shift and increase the importance of dealing with chunking.
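The chunking proportion reported in Table 1 is simply the share of the offline-to-CL accuracy drop that is already incurred in the chunking setting; as a quick sanity check with the numbers above (our own snippet, not the authors' code):

```python
def chunking_proportion(offline_acc, chunking_acc, cl_acc):
    """Fraction of the offline-to-CL accuracy drop attributable to chunking alone."""
    return (offline_acc - chunking_acc) / (offline_acc - cl_acc)

print(f"CIFAR-100:     {chunking_proportion(73.72, 63.35, 53.00):.2%}")  # ~50.05%
print(f"Tiny ImageNet: {chunking_proportion(60.63, 50.54, 39.02):.2%}")  # ~46.69%
```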
### Performance in the Chunking setting
Our results on the chunking setting show that CL methods perform no better than plain SGD and perform worse as the size of the chunks decreases. For instance, Figures 2 and 3 present the performance of state-of-the-art CL methods for different chunk sizes and a memory buffer size of 500 examples on CIFAR-10, CIFAR-100 and Tiny ImageNet, which are commonly used in the CL literature (Delange et al., 2021). The full experimental details of this experiment are described in Appendix A. The results show that there is a large performance drop as the chunk size decreases. For example, on CIFAR-100 for offline learning when all the data is in one chunk, corresponding to a chunk size of 50000, CL methods get a test accuracy of around \(73\%\) but when each chunk consists of 1000 examples they get around \(45\%\). Also, Figures 2 and 3 show that all the CL methods perform roughly the same as plain SGD. Hence, our results indicate current CL methods do not tackle the chunking problem at all and instead have focused on reducing the performance drop due to task shift, as they perform much better than SGD on settings with task shift (Wang et al., 2023). One point
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Dataset & Memory Size & Offline & Chunking & CL & Chunking Prop. \\ \hline CIFAR-100 & 2000 & \(73.72_{\pm 0.115}\) & \(63.35_{\pm 0.348}\) & \(53.00_{\pm 0.327}\) & \(50.05\%\) \\ Tiny ImageNet & 5120 & \(60.63_{\pm 0.366}\) & \(50.54_{\pm 0.118}\) & \(39.02_{\pm 0.97}\) & \(46.69\%\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy of DER++ when using a ResNet18 in the offline, chunking and standard CL class-incremental settings, along with the drop in accuracy from offline learning to CL due to chunking (Chunking Prop.). We split each dataset into 10 tasks following the experimental setup of Buzzega et al. (2020) and Boschini et al. (2022) and also use a memory size selected in those works.
Figure 2: End-of-training accuracy against chunk size on CIFAR-10 and CIFAR-100, where each data point on a curve presents the end-of-training accuracy of a method from a full run with chunks of the size given on the horizontal axis.
to note on this is that the replay methods ER (Chaudhry et al., 2020) and ER-ACE (Caccia et al., 2021) perform better than SGD for very small chunk sizes. This is due to them storing 500 examples in memory and so having an effective chunk size of 500 more data points than SGD, which impacts performance for chunk sizes below 500. Lastly, we look at the effect using pretrained models has in the chunking setting in Appendix F.
An important question to ask is why chunking reduces performance relative to the offline setting. There are three general possibilities: not integrating all the information a chunk has into the model (underfitting), fully integrating each chunk's information but at the cost of forgetting previous information (forgetting), or a mixture of both. To explore which possibility is true we look at training using 50 chunks and present in Figure 4 the loss curve of learning on the first 13 chunks for CIFAR-100, and in Figure 5 the test accuracy and the accuracy on the training data of the \(5^{th}\), \(20^{th}\) and \(40^{th}\) chunks evaluated at the end of each chunk, for CIFAR-100 and Tiny ImageNet (see Appendix D for CIFAR-10). The loss curve in Figure 4 shows that we fit each chunk well, as the loss plateaus for each chunk and at a relatively low value. Furthermore, the accuracy curves for each chunk's training data in Figure 5 establish that after training on a chunk the model fits it perfectly, achieving an accuracy of \(100\%\). Hence, we know that we fit each chunk well, removing the possibility of underfitting. Figure 5 also shows that after learning on a chunk the accuracy on that chunk's data quickly drops and falls back to the level of the test set performance, showing that the learner is forgetting a lot of the chunk's information. However, not all of a chunk's information is forgotten as
the test accuracy improves as the learner sees more chunks. Therefore, our results establish that the performance drop in the chunking setting is due to forgetting. This demonstrates that forgetting is not only due to task shift, as suggested in previous work (Lee et al., 2021; Ramasesh et al., 2020), but that it is also due to seeing the data in chunks.
We have shown that forgetting is the reason for the reduced performance in the chunking setting, when compared to offline learning. However, not all the information provided by a chunk is forgotten and if a learner could repeatedly resample chunks it would approach offline performance. This fact is standard knowledge for online learning (Bottou and Le Cun, 2003) and has recently been shown to be true for CL with task-shift (Lesort et al., 2023). However, unlike standard online learning, in the chunking setting and CL in general it is not possible to resample chunks and so in these settings we need to be able to fully learn a chunk of data without needing to repeatedly revisit it in the future. This implies that improving chunking performance and reducing forgetting is closely related to improving the efficiency of learning. Hence, we hope that work on improving chunking performance will also improve the general efficiency of learning algorithms and vice versa.
### Analysis of the Linear Case
To analyse the chunking problem further we turn to the linear regression case, where we can leverage closed-form solutions. In this case, the naive solution is to perform least squares on each arriving chunk. However, as the least squares problem is convex and so does not depend on the initialised weights, it will fully forget all the past chunks, only using the last chunk to create the predictor. This means that the standard least squares solution to linear regression fails in the chunking setting. Instead, a better solution is to use Bayesian linear regression (Minka, 2000), because, given any particular chunking of the data, it will return the same predictor and so fully solves the chunking setting. Therefore, it is instructive to see how Bayesian linear regression prevents forgetting. To achieve this we present below the update equations for Bayesian linear regression. The prior on the weights is \(\mathbf{\theta}\sim\mathcal{N}(\mathbf{0},\mathbf{V}_{0})\) and the posterior after seeing all the chunks up to and including the \((k-1)\)th is \(\mathbf{\theta}|C_{1:k-1}\sim\mathcal{N}(\mathbf{m}_{k-1},\mathbf{V}_{k-1})\). Additionally, for a chunk \(C_{t}\) we define \(\mathbf{X}_{t}\) as its row-wise matrix of data instances and \(\mathbf{y}_{t}\) as its vector of targets. The likelihood is defined by assuming \(\mathbf{y}|\mathbf{x},\mathbf{\theta}\sim\mathcal{N}(\mathbf{\theta}^{T}\mathbf{x},\sigma^{2})\). Then, the Bayesian posterior after seeing the \(k\)th chunk is
\[\mathbf{\theta}|C_{1:k}\sim\mathcal{N}(\mathbf{m}_{k},\mathbf{V}_{k}), \tag{1}\] \[\mathbf{m}_{k}=\mathbf{V}_{k}\mathbf{V}_{k-1}^{-1}\mathbf{m}_{k-1}+\frac{1}{\sigma^{2}}\mathbf{V}_{k}\mathbf{X}_{k}^{T}\mathbf{y}_{k}, \tag{2}\] \[\mathbf{V}_{k}^{-1}=\mathbf{V}_{k-1}^{-1}+\frac{1}{\sigma^{2}}\mathbf{X}_{k}^{T}\mathbf{X}_{k}. \tag{3}\]
By recursively expanding the \(\mathbf{m}_{k-1}\) and \(\mathbf{V}_{k-1}\) terms till we reach the prior we have that
\[\mathbf{m}_{k}=\frac{1}{\sigma^{2}}\sum_{t=1}^{k}\mathbf{V}_{k}\mathbf{X}_{t}^{T}\mathbf{y}_{t} \tag{4}\] \[\mathbf{V}_{k}^{-1}=\mathbf{V}_{0}^{-1}+\frac{1}{\sigma^{2}}\sum_{t=1}^{k}\mathbf{X}_{t}^{T}\mathbf{X}_{t}=\mathbf{V}_{0}^{-1}+\frac{1}{\sigma^{2}}\mathbf{X}_{1:k}^{T}\mathbf{X}_{1:k}. \tag{5}\]
The equations above show that Bayesian linear regression prevents forgetting by having its posterior mean \(\mathbf{m}_{k}\) be: (a) a sum of the least squares solutions of each chunk, and (b) instead of using each chunk's unnormalised empirical covariance \(\mathbf{X}_{t}^{T}\mathbf{X}_{t}\) in the least squares solutions, it uses the running estimate of the weight precision \(\mathbf{V}_{k}^{-1}\). Computing and storing \(\mathbf{V}_{k}^{-1}\) is infeasibly costly for very large systems (e.g. neural networks), taking up \(O(\text{dim}(\mathbf{\theta})^{2})\) space. Therefore, assuming there is only enough memory to store a set of weights, a fallback is to use a sum of the least squares solutions to each chunk. This is achieved by _weight averaging_, where at each chunk we perform least squares on that chunk and add it to a running average, which results in the update equation,
\[\mathbf{m}_{k}=\frac{k-1}{k}\mathbf{m}_{k-1}+\frac{1}{k\sigma^{2}}(\mathbf{X}_{k}^{T}\mathbf{X}_{k})^{-1}\mathbf{X}_{k}^{T}\mathbf{y}_{k}. \tag{6}\]
Again, by recursively expanding \(\mathbf{m}_{k-1}\) we have that,
\[\mathbf{m}_{k}=\frac{1}{k\sigma^{2}}\sum_{t=1}^{k}(\mathbf{X}_{t}^{T}\mathbf{X}_{t})^{-1}\mathbf{X}_{t}^{T}\mathbf{y}_{t}. \tag{7}\]
Weight averaging gives mean weights similar to those of Bayesian linear regression, where instead of using \(\mathbf{V}_{k}\) it uses the per-chunk estimate \(\frac{1}{k}(\mathbf{X}_{t}^{T}\mathbf{X}_{t})^{-1}\), with the division by \(k\) correctly scaling the estimate. Both \(\mathbf{V}_{k}\) and \(\frac{1}{k}(\mathbf{X}_{t}^{T}\mathbf{X}_{t})^{-1}\) are unnormalised estimates of the precision of the data distribution. Therefore, when each chunk is large enough that they are both accurate estimates, we have that \(\frac{1}{k}(\mathbf{X}_{t}^{T}\mathbf{X}_{t})^{-1}\approx\mathbf{V}_{k}\) for all \(t\in\{1,\dots,k\}\). In this case, weight averaging approximates Bayesian linear regression well and so should not forget that much. This means it greatly improves performance over standard linear regression, which forgets all but the last chunk. Hence, the question arises whether this analysis, in which weight averaging improves performance provided the chunks are large enough, still holds true for neural networks. We look at this in the next section.
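Before moving to neural networks, a minimal numerical sketch of the linear-case comparison is given below (our own illustration, not code from the paper; we take \(\sigma^{2}=1\)). It implements the chunk-wise Bayesian update of Equations 2-3 next to the per-chunk weight average of Equation 6, and on large, identically distributed chunks the two estimates come out close.

```python
import numpy as np

def bayesian_lr_chunks(chunks, dim, sigma2=1.0, prior_var=10.0):
    """Chunk-wise Bayesian linear regression (Eqs. 2-3): exact posterior mean, order-free."""
    V_inv = np.eye(dim) / prior_var  # prior precision V_0^{-1}
    m = np.zeros(dim)
    for X, y in chunks:
        V_inv_new = V_inv + X.T @ X / sigma2                          # Eq. 3
        m = np.linalg.solve(V_inv_new, V_inv @ m + X.T @ y / sigma2)  # Eq. 2
        V_inv = V_inv_new
    return m

def weight_avg_chunks(chunks):
    """Per-chunk least squares folded into a running mean of the weights (Eq. 6, sigma^2 = 1)."""
    m = None
    for k, (X, y) in enumerate(chunks, start=1):
        theta_k = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares on chunk k only
        m = theta_k if m is None else (k - 1) / k * m + theta_k / k
    return m

# Toy data: 10 large, identically distributed chunks drawn from the same linear model
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
chunks = []
for _ in range(10):
    X = rng.normal(size=(500, 5))
    chunks.append((X, X @ true_w + 0.1 * rng.normal(size=500)))

print(np.linalg.norm(bayesian_lr_chunks(chunks, dim=5) - weight_avg_chunks(chunks)))  # small
```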
## 5 Per-Chunk Weight Averaging
From the analysis of the linear case (Section 4.2), we see that averaging the weights learnt at the end of each chunk is a way to improve performance in the chunking setting for linear models. Motivated by this, we now look at the neural network case, where our results show that weight averaging also improves performance, often by a large margin. More precisely, the simple method we look at, which we call per-chunk weight averaging, consists of training the model as normal while additionally storing an average of the weights learnt at the end of each chunk; this average is not used during training but is used as the weights of the network at evaluation. Here we consider the _weights_ to be all the parameters of the neural network, including batch normalisation statistics (Ioffe & Szegedy, 2015).
Figure 6: Plots (a), (b) and (c) show the end-of-training accuracy when learning with the given chunk size for CIFAR-10, CIFAR-100 and Tiny ImageNet, where sgd is learning without weight averaging and we display EMA results for \(\alpha{=}0.8\) and \(0.95\). Plot (d) shows, when using mean weight averaging, the accuracy at the end of learning on each chunk for the training set of the \(5^{th}\), \(20^{th}\) and \(40^{th}\) chunks and the test set, for Tiny ImageNet with 50 chunks, corresponding to a chunk size of 2000.
More specifically, we look at using in evaluation the mean or an exponential moving average (EMA) of the weights found after training on each chunk up to some chunk \(k\), defined by
\[\mathbf{\theta}_{k}^{MEAN} =\frac{1}{k}\sum_{t=1}^{k}\mathbf{\theta}_{t} \tag{8}\] \[\mathbf{\theta}_{k}^{EMA} =\alpha\mathbf{\theta}_{k-1}^{EMA}+(1-\alpha)\mathbf{\theta}_{k}, \tag{9}\]
where \(\mathbf{\theta}_{t}\) is the value of the weights after learning on chunk \(C_{t}\) and for EMA, \(\alpha\in[0,1]\) controls how much weight is given to old versus newly learnt end-of-chunk weights.
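A minimal PyTorch-style sketch of maintaining these per-chunk averages during training is shown below; the helper names and the training-loop outline are our own and not from the paper. Only floating-point parameters and buffers are averaged, while integer buffers (e.g. batch-norm counters) are copied, and the averaged weights are loaded into the model only for evaluation.

```python
import copy
import torch

@torch.no_grad()
def update_weight_averages(model, avg_state, k, alpha=0.95):
    """Fold the end-of-chunk weights of chunk k (1-indexed) into the MEAN and EMA averages (Eqs. 8-9)."""
    current = model.state_dict()
    if avg_state is None:  # the first chunk initialises both running averages
        return {"mean": copy.deepcopy(current), "ema": copy.deepcopy(current)}
    for name, param in current.items():
        if not param.is_floating_point():  # e.g. BatchNorm num_batches_tracked counters
            avg_state["mean"][name] = param.clone()
            avg_state["ema"][name] = param.clone()
            continue
        avg_state["mean"][name] = avg_state["mean"][name] * (k - 1) / k + param / k
        avg_state["ema"][name] = alpha * avg_state["ema"][name] + (1 - alpha) * param
    return avg_state

# Sketch of the chunked training loop (train_on_chunk is an assumed plain-SGD helper):
#   avg_state = None
#   for k, chunk_loader in enumerate(chunk_loaders, start=1):
#       train_on_chunk(model, chunk_loader)
#       avg_state = update_weight_averages(model, avg_state, k)
#   eval_model = copy.deepcopy(model)
#   eval_model.load_state_dict(avg_state["mean"])   # averaged weights used only at evaluation
```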
To observe whether per-chunk weight averaging improves performance in the chunking setting, we carry out experiments using it in combination with plain SGD training. The reason we only look at plain SGD training and not a CL method is that, as shown in Figures 2 and 3, no CL method looked at performs any better than SGD in the chunking setting. The experimental setup is the same as the previous experiments and is described in Appendix A. The results of the experiments are presented in plots (a), (b) and (c) of Figure 6 and show that it is clear that for all three datasets--CIFAR-10, CIFAR-100 and Tiny ImageNet--using a per-chunk weight average in evaluation increases accuracy. For instance, for the smallest chunk size looked at for each dataset, using mean weight averaging improves accuracy by \(+4.32\%\), \(+8.22\%\) and \(+11.73\%\) for CIFAR-10, CIFAR-100 and Tiny ImageNet, respectively. Additionally, Figure 6 demonstrates that using the mean is better than or comparable to using EMA for nearly all chunk sizes on each dataset. We only display EMA for two \(\alpha\) values in the figure but we looked at many more in Appendix E, and selected the two best values to show in Figure 6. So, our results show that using the mean of the weights learnt after learning on each chunk for prediction is an effective way to improve performance in the chunking setting.
To analyse why per-chunk weight averaging improves performance, we look at how well it preserves the information of past chunks. To do this, as in Figure 5, we measure, for per-chunk mean weight averaging, the test accuracy and the accuracy on the training data of the \(5^{th}\), \(20^{th}\) and \(40^{th}\) chunks at the end of learning on each chunk, when using 50 chunks. The results are shown in plot (d) of Figure 6 for Tiny ImageNet and for CIFAR-10 and CIFAR-100 in Appendix D. By comparing these results to the ones when using the final weights for evaluation, shown in Figure 5, we see that when using per-chunk mean weight averaging more information is preserved from previous chunks. This is because using it gives higher accuracy on the training data from previous chunks than on the test set, long after those chunks were trained on. In contrast, when using the final weights for evaluation this is not the case, as after learning on a chunk the accuracy on the training data of that chunk quickly drops down to around the test set accuracy. Therefore, part of the reason per-chunk weight averaging performs well is that it forgets less than plain SGD in the chunking setting.
### Application to Continual Learning
While per-chunk weight averaging improves performance in the chunking setting, it is also important to see how this translates to the full CL setting, so that we can see how work on the chunking setting can impact CL in general. To do this we perform experiments using mean weight averaging in class and task incremental learning (van de Ven & Tolias, 2019), the two main CL scenarios, using four standard well-performing methods: DER++ (Buzzega et al., 2020), experience replay (ER) (Chaudhry et al., 2020), AGEM (Chaudhry et al., 2019) and GSS (Aljundi et al., 2019). In common with the rest of this work and many works on continual learning (Delange et al., 2021; Buzzega et al., 2020), we use CIFAR-10, CIFAR-100 and Tiny ImageNet as the datasets for this experiment, splitting CIFAR-10 into 5 tasks each containing the data of 2 classes and splitting CIFAR-100 and Tiny ImageNet into 10 tasks each consisting of the data of 10 classes for CIFAR-100 and 20 for Tiny ImageNet. The difference between class and task incremental learning is that at test time, for task-incremental learning, each method only predicts a data instance's class from among the classes of that instance's task, while for class-incremental learning the method has to classify between all classes seen. Additionally, we look at both standard and online CL (as defined in Section 2) and set the memory size to be 100 examples for all methods. For standard CL, methods can repeatedly iterate over the data of a task; in our experiments we use 50 epochs per task for CIFAR-10 and CIFAR-100 and 100 epochs for Tiny ImageNet, like previous work (Buzzega et al., 2020).
The results of the experiments on per-chunk mean weight averaging in CL are presented in Table 2 and demonstrate that in general it improves performance. For example, in the standard CL setting
using per-chunk mean weight averaging improves performance on average by \(+6.39\%\), \(+11.11\%\), \(+12.02\%\) and \(+11.36\%\) for DER++, ER, AGEM and GSS, respectively. In the online CL setting, it improves performance on average by \(+5.05\%\), \(+4.52\%\), \(+8.82\%\) and \(+3.68\%\) for DER++, ER, AGEM and GSS, respectively. However, for DER++ on CIFAR-10 and for GSS on Tiny ImageNet in class-incremental learning, per-chunk mean weight averaging does worse than using the final learnt weights. That said, as a method using per-chunk mean weight averaging has access to both options, by validating the performance of each option it is always possible to pick the better one, avoiding any accuracy loss. So, we have shown that per-chunk weight averaging improves performance in the chunking setting and that, in general, this improvement transfers to CL, showing that work on the chunking sub-problem can impact CL research as a whole.
## 6 Conclusions
In this work we have looked at chunking, bringing awareness to the fact that it is an important sub-problem of continual learning (CL), responsible for a large part of the performance drop from offline learning to CL. We have presented results evidencing that current CL methods do not tackle the chunking problem at all, having comparable performance to plain SGD training in the chunking setting. Additionally, we have demonstrated that the reason for the performance drop in the chunking setting is forgetting, and that the size of each chunk has a significant effect on performance. Motivated by an analysis of the linear case, we also look at using per-chunk weight averaging in the chunking setting, showing that it improves performance. Furthermore, we show that per-chunk weight averaging improves the performance of CL methods in the full CL setting, indicating that future work on chunking has the possibility of improving CL as a whole.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{Tiny ImageNet} \\ \cline{3-8} Setting & Method & Class-IL & Task-IL & Class-IL & Task-IL & Class-IL & Task-IL \\ \hline \multirow{8}{*}{\begin{tabular}{l} Online \\ \end{tabular} } & DER++ & \(34.76_{\pm 2.20}\) & \(78.56_{\pm 1.10}\) & \(6.73_{\pm 0.26}\) & \(41.21_{\pm 1.34}\) & \(5.48_{\pm 0.21}\) & \(30.95_{\pm 0.11}\) \\ & WA-DER++ & \(33.46_{\pm 0.72}\) & \(81.97_{\pm 0.25}\) & \(12.34_{\pm 0.19}\) & \(52.34_{\pm 0.43}\) & \(8.53_{\pm 0.05}\) & \(39.32_{\pm 0.55}\) \\ & \(\Delta\)Acc & \(-1.30\) & \(+3.41\) & \(+5.61\) & \(+11.13\) & \(+3.05\) & \(+8.37\) \\ \cline{2-8} & ER & \(36.19_{\pm 1.19}\) & \(81.89_{\pm 0.92}\) & \(8.45_{\pm 0.45}\) & \(44.14_{\pm 1.31}\) & \(5.56_{\pm 0.21}\) & \(27.23_{\pm 0.65}\) \\ & WA-ER & \(39.59_{\pm 0.60}\) & \(84.27_{\pm 0.37}\) & \(14.01_{\pm 0.23}\) & \(50.66_{\pm 0.77}\) & \(7.77_{\pm 0.09}\) & \(34.26_{\pm 0.33}\) \\ & \(\Delta\)Acc & \(+3.40\) & \(+2.38\) & \(+5.56\) & \(+6.52\) & \(+2.21\) & \(+7.03\) \\ \cline{2-8} & AGEM & \(16.82_{\pm 0.61}\) & \(70.70_{\pm 1.92}\) & \(4.70_{\pm 0.51}\) & \(29.56_{\pm 1.93}\) & \(3.93_{\pm 0.22}\) & \(20.53_{\pm 1.30}\) \\ & WA-AGEM & \(22.59_{\pm 1.04}\) & \(72.37_{\pm 3.03}\) & \(10.73_{\pm 0.35}\) & \(44.68_{\pm 0.58}\) & \(9.06_{\pm 0.47}\) & \(34.44_{\pm 0.59}\) \\ & \(\Delta\)Acc & \(+5.77\) & \(+1.67\) & \(+6.03\) & \(+20.39\) & \(+5.13\) & \(+13.91\) \\ \cline{2-8} & GSS & \(27.33_{\pm 1.26}\) & \(81.28_{\pm 1.47}\) & \(7.93_{\pm 0.16}\) & \(49.95_{\pm 0.24}\) & \(5.59_{\pm 0.11}\) & \(36.00_{\pm 0.49}\) \\ & WA-GSS & \(35.03_{\pm 0.50}\) & \(84.51_{\pm 0.41}\) & \(8.40_{\pm 0.32}\) & \(54.68_{\pm 0.28}\) & \(4.82_{\pm 0.06}\) & \(42.69_{\pm 0.38}\) \\ & \(\Delta\)Acc & \(+7.70\) & \(+3.23\) & \(+0.47\) & \(+4.73\) & \(-0.77\) & \(+6.69\) \\ \hline \multirow{8}{*}{
\begin{tabular}{l} Standard \\ \end{tabular} } & DER++ & \(53.18_{\pm 0.87}\) & \(88.90_{\pm 0.30}\) & \(16.26_{\pm 1.22}\) & \(58.92_{\pm 0.36}\) & \(11.08_{\pm 0.38}\) & \(34.26_{\pm 0.32}\) \\ & WA-DER++ & \(49.88_{\pm 1.63}\) & \(39.25_{\pm 0.33}\) & \(23.46_{\pm 1.48}\) & \(72.46_{\pm 1.08}\) & \(12.39_{\pm 0.93}\) & \(49.51_{\pm 0.69}\) \\ & \(\Delta\)Acc & \(-3.30\) & \(+4.35\) & \(+7.20\) & \(+13.54\) & \(+1.31\) & \(+15.25\) \\ \cline{2-8} & ER & \(40.10_{\pm 0.81}\) & \(89.79_{\pm 0.75}\) & \(11.78_{\pm 0.34}\) & \(57.80_{\pm 1.02}\) & \(8.36_{\pm 0.16}\) & \(31.72_{\pm 0.46}\) \\ & WA-ER & \(56.49_{\pm 0.87}\) & \(94.28_{\pm 0.17}\) & \(24.24_{\pm 0.64}\) & \(70.07_{\pm 0.29}\) & \(12.31_{\pm 0.19}\) & \(46.71_{\pm 0.33}\) \\ & \(\Delta\)Acc & \(+16.48\) & \(+4.49\) & \(+12.46\) & \(+12.27\) & \(+3.95\) & \(+14.99\) \\ \cline{2-8} & AGEM & \(20.19_{\pm 0.28}\) & \(85.80_{\pm 1.18}\) & \(9.35_{\pm 0.01}\) & \(46.99_{\pm 0.26}\) & \(8.15_{\pm 0.05}\) & \(24.76_{\pm 0.62}\) \\ & WA-AGEM & \(38.87_{\pm 2.83}\) & \(92.06_{\pm 0.61}\) & \(18.05_{\pm 0.68}\) & \(65.23_{\pm 0.61}\) & \(10.42_{\pm 0.32}\) & \(42.75_{\pm 0.25}\) \\ & \(\Delta\)Acc & \(+18.68\) & \(+6.26\) & \(+8.70\) & \(+18.24\) & \(+2.27\) & \(+17.99\) \\ \cline{2-8} & GSS & \(30.91_{\pm 1.02}\) & \(86.08_{\pm 0.35}\) & \(10.74_{\pm 0.10}\) & \(50.30_{\pm 0.28}\) & \(8.30_{\pm 0.01}\) & \(27.55_{\pm 1.04}\) \\ & WA-GSS & \(51.58_{\pm 1.14}\) & \(93.75_{\pm 0.43}\) & \(14.78_{\pm 0.57}\) & \(69.20_{\pm 0.35}\) & \(6.13_{\pm 0.07}\) & \(46.57_{\pm 1.16}\) \\ \cline{2-8} & \(\Delta\)Acc & \(+20.67\
|
2303.15642
|
Graph Sequence Learning for Premise Selection
|
Premise selection is crucial for large theory reasoning as the sheer size of
the problems quickly leads to resource starvation. This paper proposes a
premise selection approach inspired by the domain of image captioning, where
language models automatically generate a suitable caption for a given image.
Likewise, we attempt to generate the sequence of axioms required to construct
the proof of a given problem. This is achieved by combining a pre-trained graph
neural network with a language model. We evaluated different configurations of
our method and experience a 17.7% improvement gain over the baseline.
|
Edvard K. Holden, Konstantin Korovin
|
2023-03-27T23:51:05Z
|
http://arxiv.org/abs/2303.15642v1
|
# Graph Sequence Learning for Premise Selection
###### Abstract
Premise selection is crucial for large theory reasoning as the sheer size of the problems quickly leads to resource starvation. This paper proposes a premise selection approach inspired by the domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given problem. This is achieved by combining a pre-trained graph neural network with a language model. We evaluated different configurations of our method and achieved a 17.7% improvement over the baseline.
Keywords:Automated Theorem Proving Machine Learning Premise Selection Sequence Learning Graph Neural Network
## 1 Introduction
Automated Theorem Provers (ATPs) construct formal proofs without human interaction and have seen great success in hardware and software verification, as they automatically verify system properties. They have also played a significant role in formalisation projects such as Mizar [31] and as hammers in interactive theorem proving [20, 28].
State-of-the-art ATPs such as iProver [5, 15], E [27], Vampire [16] and SPASS [33] attempt to solve problems consisting of a conjecture and a set of axioms through saturation. This process consists of clausifying the input problem and computing all possible inferences between the clauses until it derives a contradiction or the set of clauses is saturated. However, computing inferences quickly leads to a combinatorial explosion in the number of clauses, which exhausts the computational resources. Therefore, the proof search is guided by heuristics. This is essential because the chance of deriving a contradiction reduces considerably as the number of clauses grows. Machine learning is currently being used to discover efficient heuristics [11, 12, 26], and to intelligently operate internal heuristic components [4, 6, 25, 8].
However, strong heuristics are insufficient when reasoning over large theories. Problems concerning verification and mathematical statements often contain a vast number of axioms, quickly resulting in resource starvation due to the sheer size of the initial search space. A key observation is that, despite the large set of axioms, only a fraction is typically required to construct the proof. Consequently, by removing all "superfluous" axioms, the problems become computationally feasible, and the chance of finding a proof increases dramatically. This task is known as _premise selection_.
This paper explores the adaptation of image captioning to the premise selection task. Image captioning models aim to produce a sentence in natural language that describes a given image. The captions are generated by embedding images using a pre-trained
image model and combining the embedding with a language model. Such methods are novel for premise selection and hold multiple compelling properties. First, the axioms are represented by tokens, and their embeddings are learnt during training. With an abstract token representation of the axioms, we can leverage both the conjecture-axiom and the inter-axiom relationship. This is in contrast to approaches that accentuate the structural and symbol similarity of conjecture-axiom pairs. A vital property of the language model is encapsulating sequences of axioms as opposed to treating them separately. By representing the axioms occurring in a proof as a sorted set, the model can learn the conditional relationship between the axioms occurring in a proof. This is crucial for treating the axioms as a collective set.
Another challenging aspect of a captioning approach is computing problem embedding entailing problem semantics. First-order problems consist of a set of tree-structured formulas which are not easily represented through a feature vector, as required for machine learning. This paper investigates pre-trained graph neural networks (GNNs) to embed problems via transfer learning. GNNs operate on graphs and can incorporate structural properties into the embedding. Nevertheless, there is no apparent pre-training task for FOF problems as there is for, e.g. images. Therefore, we investigate using a supervised pre-training step where the GNN learns the embedding by training on the premise selection problem in the binary setting. Additionally, we experiment using an unsupervised approach that learns to embed the problems based on graph similarity. The graph embeddings are further enhanced by emphasising different sections of the embedding at given steps of axiom generation by exploring attention mechanisms.
Due to the challenges of premise selection, a single approach is unlikely to encapsulate all productive aspects of the axiom-conjecture relationship. Hence, we also explore the combination of our method and SInE [10].
Our main contributions are:
* Adapt approaches from image captioning to the task of premise selection.
* Novel method for combining graphs with sequence learning in the context of premise selection, outperforming previous tokenised conjecture approaches.
* Novel method for unsupervised training of GNNs embeddings of FOF problems.
* Usage of GNNs on problem graphs for transfer learning.
* 'Rare Axiom' inclusion technique for training with a reduced vocabulary while maintaining rare positive axioms.
We evaluated our approach over an extended version of the DeepMath dataset. The results show that the specificity of our approach, in combination with the breadth of SInE, significantly outperforms related methods and results in a 17.7% increase in the number of solved problems over the baseline.
This paper is structured as follows: in Section 2 we present the related works. In Section 3 we present the sequence model and in Section 4 we describe our approach for obtaining problem embeddings. In Section 5, we evaluate our approach experimentally both offline and online, before concluding the paper in Section 6.
## 2 Related Works
Premise selection has previously been addressed with heuristic-based methods such as MePo [19] and the very successful SInE [10] algorithm. The core idea of SInE is that axioms are likely to contribute towards the proof if they contain symbols related to the symbols in the conjecture. This is achieved by iteratively selecting axioms with symbols occurring in a set of selected axioms relative to how often the symbol appears globally. The main limitation of the approach is a low specificity and not utilising any information from existing proofs.
The task of premise selection has also been approached with machine learning methods such as Naive Bayes [31], kernel methods [1], K-NN [13] and binary classification [21, 24, 17, 2]. In the binary setting, the goal is to train a supervised model to score conjecture-axiom pairs. A significant drawback of this method is that axioms are considered independent, and the problem sizes strongly skew predictions. Instead, axioms should be treated as a collective entity, as all the axioms occurring in a proof must be selected to construct the proof.
The approaches most similar to our method are the sequence-to-sequence approach in [22] and its extension with a Transformer model [23]. The sequence models treat the conjecture as a sequence of tokens and map it to a sequence of axioms. Their main limitation is being unaware of how the conjecture relates to elements of the various axioms. GNNs can model the relationship between formulae elements, as shown by the binary graph classification approach in [24]. Meanwhile, our approach is aware of the relationship between the axioms occurring in the proof and the conjecture's connection to these axioms.
## 3 Axiom Captioning
The image captioning problem can be stated as follows: given an image \(I\) and a dictionary of words \(\Omega\), generate an accurate and grammatical caption \(S\), consisting of words from \(\Omega\). This challenging problem goes beyond the already non-trivial task of identifying the image objects. Rather, it requires identifying and comprehending: the objects, their attributes and their relation. Moreover, this information must be decoded and represented as a grammatically correct sentence in the target language.
State-of-the-art image captioning approaches join the machine learning fields of image classification and language modelling. An example of a captioning model based on the inject architecture is shown in Figure 1. It consists of three components: an image encoder, a language model, and a dense output layer. The image encoder extracts and embeds the image semantics as a feature vector. The language model combines these salient features with an input word to produce an encoding of the current sequence. Finally, the dense layer maps the encoding to a probability distribution over the vocabulary.
Despite the challenges of image captioning, the models produce appropriate and detailed image descriptions. Due to their expressiveness, we believe these methods can be utilised for premise selection. In the remaining parts of this section, we describe the sequence model.
### Sequence Learning
In the original task of image captioning, the model operates on pairs of images and captions in a target language. In the context of premise selection, the images are replaced by problems and the captions are replaced with the axioms that appear in the proof of the problems. Assume we have a problem \(I\) with a corresponding proof \(S^{*}\) and an axiom resource bank \(\Omega\). Next, we extract and impose an order on the \(m\) axioms in \(S^{*}\), resulting in \(S=\langle s_{1},\dots,s_{m}\rangle\), \(s_{i}\in\Omega\) for \(1\leq i\leq m\). We describe the task of premise selection in the context of sequence learning as maximising the probability of producing the sequence of axioms used in the proof of a given problem. Given the problem-axioms pair \((I,S)\) we can compute its log probability as:
\[\log p(S|I)=\sum_{t=1}^{m}\log p(s_{t}|s_{t-1},\dots,s_{1},I).\]
We estimate \(\log p(s_{t}|s_{t-1},\cdots,s_{1},I)\) with the recurrent neural network (RNN) \(\sigma\) with learnable parameters \(\theta\). RNNs exhibit a dynamic behaviour over a sequence of inputs due to their internal memory state \(h_{t}\), which captures the previous inputs sequentially. In particular, the output at step \(t\) depends on the previous memory state \(h_{t-1}\) and the input \(s_{t-1}\). The hidden state is defined as:
\[h_{t}=\left\{\begin{aligned} \sigma(I;\theta)&\text{if }t=1,\\ \sigma(h_{t-1},s_{t-1};\theta)&\text{otherwise}.\end{aligned}\right.\]
The RNN is trained to predict the next token in a sequence based on the previous token and the current memory state. Over a training set of problem-axiom pairs \(\{(I^{i},S^{i})\}_{i=1}^{N}\), the model is trained to maximise the log probability of producing the correct sequence of axioms:
\[\theta^{*}=\operatorname*{arg\,max}_{\theta}\sum_{I,S}\log p(S|I;\theta).\]
Thus, the model predicts axioms based on the problem and the previously predicted axioms. In our implementation, we use Long Short-Term Memory (LSTM) [9] cells as the underlying RNN. LSTM is among the most popular RNN models due to its robustness against vanishing and exploding gradients.
Figure 1: The inject architecture for image captioning.
### Axiom Captioning
The generative axiom prediction model is constructed using the par-inject architecture [30], as illustrated in Figure 2. This architecture takes a token embedding \(\mathbf{s}\) and a problem embedding \(\mathbf{I}\) at each time step. The model is given the special start token \(s_{start}\) to initialise the axiom generation process. Likewise, a special end token, \(s_{end}\), represents the end of a sequence. Consequently, start and end tokens are added to each axiom sequence such that the model is trained on the target sequence \(\langle s_{start},s_{1},\ldots,s_{m},s_{end}\rangle\). Axioms with few occurrences in the dataset are replaced by the Out-Of-Vocabulary token \(s_{unknown}\). These three special tokens are included in the dictionary \(\Omega\).
At training time, we apply teacher forcing, which feeds the next token of the training sequence to the model instead of its previous prediction. This prevents poor early predictions from derailing the rest of the training sequence. Hence, the prediction at each training step is expressed as:
\[\hat{y_{t}}=\left\{\begin{array}{ll}\sigma(\mathbf{s}_{start},\mathbf{h}_{0}, \mathbf{I};\theta)&\text{if }t=1,\\ \sigma(\mathbf{s}_{t-1},\mathbf{h}_{t-1},\mathbf{I};\theta)&\text{otherwise.} \end{array}\right.\]
where \(\hat{y}_{t}\) is a probability distribution at time \(t\) over the axioms in \(\Omega\).
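To make the par-inject decoding step concrete, the following is a minimal PyTorch sketch of such a decoder. It is only an illustration of the architecture described above, not our implementation; the module name and all dimensions (`vocab_size`, `emb_dim`, `problem_dim`, `hidden_dim`) are assumptions.

```python
import torch
import torch.nn as nn

class ParInjectDecoder(nn.Module):
    """Par-inject captioning decoder: the problem embedding I is fed to the
    LSTM together with the previous axiom token at every time step."""
    def __init__(self, vocab_size, emb_dim=50, problem_dim=64, hidden_dim=32):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, emb_dim)    # learnt axiom-token embeddings
        self.lstm = nn.LSTM(emb_dim + problem_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)          # dense layer over the vocabulary

    def forward(self, problem_emb, tokens, state=None):
        # problem_emb: (batch, problem_dim); tokens: (batch, seq_len) axiom token ids
        emb = self.token_emb(tokens)                                # (batch, seq_len, emb_dim)
        ctx = problem_emb.unsqueeze(1).expand(-1, emb.size(1), -1)  # repeat I at every step
        h, state = self.lstm(torch.cat([emb, ctx], dim=-1), state)
        return self.out(h), state                                   # logits over Omega, new state

# Teacher forcing: the ground-truth prefix <s_start, s_1, ..., s_m> is fed as input
# and the model is trained to predict <s_1, ..., s_m, s_end> with cross-entropy.
```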
### Neural Attention Captioning
The captioning decoder is fed a static input entity at each time step, but it can be advantageous to emphasise different parts of the embedding based on the current model state [35]. This is achieved through a separate attention network that dynamically weighs some input according to a model state. In other settings, the attention mechanism can emphasise particular words in a sequence or regions of an image with respect to the model state. In this scenario, the incentive of attention is to emphasise particular sections and elements of the averaged graph representation to enhance the embedding.
The attention mechanism computes a context vector which is used as input to the next stage of the model. It is a weighted sum of the \(n\) embedding elements where each weight is the quantity of attention applied to the corresponding element:
\[\mathbf{c}_{t}=\sum_{i=1}^{n}\alpha_{t,i}\mathbf{I}_{i}.\]

Figure 2: Recurrent Neural Network predicting the next token in a sequence.
The weights \(\alpha_{t,i}\) are computed based on an alignment score function which measures how well each element matches the current state. The scores are normalised by a softmax into weights in the range \([0,1]\) that sum to 1:
\[\alpha_{t,i}=\frac{\exp(score(\mathbf{I}_{i},\mathbf{h}_{t}))}{\sum_{j=1}^{n}\exp( score(\mathbf{I}_{j},\mathbf{h}_{t}))}.\]
In Section 5.4, we experimented with both Luong attention [18] and Bahdanau attention [3]. The alignment functions of all three attention variants are shown in Table 1, where \(W,W_{1},W_{2}\) and \(V\) are learnable attention parameters. In the Bahdanau style, the context vector is concatenated with the token embedding and fed to the RNN decoder, as illustrated by Figure 3. This is a key difference from the Luong style, where the alignment scores are computed on the output of the RNN prior to the dense layer.
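The following is a compact sketch of the three alignment functions from Table 1 and the resulting context vector. Tensor shapes and variable names (`I`, `h_t`, `W`, `v`) are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def luong_dot_score(I, h_t):
    # score(I_i, h_t) = h_t^T I_i ; I: (n, d), h_t: (d,) -> (n,) scores
    return I @ h_t

def luong_concat_score(I, h_t, W, v):
    # score(I_i, h_t) = v^T tanh(W [I_i; h_t])
    h_rep = h_t.unsqueeze(0).expand(I.size(0), -1)
    return torch.tanh(torch.cat([I, h_rep], dim=-1) @ W.T) @ v

def bahdanau_score(I, h_prev, W1, W2, v):
    # score(I_i, h_{t-1}) = v^T tanh(W1 I_i + W2 h_{t-1})
    return torch.tanh(I @ W1.T + h_prev @ W2.T) @ v

def context_vector(scores, I):
    alpha = F.softmax(scores, dim=0)   # attention weights in [0, 1], summing to 1
    return alpha @ I                   # weighted sum of the embedding elements
```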
## 4 Problem Embeddings
An embedding is a fixed-size, real-valued vector representation of an entity, where semantically similar entities ideally are close in the embedding space. In the original task of image captioning, image embeddings consist of low-level image features obtained from pre-trained convolutional neural networks over extensive image classification datasets.
Computing problem embeddings in a similar fashion poses multiple challenges in the context of first-order problems. Firstly, the problems have no natural fixed-size vector representation as they consist of unordered sets of tree-structured formulae. Thus, encoding syntactic, structural and semantic properties as a vector is non-trivial. Secondly, there is no immediate classification task for learning the semantics of first-order problems. This paper attempts to overcome these challenges by producing embeddings via graph neural networks.
| Attention Style | Alignment |
| --- | --- |
| Bahdanau | \(score(\mathbf{I}_{i},\mathbf{h}_{t-1})=V^{\top}\cdot\tanh(W_{1}\cdot\mathbf{I}_{i}+W_{2}\cdot\mathbf{h}_{t-1})\) |
| Luong Dot | \(score(\mathbf{I}_{i},\mathbf{h}_{t})=\mathbf{h}_{t}^{\top}\cdot\mathbf{I}_{i}\) |
| Luong Concat | \(score(\mathbf{I}_{i},\mathbf{h}_{t})=V^{\top}\cdot\tanh(W[\mathbf{I}_{i};\mathbf{h}_{t}])\) |

Table 1: Overview of attention alignment functions.
Figure 3: Language model with Bahdanau attention.
### Problem Graph
A first-order logic formula has an intrinsic tree-shaped structure and is naturally represented as a directed acyclic graph \(G\) with vertices \(V\) and edges \(E\). The vertices, also known as nodes, correspond to the types of elements occurring in the formula, such as function symbols and constants. The edges denote a relationship between the vertices, e.g., an argument supplied to a function. Figure 5 illustrates the graph representation of a conjecture, spanning four different node types as visually represented by the colouring.
This representation extends to sets of formulas by computing a global graph over the node elements in the formulae, as shown in Figure 5. The graph representation captures many aspects of the formulae while being invariant to symbol renaming and able to encode problems with previously unseen symbols. This paper uses a graph encoding of 17 node types as described in [24].
### Graph Neural Networks
The problem graph is embedded into an \(n\)-dimensional embedding space via a graph neural network. A graph neural network is an optimisable transformation that operates on the attributes of a graph. It utilises a "graph-in, graph-out" methodology where it embeds the graph while preserving the structure and connectivity of the original graph.
A randomly initialised vector represents each node type \(\Phi\) across all graphs in an \(n\)-dimensional embedding space. Next, each node in a graph is assigned to its corresponding embedding vector \(\mathbf{x}_{\Phi}\), resulting in the node feature matrix \(X\). The GNN embeds the type features of each node \(\mathbf{x}_{\Phi}\) into the node feature embedding \(\mathbf{x}^{\prime}_{\Phi}\) through a node update function. This effectively transforms the graph features \(X\) into a more favourable embedding \(X^{\prime}\). Adjacent nodes are incorporated into the update of a node to encode the structure through message passing [7].
Message passing is accomplished through graph convolutional layers, and we utilise the operation described in [14]. The node-wise convolutional operation for the attributes \(\mathbf{x}_{i}^{(k)}\) of node \(i\) at step \(k\) is described as:
\[\mathbf{x}_{i}^{(k)}=W\sum_{j\in\mathcal{N}(i)\bigcup\{i\}}\frac{e_{j,i}}{\sqrt{ \hat{d}_{j}\hat{d}_{i}}}\mathbf{x}_{j}^{k-1}\]
where \(W\) is a learnable weight matrix, \(\mathcal{N}(i)\) is the set of neighbouring nodes of \(i\) and \(\hat{d}_{i}=1+\sum_{j\in\mathcal{N}(i)}e_{j,i}\). The variable \(e_{j,i}\) denotes the edge weight from \(j\) to \(i\). In this setting, all edge weights are 1. The convolutional operations are applied synchronously to all nodes in the graph and learn hidden layer representations that encode both local graph structure and node features.
After computing the node embeddings, they are pooled and passed through the prediction layer, which produces the final model output. We experiment with three different mean pooling approaches: all nodes in the graph, only axiom nodes, and only the conjecture node. An overview of the GNN pipeline is shown in Figure 6.
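As a rough illustration of this pipeline, the sketch below uses PyTorch Geometric's `GCNConv` (which implements the node-wise update above) followed by mean pooling. The two-layer depth, embedding size, and the optional mask used to restrict pooling to axiom or conjecture nodes are assumptions for the sake of the example.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class ProblemGNN(torch.nn.Module):
    def __init__(self, num_node_types=17, dim=64):
        super().__init__()
        self.type_emb = torch.nn.Embedding(num_node_types, dim)  # learnable node-type vectors
        self.conv1 = GCNConv(dim, dim)
        self.conv2 = GCNConv(dim, dim)

    def forward(self, node_types, edge_index, batch, pool_mask=None):
        x = self.type_emb(node_types)             # initial node feature matrix X
        x = F.relu(self.conv1(x, edge_index))     # message passing, step 1
        x = self.conv2(x, edge_index)             # message passing, step 2
        if pool_mask is not None:                 # e.g. pool only axiom (or conjecture) nodes
            return global_mean_pool(x[pool_mask], batch[pool_mask])
        return global_mean_pool(x, batch)         # one problem embedding per graph
```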
In this approach, the GNN is pre-trained on auxiliary tasks, computing the embeddings before training the captioning model. We experiment with supervised and unsupervised pre-training GNN approaches, as described below.
### Supervised Problem Embedding
In the supervised approach, the GNN is trained on the node level by performing binary premise selection over the axiom nodes, as described in [24]. Based on their node embedding, the model learns to predict whether an axiom occurs in the proof of a problem. During training, the resulting axiom node embeddings become increasingly valuable for modelling their contribution towards the proof. Therefore, the node embeddings are expected to contain information crucial to premise selection. Our experiments show that this information prevails through average pooling.
### Unsupervised Problem Embedding
The supervised learning task emphasises the axiom nodes, but it might be advantageous to use a learning task that encapsulates all the nodes in a graph. Alas, no sensible labels are directly derivable from the problems to train a prediction model. Thus, we employ unsupervised training through a synthetic dataset which utilises a relation property encapsulating all graph nodes.

Figure 6: Graph Neural Network for classification of graph or node properties.
The unsupervised training approach consists of training a matching model which learns the difference between two graphs according to some relational property, as described in [29]. The model takes two graphs, \(g_{i}\), \(g_{j}\), as input and passes them through the Siamese GNN model, as illustrated in Figure 7. Next, the nodes of the embedded graphs are pooled into two graph embedding vectors. The similarity of the two input graphs is approximated as the vector norm between the two graph embeddings: \(||GNN(g_{i})-GNN(g_{j})||\). Training the GNN in this fashion enables it to produce embeddings encompassing structural similarities and dissimilarities.
The synthetic dataset consists of pairs of undirected graphs and a numeric property describing their relation. The relational property utilised is the Laplacian spectrum distance [34], which can be defined as follows. Given a graph \(G\), the adjacency matrix \(A\) represents the node connections in the graph. The diagonal degree matrix \(D\) of \(G\) represents the degree of each node, e.g. the number of neighbours. Further, the Laplacian of the graph is defined as the adjacency matrix subtracted from the degree matrix:
\[L=D-A\]
The eigenvalues \(\lambda_{1}\leq\ldots\leq\lambda_{i}\leq\ldots\leq\lambda_{k}\) of the Laplacian are given by \(L\mathbf{x}_{i}=\lambda_{i}\mathbf{x}_{i}\). Accordingly, the Laplacian spectrum distance \(\pi\) of two graphs \(G\) and \(G^{\prime}\) is defined as:
\[\pi(G,G^{\prime})=\sqrt{\sum_{i=1}^{k}(\lambda_{i}-\lambda_{i}^{\prime})^{2}},\]
where \(k=\min(n,m)\), and \(n\) and \(m\) are the numbers of nodes in \(G\) and \(G^{\prime}\).
The Laplacian spectrum distance is a computationally cheap metric, even for graphs of the magnitude required to represent first-order problems. Although the metric encapsulates graph structure, it neither considers node types nor edge directions. Still, the distance provides an overall description of the structural similarity of the graphs and considers all graph nodes.

Figure 7: Unsupervised GNN training based on pairwise graph similarity.
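A small sketch of the synthetic training target follows; it computes the Laplacian spectrum distance with NetworkX and NumPy on undirected graphs and is only meant to illustrate the definition above, not our data-generation code.

```python
import networkx as nx
import numpy as np

def laplacian_spectrum_distance(G1, G2):
    """Compare the k = min(n, m) smallest Laplacian eigenvalues of two graphs."""
    def spectrum(G):
        A = nx.to_numpy_array(G)                     # adjacency matrix
        D = np.diag(A.sum(axis=1))                   # degree matrix
        return np.sort(np.linalg.eigvalsh(D - A))    # eigenvalues of L = D - A, ascending
    l1, l2 = spectrum(G1), spectrum(G2)
    k = min(len(l1), len(l2))
    return float(np.sqrt(np.sum((l1[:k] - l2[:k]) ** 2)))

# During unsupervised training, the Siamese GNN is fit so that
# ||GNN(g_i) - GNN(g_j)|| approximates laplacian_spectrum_distance(g_i, g_j).
```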
## 5 Experimental Evaluation
This section describes the experimental results and evaluation of our premise selection approach1. First, we describe the dataset, followed by five experiments. The first experiment investigates the performance of the graph embeddings2. The second experiment explores input orders, and the third investigates the effect of attention. The fourth experiment explores decoder sampling methods, and the fifth is an online evaluation of our approach and related methods.
Footnote 1: Experiments available at [https://github.com/EdvardHolden/axiom_caption](https://github.com/EdvardHolden/axiom_caption)
Footnote 2: Embedding computation available at [https://github.com/EdvardHolden/gnn-entailment-caption](https://github.com/EdvardHolden/gnn-entailment-caption)
### Proof Dataset
We used the synthetic DeepMath [2] dataset for the model training and evaluation. It consists of 32524 FOF problems based on proofs of the Mizar40 benchmark. DeepMath was created to evaluate binary premise selection methods. Hence, the number of positive and "superfluous" axioms in a problem is balanced. The main advantages of the dataset are a large number of problems combined with consistent formula naming and the reuse of axioms across problems.
We impose a maximum sequence limit of 20 axioms resulting in 30805 problems, where 20% are used for testing, 8% for validation, and the rest for training. The vocabulary consists of the 6K most common axioms occurring in the proofs of the training set. The other axioms are mapped to the Out-Of-Vocabulary (OOV) token and removed. Table 2 displays the key statistics of each problem set. While many proofs contain rarely occurring axioms, few problems are represented solely by OOV tokens.
### Experiment 1: Supervised vs Unsupervised Graph Embeddings
The goal of this experiment is to examine the pre-trained embedding methods. The supervised approach was trained according to the methods in the original paper [24].
| | Train | Validation | Test |
| --- | --- | --- | --- |
| Number of problems | 22179 | 2465 | 6161 |
| Average sequence length | 8.96 | 9.14 | 9.10 |
| Median sequence length | 8.00 | 8.00 | 8.00 |
| Ratio of problems containing OOV tokens | 0.80 | 0.85 | 0.85 |
| Ratio of problems containing only OOV tokens | 0.05 | 0.06 | 0.07 |

Table 2: Key statistics of each dataset partition.
The unsupervised approach was trained on 12000 graph pairs. The output of each GNN is combined with one of the pooling mechanisms to produce a total of six embedding variations. The captioning training parameters are shown in Table 3. The evaluation metrics are defined as follows, where \(A\) is the set of predicted tokens, and \(B\) is the ground truth:
\[Jaccard(A,B)=\frac{|A\cap B|}{|A\cup B|}\qquad\qquad Coverage(A,B)=\frac{|A\cap B|}{|B|}\]
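These two metrics translate directly into code; the sketch below assumes the predicted and ground-truth axioms are given as Python iterables of token names.

```python
def jaccard(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    return len(predicted & truth) / len(predicted | truth)

def coverage(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    return len(predicted & truth) / len(truth)
```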
The results are presented in Table 4 and show that supervised embeddings perform better than unsupervised embeddings. This indicates that a contextual learning task is required to produce good embeddings. It also shows that the embeddings created by pooling problem and premise nodes perform better, indicating that essential structures persist through averaging. The results show issues with overfitting, but an increase in the dropout rate led to a decrease in validation performance. On the other hand, unsupervised learning is relatively less prone to overfitting. The following experiments utilise the supervised problem embeddings.
| Method | Pooling | Train Jaccard | Train Coverage | Validation Jaccard | Validation Coverage |
| --- | --- | --- | --- | --- | --- |
| Supervised | Problem | **0.40** | **0.50** | **0.23** | **0.32** |
| Supervised | Premise | 0.35 | 0.46 | 0.22 | 0.31 |
| Supervised | Conjecture | 0.20 | 0.29 | 0.14 | 0.22 |
| Unsupervised | Problem | 0.30 | 0.40 | 0.19 | 0.27 |
| Unsupervised | Premise | 0.20 | 0.32 | 0.15 | 0.25 |
| Unsupervised | Conjecture | 0.18 | 0.29 | 0.14 | 0.22 |

Table 4: The training and validation performance of the embedding approaches.
| Training Parameters | | Model Parameters | |
| --- | --- | --- | --- |
| Optimiser | Adam | RNN Type | LSTM |
| Learning Rate | 0.001 | RNN Units | 32 |
| Max Epochs | 80 | Stateful RNN | True |
| Early Stopping | 5 | Batch Normalisation | False |
| Dropout Rate | 0.1 | Feature Normalisation | True |
| Teacher Forcing Rate | 1.0 | No. Dense Units | 512 |
| Batch Size | 64 | Embedding Size | 50 |
| Maximum Sequence Length | 22 | Target Vocab Size | 6000 |

Table 3: The captioning model and training parameters.
### Experiment 2: Axiom Order
This experiment examines various input ordering schemes. While the ATP treats the axioms as a set, RNNs operate over input sequences. Therefore, some input orders may be more advantageous. We explore the following ordering schemes:
* **Original**: Ordered as in the original problem.
* **Length**: Ordered from the smallest string representation to the longest.
* **Frequency**: Ordered from the most frequently occurring axioms to the least.
* **Random**: Random order of axioms for each sequence.
* **Global Random**: Static random order.
The results are displayed in Table 5. Although most configurations perform similarly, the length and frequency order have the best validation performance. However, random ordering has a surprisingly low performance. As this is not reflected in the other ordering schemes, including the global random order, it indicates that a consistent relative position across sequences is essential.
### Experiment 3: Attention Mechanisms
This experiment evaluates the impact of utilising an attention mechanism for the axiom captioning method. The models were trained with the length order, and the results are presented in Table 6. The results show that Luong dot attention can provide a minor performance improvement.
| Attention | Train Jaccard | Train Coverage | Validation Jaccard | Validation Coverage |
| --- | --- | --- | --- | --- |
| None | **0.43** | **0.53** | **0.24** | 0.33 |
| Bahdanau | 0.37 | 0.50 | 0.22 | 0.32 |
| Luong concat | 0.38 | 0.49 | **0.24** | 0.33 |
| Luong dot | 0.39 | 0.51 | **0.24** | **0.34** |

Table 6: The performance of attention mechanisms and no attention.
| Axiom Order | Train Jaccard | Train Coverage | Validation Jaccard | Validation Coverage |
| --- | --- | --- | --- | --- |
| Original | 0.40 | 0.50 | 0.23 | 0.32 |
| Length | **0.43** | **0.53** | **0.24** | **0.33** |
| Frequency | 0.39 | 0.49 | **0.24** | **0.33** |
| Random | 0.23 | 0.25 | 0.15 | 0.16 |
| Global Random | 0.39 | 0.49 | 0.22 | 0.31 |

Table 5: The train and validation set performance on the different axiom orders.
### Experiment 4: Model Decoding
This experiment shifts the focus from optimising the training parameters toward the premise selection task by examining sampling methods. Although the model is given a single input token at each step, multiple tokens can be sampled from the output distribution. We explore three different sampling methods over the test set for selecting which axioms to include in the final problem:
* **Greedy**: Select \(n\) axioms with the maximum probability.
* **Top-K**: Redistribute the probability distribution over the top \(K\) axioms, and sample \(n\) axioms from the new probability distribution.
* **Temperature**: Scale the logits by the temperature prior to applying softmax and sample \(n\) axioms from the new probability distribution.
At each generative step, the sampling method selects \(n\) axioms and adds them to the set of the selected axiom. Only the top axiom is fed back into the model. The results are displayed in Table 7. They show that the greedy sampling method is superior, and the coverage score is monotonic on \(n\). We found that an increase in coverage gives a substantial improvement for online performance compared to a slight decrease in Jaccard. Moreover, greedy sampling selects a small total number of axioms despite producing the highest coverage scores. Through experimentation, we discovered that achieving a high coverage score is crucial for solving the problems in this dataset.
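The three sampling strategies above can be summarised in a short sketch; the use of NumPy, the default `k` and temperature values, and the function names are assumptions made for illustration.

```python
import numpy as np

def greedy(probs, n):
    # select the n axioms with maximum probability
    return np.argsort(probs)[::-1][:n]

def top_k(probs, n, k=32):
    # redistribute mass over the top-k axioms, then sample n from the new distribution
    top = np.argsort(probs)[::-1][:k]
    p = probs[top] / probs[top].sum()
    return np.random.choice(top, size=n, replace=False, p=p)

def temperature(logits, n, t=0.8):
    # scale the logits by the temperature before softmax, then sample n axioms
    z = logits / t
    p = np.exp(z - z.max())
    p /= p.sum()
    return np.random.choice(len(p), size=n, replace=False, p=p)

# At each generative step the sampled axioms are added to the selection,
# while only the single top-scoring axiom is fed back into the decoder.
```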
| Sampling | Jaccard (n=1) | Coverage (n=1) | Size (n=1) | Jaccard (n=2) | Coverage (n=2) | Size (n=2) | Jaccard (n=4) | Coverage (n=4) | Size (n=4) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Greedy | **0.29** | **0.39** | 6.71 | **0.22** | **0.52** | 12.82 | **0.15** | **0.64** | 24.25 |
| Top-32 | 0.16 | 0.30 | 9.07 | 0.12 | 0.43 | 19.92 | 0.09 | 0.58 | 37.86 |
| Top-64 | 0.16 | 0.29 | 9.13 | 0.11 | 0.43 | 20.36 | 0.08 | 0.57 | 39.46 |
| Top-128 | 0.16 | 0.29 | 9.20 | 0.11 | 0.42 | 20.59 | 0.08 | 0.56 | 40.17 |
| Top-256 | 0.15 | 0.29 | 9.21 | 0.11 | 0.42 | 20.64 | 0.08 | 0.56 | 40.40 |
| Temperature-1.0 | 0.15 | 0.28 | 9.22 | 0.14 | 0.38 | 15.40 | 0.11 | 0.50 | 27.77 |
| Temperature-0.9 | 0.17 | 0.30 | 8.78 | 0.16 | 0.40 | 14.33 | 0.12 | 0.51 | 25.37 |
| Temperature-0.8 | 0.19 | 0.32 | 8.42 | 0.18 | 0.41 | 13.27 | 0.13 | 0.51 | 23.10 |
| Temperature-0.5 | 0.25 | 0.37 | 7.49 | 0.24 | 0.43 | 10.60 | 0.17 | 0.50 | 17.15 |

Table 7: The performance of each sampling method.

### Experiment 5: Online System Evaluation

The final experiment performs an online evaluation of our premise selection method and other methods with the state-of-the-art ATP, iProver3. During initial experimentation, it was discovered that the DeepMath problems were too small for a meaningful assessment as iProver solved 86% of the problems without premise selection. Therefore, the problems were extended by merging the DeepMath problems with the larger Mizar404 problems, reducing this ratio to 52%.

Footnote 4: Problems are available at [http://grid01.ciirc.cvut.cz/~mptp/7.13.01_4.181.1147/mptp/problems_small_consist.tar.gz](http://grid01.ciirc.cvut.cz/~mptp/7.13.01_4.181.1147/mptp/problems_small_consist.tar.gz)
Nine different premise selection configurations are utilised to generate problem versions used in the online evaluation. These consist of axiom captioning, SInE, no premise selection, and three related axiom captioning methods. SInE is evaluated with tolerance-depth parameters of \((1,1)\) and \((3,0)\); note that \(0\) corresponds to unbounded depth.
In the case of axiom captioning, the model is trained on the 6K most common positive axioms in the benchmark. The positive axioms outside the vocabulary are essential for the proofs but appear too rarely to be learnt by the model. Thus, these axioms are not reachable for the captioning model. Consequently, our axiom captioning method implements a "rare-axiom" procedure for selecting rare but valuable axioms. The procedure scans a given problem and selects any axioms that have previously appeared positively but rarely. Furthermore, it is unlikely that a single method can encapsulate all aspects of the conjecture-axiom relationship. Hence, we also evaluate the combination of axiom captioning and SInE as two complementing approaches.
Further, we evaluate three related ML-based methods for premise selection: Binary Graph [24], Conjecture RNN [22] and the Conjecture Transformer [23]. The Binary Graph method is used to compute the supervised embeddings, and the comparison will therefore reveal whether axiom captioning improves the utilisation of the graph embeddings. Still, the method expects a balanced dataset and will likely introduce much positive bias for larger problems. Conjecture RNN is a sequence-to-sequence approach where the input is a tokenised embedding of the conjecture. A caveat with this approach is that it trains on a specific axiom order derived from the proofs, which is unavailable for this dataset. Hence, it is trained on the consistent "length" order (see Section 5.3). The approach utilises an LSTM architecture but puts no restriction on the input and output vocabulary, resulting in poor training performance on this dataset. The Conjecture Transformer approach is an extension where the RNNs are replaced with the Transformer [32] architecture. This approach was modified slightly in the input and output layers to enforce input length restrictions and OOV tokens. The Conjecture* approaches rely solely on the token vocabulary and do not inspect the given problem. Hence, their encoders are unaware of the increased problem size.
iProver is run with a default heuristic over the clausified problem versions generated by each configuration with a time limit of 10 seconds. The results are displayed in Figure 8. Conjecture RNN only solves one problem selecting only the most frequently occurring axiom. The "Conjecture Transformer" slightly improves the performance but does not come close to axiom captioning. This suggests that graph embeddings are superior to conjecture token embeddings for premise selection. Binary Graph performs best of the related methods and is close to the performance of SInE \((1,1)\). However, it suffers from generating large problems with many superfluous axioms and a low coverage score due to being a binary classifier trained on a balanced dataset. Nevertheless, half of the problems solved by Binary Graph complement the solution of axiom captioning. Hence, graph embeddings facilitate method diversification.
Axiom captioning outperforms the related ML-based methods and is slightly enhanced by its combination with SInE \((1,1)\). However, it is behind SInE \((3,0)\), which performs nearly identically to no premise selection (Original). The real power of axiom captioning comes through when paired with SInE \((3,0)\). Axiom captioning considers axioms carefully and consequently predicts many of the essential axioms. On the other hand, SInE \((3,0)\) selects many axioms, resulting in reasonable coverage scores. Hence, their combination results in axiom selection with sufficiently large coverage scores while maintaining a computationally feasible size. The result is a method which solves 17.7% more problems than the baseline, which does not employ premise selection.
## 6 Conclusion
In this paper, we presented a novel approach for performing premise selection. It parallels image captioning, combining transfer learning on graph neural networks with sequence learning. The graph representation provides a holistic view of the problem structure, while the sequence model uses this embedding to predict the sequence of axioms necessary for the proof. Our evaluation found that the model performs better when the GNN is pre-trained on a related and supervised task with embeddings containing information of all the nodes in the graph. Further, we observed that effective axiom captioning requires a fixed axiom order and a greedy decoder sampler. Lastly, the proposed approach dramatically increases the number of solved problems when complemented with SInE and significantly outperforms related machine learning methods.
## Acknowledgements
We thank Michael Rawson for providing a translator from formulas to graphs, utilised in this work.
Figure 8: Online evaluation of SInE, Captioning, their combination, and related methods.
|
2303.14548
|
Viewpoint Equivariance for Multi-View 3D Object Detection
|
3D object detection from visual sensors is a cornerstone capability of
robotic systems. State-of-the-art methods focus on reasoning and decoding
object bounding boxes from multi-view camera input. In this work we gain
intuition from the integral role of multi-view consistency in 3D scene
understanding and geometric learning. To this end, we introduce VEDet, a novel
3D object detection framework that exploits 3D multi-view geometry to improve
localization through viewpoint awareness and equivariance. VEDet leverages a
query-based transformer architecture and encodes the 3D scene by augmenting
image features with positional encodings from their 3D perspective geometry. We
design view-conditioned queries at the output level, which enables the
generation of multiple virtual frames during training to learn viewpoint
equivariance by enforcing multi-view consistency. The multi-view geometry
injected at the input level as positional encodings and regularized at the loss
level provides rich geometric cues for 3D object detection, leading to
state-of-the-art performance on the nuScenes benchmark. The code and model are
made available at https://github.com/TRI-ML/VEDet.
|
Dian Chen, Jie Li, Vitor Guizilini, Rares Ambrus, Adrien Gaidon
|
2023-03-25T19:56:41Z
|
http://arxiv.org/abs/2303.14548v2
|
# Viewpoint Equivariance for Multi-View 3D Object Detection
###### Abstract
3D object detection from visual sensors is a cornerstone capability of robotic systems. State-of-the-art methods focus on reasoning and decoding object bounding boxes from multi-view camera input. In this work we gain intuition from the integral role of multi-view consistency in 3D scene understanding and geometric learning. To this end, we introduce VEDet, a novel 3D object detection framework that exploits 3D multi-view geometry to improve localization through viewpoint awareness and equivariance. VEDet leverages a query-based transformer architecture and encodes the 3D scene by augmenting image features with positional encodings from their 3D perspective geometry. We design view-conditioned queries at the output level, which enables the generation of multiple virtual frames during training to learn viewpoint equivariance by enforcing multi-view consistency. The multi-view geometry injected at the input level as positional encodings and regularized at the loss level provides rich geometric cues for 3D object detection, leading to state-of-the-art performance on the nuScenes benchmark. The code and model are made available at [https://github.com/TRI-ML/VEDet](https://github.com/TRI-ML/VEDet).
## 1 Introduction
Camera-based 3D object detection is a critical research topic, with important applications in areas such as autonomous driving and robotics due to the semantic-rich input and low cost compared to range sensors. In the past few years, monocular 3D detection has seen significant progress, from relying on predicting pseudo point clouds as intermediate representation [35, 41, 44] to end-to-end learning [33, 36, 40]. However, monocular 3D detectors are inherently ambiguous in terms of depth, which motivated some recent exploration in multi-view and multi-sweep 3D object detection [24, 27, 28, 42].
In a conventional monocular setting, given multiple cameras on a sensor rig, single-view detections are merged to the global frame through rule-based processing such as Non-Maximum Suppression (NMS). Recent advances in multi-view camera-based 3D algorithms [27, 42] proposed to jointly aggregate multi-view information at the feature level, and directly predict a single set of detections in the global frame. These algorithms demonstrate a giant leap in 3D detection performance on multi-camera datasets (e.g., NuScenes [5]). To aggregate information from different views, one line of query-based detectors adopts transformers to query image features [27, 28, 42] or bird's-eye-view (BEV) features [17, 24] via an attention mechanism. In contrast, another line of works "lift-splat-shoot" [34] image features from each view into the shared BEV features to be processed by convolutional detection heads [23].
To further mitigate the depth ambiguity, some concurrent works have started extending multi-view to "multi-sweep" across timestamps and observe a promising performance boost [24, 28].

Figure 1: **Our proposed VEDet** encodes the 3D scene from multi-view images, and decodes objects with view-conditioned queries. The predicted 3D bounding boxes are expressed in the underlying views of the queries, which enables us to enforce viewpoint equivariance among predictions from multiple views. Virtual query views are generated during training and together with the viewpoint equivariance regularization bring richer geometric learning signals to guide the model to better understand the 3D structure in the scene. During inference, the global predictions can be obtained by simply choosing the global frame as the query view.
While the works mentioned above demonstrate a strong potential for multi-view 3D detection, progress has concentrated on input aggregation and information interplay across frames and less on learning objectives. We argue that the learning objective can play a crucial role in ingesting the core knowledge in a multi-view setting: 3D geometry.
This paper proposes to encourage 3D geometry learning for multi-view 3D detection models through viewpoint awareness and equivariance. We obtain our intuition from traditional structure-from-motion works [1], where multi-view geometry is modeled through multi-view consistency. To this end, we propose viewpoint-awareness on the object queries, as well as a multi-view consistency learning objective as a 3D regularizer that enforces the model to reason about geometry. Compared to existing methods that make 3D predictions in the default egocentric view, our proposed multi-view predictions and viewpoint equivariance effectively bring stronger geometric signals conducive to the 3D reasoning. More specifically, in our query-based framework, the geometry information of image features and object queries is injected completely via implicit geometric encodings, and the transformer decoder is expected to learn better correspondence and 3D localization under the viewpoint equivariance objective. We demonstrate that our proposed framework can make the best of available geometry information with extensive experiments and establish the new state-of-the-art in multi-view 3D object detection. In summary, our contributions are:
* We propose a novel **Viewpoint Equivariance (VE)** learning objective that encourages multi-view consistency in 3D detection models, leading to improved 3D object detection performance.
* We propose a new multi-view 3D object detection framework, **VEDet**, which employs a query-based transformer architecture with perspective geometry and viewpoint awareness injected both at the encoding and decoding stages. VEDet fully enables our proposed VE learning objective, facilitating geometry learning with implicit inductive biases.
* VEDet achieves **state-of-the-art on large-scale benchmark**, reaching **45.1%mAP on NuScenes val set and 50.5% mAP on test set**. We provide a comprehensive analysis of our components, and share insights based on empirical observations.
## 2 Related Work
### Monocular 3D object detection
Early works tackled camera-based 3D object detection in a monocular setting by adopting a two-stage pseudo-LiDAR paradigm [41, 49, 35, 44] or directly building upon 2D detection frameworks to predict extra 3D properties [4, 36, 33, 36, 40, 46]. Due to the inherent scale ambiguity in depth estimation, one standard approach was to lift from 2D to 3D by aligning 3D properties and their 2D projections on the image plane [3, 14, 18, 21], while others leveraged additional object or scene priors such as shapes [14], CAD models [7], or ground planes [2]. In the line of representation learning, DD3D [33] exploited large-scale pre-training to learn a depth-aware representation that can universally benefit 3D detection algorithms. However, all these methods still struggle with the two major drawbacks in monocular 3D detection: inherent depth ambiguity and insufficient context to infer objects across images. As a multi-view method, our work addresses both of these issues by leveraging the ample 3D geometric cues in multi-camera setups.
### Multi-view 3D object detection
Recent advances in camera-based 3D object detection have started to leverage multi-view context, which can improve the detection of objects that appear in more than one image. One line of works extends the DETR framework [6], which decodes 3D bounding boxes with a set of queries [17, 22, 24, 27, 42]. Using camera parameters, DETR3D [42] directly projects 3D queries to 2D image planes to update query features, while PETR [27] constructs 3D position embeddings from point frustums to implicitly guide query updates. BEVFormer [24] and UVTR [22] first build an intermediate voxelized feature space around the vehicle/robot's ego coordinate frame, before feeding the features to a DETR-style decoder. Another line of work follows LSS [34] and constructs the voxelized feature space, before applying a detection head on the features to predict bounding boxes. Typically, a depth head is also trained to predict a depth bin distribution in order to lift-splat-shoot the features, as in BEVDepth [23]. Our work falls in the first line of research and exploits multi-view geometric consistency to improve bounding box localization in 3D space.
### Implicit geometric encoding
The Transformer architecture [11, 12, 39] introduced the use of positional encodings for input features. This brought upon a paradigm shift in how to model the relative position of elements, from _explicitly_, i.e. by recurrent operations or convolutional filters, to _implicitly_, i.e. learned from data. Inspired by this new paradigm, some works started to investigate how to use positional encodings constructed from geometric priors as input-level inductive biases [27, 28, 47]. ILIB [47] utilizes multi-view geometry, including camera and epipolar cues, to generate position encodings, to be processed by a generalist Perceiver IO [16] architecture to produce a set of latent vectors. From this latent space, ILIB constructs queries from 3D viewing rays to decode depth maps. PETR [27, 28] similarly constructs
position encodings from generated point frustums at pre-defined depth values, and decodes 3D bounding boxes using queries constructed from 3D anchor points [43] in the ego vehicle space. Positional encoding has also been used extensively in the context of neural fields (i.e. coordinate-based multi-layer perceptron) to process the appearance, radiance, or occupancy of a scene [31, 37, 45]. Our work improves the design of 3D geometric position encoding and introduces a new optimization objective to guide the detection model toward learning better object localization.
## 3 Viewpoint Equivariant 3D Detection
The multi-view 3D object detection task aims at detecting 3D bounding boxes in the scene with class labels, given a set of images from \(N\) cameras with poses and intrinsics \(\{\mathbf{I}_{i}\in\mathbb{R}^{3\times H\times W},\mathbf{T}_{i}\in\mathit{SE} (3),\mathbf{K}_{i}\in\mathbb{R}^{3\times 3},i=1,2,\dots,N\}\). In this section, we will first introduce the overall VEDet framework in Sec. 3.1. Sec. 3.2 describes how we use geometric positional encoding to inject geometry information for image features and object queries implicitly. In Sec. 3.3 we propose making object queries view-conditioned so that 3D boxes are predicted in the specified view. Lastly, we present the novel viewpoint equivariance learning objective in Sec. 3.4, which exploits the viewpoint-awareness of object queries and produces stronger geometric signals to improve 3D detection.
### Overall framework
The workflow of our proposed VEDet builds upon a transformer-based architecture, as depicted in Fig. 2. We first employ a backbone network that extracts image features \(\{\mathbf{F}_{i}\in\mathbb{R}^{C\times H^{\prime}\times W^{\prime}}\}\) from multi-view images. For each 2D location on the feature map grid, we calculate a geometric positional encoding that jointly considers pixel location, camera pose, and intrinsics.
The image features, with their associated positional encoding, are flattened and processed by a transformer decoder [6] with a set of object queries \(\{\mathbf{q}_{j}\}\). The queries are constructed from a set of learnable 3D _query points_\(\{\mathbf{c}_{j}\}\) combined with a given _query view_\(\{\mathbf{T}^{v}\}\).
A series of self- and cross-attention layers then aggregate and update the 3D scene information into the queries, after which a feed-forward detection head maps the updated queries to box predictions \(\{\mathbf{\hat{b}}_{j}\}\). The box predictions are conditioned and expressed in the query views \(\mathbf{T}_{j}^{v}\) associated with the queries, as detailed in Sec. 3.3. Finally, we optimize the network by applying a viewpoint equivariance (VE) loss on the view-conditioned box predictions, as detailed in Sec 3.4.
### Geometric positional encoding
Positional encodings provide location information of feature embeddings in Transformer architectures [39]. In this work, we encode the 3D geometric attributes associated with the image features as well as the object queries, when they are processed by the decoder. Inspired by [47, 27], for the image features we propose to encode the camera pose and the 3D inverse projection ray that combines pixel position and camera perspective geometry; for object queries the learnable 3D query point and the selected query view are encoded (more in Sec. 3.3).
Specifically, given the extracted image features \(\{\mathbf{F}_{i}\}\), we construct a triplet of geometric attributes, \([\mathbf{r}_{(u_{i},v_{i})},\mathbf{\bar{q}}_{i},\mathbf{t}_{i}]\) for _each feature location_\((u_{i},v_{i})\). \([\mathbf{\bar{q}}_{i},\mathbf{t}_{i}]\) denote the quaternion vector and translation of the camera pose, and \(\mathbf{r}_{(u_{i},v_{i})}\) denotes a unit-length inverse perspective projection ray originating from the pixel location given by:
\[\mathbf{r}^{\prime}_{(u_{i},v_{i})}=(\mathbf{K}_{i}\mathbf{R}_{i}^{T})^{-1}[ \alpha u_{i},\alpha v_{i},1]^{T},\mathbf{r}=\frac{\mathbf{r}^{\prime}}{|| \mathbf{r}^{\prime}||_{2}}, \tag{1}\]
where \(\alpha\) is the downsample factor of \(\mathbf{F}_{i}\) compared to image \(\mathbf{I}_{i}\), \(\mathbf{K}_{i}\) and \(\mathbf{R}_{i}\) are instrinsic and rotation matrix of camera \(i\). The triplet \([\mathbf{r}_{(u_{i},v_{i})},\mathbf{\bar{q}}_{i},\mathbf{t}_{i}]\) fully describes the perspective geometry for a given image feature \(\mathbf{F}_{i}(u_{i},v_{i})\). Compared to PETR [27, 28] which model the positional information of image features by manually sampling a set of 3D point locations along the ray at pre-defined depth frustums, VEDet employs a simpler design1 and chooses not to assume the discretized depth prior, as we believe \([\mathbf{r}_{(u_{i},v_{i})},\mathbf{\bar{q}}_{i},\mathbf{t}_{i}]\) keeps the full geometry information with which the model can learn 3D localization better.
Footnote 1: PETR also combines a few other components with the 3D PE, namely 2D grid PE and view number PE, which we do not use.
**Learnable geometry mapping**
We encode the geometric attributes into high-dimensional embeddings via Fourier transform followed by a learnable mapping. Inspired by advances in NeRF [31, 38], we first apply a Fourier transform to capture the fine-grained changes in the geometric attributes.
\[\gamma(x|[f_{1},\dots,f_{k}])=[\sin{(f_{1}\pi x)},\cos{(f_{1}\pi x)},\dots] \tag{2}\]
The \(k\) frequencies \([f_{1},\dots,f_{k}]\) are sampled evenly between \([0,f_{\max}]\). Afterward, an MLP is used to project the output to dimension \(C\) as our final geometric positional encoding:
\[\mathbf{p}^{e}_{(u_{i},v_{i})}=\mathrm{MLP}_{\mathrm{enc}}(\gamma([\mathbf{r}_{(u_{i},v_{i})},\mathbf{\bar{q}}_{i},\mathbf{t}_{i}])) \tag{3}\]
As a result, even without explicitly projecting the image features \(\{\mathbf{F}_{i}\}\) back to 3D space, they become 3D geometry-aware when augmented with the 3D geometric positional encodings \(\{\mathbf{P}^{e}_{i}\in\mathbb{R}^{C\times H^{\prime}\times W^{\prime}}\}\). Hence, we implicitly encode the multi-view perception of the scene at an input level, which will work jointly with our proposed VE learning objective to enforce 3D geometric modeling.
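A minimal sketch of how the inverse-projection ray of Eq. (1) and the Fourier mapping of Eqs. (2)–(3) could be computed is given below. The number of frequency bands, \(f_{\max}\), the downsample factor, and the MLP widths are assumptions, and the code is an illustration rather than our implementation.

```python
import torch
import torch.nn as nn

def pixel_ray(K, R, u, v, alpha=16.0):
    # Eq. (1): unit-length inverse-projection ray for feature location (u, v)
    p = torch.tensor([alpha * u, alpha * v, 1.0])
    r = torch.linalg.inv(K @ R.T) @ p
    return r / r.norm()

def fourier(x, k=8, f_max=8.0):
    # Eq. (2): sin/cos features at k evenly spaced frequencies in [0, f_max]
    freqs = torch.linspace(0.0, f_max, k)
    ang = torch.pi * x[..., None] * freqs                      # (..., dims, k)
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1).flatten(-2)

class GeometricPE(nn.Module):
    def __init__(self, in_dim=10, k=8, out_dim=256):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(in_dim * 2 * k, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, ray, quat, trans):
        # Eq. (3): encode the triplet [r, q, t] into the positional encoding p^e
        geom = torch.cat([ray, quat, trans], dim=-1)
        return self.mlp(fourier(geom, self.k))
```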
**Temporal modeling.** In the context of a multi-sweep setting, we follow [28] and transform the camera pose from previous frames into the current global frame via ego-motion compensation. The multi-sweep features are concatenated at the token dimension.
### View-conditioned query
VEDet adopts a DETR-style [6] decoder that consists of \(L\) transformer layers, as shown in Fig. 2. Each layer performs self-attention among a set of \(M\) queries \(\{\mathbf{q}_{j}\in\mathbb{R}^{C},j=1,2,\ldots,M\}\), and cross-attention between the queries and the 3D geometry-aware image features \(\{(\mathbf{F}_{i},\mathbf{P}_{i}^{e})\}\). The updated queries \(\{\mathbf{q}_{j}\}\) will serve as input to the next layer:
\[\{\mathbf{q}_{j}\}_{l}=\psi_{l-1}(\{\mathbf{F}_{i}\},\{\mathbf{P}_{i}^{e}\}, \{\mathbf{q}_{j}\}_{l-1}),l=1,2,\ldots,L, \tag{4}\]
where \(L\) is the number of attention layers. A classification and regression MLP heads map the queries from each layer into class logits and bounding box predictions, respectively.
\[\mathbf{\hat{s}}_{j}=\mathrm{MLP}_{\mathrm{cls}}(\mathbf{q}_{j}),\mathbf{ \hat{b}}_{j}=\mathrm{MLP}_{\mathrm{reg}}(\mathbf{q}_{j}) \tag{5}\]
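A rough sketch of this decoding loop (Eqs. (4)–(5)) is shown below; the layer count, head count, class count, box parameterisation size, and the simplified handling of positional encodings are assumptions made for illustration.

```python
import torch.nn as nn

class VEDecoderSketch(nn.Module):
    """DETR-style decoder: L layers of self- and cross-attention over geometry-aware tokens."""
    def __init__(self, dim=256, heads=8, num_layers=6, num_classes=10, box_dim=10):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True)
            for _ in range(num_layers))
        self.cls_head = nn.Linear(dim, num_classes)   # MLP_cls
        self.reg_head = nn.Linear(dim, box_dim)       # MLP_reg

    def forward(self, queries, feats, feat_pe):
        # feats / feat_pe: flattened image tokens and their geometric PE, both (B, N, dim)
        memory = feats + feat_pe                      # 3D geometry-aware image features
        outputs = []
        for layer in self.layers:
            queries = layer(queries, memory)          # self-attention + cross-attention
            outputs.append((self.cls_head(queries), self.reg_head(queries)))
        return outputs                                # per-layer class logits and boxes
```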
We propose to ground the queries with multi-view geometry. Concretely, a query \(\mathbf{q}_{j}\) is constructed from two parts: a _3D query point_ and a _query view_. We initialize a set of \(M\) learnable 3D query points \(\{\mathbf{c}_{j}\in\mathbb{R}^{3},j=1,2,\ldots,M\}\) in the _global frame_\(\mathbf{T}^{0}\), similarly to PETR [27], which is optimized during training.
**Query views.** For the second part of query geometry, a query view \(\mathbf{T}^{v}=[\mathbf{\bar{q}}^{v},\mathbf{t}^{v}]\) is selected relative to the global frame. To construct the query, the 3D query points are first transformed into the query view via \(\mathbf{c}_{j}^{v}=(\mathbf{T}^{v})^{-1}\mathbf{c}_{j}\), and together with the query view \([\mathbf{c}_{j}^{v},\mathbf{\bar{q}}^{v},\mathbf{t}^{v}]\) compose the query geometries. As described in Sec. 3.2, the query geometries are similarly mapped by a Fourier transform followed by an MLP, into **view-conditioned queries**:
\[\mathbf{q}_{j}^{v}=\mathrm{MLP}_{\mathrm{dec}}(\gamma([\mathbf{c}_{j}^{v}, \mathbf{\bar{q}}^{v},\mathbf{t}^{v}])). \tag{6}\]
For query views, we refer the _global frame_\(\mathbf{T}^{0}=[[1,0,0,0],\mathbf{0}]\) as a default query view.2 Additionally, we generate \(V\)_virtual query views_ to provide variation to the decoding views and encourage viewpoint awareness in the model. Concretely, we randomly sample Euler angles \(\Theta^{v}\in\mathbb{R}^{3}\) and translation \(\mathbf{t}^{v}\in\mathbb{R}^{3}\) from uniform distributions \([\Theta_{\min},\Theta_{\max}]\) and \([\mathbf{t}_{\min},\mathbf{t}_{\max}]\), after which the Euler angles will be converted to the equivalent quaternion \(\mathbf{\bar{q}}^{v}\in SO(3)\), giving \(\{\mathbf{T}^{v}=[\mathbf{\bar{q}}^{v},\mathbf{t}^{v}]\}\). In total, there are \(V+1\) query views consisting of the default view and \(V\) virtual views \(\{\mathbf{T}^{v},v=0,1,\ldots,V\}\). Therefore, given the \(M\) 3D query points \(\{\mathbf{c}_{j}\}\) and \(V+1\) query views \(\{\mathbf{T}^{v}\}\), we construct object queries from \(\{\mathbf{c}_{j}\}\times\{\mathbf{T}^{v}\}\), resulting in total \(M\times(V+1)\) individual object queries.
Footnote 2: This view is also the egocentric coordinate frame for box prediction and evaluation as done in other multi-view detection works [27, 28, 42].
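The virtual-view generation can be sketched as follows; the sampling ranges, the SciPy Euler-to-quaternion conversion, and the function names are assumptions, since the exact ranges \([\Theta_{\min},\Theta_{\max}]\) and \([\mathbf{t}_{\min},\mathbf{t}_{\max}]\) are hyper-parameters.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_virtual_views(V, ang_range=(-np.pi, np.pi), t_range=(-10.0, 10.0)):
    """Default view T^0 plus V randomly sampled virtual query views."""
    identity = np.array([0.0, 0.0, 0.0, 1.0])           # identity rotation, scipy (x, y, z, w) order
    views = [(identity, np.zeros(3))]                    # the global frame T^0
    for _ in range(V):
        euler = np.random.uniform(*ang_range, size=3)    # random Euler angles Theta^v
        quat = Rotation.from_euler("xyz", euler).as_quat()
        trans = np.random.uniform(*t_range, size=3)      # random translation t^v
        views.append((quat, trans))
    return views

def to_query_view(c, quat, trans):
    """Transform a global 3D query point c into a query view: c^v = (T^v)^{-1} c."""
    R = Rotation.from_quat(quat).as_matrix()
    return R.T @ (c - trans)
```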
**View-conditioned predictions.** The query view specifies the coordinate system in which boxes (groundtruth, predicted) are defined. Specifically, given a _view-conditioned_ query \(\mathbf{q}_{j}^{v}\), the box predictions \(\mathbf{\hat{b}}_{j}^{v}\) are local to the underlying query view \(\mathbf{T}^{v}\), parameterized as:
\[\mathbf{\hat{b}}_{j}^{v}=[\Delta\mathbf{\hat{c}}_{j}^{v},\mathbf{\hat{d}}_{j}, \cos(\phi),\sin(\phi),\mathbf{\hat{v}}_{j}^{v}], \tag{7}\]
where \(\Delta\mathbf{\hat{c}}_{j}^{v}\in\mathbb{R}^{3}\) is the offset from the 3D query point \(\mathbf{c}_{j}^{v}\) to the bounding box center, \(\mathbf{\hat{d}}_{j}\in\mathbb{R}^{3}\) is the box dimensions, \(\phi\) is the yaw angle of the box, and \(\mathbf{\hat{v}}_{j}^{v}\in\mathbb{R}^{3}\) is the box velocity.
Figure 2: **The framework of our proposed VEDet**: Given \(N\) multi-view input cameras, an image encoder first extracts image features. For each feature embedding, we provide geometric positional encoding (PE) based on pixel location as well as camera geometries (Sec. 3.2). At the decoding stage, we apply a view-conditioned query constructed by 3D query points and Query Views to predict view-conditioned predictions (Sec. 3.3). Finally, we optimize the network through a novel viewpoint equivariance loss (Sec. 3.4).
As for the object classification score, we simply decode one from the global frame for each query point \(\mathbf{c}_{j}\), as it is simple and decoding from virtual views did not show advantage in our experiments. We predict a binary score for each class normalized by a sigmoid function.
The view-conditioned queries and their local predictions serve as a form of data augmentation during training and, more importantly, enable viewpoint equivariance regularization as discussed in Sec. 3.4. More design choices are also ablated in Sec. 4.3. We only use the global frame \(\mathbf{T}^{0}\) as the query view at inference time.
### Viewpoint equivariance loss
As described in Sec. 3.3, given \(V+1\) query views, there are \(V+1\) versions of bounding box predictions \(\{\hat{\mathbf{b}}_{j}^{v}\}\) coming from a single query point \(\mathbf{c}_{j}\). The \(V+1\) bounding boxes are expressed in different coordinate frames but of the same underlying ground truth object. According to multi-view geometry, the observations of the _same_ object from different frames should be geometrically consistent and only differ by the relative transformation as shown in Fig. 3. Therefore, we propose a viewpoint equivariance objective that considers the multi-view predictions coming from the same query point \(c_{j}\) and box target from all query views _jointly_.
Concretely, we first ensure that the \(V+1\) versions of predictions from query point \(\mathbf{c}_{j}\) are assigned to the same ground truth object. To achieve this goal, we create a **super box** by concatenating the predictions from different query views:
\[\hat{\mathbf{B}}_{j}=[\mathbf{\hat{b}}_{j}^{0},\hat{\mathbf{b}}_{j}^{1}, \ldots,\hat{\mathbf{b}}_{j}^{V}]. \tag{8}\]
Similarly, we extend all the ground truth bounding boxes into super boxes:
\[\mathbf{B}_{m}=[\mathbf{g}_{m}^{0},\mathbf{g}_{m}^{1},\ldots,\mathbf{g}_{m}^{V }], \tag{9}\]
where \(\mathbf{g}_{m}^{v}\) is the ground truth bounding box expressed in the query view \(\mathbf{T}^{v}\).
Next, we perform Hungarian matching [19] to decide the optimal assignment between \(\{\hat{\mathbf{B}}_{j}\}\) and \(\{\mathbf{B}_{m}\}\), using the following cost function similar to DETR [6]:
\[\sigma=-\mathds{1}_{\{c_{m}\neq\emptyset\}}\log(\mathbf{s}_{j}(c_{m}))+ \mathds{1}_{\{c_{m}\neq\emptyset\}}L_{reg}(\mathbf{B}_{m},\hat{\mathbf{B}}_{j}), \tag{10}\]
where \(c_{m}\) is the ground truth class label and \(L_{reg}()\) is a weighted L1 loss, given by:
\[L_{reg}(\mathbf{B}_{m},\mathbf{\hat{B}}_{j})=||\hat{\mathbf{b}}_{j}^{0}- \mathbf{g}_{m}^{0}||_{1}+\Sigma_{1}^{V}\lambda_{v}||\hat{\mathbf{b}}_{j}^{v}- \mathbf{g}_{m}^{v}||_{1}. \tag{11}\]
We use \(\lambda_{v}\) to weigh the virtual views. Once we identify the optimal assignment, we calculate the loss on the super boxes:
\[L_{VE}=\lambda_{cls}L_{cls}(\mathbf{s},c)+\lambda_{reg}L_{reg}(\mathbf{B}, \mathbf{\hat{B}}), \tag{12}\]
for each paired prediction and ground truth. We adopt focal loss [26] for classification loss \(L_{cls}\), and the same form of regression loss \(L_{reg}\) as in matching. \(\lambda_{cls}\) and \(\lambda_{reg}\) are loss weights. For each 3D query point, by considering \(V+1\) versions of predictions _jointly_ during both matching and optimization, the model learns viewpoint equivariance through multi-view consistency, leading to better 3D detection.
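To make the super-box construction and the regression part of the VE loss concrete, a short sketch follows. It assumes SciPy's `linear_sum_assignment` as the Hungarian matcher and hypothetical helper names; the classification term and the exact cost of Eq. (10) are omitted for brevity.

```python
import torch
from scipy.optimize import linear_sum_assignment

def super_box(per_view_boxes):
    """Eqs. (8)-(9): concatenate the V+1 per-view boxes of one query (or one ground truth)."""
    return torch.cat(per_view_boxes, dim=-1)   # (..., (V+1) * box_dim)

def ve_regression(pred_super, gt_super, box_dim, virtual_weights):
    """Eq. (11): L1 on the default view plus weighted L1 terms for each virtual view."""
    loss = (pred_super[..., :box_dim] - gt_super[..., :box_dim]).abs().sum(-1)
    for v, lam in enumerate(virtual_weights, start=1):
        s = slice(v * box_dim, (v + 1) * box_dim)
        loss = loss + lam * (pred_super[..., s] - gt_super[..., s]).abs().sum(-1)
    return loss

def hungarian_match(cost_matrix):
    """One-to-one assignment between prediction and ground-truth super boxes (costs from Eq. (10))."""
    rows, cols = linear_sum_assignment(cost_matrix.detach().cpu().numpy())
    return rows, cols
```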
## 4 Experiments
### Experimental setup
**Dataset and metrics.** We evaluate our method on the large-scale benchmark NuScenes. NuScenes has 1000 scenes split into 700/150/150 as train/val/test subsets. Each scene contains 20-second videos collected by 6 surround-view cameras at 10Hz, with synchronized 3D box annotations at 2Hz. We report on metrics defined by NuScenes: mean Average Precision (mAP), mean Average Translation Error (mATE), mean Average Scale Error (mASE), mean Average Orientation Error (mAOE), mean Average Velocity Error (mAVE), mean Average Attribute Error (mAAE), and NuScenes Detection Score (NDS), which is a comprehensive score that aggregates the above five sub-metrics.
Implementation detailsWe adopt ResNet [13] and VoVNetV2 [20] with FPN [25] as the backbone network, and use the P4 features (1/16 image size) from the pyramid for all our experiments. We use the AdamW optimizer [30] with cosine annealing [29] to train VEDet with learning rate starting at \(2\times 10^{-4}\) and ending at \(2\times 10^{-7}\), on 8 Tesla A100 GPUs with a total batch size 8. For image
Figure 3: **Multi-view consistency enforced at training time.** According to multi-view geometry, the observations of the same box in different views should only differ by the relative transformation. Therefore, when a 3D query point is paired with multiple query views (3 in this illustration) to construct queries, the predictions in each respective view are _combined_ to match and regress the ground truth counterparts _jointly_.
data augmentation, we apply random resize, horizontal flip, and crop; and for the 3D space we apply random scaling and rotation following CenterPoint [48]. Importantly, when flipping the input image, we flip the 3D box annotations and camera extrinsics accordingly. The algorithm-specific hyper-parameters are ablated in Sec. 4.3 and fixed for all experiments. Please see the supplemental material for the full list of hyper-parameters and more training details. All experiments are trained for 24 epochs except for test submission, which is trained for 96 epochs without CBGS [50].
### Comparison to state of the art
Our VEDet achieves state-of-the-art detection performance on NuScenes val set across a range of model setups as shown in Tab. 1, compared to previous works and some concurrent preprints [17, 28]. We use ImageNet-pretrained models for setups with ResNet-50/101 backbones to process image resolutions of \(384\times 1056\) and \(512\times 1408\), and outperform existing baselines. As the full-fledged setup, we adopt a V2-99 backbone initialized with depth-pretrained weights [33], operating on \(640\times 1600\) images. In this high-performance regime, we compare with two closely related works, PETR [27] and PETRv2 [28], as shown in the third group. VEDet surpasses PETRv2 by 2.0\(\%\) mAP and 1.0% NDS, excelling at 3 sub-metrics. Our _single-frame_ version VEDet-SF also achieves substantial gains over the single-frame baseline PETR, by 2.9% mAP and 3.9% NDS; it even outperforms the two-frame PETRv2 by 0.2% mAP and at 3 sub-metrics. The noticeably lower mATE scores of VEDet and VEDet-SF further verify the strong localization capability. For the test submission we adopt the depth-pretrained V2-99 backbone from [33] with \(640\times 1600\) images. Without using the more advantageous data sampling strategy CBGS [50] as all the other baselines do, VEDet still outperforms PETRv2 and achieves state-of-the-art performance with 50.5% mAP and 58.5% NDS.
### Ablation studies
We first ablate the choices of some important hyper-parameters used in VEDet, shown in Tabs. 3a to 3d, and then analyze the most critical components proposed in VEDet by adding one component at a time as shown in Tab. 3e or making variations of specific components shown in Tab. 3f.
**Hyper-parameter selection.** We ablate the maximum Fourier frequency \(f_{\text{max}}\) (Tab. 3a) and the number of Fourier bands \(k\) (Tab. 3b) used in Eqs. (2) and (6), the number of vir
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline Method & Backbone & Image size & CBGS & mAP\(\uparrow\) & NDS\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mAOE\(\downarrow\) & mAVE\(\downarrow\) & mAAE\(\downarrow\) \\ \hline PETR & Res-50 & 384\(\times\)1056 & ✓ & 0.313 & 0.381 & 0.768 & **0.278** & 0.564 & 0.923 & 0.225 \\ \hline
**VEDet** & Res-50 & 384\(\times\)1056 & ✓ & **0.347** & **0.443** & **0.726** & 0.282 & **0.542** & **0.555** & **0.198** \\ \hline FCOS3D & Res-101-DCN & 900\(\times\)1600 & & 0.295 & 0.372 & 0.806 & 0.268 & 0.511 & 1.131 & **0.170** \\ DETR3D\(\uparrow\) & Res-101-DCN & 900\(\times\)1600 & ✓ & 0.349 & 0.434 & 0.716 & 0.268 & 0.379 & 0.842 & 0.200 \\ BEVFormer\(\uparrow\) & Res-101-DCN & 900\(\times\)1600 & & 0.416 & 0.517 & 0.673 & 0.274 & 0.372 & **0.394** & 0.198 \\ UVTR\(\uparrow\) & Res-101-DCN & 900\(\times\)1600 & & 0.379 & 0.483 & 0.731 & **0.267** & **0.350** & 0.510 & 0.200 \\ PETR & Res-101 & 512\(\times\)1408 & ✓ & 0.357 & 0.421 & 0.710 & 0.270 & 0.490 & 0.885 & 0.224 \\ VEDet & Res-101 & 512\(\times\)1408 & ✓ & **0.432** & **0.520** & **0.638** & 0.275 & 0.362 & **0.498** & **0.191** \\ \hline PETR\(\downarrow\) & V2-99 & 640\(\times\)1600 & & 0.404 & 0.447 & 0.739 & 0.271 & 0.452 & 0.876 & 0.208 \\ PETRv2\(\uparrow\) & V2-99 & 640\(\times\)1600 & & 0.431 & 0.517 & 0.730 & 0.264 & 0.399 & **0.404** & **0.190** \\ \hline
**VEDet-SF\(\downarrow\)** & V2-99 & 640\(\times\)1600 & & 0.433 & 0.486 & 0.683 & **0.263** & 0.352 & 0.808 & **0.201** \\ VEDet\(\downarrow\) & V2-99 & 640\(\times\)1600 & & **0.451** & **0.527** & **0.670** & **0.263** & **0.347** & 0.510 & **0.192** \\ \hline \end{tabular}
\end{table}
Table 1: **NuScenes val set performance. Our VEDet outperforms existing baselines consistently across various backbone and resolution choices. \(\dagger\) means initializing from the FCOS3D backbone. \(\ddagger\) means initializing from the depth-pretrained backbone provided by DD3D [33].**
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c c c c c} \hline Method & Backbone & Image size & TTA & mAP\(\uparrow\) & NDS\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mAOE\(\downarrow\) & mAVE\(\downarrow\) & mAAE\(\downarrow\) \\ \hline DD3D\(\ddagger\) & V2-99 & 900\(\times\)1600 & ✓ & 0.418 & 0.477 & 0.572 & 0.249 & 0.368 & 1.014 & 0.124 \\ DETR3D\(\ddagger\) & V2-99 & 900\(\times\)1600 & ✓ & 0.412 & 0.479 & 0.641 & 0.255 & 0.394 & 0.845 & 0.133 \\ PETR\(\ddagger\) & V2-99 & 640\(\times\)1600 & & 0.434 & 0.481 & 0.641 & 0.248 & 0.437 & 0.894 & 0.143 \\ UVTR\(\ddagger\) & V2-99 & 900\(\times\)1600 & & 0.472 & 0.551 & 0.577 & 0.253 & 0.391 & 0.508 & 0.123 \\ BEVFormer\(\dagger\) & V2-99 & 900\(\times\)1600 & & 0.481 & 0.569 & 0.582 & 0.256 & 0.375 & 0.378 & 0.126 \\ BEVDet4D & Swin-B & 900\(\times\)1600 & ✓ & 0.451 & 0.569 & **0.511** & 0.241 & 0.386 & **0.301** & 0.121 \\ PolarFormer\(\dagger\) & V2-99 & 900\(\times\)1600 & & 0.493 & 0.572 & 0.556 & 0.256 & 0.364 & 0.439 & 0.127 \\ PETRv2\(\ddagger\) & V2-99 & 640\(\times\)1600 & & 0.490 & 0.582 & 0.561 & **0.243** & 0.361 & 0.343 & **0.120** \\ VEDet\(\ddagger\) & V2-99 & 640\(\times\)1600 & & **0.505** & **0.585** & 0.545 & 0.244 & **0.346** & **0.421** & 0.123 \\ \hline \end{tabular}
\end{table}
Table 2: **NuScenes test set performance. Our VEDet achieves state-of-the-art performance compared to existing publications. We also note that other baselines are trained with CBGS, which is a more advantageous sampling strategy. \(\ddagger\) means initializing from the depth-pretrained backbone provided by DD3D [33].**
tual query views \(V\) (Tab. 3c), and their weighting \(\lambda_{\text{v}}\) (Tab. 3d) used in the box regression loss. The table shows that VEDet is robust across a wide range of hyper-parameter choices with competitive performance. Importantly, in Tab. 3c, starting from not using any virtual query views (\(V=0\)), adding more views gradually improves performance until too many views appear to make the optimization more difficult. We choose \(f_{\text{max}}=8,k=64,V=2,\lambda_{\text{v}}=0.2\) according to the best results and fix them for all experiments.
**Geometric positional encodings and object queries.** In Tab. 3e, we note that #1 is effectively PETR [27]. The "Fourier+MLP GPE" column means we switch the position embeddings and queries in PETR with ours introduced in Secs. 3.2 and 3.3. From #1 to #2 we can see significant improvements in mAP by 1.6% and NDS by 1.7%, which demonstrates that _our proposed implicit geometric mapping better captures the 3D geometries_ thanks to its Fourier component and the use of geometric attributes \([\mathbf{r},\mathbf{\bar{q}},\mathbf{t}]\).
**Virtual query views.** In Tab. 3e, we then add \(V=2\) virtual views during training on top of adopting our proposed geometric position embeddings and object queries, for both single-frame and 2-frame ("2-frame" column) settings. From #2 to #3, we see further jumps in mAP by 1.4% and NDS by 2.4%; from #4 to #5, mAP increases by 1.9% and NDS by 3.2%. This shows _our proposed multi-view consistency loss applied on virtual view decoding effectively guides the model to improve 3D detection_.
**Mutual benefit between VEDet and multi-sweep.** Based on the above two comparisons, another observation is that our proposed VEDet benefits in the 2-frame setting more than the single-frame setting, indicating that more geometric cues can be exploited when the input images contain richer multi-view context. Similarly, when looking at #2 to #4 (+1.2% mAP, +3.1% NDS) and #3 to #5 (+1.7% mAP, +3.9% NDS), we can see that adding more frames becomes more helpful after we add in the viewpoint equivariance on \(V=2\) views. These two observations further consolidate the effectiveness of our geometric position embeddings, object queries, and virtual views.
**Fourier encoding.** In Tab. 3f "no Fourier" we show the importance of the Fourier encoding before the MLP by simply dropping it for both position embeddings and object queries, such that the MLPs map the geometric primitives directly to the 256-dim vectors. This leads to a drastic decrease in the detection performance (-6.9% mAP and -5.1% NDS), showing the critical role of the Fourier encoding to capture fine-grained changes in the geometries, which can be considered as high-frequency signals [38].
**Partial camera geometry.** In Tab. 3f "no \(\mathbf{\bar{q}}\)" and "no \(\mathbf{t}\)" we show that removing partial information from the input cameras' poses leads to noticeably degraded performance. Specifically, removing the rotation information leads to drops in mAP by 1.5% and NDS by 1.2%, since the rotation indicates how the perspective projection plane is facing, which the rays are insufficient to describe; removing the translation leads to catastrophic drops in mAP by 9.2% and NDS by 7.4%, justifying the importance of translation information which is too difficult to infer from data implicitly if missing.
\begin{table}
\end{table}
Table 3: **Ablation studies and analyses.** In Tabs. 3a to 3d we first analyze important hyper-parameters relevant to the algorithm, and choose the ones giving the best performance. In Tab. 3e we ablate and show the importance of the proposed geometric embeddings and virtual views. They not only monotonically bring improvements to the 3D detection performance, but also mutually benefit adding time frames. In Tab. 3f we ablate some variations in the components, showing the critical usage of Fourier encoding and some optimal settings for queries during training and inference.
**Multi-view consistency.** In Sec. 3.4, we constrain the different views of a query point to be considered simultaneously by concatenating the box predictions for matching and calculating the loss. Instead, in Tab. 3f "no joint match" we treat them as individual objects. This leads to the possibility that a query point with different query views can be matched to different boxes, which leads to a \(2.6\%\) drop in mAP and a 3.9% drop in NDS. This demonstrates that the multi-view consistency of the _same_ query point is meaningful, whereas simply augmenting the queries with views and treating all queries individually does not exploit the geometric signals enough. Without multi-view consistency, the excessive number of queries might even be harmful to the optimization, as indicated by the lower performance (42.5% mAP, 48.3% NDS) than the "\(V=0\)" VEDet (43.2% mAP, 49.5% NDS) in Tab. 3c.
### More analysis on viewpoint equivariance
Given our design of view-conditioned queries and utilizing multiple virtual views during training, a natural question arises: does the performance gain come from the viewpoint equivariance regularization, or simply because more queries participate in the training? To show the effectiveness of the viewpoint equivariance, we compare to an intuitive baseline described as follows. We start from a plain version where we do not apply virtual views in queries (i.e. setting \(V=0\)). Under this setting, we add an additional set of \(V\times M\) query points during training while duplicating the box targets by \(V\) more times, and only use the original \(M\) queries during inference. This setting _matches the number of participating queries and box targets of VEDet but does not contain any viewpoint equivariance regularization_, denoted by "+2\(M\) qry, no VE." in Fig. 4, which reports NuScenes val set performance.
As shown in Fig. 4, the extra queries and box targets generate more learning signals at the early stage as reflected by the superior mAP and NDS (orange curves) compared to our "\(V=0\)" VEDet (blue curves). However, their effects diminish as the curves plateau when reaching the end, and eventually "+2\(M\) qry, no VE." underperforms the "\(V=0\)" VEDet, as shown in both plots of Fig. 4. In contrast, our counterpart VEDet with 2 virtual views during training, denoted by "VEDet, \(V=2\)" (purple curve), outperforms "VEDet, \(V=0\)" consistently throughout the training and noticeably boosts the performance by +1.9% mAP and +3.2% NDS, to 45.1% mAP and 52.7% NDS. This justifies that the viewpoint equivariance regularization is more than just increasing the number of queries, and that it brings richer geometric learning signals for the model.
## 5 Limitations
**Camera parameter robustness.** VEDet leverages implicit geometric encodings to learn 3D geometry in a data-driven way; therefore, the robustness of VEDet with respect to camera parameters is critical and worth investigating in future work.
**Depth information.** This work mainly leverages geometric signals generated from the ground truth 3D bounding boxes and generic pose information of both cameras and object queries, while some concurrent works [15, 23] explicitly use depth to guide the 3D feature construction and hence the interaction between the heads and features. Incorporating depth information into the framework, such as enhancing the geometric positional encodings or guiding spatial attention, will be explored in future work.
**Temporal modeling.** The current VEDet follows existing works to simply concatenate features from multi-sweeps along the token dimension. While this is effective, the concatenation has two main issues: it throws away the temporal ordering and has difficulty modeling long sequences due to memory and computation constraints. Investigating better temporal modeling, such as recurrent processing, will be valuable future work.
## 6 Conclusion
In this work, we introduce a novel camera-based multi-view 3D object detection framework that learns from viewpoint equivariance regularization. VEDet employs a transformer decoder with a set of view-conditioned queries to decode bounding boxes from image features with geometric positional encodings. The view-conditioning of queries enables us to enforce viewpoint equivariance on predictions made from different viewpoints, and therefore generate richer geometric learning signals to guide the model in better understanding the 3D structure of the scene. VEDet achieves state-of-the-art 3D detection performance, and we conduct extensive experiments to show the effectiveness of its components. We also point out a few meaningful limitations for future works.
Figure 4: **Effectiveness of the view equivariant regularization.** Simply adding more queries and duplicating box targets without introducing view equivariance only helps model optimization at an early stage and does not help the final detection performance as shown by the orange curves compared to the blue curves. VEDet is able to leverage richer geometric signals brought by the view equivariance objective with the help of view-conditioned queries, which leads to a performance boost shown by purple curves.
## Appendix A Implementation details
**VEDet model.** We use three different backbones to report performance on NuScenes: ResNet-50 and ResNet-101 [13] are initialized from the ImageNet-pretrained weights hosted on OpenMMLab [10]; VoVNetV2-99 [20] is initialized from the depth-pretrained weights released by [33]. The image features and geometric positional encodings have dimension \(C=256\), and are added element-wise as the keys to the transformer decoder, which has \(L=6\) transformer layers. In the transformer layers, we use multi-head attention with \(8\) heads, dropout rate \(0.1\) on the residual connection, and \(2048\) hidden dimensions in the feed-forward network. To predict the classification scores, we use a single linear projection from \(256\)-dim queries to \(10\)-dim class scores; for predicting the 3D box attributes, we use a \(2\)-layer MLP with \([512,512]\) hidden dimensions interleaved with ReLU activations. The classification and regression heads are both shared across the \(6\) transformer layers.
**Learnable geometry mapping.** For the MLP in the learnable geometry mapping, used to make both geometric positional encoding and object queries, we use \(1\) hidden layer with \(1920\) dimensions, followed by a ReLU activation and a final projection to \(C=256\) dimensions. Therefore, given Fourier bands \(k=64\), the dimensions go through the following changes: \(d_{0}\rightarrow_{\text{Fourier}}1280\rightarrow_{\text{hidden}}1920\rightarrow_{\text{proj}}256\), where \(d_{0}=10\) for both perspective geometry of an image feature \([\mathbf{r}_{(u_{i},v_{i})},\mathbf{\bar{q}},\mathbf{t}]\) and query geometry \([\mathbf{c}_{j}^{v},\mathbf{\bar{q}}^{v},\mathbf{t}^{v}]\).
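A compact sketch of this mapping is shown below. It is an illustration under stated assumptions: the frequencies are taken as a linear ramp up to \(f_{\text{max}}\), whereas the exact spacing in Eqs. (2) and (6) may differ.

```python
import math
import torch
import torch.nn as nn

class GeometryMapping(nn.Module):
    """Fourier encoding of 10-dim geometric primitives followed by a small MLP."""
    def __init__(self, d0=10, k=64, f_max=8.0, c=256, hidden=1920):
        super().__init__()
        self.register_buffer("freqs", torch.linspace(1.0, f_max, k))  # (k,)
        self.mlp = nn.Sequential(
            nn.Linear(d0 * 2 * k, hidden),   # 10 -> 1280 -> 1920
            nn.ReLU(inplace=True),
            nn.Linear(hidden, c),            # 1920 -> 256
        )

    def forward(self, x):
        # x: (..., d0), e.g. [r, q_bar, t] for image tokens or [c_j^v, q_bar^v, t^v] for queries.
        ang = 2 * math.pi * x[..., None] * self.freqs    # (..., d0, k)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1)  # (..., d0, 2k)
        return self.mlp(enc.flatten(-2))                 # (..., 256)

emb = GeometryMapping()(torch.randn(6, 900, 10))  # e.g. 900 queries per view
print(emb.shape)  # torch.Size([6, 900, 256])
```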
**Query points.** We use \(900\) learnable 3D query points in all experiments. We follow [42] to use object ranges \([-51.2m,-51.2m,-5.0m,51.2m,51.2m,3.0m]\) in XYZ axes of the global BEV space around the vehicle. The query points are normalized to \([0,1]\) by a sigmoid operation and scaled by their range. The predictions of box center offsets are added to the points before the sigmoid operation.
**Virtual view sampling.** During training, the range we use to uniformly sample the translation for the virtual query views is \([-0.6m,-1.0m,-0.3m,0.6m,1.0m,0m]\) in XYZ axes. We uniformly sample the yaw angle to be between \([0,2\pi]\).
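The sampling step can be sketched as follows; the helper name and the exact pose convention (yaw as a rotation about the Z axis, translation appended to a \(4\times 4\) matrix) are illustrative assumptions.

```python
import numpy as np

def sample_virtual_view(rng=np.random):
    lo = np.array([-0.6, -1.0, -0.3])
    hi = np.array([ 0.6,  1.0,  0.0])
    t = rng.uniform(lo, hi)                 # translation (metres) in XYZ
    yaw = rng.uniform(0.0, 2.0 * np.pi)     # rotation about the Z axis
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T                                # 4x4 pose of a virtual query view {T^v}
```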
**Temporal modeling.** In the full-version VEDet we concatenate \(2\) temporal frames at the token dimension. Following [15, 28], we randomly sample one frame from the past \([3,27]\) frames during training, and use the past \(15\)-th frame during inference. The time interval between consecutive frames is roughly \(0.083\)s.
**Optimization.** During training, the loss weights we use are \(\lambda_{cls}=2.0\) and \(\lambda_{reg}=0.25\) following [27, 42]. We use the AdamW optimizer [30] with weight decay \(0.01\). The learning rate is linearly warmed up in the first \(500\) iterations from \(6.77e^{-5}\) (\(\frac{1}{3}\) of initial learning rate) to \(2e^{-4}\). The learning rate of the pretrained backbone is multiplied by \(0.1\) compared to all other components, which are trained from scratch. Checkpointing [9] is adopted during training to save GPU memory, bringing the training time of the full-version VEDet (2 frames, \(640\times 1600\) images, \(V=2\)) to 20 hours on 8 A100 GPUs, for 24 epochs on NuScenes.
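Putting the schedule together, an illustrative step-wise learning-rate rule consistent with the linear warm-up and the cosine annealing described above (a sketch, not the exact training code) is:

```python
import math

def lr_at(step, total_steps, lr0=2e-4, lr_min=2e-7, warmup=500):
    if step < warmup:                       # linear warm-up from lr0/3 to lr0
        return lr0 / 3 + (lr0 - lr0 / 3) * step / warmup
    t = (step - warmup) / max(1, total_steps - warmup)
    return lr_min + 0.5 * (lr0 - lr_min) * (1.0 + math.cos(math.pi * t))
```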
**Data augmentation.** We use data augmentations following [27], in the order shown below:
* Resize. The original images are resized keeping the aspect ratio. The resize factor is sampled uniformly from \([0.564,0.8]\) for \(384\times 1056\) images, \([0.79,1.1]\) for \(512\times 1408\) images, and \([0.94,1.25]\) for \(640\times 1600\) images.
* Crop. Given a crop size \(H\times W\) and an intermediate image size \(H^{\prime}\times W^{\prime}\) after the resizing, the top area \([0,H^{\prime}-H]\) is cropped to meet the final height \(H\). The left limit of the cropping box is uniformly sampled from \([0,W^{\prime}-W]\).
* Horizontal flip. With a \(50\%\) probability, we flip all \(N\) images at the same time, alongside the 3D box annotations. The camera poses and intrinsics are transformed accordingly to reflect the flipping. Concretely, the X coordinate of the camera translation and the yaw angle are flipped, while the principal point in the intrinsic matrix has the X-coordinate flipped (see the sketch after this list).
* Global rotation. Without changing the images, the camera poses and 3D box annotations are rotated around the Z axis of the global BEV space. The angle is uniformly sampled from \([-22.5^{\circ},22.5^{\circ}]\).
* Global scaling. Without changing the images, the camera poses and 3D box annotations are scaled relative to the origin of the global BEV space. The scaling factor is uniformly sampled from \([0.95,1.05]\).
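A minimal sketch of the horizontal-flip bookkeeping mentioned in the list above follows. The box layout, the axis signs, and the \(c_{x}\mapsto W-1-c_{x}\) convention are assumptions for illustration, not the exact implementation.

```python
import numpy as np

def hflip_sample(image, boxes, cam_t, cam_yaw, K):
    """image: (H, W, C); boxes: (N, 7) as [x, y, z, w, l, h, yaw] (assumed layout)."""
    image = image[:, ::-1].copy()               # flip image columns
    boxes = boxes.copy()
    boxes[:, 0] *= -1.0                         # flip box centre X
    boxes[:, 6] = np.pi - boxes[:, 6]           # mirror box yaw
    cam_t = cam_t * np.array([-1.0, 1.0, 1.0])  # flip camera translation X
    cam_yaw = np.pi - cam_yaw                   # mirror camera yaw
    K = K.copy()
    K[0, 2] = image.shape[1] - 1 - K[0, 2]      # flip principal point c_x
    return image, boxes, cam_t, cam_yaw, K
```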
During testing, no random augmentations are used. The images are resized to the final width while keeping the aspect ratio, and cropped at the bottom-center.
|
2303.13227
|
Confidence-Aware and Self-Supervised Image Anomaly Localisation
|
Universal anomaly detection still remains a challenging problem in machine
learning and medical image analysis. It is possible to learn an expected
distribution from a single class of normative samples, e.g., through epistemic
uncertainty estimates, auto-encoding models, or from synthetic anomalies in a
self-supervised way. The performance of self-supervised anomaly detection
approaches is still inferior compared to methods that use examples from known
unknown classes to shape the decision boundary. However, outlier exposure
methods often do not identify unknown unknowns. Here we discuss an improved
self-supervised single-class training strategy that supports the approximation
of probabilistic inference with loosen feature locality constraints. We show
that up-scaling of gradients with histogram-equalised images is beneficial for
recently proposed self-supervision tasks. Our method is integrated into several
out-of-distribution (OOD) detection models and we show evidence that our method
outperforms the state-of-the-art on various benchmark datasets.
|
Johanna P. Müller, Matthew Baugh, Jeremy Tan, Mischa Dombrowski, Bernhard Kainz
|
2023-03-23T12:48:47Z
|
http://arxiv.org/abs/2303.13227v2
|
# Confidence-Aware and Self-Supervised Image Anomaly Localisation
###### Abstract
Universal anomaly detection still remains a challenging problem in machine learning and medical image analysis. It is possible to learn an expected distribution from a single class of _normative samples_, _e.g._, through epistemic uncertainty estimates, auto-encoding models, or from synthetic anomalies in a self-supervised way. The performance of self-supervised anomaly detection approaches is still inferior compared to methods that use examples from _known unknown_ classes to shape the decision boundary. However, outlier exposure methods often do not identify _unknown unknowns_. Here we discuss an improved self-supervised single-class training strategy that supports the approximation of probabilistic inference with loosened feature locality constraints. We show that up-scaling of gradients with histogram-equalised images is beneficial for recently proposed self-supervision tasks. Our method is integrated into several out-of-distribution (OOD) detection models and we show evidence that our method outperforms the state-of-the-art on various benchmark datasets. Source code will be publicly available by the time of the conference.
## 1 Introduction
Out-of-distribution (OOD) detection builds upon the assumption that the division into normal and abnormal data is distinct; however, OOD data can overlap with in-distribution (ID) data and may exhibit an infinite number of descriptive features. We assume for medical imaging data a finite ID distribution space and an infinite OOD distribution space. Furthermore, we assume ID consistency for healthy medical images such that the compatibility condition holds, based on the impossibility theorems for OOD detection by [8]. As a result, OOD detection algorithms can be capable of learning the finite ID space and also a finite but sufficient number of OOD features for inference. We can approximate density-based spaces based on drawn samples from real unknown (conditioned) probability distributions for covering uncertainty in annotation of data, and, therefore, assume the Realisability assumption [8] for learnable OOD detection referring to the proposed problem formulation.
The OOD problem for medical imaging can also be seen from a practical, intuitive point of view. To reflect that multiple human medical experts can arrive at different diagnoses given the same image of a patient, we integrate uncertainty estimates for both ID and OOD data in the form of probability distributions. Intuitively, we tend to imagine a finite ID space, since we observe a consistency between ID features which are exhibited by healthy human individuals from an anatomical point of view. Assuming that, we postulate that we can present learnable OOD detection through training different types of algorithms on normal data with synthetically generated anomalies.
Learning from synthetically generated anomalies became a research focus in medical image analysis recently [11]. In a medical context, labeling requires medical expertise and, hence, human resources for generating reliable ground truth masks for anomaly detection algorithms. Self-supervised tasks that are based on synthetically generated anomalies are considered a convenient mitigation for limited robustness and generalisation abilities that result from small datasets. An extension of this idea is to leverage the natural variations in normal anatomy to create a range of synthetic abnormalities. For example, image patch regions can be extracted from two independent samples and replaced with an interpolation between both patches [25, 15]. The interpolation factor, patch size, and patch location can be randomly sampled from uniform distributions. Any encoder-decoder architecture can be trained to give a pixel-wise prediction of the patch and its interpolation factor. This encourages a deep network to learn what features to expect normally and to identify where foreign patterns have been introduced. The estimate of the interpolation factor lends itself nicely to the derivation of an outlier score. Meanwhile, the pixel-wise output allows for pixel- and subject-level predictions using the same model. However, such synthesis strategies feature obvious discontinuities. [26, 22] solve the discontinuity problem by using Poisson image editing, but the resulting anomalies can be so subtle that they may represent variations of the normal class rather than true anomalies, and these approaches do not provide prediction confidence estimates. Therefore, we propose a new approach to model the ID space and make the following contributions:
1. We propose a revised Poisson Image-interpolation framework for the generation of salient but still smoothly interpolated anomalies for self-supervision in unsupervised image anomaly localisation.
2. We propose self-supervision with a probabilistic feature extractor - Probabilistic PII (P-PII) - which allows the generation of stochastic anomalies with which we are able to simulate multiple annotators.
3. We evaluate P-PII on 2D chest radiograph images and 3D CT scans and show that our method outperforms recently proposed self-supervised anomaly localisation approaches.
4. We show that it is possible to learn feature distributions for 'normal' tissue in a self-supervised way from databases that exclusively contain patients with disease.
**Related Work.**
The most prominent direction for unsupervised medical anomaly localisation [27] is dominated by reconstruction-based methods like VAEs [28; 33; 16; 10] as well as other generative models like GANs [31; 1; 21], especially, for image synthesis and data augmentation [7; 11; 9]. New advances are expected by Diffusion models, which shine with detailed reconstructions and anomaly maps for detection [29] but they are computationally very challenging and have not been evaluated in detail yet. Other commonly used methods include one-class Support Vector Machines, k-Nearest Neighbors and extensions of these approaches for dimensionality-reduced feature spaces [17; 6]. Probabilistic methods have not been researched in detail for OOD detection yet. However, they are known from probabilistic segmentation approaches. For example the Probabilistic Hierarchical Segmentation (PHISeg) combines a conditional variational autoencoder (cVAE) with a U-NET setup proposed by [4], Bayesian U-Nets [23] can model epistemic uncertainty with weak labels and Monte Carlo estimates [20; 5; 19].
In a medical context, labeling requires medical expertise and, hence, human resources for generating reliable ground truth masks for anomaly detection algorithms. Self-supervised tasks are considered as convenient extensions for improving robustness, uncertainty and generalisation abilities of models and replace expensive labelling [13; 12; 18; 32]. We modify our backbone models to allow for OOD detection. To do this, we form a self-supervised task which is easily interchangeable. The self-supervised principle rests on patch interpolation from the same or a different source image into a target image. Since more research work focuses on alleviating the labelling effort by experts for image data, different generation methods for anomalies emerged. For Foreign patch interpolation (FPI) [25], two patches of the same location are extracted from two independent samples and replaced with an interpolation between both patches. CutPaste [15] updates the original method by translating patches within an image and allows effective detection of anomalies in industrial datasets. Poisson Image Interpolation (PII) [26] overcomes sharp discontinuities with Poisson editing as the interpolation strategy and generates more organic and subtle outliers. Natural Synthetic Anomalies (NSA) [22] are introduced by rescaling, shifting and a new Gamma-distribution-based patch shape sampling without the use of interpolation factors for an end-to-end model for anomaly detection.
## 2 Method
Self-supervised tasks were considered as convenient extensions for improving robustness, uncertainty and generalisation abilities of models [13; 12; 18]. Our proposed Probabilistic PII self-supervision task is based on [25] and builds upon the Poisson image editing implementation by [3]. PII relies on the relative changes of the source image, captured by the image gradient field \(\mathbf{v_{pq}}\) in the patch region, and on the patch boundary of the target image \(\delta h\) (see Eq. (1)). The solution of the underlying mathematical problem represents the discretised Poisson equation with Dirichlet boundary conditions, see Eq. (2) and Eq. (4). The intensity values within the patch \(h\) are given by the scalar function \(f_{in}\), and \(\langle p,q\rangle\) denotes a pixel pair such
that \(q\in N_{p}\), with \(N_{p}\) the set of the four directly adjacent neighbouring pixels of \(p\). For PII, \(\alpha\) determines the influence of the individual image gradients on the interpolation task.
\[v_{pq}=\left\{\begin{array}{cl}(1-\alpha)(x_{i_{p}}-x_{i_{q}}),&\text{ if }\left|(1-\alpha)(x_{i_{p}}-x_{i_{q}})\right|>\left|\alpha(x_{j_{p}}-x_{j_{q}})\right|\\ \alpha(x_{j_{p}}-x_{j_{q}}),&\text{ otherwise,}\end{array}\right. \tag{1}\]
\[\Delta f_{in}=\mathrm{div}\,\mathbf{v}\ \ \text{over}\ h \tag{2}\]
The PII task can be reformulated to the following minimisation problem (Eq. 3), given the projection of \(\mathbf{v}(\frac{p+q}{2})\) onto the oriented edge (Eq. 1) [26] and the field of intensity image gradients (Eq. 2). The problem formulation can be solved via a discrete linear system solver.
\[\min_{f_{in}}\iint_{h}\left|\nabla f_{in}-\mathbf{v}\right|^{2},\text{ with }f_{in}\Big{|}_{\delta h}=f_{out}\Big{|}_{\delta h} \tag{3}\]
\[\min_{f_{in}}\Big{|}_{h}\sum_{\left\langle p,q\right\rangle\cap h\neq 0}(f_{ in,p}-f_{in,q}-v_{pq})^{2},\text{ with }f_{in}\Big{|}_{\delta h}=f_{out}\Big{|}_{\delta h},\forall p \in\delta h,q\in N_{p} \tag{4}\]
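A compact numerical sketch of Eqs. (1)-(4) is given below; it is our own illustration under simplifying assumptions (rectangular patch, dense neighbourhood loop, a generic sparse solver rather than the discrete sine transform of [3]).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def pii_blend(target_patch, source_patch, alpha):
    """Blend a source patch into a target patch over the region h (Eqs. 1-4)."""
    h, w = target_patch.shape
    idx = np.arange(h * w).reshape(h, w)
    A = lil_matrix((h * w, h * w))
    b = np.zeros(h * w)
    for p in range(h * w):
        y, x = divmod(p, w)
        if y in (0, h - 1) or x in (0, w - 1):
            A[p, p] = 1.0
            b[p] = target_patch[y, x]         # Dirichlet boundary: f_in = f_out on δh
            continue
        A[p, p] = 4.0                          # discrete Laplacian of f_in
        for qy, qx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            A[p, idx[qy, qx]] = -1.0
            gt = (1 - alpha) * (target_patch[y, x] - target_patch[qy, qx])
            gs = alpha * (source_patch[y, x] - source_patch[qy, qx])
            b[p] += gt if abs(gt) > abs(gs) else gs   # mixed gradient v_pq, Eq. (1)
    return spsolve(A.tocsr(), b).reshape(h, w)
```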
Our proposed Probabilistic PII (P-PII) builds upon these mathematical foundations but incorporates new features and approaches for addressing current limitations and rethinking its application.
Figure 1: Probabilistic PII takes patches from a source image of a given size. A second mask of circular size, drawn from two normal distributions for radius and location inside the source patches, allows aggregated anomalies with smoothly interpolated boundaries. We obtain probabilistic and salient anomalies.
First, we apply P-PII pairwise on non-anomalous training data, although those pairs can also, _e.g._, be reduced to a single non-anomalous image sample for lower memory and time consumption. If applied pairwise, the allocation of image pairs is randomly drawn from the image batch. Second, we take patches from different locations of the source image and interpolate them into different locations inside the target image, following the patch drawing of NSA [22]. Third, we overcome the current limitation of PII and PII-based anomaly generation methods regarding the degree of abnormality of the interpolated patches. If both source and target images are normalised, these anomalous regions are very subtle and difficult to recognise, also compared to real lesions. To intensify these abnormal features, we introduce up-scaling of gradients during the interpolation of the source patch. This approach generates less subtle, salient anomalies which are still smoothly interpolated into the target image. Fourth, we mitigate the class imbalance between normal and anomalous pixels through the generation of one to ten anomalies per image, which speeds up learning to differentiate both classes. Fifth, we introduce the probabilistic feature into PII. To simulate the variance of annotations by multiple raters, _e.g._, the annotation of lesions by multiple medical experts, we generate circular anomalies inside each patch extracted from the source image. To this end, we draw anomaly masks whose parameters, radius and location (Eq. 5), are sampled from normal distributions. We ensure, via fixed bounds on location and radius, that the generated anomaly at most touches the patch boundaries.
\[\mathbf{r}\sim\mathcal{N}_{Radius}(\mu,\sigma)\qquad(\mathbf{x},\mathbf{y}) \sim\mathcal{N}_{Location}(M=\langle\mu_{\mathbf{x}},\mu_{\mathbf{y}}\rangle, \Sigma=\langle\sigma_{\mathbf{x}},\sigma_{\mathbf{y}}\rangle) \tag{5}\]
To use P-PII as a self-supervised task for OOD detection, we opted for intensity-based label generation. Based on the mean of all anomalies of each patch, we use the absolute difference between the original target image and the mean final image as label. Additionally, we have a variance map of all anomalies which can be used for further statistical evaluation or integration into the optimisation problem.
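The probabilistic sampling and label generation described above can be sketched as follows; the function name, the distribution parameters, and the simplified stand-in for the Poisson blend are assumptions for illustration, not the exact implementation.

```python
import numpy as np

def p_pii_anomaly(target, source, patch_box, n_raters=5, rng=np.random):
    y0, x0, ph, pw = patch_box                     # patch location and size
    blended = []
    for _ in range(n_raters):
        # Eq. (5): radius and centre drawn from normal distributions.
        r = abs(rng.normal(loc=min(ph, pw) / 4, scale=min(ph, pw) / 12))
        r = np.clip(r, 2.0, min(ph, pw) / 2 - 1)   # keep the anomaly inside the patch
        cy = rng.normal(loc=y0 + ph / 2, scale=ph / 8)
        cx = rng.normal(loc=x0 + pw / 2, scale=pw / 8)
        yy, xx = np.ogrid[:target.shape[0], :target.shape[1]]
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        out = target.copy()
        # Stand-in for the Poisson blend of the earlier sketch, with an
        # exaggerated source contribution mimicking the gradient up-scaling.
        out[mask] = np.clip(1.5 * source[mask] - 0.5 * target[mask], 0.0, 1.0)
        blended.append(out)
    blended = np.stack(blended)                    # (n_raters, H, W)
    mean_img = blended.mean(axis=0)
    label = np.abs(target - mean_img)              # intensity-based label
    var_map = blended.var(axis=0)                  # per-pixel "rater" variance
    return mean_img, label, var_map
```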
## 3 Evaluation and Results
**Data.** We use the JSRT database [24] as an exemplary smaller medical imaging dataset which includes 154 conventional chest radiographs with lung nodules and 93 radiographs without a nodule. For each patient only one image is attributed. We re-scaled all images from \(2048\times 2048\) matrix size to \(512\times 512\) in order to hold the conditions for all datasets equal. The subset without pathological findings serves as our training dataset. LIDC-IDRI [2] covers 1018 cases in form of CT scans with 7371 lesions, which were marked by at least one radiologist. We also divide the dataset into lesion slices and anomaly-free slices by extracting the context slices from each volume with a margin of about 5 slices on either side of the lesion, which approximates the maximum possible margin of lesions given slice
thickness and lesion diameter. We use the first 800 cases as training dataset, the rest for validation and testing. The large-scale dataset DeepLesion [30] contains 32,735 lesions in 32,120 computed tomography (CT) slices from 10,594 studies of 4,427 unique patients. Since the image size varies throughout the dataset, we resize each image to the smallest occurring size, \(512\times 512\). Each lesion slice is provided as part of an imaging volume which provides the 3D context of the lesion. We divide the dataset into lesion slices and anomaly-free slices by extracting the context slices from each volume with a margin of about 10 mm on either side of the lesion. As a result, we have 386,587 anomaly-free slices and 4831 annotated anomalous slices. We test the quality of performance for all models on ID and OOD data samples, which were not seen during training. For JSRT, the test set consists of 19 ID samples and 154 OOD samples. For the large datasets, we drew a test cohort of 500 ID and 500 (478 for LIDC-IDRI) OOD samples. For LIDC-IDRI and DeepLesion, both ID and OOD samples are from patients not occurring in the training dataset. Note that the models are trained on healthy tissue of ill patients for the datasets LIDC-IDRI and DeepLesion, which is different to the dataset JSRT for which we only differentiate between ill and healthy patient/samples.
**Pre-processing and Training.** We apply histogram equalisation to the normalised images for contrast enhancement, adopted from MIMIC-CXR-JPG [14]. We apply this type of equalisation to all datasets. We train all models for a fixed number of \(100,000\) steps with a fixed batch size of 16. We used PNY NVIDIA A100s with at least 18 GB memory per job. The training runtime was approx. 4 days. The backbone models and P-PII were implemented in Python and TensorFlow.
**Metrics.** Choosing suitable metrics for OOD detection methods is important to effectively evaluate the performance of a method and make valid comparisons with other approaches. We chose the Area under the receiver operating characteristic (AUROC) for sample- and pixel-wise binary classification between OOD and ID samples/pixels as a threshold-less metric. We refer with _OOD_ to anomalous samples/pixels and with _ID_ to normal ('healthy') input samples/pixels. Average Precision (AP) takes both precision and recall into account and is considered as a sample-based evaluation metric here. In medical imaging analysis, false negatives are more critical than false positives, especially in lesion detection. Therefore, we include the Free-response receiver operating characteristic (FROC) score as an evaluation measure.
**Sensitivity analysis.** We perform an ablation study to investigate the impact of revised PII as a self-supervision task for various backbone models (U-Net, Monte-Carlo Dropout (rate=0.1) U-Net, PHiSeg). All backbone models have the same depth of five levels; PHiSeg includes two additional resolution levels. We examine the influence of selected augmentation functions for small-scale datasets or datasets suffering from class imbalance for improving the performance of the self-supervised training.
**Results.** We evaluated all models with the training checkpoint for best dice. We show quantitative results in Tab. 1 for all backbone models. We observed
an increase of pixel-wise AUROC of up to 13% for U-net and PHiSeg and 18% for Dropout U-net, for the JSRT dataset. For LIDC-IDRI, we achieve values improved by up to 53% for PHiSeg. For DeepLesion, we determined an increase of 34% with PHiSeg and 9% with U-net for pixel-wise AUROC. Emphasising the sensitivity level of 0.27 for 10 avg. FPs, we increased the performance of the U-net, trained with PII, threefold with our proposed self-supervision task. Sample-wise AUROC was improved the most for the JSRT dataset with 45%, whereas we observed AUROC values \(<\) 0.5 for LIDC-IDRI and, partially, for DeepLesion and JSRT. An increased number of false positives in predicting anomalous samples is reflected in the sample-wise AP for the large datasets. We show qualitative results for the prediction of U-net as a backbone model in Fig. 2. The prediction on JSRT is quantitatively better, but there are still false positive pixels in all examples, especially for the larger datasets. We compare augmentation functions for further enhancing the performance of P-PII, see Tab. 2. We compare both best performing models and obtain an increase of 1% with scaling of the input image and with combining scaling, random rotation between \(\pm 10^{\circ}\) and elastic deformation. Further improvement was achieved by scaling the input for the Dropout U-net, which resulted in enhancing image-wise AUROC by about 3%. The highest improvement is achieved through the use of augmentation functions, yielding a sensitivity improvement of 11% for U-net with combined augmentation, and 19% for Dropout U-net with scaling.
\begin{table}
\begin{tabular}{l l l c c c c c c c c c c c} \hline \hline & & & \multicolumn{3}{c}{JSRT [24]} & \multicolumn{3}{c}{DeepLesion [30]} & \multicolumn{3}{c}{LIDC-IDRI [2]} \\ & & & Pixel & Sample & & Pixel & Sample & & Pixel & Sample & \\ \cline{3-14} & \multirow{2}{*}{Model} & AUC & FC & AUC & AP & AUC & FC & AUC & AP & AUC & FC & AUC & AP \\ & & U-Net & 0.80 & 0.08 & 0.44 & 0.87 & 0.68 & 0.00 & 0.50 & 0.49 & 0.50 & 0.00 & 0.36 & 0.39 \\ & & MC U-Net & 0.76 & 0.01 & 0.55 & 0.90 & **0.74** & 0.00 & 0.53 & **0.55** & 0.59 & **0.01** & 0.40 & 0.43 \\ & & PHiSeg & 0.67 & 0.00 & 0.51 & 0.90 & 0.41 & **0.01** & 0.47 & 0.48 & 0.43 & 0.00 & **0.52** & **0.50** \\ \hline \multirow{3}{*}{
\begin{tabular}{l} PII} & U-Net & **0.90** & **0.27** & **0.64** & **0.94** & **0.74** & **0.01** & **0.56** & 0.52 & **0.69** & **0.01** & 0.33 & 0.38 \\ & & MC U-Net & **0.90** & 0.26 & **0.64** & 0.93 & 0.72 & **0.01** & 0.47 & 0.49 & 0.67 & **0.01** & 0.38 & 0.41 \\ & & PHiSeg & 0.76 & 0.06 & 0.63 & 0.93 & 0.55 & **0.01** & 0.55 & 0.51 & 0.66 & **0.01** & 0.41 & 0.44 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results; for PHiSeg, mean of 50 drawn samples from likelihood network; AUC - Area under the Receiver operating characteristic (AUROC), FC - Free-response Receiver operating characteristic (FROC) for 10 average FPs
Figure 2: Exemplary anomaly prediction on test data with U-net, input image in grey, heatmap of prediction in red, ground truth bounding box in blue.
**Discussion.** Self-supervision with P-PII enables all models to detect even very small lesions (see Fig. 2), which is still a major challenge for other anomaly localisation models, in both a supervised and a self-supervised context. We improve upon the issue of decreasing sensitivity for increasing average FPs in FROC, which we observe for the baseline method. With augmentation functions, the sensitivity of the PII-trained models increases significantly, by up to 19%. The limited quantitative performance on DeepLesion and LIDC-IDRI is likely due to the fixed number of training steps, which could be insufficient for large datasets; the foreground-background class imbalance could also influence the results for large datasets. These issues need to be approached in further studies. Considering the number of false positive predicted regions, we would require expert analysis to determine whether those regions are correlated with real aberrations in the input images. For now, we can only interpret them as visually perceived abnormal regions in the input images, _e.g._, dense regions in the lung hilum. Compared to the original PII implementation, we achieved a reduction of at least half of the training time through the use of Poisson image editing via the discrete sine transformation [3]. This allows us to sample from different source images multiple times for probabilistic representations of anomalies while still being faster than the baseline.
## 4 Conclusion
We analyse the proposed self-supervised learning method, P-PII, on three backbone models and three small- and large-scale datasets from the medical imaging domain. We explore the influence of augmentation functions for the self-supervision task and present probabilistic anomalies, which are described for the first time for applications in OOD detection. Our investigations highlight previous limitations when using Poisson image interpolation for the generation of synthetic anomalies. We improve pixel-wise AUROC by up to 18% and sample-wise AUROC by up to 45% in comparison to baseline methods. Additionally, we enhanced the pixel-wise sensitivity for 10 avg. FPs up to 38%. We also show that it is possible to learn feature distributions for normal tissue in a self-supervised way from databases that exclusively contain patients with disease (DeepLesion and LIDC-IDRI). In future work, the integration of the generated variance maps
\begin{table}
\begin{tabular}{l l l c c c c} \hline \hline & & & \multicolumn{2}{c}{AUROC} & \multicolumn{1}{c}{AP} & \multicolumn{1}{c}{FROC} \\ & Model & Augmentation & \multicolumn{1}{c}{Pixel Image} & \multicolumn{1}{c}{Image} & \multicolumn{1}{c}{10FPs} \\ & U-Net & scaling & \(\mathbf{0.91}\) & \(0.60\) & \(0.93\) & \(0.27\) \\ & MC U-Net & scaling & \(\mathbf{0.91}\) & \(\mathbf{0.66}\) & \(\mathbf{0.94}\) & \(\mathbf{0.31}\) \\ \cline{2-7} & U-Net & combined & \(\mathbf{0.91}\) & \(0.60\) & \(0.92\) & \(0.30\) \\ & MC U-Net & combined & \(\mathbf{0.91}\) & \(0.59\) & \(0.93\) & \(0.25\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sensitivity anaylsis of augmentation functions for small-scale datasets on P-PII for JSRT [24]; scaling, combined (rotation \(\pm 10^{\circ}\), elastic deformation, scaling).
into the loss function has high potential for pushing unsupervised probabilistic learning further towards integration into clinical workflows.
_Acknowledgements_: The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b143dc PatRo-MRI. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) - 440719683.
|
2308.00897
|
Higgs Inflation with a Gauss-Bonnet term
|
Higgs inflation with a Gauss-Bonnet term is studied in the Einstein frame.
Our model features two coupling functions, $\Omega^2(\phi)$ and $\omega(\phi)$,
coupled to the Ricci scalar and Gauss-Bonnet combinations. We found a special
relation $\Omega^2 \propto \omega$ sets the system a lot more simplified;
therefore, we take it for granted in our analytical studies. As a result of a
Weyl transformation to the Einstein frame, we notice the emergence of new
interactions: a non-minimal kinetic coupling between the scalar field and
gravity and a derivative self-interaction of the scalar field. In the Einstein
frame, we investigate the cosmological implications of these interactions by
deriving the background equation of motion and observable quantities. Our
numerical result on $n_S$ vs. $r$ suggests our model is consistent with the
observational data for a wide range of the model parameter, $-1.4\times
10^4\lesssim \alpha \equiv \frac{\omega}{\Omega^2} \lesssim 8\times 10^3$,
where both the positive and negative values of $\alpha$ are allowed. As the
Gauss-Bonnet contributions decay away with time after inflation, the
propagation speed of gravitational waves turned out to be consistent with the
recent constraints on the propagation speed of gravitational waves (GWs)
without inducing ghost instability.
|
Seoktae Koh, Seong Chan Park, Gansukh Tumurtushaa
|
2023-08-02T01:26:17Z
|
http://arxiv.org/abs/2308.00897v3
|
# Higgs Inflation with a Gauss-Bonnet term
###### Abstract
Higgs inflation with a Gauss-Bonnet term is studied in the Einstein frame. Our model features two coupling functions, \(\Omega^{2}(\phi)\) and \(\omega(\phi)\), coupled to the Ricci scalar and Gauss-Bonnet combinations. We found that a special relation \(\Omega^{2}\propto\omega\) simplifies the system considerably; therefore we take it for granted in our analytical studies. As a result of a Weyl transformation to the Einstein frame, we notice the emergence of new interactions: a non-minimal kinetic coupling between the scalar field and gravity and a derivative self-interaction of the scalar field. In the Einstein frame, we investigate the cosmological implications of these interactions by deriving the background equation of motion and observable quantities. Our numerical result on \(n_{S}\) vs. \(r\) suggests our model is consistent with the observational data for a wide range of the model parameter, \(-1.4\times 10^{4}\lesssim\alpha\equiv\frac{\omega}{\Omega^{2}}\lesssim 8 \times 10^{3}\), where both the positive and negative values of \(\alpha\) are allowed. As the Gauss-Bonnet contributions decay away with time after inflation, the propagation speed of gravitational waves turned out to be consistent with the recent constraints on the propagation speed of gravitational waves (GWs) without inducing ghost instability.
###### Contents
* 1 Introduction
* 2 Setup and Conformal transformation
* 3 Higgs inflation with a Gauss-Bonnet term in the Einstein frame
* 4 Conclusion
* A Constant coupling to the Gauss-Bonnet term
* B Power-law coupling to Gauss-Bonnet term
## 1 Introduction
Cosmic inflation, an idea of accelerated exponential expansion of the early universe, is a successful paradigm that not only solved the flatness and horizon problems but also made definite predictions for primordial cosmological perturbations that observations can directly test; see Ref. [1] for review. However, there is no conclusive solution to the problem of how to embed inflation into a particle physics framework. The most common approach for embedding inflation into the particle physics framework is to couple the gravity sector to a scalar field, such as the Higgs field. Driven by the Higgs field \(\phi\), which is non-minimally coupled to gravity, Higgs inflation is a minimal model of inflation without introducing additional scalar degrees of freedom to those appearing in the Standard Model (SM) of particle physics [2; 3; 4; 5; 6; 7; 8]. This model agrees with data from Cosmic Microwave Background (CMB) experiments on the bounds of the scalar spectral index \(n_{S}\) and the tensor-to-scalar ratio \(r\)[9; 10; 11; 12; 13]. What makes Higgs inflation consistent with the observational data is the non-minimal coupling function between the Higgs field and the gravitational sector, which flattens the potential in the Einstein frame in the large-field regime, allowing the slow-roll conditions for inflation to be realized [14; 15; 16]. 1 As a result, at first order in slow-roll approximation, the Higgs inflation model predicts a \(n_{S}\) value consistent with data and a \(r\) value to be comfortably below the experimental limits [17; 18]; see [19] for the recent review.
Footnote 1: See [15; 16] for the non-minimal coupling of assistant field(s).
While the non-minimal coupling between gravity and the Higgs field is well-motivated by consideration of the renormalization of a scalar field in curved space, it is reasonable to expect additional interactions to be present. From the effective field theory viewpoint, the \(R^{2}\) term [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31], and especially the Gauss-Bonnet combination \(R_{GB}^{2}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\), are expected to arise [32]. Higher curvature terms, i.e., \(R^{2+p}\) terms (of mass dimension \(4+2p\)), may also arise, but they are expected to be suppressed [33; 34]. The Gauss-Bonnet term, in isolation, is purely topological and therefore does not impact the dynamics of inflation. However, it can introduce intriguing phenomenological effects when coupled with the inflaton field. Therefore, in the present work, we are motivated to study inflation in the context of a scalar field non-minimally coupled to the Ricci scalar and the Gauss-Bonnet combination in the action. Such motivations for adding the Gauss-Bonnet term are also complemented by the string theory perspective, where particular couplings between the Gauss-Bonnet term
and scalar fields have been found [35; 36]. We note that many authors have studied phenomenological aspects of the Gauss-Bonnet combination, including cosmic inflation [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53], primordial black holes [54; 55], gravitational-wave leptogenesis [56], dark energy [57; 58; 59; 60; 61; 62; 63; 64], blackholes [65; 66], and wormholes [67; 68], in the Einstein frame version of a theory, where a generic function of a scalar field coupled to the Gauss-Bonnet combination is often considered in addition to the Einstein-Hilbert term.
The paper is organized as follows. Section 2 begins with our setup formulated in the Jordan frame, where we have a scalar (or Higgs) field coupled to the Ricci scalar and the Gauss-Bonnet combination. At the end of the section, we obtain the Einstein frame action using the so-called conformal transformation. From the Einstein frame action, we derive the background equations of motion and the observable quantities in Section 3 following Ref. [69]. In the same section, we provide our numerical results and discuss the consequent findings of our work. Finally, we conclude our work in Section 4.
## 2 Setup and Conformal transformation
Let us begin with an action given in the Jordan frame as
\[S^{J}=\int d^{4}x\sqrt{-g^{J}}\left[\frac{M_{p}^{2}}{2}\Omega^{2}(\phi)R^{J}- \frac{1}{2}g_{ab}^{J}\nabla^{a}\phi\nabla^{b}\phi-V(\phi)+\omega(\phi)R_{GB}^ {2^{J}}\right]\,. \tag{1}\]
Here, the superscript \(J\) denotes quantities in the Jordan frame, where the scalar field \(\phi\) and the Ricci scalar \(R\) are coupled through \(\Omega(\phi)\), which is also known as the non-minimal coupling function, \(M_{p}\) is the reduced Planck mass, and \(V(\phi)\) is the scalar field potential. The \(\omega(\phi)\) is the coupling function between the \(\phi\) and the Gauss-Bonnet combination, \(R_{GB}^{2}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\). Many properties of the physically interesting quantities become more apparent and easier to present in the Einstein frame. Thus, using the so-called Weyl transformation, a local conformal transformation, one moves from the Jordan frame to the Einstein frame, where gravity is minimally coupled to the scalar field. The spacetime metric and the square root of its determinants change under the conformal transformation as
\[g_{ab}^{J}=\Omega^{-2}g_{ab}\,,\quad\sqrt{-g^{J}}=\Omega^{-4} \sqrt{-g}, \tag{2}\]
where the metric \(g_{ab}\) without the superscript \(J\) represents the metric in the Einstein frame.
The first three terms in Eq. (1) are well known in the context of Higgs inflation and, in the Einstein frame, they can be written as [2; 3; 4; 6; 7]
\[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{P}^{2}}{2}R-\frac{1}{2}g_{ ab}\nabla^{a}s\nabla^{b}s-V(s)\right]\,, \tag{3}\]
where
\[\frac{s}{M_{p}}\equiv\sqrt{\frac{3}{2}}\ln\Omega^{2}(\phi(s))\,, \quad V(s)=\frac{V(\phi(s))}{\Omega^{4}(\phi(s))}\,. \tag{4}\]
It is, therefore, interesting to see how the last term in Eq. (1) changes under the conformal transformation and to investigate what consequent dynamics would be apparent in the Einstein frame that otherwise does not come into sight in the Jordan frame. Thus, let us now focus on the last part of the action,
\[S_{GB}^{J}=\int d^{4}x\sqrt{-g^{J}}\omega(\phi)R_{GB}^{2^{J}}\,, \tag{5}\]
and transform it into the Einstein frame. The Gauss-Bonnet combination changes under the conformal transformation as [70]
\[R_{GB}^{2J} = \Omega^{4}\left[R_{GB}^{2}-8\Omega^{-1}G_{ab}\nabla^{a}\nabla^{b} \Omega-4R\Omega^{-2}\nabla_{a}\Omega\nabla^{a}\Omega+8\Omega^{-2}\left(\nabla_{ a}\nabla^{a}\Omega\nabla_{b}\nabla^{b}\Omega\right.\right. \tag{6}\] \[\left.\left.-\nabla_{b}\nabla_{a}\Omega\nabla^{b}\nabla^{a}\Omega \right)-24\Omega^{-3}\nabla_{a}\Omega\nabla^{a}\Omega\nabla_{b}\nabla^{b} \Omega+24\Omega^{-4}\left(\nabla_{a}\Omega\nabla^{a}\Omega\right)^{2}\right]\,,\]
where \(G_{ab}\equiv R_{ab}-g_{ab}R/2\) is the Einstein tensor. Substituting Eq. (6) into Eq. (5) and using Eq. (2), we obtain the action in the Einstein frame as
\[S_{GB} = \int d^{4}x\sqrt{-g}\,\omega(\phi)\left[R_{GB}^{2}-8\Omega^{-1}G _{ab}\nabla^{a}\nabla^{b}\Omega-4R\Omega^{-2}\nabla_{a}\Omega\nabla^{a}\Omega +8\Omega^{-2}\left(\nabla_{a}\nabla^{a}\Omega\nabla_{b}\nabla^{b}\Omega\right.\right. \tag{7}\] \[\left.\left.-\nabla_{b}\nabla_{a}\Omega\nabla^{b}\nabla^{a} \Omega\right)-24\Omega^{-3}\nabla_{a}\Omega\nabla^{a}\Omega\nabla_{b}\nabla^ {b}\Omega+24\Omega^{-4}\left(\nabla_{a}\Omega\nabla^{a}\Omega\right)^{2} \right]\,.\]
The coupling function \(\omega(\phi)\) can generally be either a constant or a generic function of the scalar field. In Appendix A, we show that, if \(\omega=\text{const.}\), no appreciable effect arises from the Gauss-Bonnet term in either frame. Thus, from now on, we regard the Gauss-Bonnet coupling as a function of the scalar field.
With the use of the integration by parts, the second and the third terms in Eq. (7) can be simplified as
\[-8\int d^{4}x\sqrt{-g}\,\omega\left[\Omega^{-1}G_{ab}\nabla^{a} \nabla^{b}\Omega+\frac{1}{2}g_{ab}R\Omega^{-2}\nabla^{a}\Omega\nabla^{b}\Omega\right]\] \[\qquad=-8\int d^{4}x\sqrt{-g}\left[\omega\Omega^{-2}R_{ab}\nabla^ {a}\Omega\nabla^{b}\Omega-\Omega^{-1}G_{ab}\nabla^{a}\omega\nabla^{b}\Omega \right]\,. \tag{8}\]
The fourth term in Eq. (7) becomes
\[8\int d^{4}x\sqrt{-g}\omega\Omega^{-2}\left(\nabla_{a}\nabla^{a} \Omega\nabla_{b}\nabla^{b}\Omega-\nabla_{b}\nabla_{a}\Omega\nabla^{b}\nabla^{ a}\Omega\right)=8\int d^{4}x\sqrt{-g}\left[\omega\Omega^{-2}R_{ab}\nabla^{a} \Omega\nabla^{b}\Omega\right.\] \[\qquad\left.-\omega\Omega^{-2}\left(\omega^{-1}\nabla_{a}\omega-2 \Omega^{-1}\nabla_{a}\Omega\right)\left(\nabla^{a}\Omega\nabla_{b}\nabla^{b} \Omega-\nabla_{b}\Omega\nabla^{a}\nabla^{b}\Omega\right)\right]\,, \tag{9}\]
where Eq. (10) is used. Consequently, Eq. (7) can be rewritten as
\[S_{GB} = \int d^{4}x\sqrt{-g}\left[\omega R_{GB}^{2}+8\Omega^{-1}G_{ab} \nabla^{a}\omega\nabla^{b}\Omega-8\omega\Omega^{-2}\left(\omega^{-1}\nabla_{a }\omega-2\Omega^{-1}\nabla_{a}\Omega\right)\right.\] \[\left.\times\left(\nabla^{a}\Omega\nabla_{b}\nabla^{b}\Omega- \nabla_{b}\Omega\nabla^{a}\nabla^{b}\Omega\right)-24\omega\Omega^{-3}\nabla_{a }\Omega\nabla^{a}\Omega\nabla_{b}\nabla^{b}\Omega+24\omega\Omega^{-4}\left( \nabla_{a}\Omega\nabla^{a}\Omega\right)^{2}\right]\,,\]
where the first term in Eq. (8) is canceled with that of Eq. (9). It is interesting to note that the third term in Eq. (10) vanishes when the two non-minimal couplings are proportional to each other, maintaining the following relation:
\[\omega=\alpha\Omega^{2}\,, \tag{11}\]
where \(\alpha\in\mathbb{R}\). Although the coupling functions \(\Omega^{2}(\phi)\) and \(\omega(\phi)\) have the flexibility to be arbitrary functions of a scalar field, we would assume Eq. (11) as a part of our model.
For further elaboration, including the form of action and the case of an arbitrary power relationship \(\omega\propto\Omega^{p}\), please refer to Appendix B. Now the action is greatly simplified as
\[S_{GB}=\int d^{4}x\sqrt{-g}\,\alpha\Omega^{2}\left[R_{GB}^{2}+4G_ {ab}\nabla^{a}\ln\Omega^{2}\nabla^{b}\ln\Omega^{2}-3\nabla^{b}\nabla_{b}\ln \Omega^{2}\nabla_{a}\ln\Omega^{2}\nabla^{a}\ln\Omega^{2}\right]\,. \tag{12}\]
In terms of the scalar field \(s\) defined in Eq. (4), the action Eq. (12) reads
\[S_{GB}=\int d^{4}x\sqrt{-g}\,\alpha e^{\sqrt{\frac{2}{3}}\frac{s}{M_{p}}} \left[R_{GB}^{2}+\frac{8}{3M_{p}^{2}}G_{ab}\nabla^{a}s\nabla^{b}s-\frac{1}{M_ {p}^{3}}\sqrt{\frac{8}{3}}\nabla^{b}\nabla_{b}s\nabla_{a}s\nabla^{a}s\right]\,. \tag{13}\]
Combining Eq. (13) with Eq. (3), we can write the full action in the Einstein frame as
\[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{p}^{2}}{2}R-\frac{1}{2}g_{ ab}\nabla^{a}s\nabla^{b}s-V(s)\right.\] \[\qquad\left.-\frac{1}{2}\xi(s)\left(c_{1}R_{GB}^{2}+\frac{c_{2}} {M_{p}^{2}}G_{ab}\nabla^{a}s\nabla^{b}s+\frac{c_{3}}{M_{p}^{3}}\nabla_{a}s \nabla^{a}s\nabla^{b}\nabla_{b}s\right)\right]\,, \tag{14}\]
where
\[\xi(s)\equiv-2\alpha e^{\sqrt{\frac{2}{3}}\frac{s}{M_{p}}}\,, \quad c_{1}=1\,,\quad c_{2}=\frac{8}{3}\,,\quad c_{3}=-\sqrt{\frac{8}{3}}\,.\]
Note that the scalar-field potential has not been specified yet. Thus, Eq. (14) applies to any potential and the coupling functions that adhere to the relationship given in Eq. (11). The Einstein frame potential \(V(s)\) can be determined through Eq. (4). For instance, consider the non-minimal coupling function and the Jordan frame potential:
\[\Omega^{2}=1+\frac{\sigma}{M_{p}^{2}}\phi^{2}\quad\text{and}\quad V(\phi)= \frac{\lambda}{4}\phi^{4}\,,\]
then, from Eq. (4), the Einstein frame potential reads:
\[V(s)=\frac{\lambda M_{p}^{4}}{4\sigma^{2}}\left(1-e^{-\sqrt{\frac{2}{3}}\frac{ s}{M_{p}}}\right)^{2}\,, \tag{15}\]
where \(\sigma\) and \(\lambda\) are the coupling constant and the potential parameter, respectively [2; 3; 4; 6; 7]. In addition to the expected Gauss-Bonnet term, Eq. (14) presents new interactions, namely a kinetic coupling between the scalar field and gravity and a derivative self-interaction of the scalar field, that were not apparent in the Jordan frame. Such interactions are discussed in Refs. [71; 72; 73] as a particular subclass of Horndeski's theory [74], the most general scalar-tensor theory of gravity, or equivalently the generalized Galileons [75]. Moreover, multiplied by \(\xi(s)\), the last term in Eq. (14) is discussed in Ref. [69] as a string correction in the Einstein frame. The general relativity (GR) limit is reached as \(\alpha\to 0\), in which case the inflationary dynamics of our model would be dictated mainly by the potential shape, \(V(s)\). Thus, in this work, \(\alpha\) does not necessarily need to be small and may have either a negative or positive sign; the determination of the sign should be based on the observational data. From now on, we will refer to the last term of Eq. (14) as the Gauss-Bonnet combinations.
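For completeness, Eq. (15) can be checked in one line from Eq. (4): inverting the field redefinition gives \(\Omega^{2}=e^{\sqrt{2/3}\,s/M_{p}}\) and \(\sigma\phi^{2}/M_{p}^{2}=\Omega^{2}-1\), so that
\[V(s)=\frac{V(\phi(s))}{\Omega^{4}}=\frac{\lambda\,\phi^{4}}{4\,\Omega^{4}}=\frac{\lambda M_{p}^{4}}{4\sigma^{2}}\left(\frac{\Omega^{2}-1}{\Omega^{2}}\right)^{2}=\frac{\lambda M_{p}^{4}}{4\sigma^{2}}\left(1-e^{-\sqrt{\frac{2}{3}}\frac{s}{M_{p}}}\right)^{2}\,,\]
which is the plateau-type potential quoted above.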
## 3 Higgs inflation with a Gauss-Bonnet term in the Einstein frame
In this section, we investigate Higgs inflation with the Gauss-Bonnet combination in the Einstein frame with potential presented in Eq. (15). From Eq. (14), we derive gravitational and field equations of motion as [69]
\[G_{ab}=\nabla_{a}s\nabla_{b}s-\frac{1}{2}g_{ab}\left(\nabla_{c}s \nabla^{c}s-2V\right)-\frac{1}{2}T^{GB}_{ab}\,, \tag{23a}\] \[\nabla_{a}\nabla^{a}s-V_{,s}=\frac{1}{2}T^{GB}\,, \tag{23b}\]
where
\[T^{GB}_{ab} =c_{1}\left[4g_{ab}\left(2\nabla_{c}\nabla_{d}\xi R^{cd}-\nabla_ {c}\nabla^{c}\xi R\right)\right.\] \[\qquad\left.-4\left(2\nabla^{c}\nabla^{d}\xi R_{acbd}-2\nabla_{c} \nabla^{c}R_{ab}+4\nabla_{c}\nabla_{(b}\xi R^{c}_{a)}-\nabla_{a}\nabla_{b}\xi R \right)\right]\] \[\quad+\frac{c_{2}}{M_{p}^{2}}\left[\xi(R_{ab}\nabla^{c}s\nabla_{ c}s+R\nabla_{a}s\nabla_{b}s-4R^{c}_{(a}\nabla_{b)}s\nabla_{c}s)-\nabla_{c} \nabla^{c}\left(\xi\nabla_{a}s\nabla_{b}s\right)\right.\] \[\qquad\left.-\nabla_{a}\nabla_{b}\left(\xi\nabla^{c}s\nabla_{c}s \right)+2\nabla_{c}\nabla_{(b}(\xi\nabla^{c}s\nabla_{a)}s\right)\right.\] \[\qquad\left.+g_{ab}\left(\xi G^{cd}\nabla_{c}s\nabla_{d}s-\nabla_ {d}\nabla_{c}(\xi\nabla^{c}s\nabla^{d}s)+\nabla_{d}\nabla^{d}(\xi\nabla_{c}s \nabla^{c}s)\right)\right]\] \[\quad+\frac{c_{3}}{M_{p}^{3}}\left[2\nabla_{(a}(\xi\nabla^{c}s \nabla_{c}s)\nabla_{b)}s-2\xi\nabla^{c}\nabla_{c}s\nabla_{a}s\nabla_{b}s-g_{ ab}\nabla_{d}(\xi\nabla^{c}s\nabla_{c}s)\nabla^{d}s\right]\,,\] \[T^{GB} =c_{1}\xi_{,s}R^{2}_{GB}-\frac{c_{2}}{M_{p}^{2}}G^{ab}\left(\xi_{,s}\nabla_{a}s\nabla_{b}s+2\xi\nabla_{a}\nabla_{b}s\right)\] \[\quad+\frac{c_{3}}{M_{p}^{3}}\left[\xi_{,s}\nabla_{b}\nabla^{b}s \nabla^{a}s\nabla_{a}s+\nabla_{b}\nabla^{b}\left(\xi\nabla_{a}s\nabla^{a}s \right)-2\nabla_{a}\left(\xi\nabla_{b}\nabla^{b}s\nabla^{a}s\right)\right]\,.\]
with "\(V_{,s}=\partial V/\partial s\)" and "\(\xi_{,s}=\partial\xi/\partial s\)." In the spatially flat FRW universe with metric
\[ds^{2}=-dt^{2}+a(t)^{2}\delta_{ij}dx^{i}dx^{j}\,,\]
where \(a(t)\) is the scale factor, the background equations of motion follow from Eq. (23) [69]
\[3M_{p}^{2}H^{2} =\frac{1}{2}\dot{s}^{2}+V+12c_{1}\dot{\xi}H^{3}-\frac{9}{2}\frac{ c_{2}}{M_{p}^{2}}\xi\dot{s}^{2}H^{2}+\frac{1}{2}\frac{c_{3}}{M_{p}^{3}}(\dot{ \xi}-6\xi H)\dot{s}^{3}\,, \tag{24a}\] \[M_{p}^{2}(2\dot{H}+3H^{2}) =-\frac{1}{2}\dot{s}^{2}+V+4c_{1}\left[\ddot{\xi}H^{2}+2\dot{\xi} H(\dot{H}+H^{2})\right]\] \[\quad-\frac{1}{2}\frac{c_{2}}{M_{p}^{2}}\dot{s}\left[\xi\dot{s}(2 \dot{H}+3H^{2})+4\xi\ddot{s}H+2\dot{\xi}\dot{s}H\right]-\frac{1}{2}\frac{c_{3} }{M_{p}^{3}}\dot{s}^{2}(2\xi\ddot{s}+\dot{\xi}\dot{s})\,,\] (24b) \[\ddot{s}+3H\dot{s}+V_{,s} =-12c_{1}\xi_{,s}H^{2}(\dot{H}+H^{2})+\frac{3}{2}\frac{c_{2}}{M_{p} ^{2}}\left[H^{2}(\dot{\xi}\dot{s}+2\xi\ddot{s})+2H\xi\dot{s}(2\dot{H}+3H^{2})\right]\] \[\quad-\frac{1}{2}\frac{c_{3}}{M_{p}^{3}}\dot{s}\left[\ddot{\xi} \dot{s}+3\dot{\xi}\ddot{s}-6\xi(\dot{H}\dot{s}+2H\ddot{s}+3H^{2}\dot{s})\right]\,, \tag{24c}\]
where \(H\equiv\dot{a}/a\) is the Hubble parameter and the over-dot denotes the derivative with respect to time \(t\). The imprints of the Gauss-Bonnet contributions in the Einstein frame can be easily identified by examining the equation of motion for the presence of the \(\xi(s)\) function. Thus, the terms containing \(\xi(s)\) are clear indicators of the Gauss-Bonnet contributions in the Einstein frame and should not be overlooked.
In the context of slow-roll inflation, it is often assumed that the acceleration of the scalar field is negligible with respect to the gravitational friction, \(\ddot{s}\ll 3H\dot{s}\), and the potential energy dominates over the kinetic energy, \(V\gg\dot{s}^{2}/2\); together they are known as the slow-roll approximations. Thus, in light of the slow-roll approximations, the above equations can be simplified even further as
\[3M_{p}^{2}H^{2}\simeq V\,, \tag{19}\] \[3H\dot{s}\simeq-\frac{\mathcal{B}\mp\sqrt{\mathcal{B}^{2}-4 \mathcal{A}\mathcal{C}}}{2\mathcal{A}}\,, \tag{20}\]
where
\[\mathcal{A}\equiv\frac{c_{3}}{M_{p}^{3}}\xi\,,\quad\mathcal{B} \equiv 1-\frac{3c_{2}}{M_{p}^{2}}\xi H^{2}\simeq 1-\frac{c_{2}}{M_{p}^{4}}\xi V \,,\quad\mathcal{C}\equiv V_{,s}+12c_{1}\xi_{,s}H^{4}\simeq V_{,s}+\frac{4c_{1 }}{3M_{p}^{4}}\xi_{,s}V^{2}\,.\]
In obtaining Eqs. (19) and (20), we assumed \(\dot{\xi}/(2\xi H)\ll 1\) with \(\xi(s)\neq 0\). Without any loss of generality, one can rewrite Eq. (20) as
\[3H\dot{s}\simeq-V_{,s}\left[1+\delta(s)\right]\,, \tag{21}\]
where
\[\delta(s)\equiv\frac{\mathcal{B}\mp\sqrt{\mathcal{B}^{2}-4 \mathcal{A}\mathcal{C}}}{2\mathcal{A}V_{,s}}-1\,.\]
From Eq. (21), one can also get
\[\frac{\dot{s}}{H}\simeq-M_{p}^{2}\frac{V_{,s}}{V}(1+\delta)\,, \tag{22}\]
which is an important quantity to estimate the duration of inflation. The duration of inflation is measured by the so-called number \(N\) of the \(e\)-folds, which is defined as
\[N\equiv\int_{t_{i}}^{t_{e}}Hdt=\int_{s_{i}}^{s_{e}}\frac{H}{\dot{ s}}ds\,. \tag{23}\]
where the variables \(t_{i}\) and \(t_{e}\) represent the initial and end times of inflation, while \(s_{i}\) and \(s_{e}\) refer to the beginning and ending scalar field values of inflation, respectively. It is evident from Eqs. (22) and (23) that the contributions of the Gauss-Bonnet in the Einstein frame impact the number of \(e\)-folds.
Furthermore, to reflect the aforementioned slow-roll approximations, it is useful to introduce the following so-called slow-roll parameters
\[\epsilon_{1}\equiv\frac{\dot{H}}{H^{2}}\simeq-\epsilon_{V}(1+ \delta)\,,\quad\epsilon_{2}\equiv\frac{\ddot{s}}{H\dot{s}}\simeq\left[ \epsilon_{V}-\eta_{V}-\sqrt{2\epsilon_{V}}M_{p}\ln(1+\delta)_{,s}\right](1+ \delta)\,, \tag{24}\]
where
\[\epsilon_{V}\equiv\frac{M_{p}^{2}}{2}\left(\frac{V_{,s}}{V}\right)^{2}\,, \quad\eta_{V}\equiv M_{p}^{2}\frac{V_{,ss}}{V}\,.\]
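As a point of reference, the following is a minimal numerical sketch (our own illustrative script, with \(M_{p}=1\) and function names of our choosing) of the GR-limit predictions, \(\alpha\to 0\) and \(\delta=0\), for the potential of Eq. (15). In this limit \(n_{S}-1=2(2\epsilon_{1}-\epsilon_{2}-\epsilon_{3})\) and \(r=16|\epsilon_{1}|\) reduce to the standard expressions \(n_{S}\simeq 1-6\epsilon_{V}+2\eta_{V}\) and \(r\simeq 16\epsilon_{V}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# GR limit (alpha -> 0, delta = 0) of the slow-roll analysis for the
# potential of Eq. (15). Units: M_p = 1; the overall scale lambda/sigma^2
# drops out of eps_V and eta_V.
a = np.sqrt(2.0 / 3.0)
V     = lambda s: (1.0 - np.exp(-a * s))**2
V_s   = lambda s: 2.0 * a * np.exp(-a * s) * (1.0 - np.exp(-a * s))
V_ss  = lambda s: 2.0 * a**2 * np.exp(-a * s) * (2.0 * np.exp(-a * s) - 1.0)
eps_V = lambda s: 0.5 * (V_s(s) / V(s))**2
eta_V = lambda s: V_ss(s) / V(s)

# End of inflation from |eps_1| ~ eps_V = 1, then N(s) = int ds / sqrt(2 eps_V).
s_end = brentq(lambda s: eps_V(s) - 1.0, 0.1, 5.0)
N_of  = lambda s: quad(lambda x: 1.0 / np.sqrt(2.0 * eps_V(x)), s_end, s)[0]

for N_star in (50, 60):
    s_star = brentq(lambda s: N_of(s) - N_star, s_end, 20.0)
    n_s = 1.0 - 6.0 * eps_V(s_star) + 2.0 * eta_V(s_star)  # standard slow-roll n_S
    rr  = 16.0 * eps_V(s_star)                              # tensor-to-scalar ratio
    print(f"N*={N_star}:  s*={s_star:.2f} M_p,  n_s={n_s:.4f},  r={rr:.4f}")
```

For \(N_{*}=50\)–\(60\) this returns the familiar plateau-type values \(n_{S}\approx 0.96\)–\(0.97\) and \(r\approx 0.003\)–\(0.005\), against which the \(\alpha\neq 0\) results below can be compared.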
Following Ref. [69], we also introduce the following additional slow-roll parameters to take the effects of the Gauss-Bonnet combination into account
\[\epsilon_{3}\equiv\frac{\dot{E}}{2EH}=\frac{E_{,s}}{2E}\frac{\dot{s}}{H}\,, \quad\epsilon_{4}\equiv\frac{Q_{a}}{4M_{p}^{2}HQ_{t}}\,,\quad\epsilon_{5}\equiv \frac{\dot{Q}_{t}}{2Q_{t}H}=\frac{Q_{t,s}}{2Q_{t}}\frac{\dot{s}}{H}\,, \tag{11}\]
where
\[E\equiv\frac{1}{\dot{s}^{2}}\left(\dot{s}^{2}+\frac{3Q_{a}^{2}}{2M_{p}^{2}Q_{t }}+Q_{c}\right)\simeq 1-\frac{c_{2}}{M_{p}^{4}}\xi V\left(1-\frac{2c_{3}}{c_{2}} \sqrt{2\epsilon_{V}}\right)\,,\]
with
\[Q_{a}\equiv-4c_{1}\dot{\xi}H^{2}+\frac{2c_{2}}{M_{p}^{2}}\xi\dot{ s}^{2}H+\frac{c_{3}}{M_{p}^{3}}\xi\dot{s}^{3}\,,\qquad Q_{b}\equiv-8c_{1}\dot{ \xi}H+\frac{c_{2}}{M_{p}^{2}}\xi\dot{s}^{2}\,,\] \[Q_{c}\equiv-\frac{3c_{2}}{M_{p}^{2}}\xi\dot{s}^{2}H^{2}+\frac{2c _{3}}{M_{p}^{3}}\dot{s}^{3}(\dot{\xi}-3\xi H)\,,\qquad Q_{t}\equiv 1+\frac{Q_{b}}{2M_ {p}^{2}}\,.\]
For inflation to occur successfully, these slow-roll parameters must be smaller than unity, \(|\epsilon_{1,2,3,4,5}|\ll 1\), during inflation. In terms of the slow-roll parameters, Eq. (10) reads
\[N=\frac{1}{M_{p}}\int_{s_{e}}^{s_{i}}\frac{ds}{\sqrt{2\epsilon_{ V}}(1+\delta)}\,,\]
where the field value \(s_{e}\) at the end of inflation can be estimated from the condition \(\epsilon_{1}(s_{e})\equiv 1\). One can notice that the \(s_{e}\) is also affected by the Gauss-Bonnet contributions. Following the linear perturbation analyses carried out in Ref. [69], we obtain the spectral indices for scalar and tensor fluctuation modes [69]
\[n_{S}-1=2(2\epsilon_{1}-\epsilon_{2}-\epsilon_{3})\,,\quad n_{T}=2(\epsilon_{1 }-\epsilon_{5})\,, \tag{12}\]
and the tensor-to-scalar ratio
\[r =16\left|\frac{1}{Q_{t}}\left(\frac{c_{A}}{c_{T}}\right)^{3} \left(\epsilon_{1}-\frac{1}{4M_{p}^{2}H^{2}}\left(2Q_{c}+Q_{d}-HQ_{e}+H^{2}Q_ {f}\right)\right)\right|\] \[\simeq 16\left|\epsilon_{1}-\frac{1}{4M_{p}^{2}H^{2}}\left(2Q_{c}+Q _{d}-HQ_{e}+H^{2}Q_{f}\right)\right|\,. \tag{13}\]
Here, the squared propagation speeds of the scalar and tensor perturbation modes are given [69; 76] by
\[c_{A}^{2}\equiv 1+\frac{Q_{d}+\frac{Q_{a}}{2M_{p}^{2}Q_{t}}Q_{e}+ \left(\frac{Q_{a}}{2M_{p}^{2}Q_{t}}\right)^{2}Q_{f}}{\dot{s}^{2}+\frac{3Q_{a} ^{2}}{2M_{p}^{2}Q_{t}}+Q_{c}}\,,\quad c_{T}^{2}\equiv 1-\frac{Q_{f}}{2M_{p}^{2}Q_{t}}\,, \tag{14}\]
where
\[Q_{d} \equiv-\frac{2c_{2}}{M_{p}^{2}}\xi\dot{s}^{2}\dot{H}-\frac{2c_{3 }}{M_{p}^{3}}\dot{s}^{2}(\dot{\xi}\dot{s}+\xi\ddot{s}-\xi\dot{s}H)\,,\] \[Q_{e} \equiv-16c_{1}\dot{\xi}\dot{H}+\frac{2c_{2}}{M_{p}^{2}}\dot{s}( \dot{\xi}\dot{s}+2\xi\ddot{s}-2\xi\dot{s}H)-\frac{4c_{3}}{M_{p}^{3}}\xi\dot{s} ^{3}\,,\] \[Q_{f} \equiv 8c_{1}(\ddot{\xi}-\dot{\xi}H)+\frac{2c_{2}}{M_{p}^{2}}\xi \dot{s}^{2}\,.\]
In the GR limit, where \(\alpha\to 0\), the \(Q_{a,b,c,d,e,f}\) quantities become zero, while \(Q_{t}\) becomes unity. Consequently, \(\{c_{A},c_{T}\}\to 1\) and the canonical case is restored. When \(\alpha\neq 0\), on the other hand, the propagation speeds deviate from unity. However, if \(c_{A}\) is either negative (\(c_{A}<0\)) or superluminal (\(c_{A}>1\)), one must worry about ghost instabilities [69, 76]. We perform full numerical analyses of the values of \(c_{A}\) and \(c_{T}\) for our model later in this section. Now that we have the key observable quantities, we will conduct numerical analyses in the following using Eqs. (20) and (21) and put constraints on the model parameters. In general, we have three free parameters: \(\alpha\), \(\lambda\), and \(\sigma\). However, if we adopt the Planck normalization [77] for \(\lambda/\sigma^{2}\sim\mathcal{O}(10^{-9})\) in our numerical study, our model effectively becomes a one-parameter model. With this adaptation, in the absence of the Gauss-Bonnet contributions, i.e., \(\alpha=0\), we recover the well-known results of Higgs inflation in the Einstein frame.
Figure 1 presents the theoretical predictions of our model in the \(n_{S}\) vs. \(r\) plane, along with the observational data. The background dark- and light-blue contours represent the \(1\sigma\) and \(2\sigma\) confidence level (C.L.) of the _Planck TT, TE, EE+lowE+lensing+BK15+BAO_ data, respectively. At the same time, the blue, black, and red lines show theoretical predictions of our model for \(\alpha=8\times 10^{3}\), \(\alpha=0\), and \(\alpha=-1.4\times 10^{4}\), respectively. The orange squares and disks denote the \(N_{*}=50\) and \(N_{*}=60\) \(e\)-folds, respectively.\({}^{2}\) The solid black line in the figure indicates the absence of the Gauss-Bonnet contributions (\(\alpha=0\)), and in this case, we recover theoretical predictions of Higgs inflation in the GR case.
Footnote 2: To plot Figure 1, we first solve the background equations of motion Eq. (19) for \(\{s,\dot{s},H\}\), with the initial conditions \(\{s_{0},\dot{s}_{0},H_{0}\}=\{5.6M_{p},0,\sqrt{V(s_{0})/(3M_{p}^{2})}\}\), and use the result in Eqs. (20) and (21). We use the number of \(e\)-folds, which is related to time \(t\) via \(dN=Hdt\), as a time parameter, and the duration of inflation is counted from the end of inflation.
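The following is a minimal sketch of the numerical procedure described in footnote 2 (our own illustrative script; for transparency it is written in the GR limit \(\alpha\to 0\) of the background equations, so all \(\xi(s)\)-dependent Gauss-Bonnet terms are dropped, and the variable names, tolerances, and the cutoff \(N=200\) are our choices). It uses \(N\) as the time parameter and the initial data \(s_{0}=5.6\,M_{p}\), \(\dot{s}_{0}=0\) of footnote 2:

```python
import numpy as np
from scipy.integrate import solve_ivp

# GR limit of the background dynamics, with N = ln a as time. Units: M_p = 1.
a = np.sqrt(2.0 / 3.0)
V   = lambda s: (1.0 - np.exp(-a * s))**2
V_s = lambda s: 2.0 * a * np.exp(-a * s) * (1.0 - np.exp(-a * s))

def rhs(N, y):
    s, sp = y                       # sp = ds/dN
    # field equation in e-folds: s'' = -(3 - s'^2/2) s' - (3 - s'^2/2) V_s/V
    fac = 3.0 - 0.5 * sp**2
    return [sp, -fac * sp - fac * V_s(s) / V(s)]

def end_of_inflation(N, y):         # eps_1 = -s'^2/2 reaches -1  <=>  s'^2 = 2
    return y[1]**2 - 2.0
end_of_inflation.terminal = True

sol = solve_ivp(rhs, (0.0, 200.0), [5.6, 0.0], events=end_of_inflation,
                rtol=1e-10, atol=1e-12, dense_output=True)
N_end = sol.t_events[0][0]
print(f"inflation lasts ~{N_end:.1f} e-folds from s_0 = 5.6 M_p")
for N_star in (50, 60):
    s_star = sol.sol(N_end - N_star)[0]
    print(f"field value {N_star} e-folds before the end: s = {s_star:.2f} M_p")
```

The full analysis of this section additionally keeps the \(\xi(s)\)-dependent terms of the background equations and evaluates the observables along the resulting trajectories.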
In the presence of the Gauss-Bonnet contributions (\(\alpha\neq 0\)), the theoretical predictions of our model shift along the dotted-black lines. Moreover, the \(n_{S}\) value decreases (increases), while the \(r\) value slightly increases (decreases) for the positive (negative) \(\alpha\). These shifts can be seen more clearly in the embedded inset, where the \(\alpha\) parameter varies between \(-1.4\times 10^{4}\leq\alpha\leq 8\times 10^{3}\) along the dotted lines. The wiggles in the plot are simply a byproduct of the numerical solution to the background equations of motion and do not indicate any specific significance. The fact that the limiting values of \(\alpha\) are relatively large, i.e., \(|\alpha|\gg\mathcal{O}(1)\), reflects that the Gauss-Bonnet contributions without \(\xi(s)\) are, in general, quite small in comparison to the effects of a scalar field in GR; a relatively large \(|\alpha|\) is therefore needed for our model to deviate appreciably from the GR predictions.

Figure 1: Numerical plot of \(n_{S}\) vs. \(r\) (top) and their number of \(e\)-fold dependence (bottom) from Eqs. (20)–(21). The two ends of each solid line denote \(N_{*}=50\) (squares) and \(N_{*}=60\) (disks). The solid black line (\(\alpha=0\)) indicates the absence of the Gauss-Bonnet contribution. The model parameter \(\alpha\) varies along the black-dotted lines between \(-1.4\times 10^{4}\) (red) \(\leq\alpha\leq 8\times 10^{3}\) (blue).
Since our analyses in this section are purely numerical, we need to ensure the smallness of the slow-roll parameters during inflation. Figure 2 shows that the slow-roll parameters (\(\epsilon_{i}\) with \(i\) ranging from 1 to 5) are indeed small during inflation: \(\epsilon_{i}\lesssim 1\) for the values of \(\alpha\) that are favored by the observational data. The red and blue disks in each sub-figure mark the end time \(N_{\rm end}\) of inflation, which is determined from \(\epsilon(N_{\rm end})=1\). As expected, in the absence of Gauss-Bonnet contributions the \(\epsilon_{3,4,5}\) values become zero, as demonstrated in the figure. From Eq. (20), it is evident that the tensor spectral index \(n_{T}\) can be negative if \(\epsilon_{5}>\epsilon_{1}\). However, as is explicit in Figure 2, \(\epsilon_{1}\) significantly outweighs \(\epsilon_{5}\) throughout inflation. As a result, we notice that the power spectrum of the tensor fluctuations is blue-tilted with \(n_{T}>0\).
Figure 2: Numerical plot of \(\epsilon_{i}(N)\) from Eqs. (21) and (22), where \(i=1,2,3,4,5\), for \(\alpha=-1.4\times 10^{4}\) (red) and \(\alpha=8\times 10^{3}\) (blue). The red and blue disks mark the end time of inflation for each case.

The direct detections of GWs from the neutron star merger GW170817 [78], as well as its associated electromagnetic counterpart GRB170817A [79], allow us to constrain the GW propagation speed with remarkable precision: \(-3\times 10^{-15}\leq c_{T}/c_{\gamma}-1\leq 7\times 10^{-16}\), where \(c_{\gamma}\) is the speed of light. We normalize \(c_{\gamma}=1\). This bound indicates that the difference in the propagation speed between light and gravitational waves is less than about one part in \(10^{15}\). However, the bound corresponds to the late-time universe, where the scalar field value in our model must have reached zero, i.e., \(s=0\). When \(s\neq 0\), which is the case for the early universe, one can expect significant deviations from this bound induced by the Gauss-Bonnet contributions. Such deviations are subject to future probes. In Figure 3, we plot \(c_{T}/c_{\gamma}\) and \(c_{A}\) as functions of \(N\). The red and blue lines denote \(\alpha=-1.4\times 10^{4}\) and \(\alpha=8\times 10^{3}\), respectively, and the ends of inflation for each case are marked with the red and blue disks. The figure, especially the insets, shows that the Gauss-Bonnet contributions gradually decay away and become negligible a few \(e\)-folds after the end of inflation. As a result, we conclude that the Gauss-Bonnet contributions play a significant role in letting the GWs propagate at a speed different from the speed of light during inflation and become negligible over time, such that the GW propagation speed converges to the speed of light after inflation.
## 4 Conclusion
We have investigated Higgs inflation with a Gauss-Bonnet term in the Einstein frame for the model given in Eq. (1). Our model in the Jordan frame has two coupling functions, \(\Omega^{2}(\phi)\) and \(\omega(\phi)\), coupled respectively to the Ricci scalar and the Gauss-Bonnet combination. We assumed these two coupling functions to satisfy the relation presented in Eq. (11) to simplify the presentation of results in the Einstein frame. The key analytic result of the current work is derived in Eq. (14), where additional interactions, including a non-minimal kinetic coupling between the scalar field and gravity as well as a derivative self-interaction of the scalar field, emerge in the Einstein frame as a result of the conformal transformation from the Jordan to the Einstein frame.
From the action in Eq. (14), we derived the background equations of motion and obtained the observable quantities, following Ref. [69] closely. Although there are three free parameters in the model, namely \(\lambda\), the potential parameter, \(\sigma\), the parameter representing the non-minimal coupling between the scalar field and the Ricci scalar, and \(\alpha\), the coupling parameter of the Gauss-Bonnet contributions, we showed that our model effectively becomes a one-parameter model if we adopt the Planck normalization for \(\lambda/\sigma^{2}\sim\mathcal{O}(10^{-9})\). The key numerical result of our current work is presented in Fig. 1, where the theoretical predictions \(\{n_{S},r\}\) of our model are plotted together with the observational data.
Without the Gauss-Bonnet contributions, where \(\alpha=0\), our result recovers the predictions of Higgs inflation in GR. Once the Gauss-Bonnet contributions are turned on with \(\alpha\neq 0\), the \(n_{S}\) and \(r\) predictions deviate from the GR case. The \(n_{S}\) value decreases (increases), while the \(r\) value increases (decreases) for the positive (negative) \(\alpha\) values, as is seen in the bottom panels of Figure 1. The observational data favor the broad range of the model parameter \(-1.4\times 10^{4}\lesssim\alpha\lesssim 8\times 10^{3}\). In Figure 3, our analysis reveals a remarkable phenomenon: the propagation speed of gravitational waves (GWs) deviates from the speed of light during the inflationary period, influenced by the Gauss-Bonnet contributions, at the level of a few parts in a hundred thousand. These Gauss-Bonnet effects gradually dissipate after inflation, leading the GWs to progressively align with the speed of light. We have also shown the validity of the slow-roll approximation in Figure 2 by showing that the slow-roll parameters remain much smaller than unity during inflation.

Figure 3: Numerical plot from Eq. (12), where \(c_{\gamma}(=1)\) is the speed of light. The red and blue lines denote \(\alpha=-1.4\times 10^{4}\) and \(\alpha=8\times 10^{3}\), respectively. The red and blue disks mark the end time of inflation for each case. The horizontal black solid lines at “1” indicate the GR limit where \(c_{T}=1=c_{A}\).
In our future research, we plan to relax our assumption made in Eq. (11) and explore post-inflationary cosmology and its implications for (p)reheating. It also remains to be determined whether these newly emerged interactions in the Einstein frame can adequately account for the observed late-time accelerating expansion of the universe.
The authors acknowledge that this work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, under grant numbers NRF-2022R1I1A1A01053784 (GT) and NRF-2021R1A2C1005748 (SK), and by NRF-2021R1A4A2001897 and NRF-2019R1A2C1089334 (SCP).
## Appendix A Constant coupling to the Gauss-Bonnet term
Let us consider the constant Gauss-Bonnet coupling, i.e., \(\omega(\phi)=\text{const.}\) in Eq. (1), and simplify the Einstein frame action in Eq. (7). For the fourth term in Eq. (7), we use
\[\nabla_{a}\left[\Omega^{-2}\left(\nabla_{b}\nabla^{b}\Omega\nabla ^{a}\Omega-\frac{1}{2}\nabla^{a}(\nabla\Omega)^{2}\right)\right]=\Omega^{-2} \left[(\nabla_{a}\nabla^{a}\Omega)^{2}-(\nabla_{a}\nabla_{b}\Omega)^{2} \right]-R^{ab}\Omega^{-2}\nabla_{a}\Omega\nabla_{b}\Omega\] \[\qquad\qquad-2\Omega^{-3}(\nabla\Omega)^{2}\nabla_{b}\nabla^{b} \Omega+2\Omega^{-3}\nabla_{a}\Omega\nabla_{b}\Omega\nabla^{a}\nabla^{b}\Omega\,, \tag{10}\]
and integration by parts to obtain
\[8\omega\int d^{4}x\sqrt{-g}\,\Omega^{-2}\left(\nabla_{a}\nabla^{ a}\Omega\nabla_{b}\nabla^{b}\Omega-\nabla_{b}\nabla_{a}\Omega\nabla^{b} \nabla^{a}\Omega\right)=\omega\int d^{4}x\sqrt{-g}\] \[\quad\times\left[2R^{ab}\nabla_{a}\ln\Omega^{2}\nabla_{b}\ln \Omega^{2}+4(\Omega^{-1}\nabla_{b}\nabla^{b}\Omega)(\nabla\ln\Omega^{2})^{2} -4(\Omega^{-1}\nabla^{a}\nabla^{b}\Omega)(\nabla_{a}\ln\Omega^{2}\nabla_{b} \ln\Omega^{2})\right]\,, \tag{11}\]
where the following relations are used
\[\nabla_{a}\ln\Omega=\Omega^{-1}\nabla_{a}\Omega\,,\] \[\Omega^{-1}\nabla^{a}\nabla_{a}\Omega=\nabla^{a}\nabla_{a}\ln \Omega+\nabla^{a}\ln\Omega\nabla_{a}\ln\Omega=\frac{1}{2}\nabla^{a}\nabla_{a }\ln\Omega^{2}+\frac{1}{4}\nabla^{a}\ln\Omega^{2}\nabla_{a}\ln\Omega^{2}\,. \tag{12}\]
The third term in Eq. (7) can also be rewritten as
\[-4\omega\int d^{4}x\sqrt{-g}R\Omega^{-2}\nabla_{a}\Omega\nabla^{ a}\Omega=-\omega\int d^{4}x\sqrt{-g}g^{ab}R\nabla_{a}\ln\Omega^{2}\nabla_{b}\ln \Omega^{2}\,. \tag{13}\]
Then, the first term on the right-hand side of equality in Eq. (11) is combined with Eq. (13) to give
\[2\omega\int d^{4}x\sqrt{-g}G^{ab}\nabla_{a}\ln\Omega^{2}\nabla_{ b}\ln\Omega^{2}, \tag{14}\]
which is then canceled with the second term in Eq. (7).\({}^{3}\)
Footnote 3: The integration by parts of the second term in Eq. (7)
\[-8\omega\int d^{4}x\sqrt{-g}\Omega^{-1}G_{ab}\nabla^{a}\nabla^{b} \Omega=-2\omega\int d^{4}x\sqrt{-g}G_{ab}\nabla^{a}\ln\Omega^{2}\nabla^{b}\ln \Omega^{2}\,.\]
Collecting the remaining terms, the action in Eq. (7) then reduces to
\[S=\int d^{4}x\sqrt{-g}\,\omega\left[R_{GB}^{2}-2(\Omega^{-1}\nabla_{b}\nabla^{b}\Omega)(\nabla\ln\Omega^{2})^{2}-4(\Omega^{-1}\nabla^{a}\nabla^{b}\Omega)(\nabla_{a}\ln\Omega^{2}\nabla_{b}\ln\Omega^{2})\right.\] \[\qquad\qquad+\left.\frac{3}{2}\left(\nabla_{a}\ln\Omega^{2}\nabla^{a}\ln\Omega^{2}\right)^{2}\right]\,.\] (100)
Let us rewrite Eq. (100) once again using Eq. (101)
\[S=\int d^{4}x\sqrt{-g}\,\omega\left[R_{GB}^{2}-(\nabla_{b} \nabla^{b}\ln\Omega^{2})\left(\nabla_{a}\ln\Omega^{2}\nabla^{a}\ln\Omega^{2} \right)\right.\] \[\qquad\left.-2(\nabla^{b}\nabla^{a}\ln\Omega^{2})(\nabla_{a}\ln \Omega^{2}\nabla_{b}\ln\Omega^{2})-(\nabla_{a}\ln\Omega^{2}\nabla_{b}\ln \Omega^{2})^{2}+\left(\nabla_{a}\ln\Omega^{2}\nabla^{a}\ln\Omega^{2}\right)^{2 }\right]\,. \tag{101}\]
The second and third terms are canceled after integration by parts. Thus, we obtain
\[S=\int d^{4}x\sqrt{-g}\,\omega\left[R_{GB}^{2}-\frac{4}{9}(\nabla_{a}s\nabla_ {b}s)^{2}+\frac{4}{9}\,(\nabla_{a}s\nabla^{a}s)^{2}\right]=\int d^{4}x\sqrt{-g }\,\omega R_{GB}^{2}\,, \tag{102}\]
where \(s\equiv\sqrt{3/2}\ln\Omega^{2}\). It is well known in the literature that the Gauss-Bonnet term is topological in four dimensions if the Gauss-Bonnet coupling is a constant. Thus, for the \(\omega=\text{const.}\) case, we conclude that no dynamical contributions emerge from the Gauss-Bonnet term in either the Jordan or the Einstein frame.
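This is consistent with the Chern–Gauss–Bonnet theorem, recalled here only as a reminder: on a compact (Riemannian) four-manifold without boundary,
\[\int d^{4}x\sqrt{-g}\,R_{GB}^{2}=32\pi^{2}\chi(M)\,,\]
so a constant \(\omega\) multiplies a purely topological quantity and cannot affect the equations of motion.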
## Appendix B Power-law coupling to Gauss-Bonnet term
Let us now assume the coupling functions in Eq. (1) satisfy the more general relation \(\omega=\alpha\Omega^{p}\). When \(p=2\), the action in the Jordan frame reads
\[S^{J}=\int d^{4}x\sqrt{-g^{J}}\left[\Omega^{2}(\phi)\left(\frac{M_{p}^{2}}{2}R ^{J}+\alpha R_{GB}^{2}\right)-\frac{1}{2}g_{ab}^{J}\nabla^{a}\phi\nabla^{b} \phi-V(\phi)\right]. \tag{103}\]
The Einstein frame action is presented in Eq. (10). For \(\omega=\alpha\Omega^{p}\) with arbitrary power of \(p\), the third term of Eq. (10) can be written as
\[8\omega\Omega^{-2}\left(\omega^{-1}\nabla_{a}\omega-2\Omega^{-1} \nabla_{a}\Omega\right)\left(\nabla^{a}\Omega\nabla_{b}\nabla^{b}\Omega- \nabla_{b}\Omega\nabla^{a}\nabla^{b}\Omega\right)\] \[\qquad\qquad=8\alpha(p-2)\Omega^{p-3}\nabla_{a}\Omega\left(\nabla ^{a}\Omega\nabla_{b}\nabla^{b}\Omega-\nabla_{b}\Omega\nabla^{a}\nabla^{b} \Omega\right)\,. \tag{104}\]
Consequently, Eq. (10) becomes
\[S=\int d^{4}x\sqrt{-g}\alpha\Omega^{p}\left[R_{GB}^{2}+8p\Omega^ {-2}G_{ab}\nabla^{a}\Omega\nabla^{b}\Omega-8(p+1)\Omega^{-3}\nabla_{a}\Omega \nabla^{a}\Omega\nabla_{b}\nabla^{b}\Omega\right.\] \[\qquad\qquad\left.+8(p-2)\Omega^{-3}\nabla_{a}\Omega\nabla_{b} \Omega\nabla^{a}\nabla^{b}\Omega+24\Omega^{-4}\left(\nabla_{a}\Omega\nabla^{a }\Omega\right)^{2}\right]\,. \tag{105}\]
Applying Eqs. (A.3) to the last term in the first line, we obtain
\[S = \int d^{4}x\sqrt{-g}\alpha e^{\frac{p}{2}\sqrt{\frac{2}{3}}\frac{s}{ M_{p}}}\left[R_{GB}^{2}+\frac{4p}{3M_{p}^{2}}G_{ab}\nabla^{a}s\nabla^{b}s-\frac{p+1}{3 M_{p}^{3}}\sqrt{\frac{8}{3}}\nabla_{b}\nabla^{b}s\nabla_{a}s\nabla^{a}s\right.\] (B.4) \[\left.+\frac{p-2}{3M_{p}^{3}}\sqrt{\frac{8}{3}}\nabla_{a}s\nabla_ {b}s\nabla^{a}\nabla^{b}s\right]\,,\]
where \((s/M_{p})\equiv\sqrt{3/2}\ln\Omega^{2}\). The last term vanishes for \(p=2\) and we recover Eq. (2.13). As a result of the conformal transformation from the Jordan frame to the Einstein frame, we notice the emergence of new interactions, such as the kinetic coupling between the scalar field and gravity and the derivative self-interactions of the scalar field. These interactions would certainly contribute to both the background and the perturbation dynamics.
|
2302.04472
|
A characterization of irreducible Hermitian symmetric spaces of tube
type by $\mathbb{C}^{*}$-actions
|
A $\mathbb{C}^{*}$-action on a projective variety $X$ is said to be of Euler
type at a nonsingular fixed point $x$ if the isotropy action of
$\mathbb{C}^{*}$ on $T_{x}X$ is by scalar multiplication. In this paper, it's
proven that a smooth projective variety of Picard number one $X$ is isomorphic
to an irreducible Hermitian symmetric space of tube type if and only if for a
general pair of points $x,y$ on $X$, there exists a $\mathbb{C}^{*}$-action on
$X$ which is of Euler type at $x$ and its inverse action is of Euler type at
$y$.
|
Yingqi Liu
|
2023-02-09T07:29:01Z
|
http://arxiv.org/abs/2302.04472v1
|
A characterization of irreducible Hermitian symmetric spaces of tube type by \(\mathbb{C}^{*}\)-actions
###### Abstract.
A \(\mathbb{C}^{*}\)-action on a projective variety \(X\) is said to be of Euler type at a nonsingular fixed point \(x\) if the isotropy action of \(\mathbb{C}^{*}\) on \(T_{x}X\) is by scalar multiplication. In this paper, it's proven that a smooth projective variety of Picard number one \(X\) is isomorphic to an irreducible Hermitian symmetric space of tube type if and only if for a general pair of points \(x,y\) on \(X\), there exists a \(\mathbb{C}^{*}\)-action on \(X\) which is of Euler type at \(x\) and its inverse action is of Euler type at \(y\).
## 1. Introduction
### Main result
The study of complex torus actions on algebraic varieties is a classical topic in algebraic geometry. It is an interesting problem to classify algebraic varieties with special \(\mathbb{C}^{*}\)-actions. Let \(X\) be a smooth projective variety. A \(\mathbb{C}^{*}\)-action on \(X\) is said to be equalized at a fixed point \(x\) if any weight of the isotropy action on \(T_{x}X\) equals \(0\) or \(\pm 1\). We call the action equalized if it is equalized at each fixed point. Denote by \(X^{\mathbb{C}^{*}}\) the set of fixed points of the \(\mathbb{C}^{*}\)-action. An irreducible component of \(X^{\mathbb{C}^{*}}\) is called extremal if it intersects with general \(\mathbb{C}^{*}\)-orbit closures. In the series of works [10][11][12], the authors study equalized \(\mathbb{C}^{*}\)-actions on projective manifolds with isolated extremal fixed components. For rational homogeneous spaces they proved the following:
**Theorem 1.1**.: _[_12_]_ _Let \(X=G/P\) be a rational homogeneous space of Picard number one, then \(X\) admits an equalized \(\mathbb{C}^{*}\)-action with two isolated extremal fixed points if and only if \(X\) is isomorphic to one of the following:_
_(i) a smooth hyperquadric \(\mathbb{Q}^{n}\)._
_(ii) the Grassmannian variety_ \(Gr(n,2n)\)_._
_(iii) the Lagrangian Grassmanian variety_ \(Lag(n,2n)\)_._
_(iv) the spinor variety_ \(\mathbb{S}_{2n}\)_._
_(v) the 27 dimensional_ \(E_{7}\)_-variety_ \(E_{7}/P_{7}\)_._
The varieties classified above are exactly irreducible Hermitian symmetric spaces (IHSS for short) of tube type. Recall that an IHSS is called of tube type if its dual, as a bounded symmetric domain, is holomorphically equivalent to a tube domain over a self dual cone. In [13] Mok showed that an IHSS is of tube type if and only if for a point \(o\in X\) the complement of the vectors of maximal rank in \(T_{o}(X)\) is a hypersurface. Translating into the language of \(\mathbb{C}^{*}\)-actions, one can regard Theorem 1.1 as a generalization of Mok's result to rational homogeneous spaces.
It is then a natural problem to characterize IHSS of tube type by \(\mathbb{C}^{*}\)-actions in a more general context. A \(\mathbb{C}^{*}\)-action on a projective manifold \(X\) is said to be of Euler type at a fixed point \(x\) if the isotropy action of \(\mathbb{C}^{*}\) on \(T_{x}X\) is by scalar multiplication, or equivalently if the \(\mathbb{C}^{*}\)-action is equalized at \(x\) and \(x\) is an isolated extremal fixed point. In Theorem 1.1, by taking certain conjugates of \(\mathbb{C}^{*}\) in \(Aut^{0}(X)\) one can show that for a general pair of points \(x,y\) on \(X\) there exists a \(\mathbb{C}^{*}\)-action which is of Euler type at \(x\) and its inverse action is of Euler type at \(y\) (see Section 4 for more details). Here a general pair of points on \(X\) is defined as a pair of points lying in a Zariski open dense subset of \(X\times X\). Our main result proves the converse:
**Theorem 1.2**.: _Let \(X\) be a smooth projective variety of Picard number one, then \(X\) is isomorphic to an IHSS of tube type if and only if for a general pair of points \(x,y\) on \(X\), there exists a \(\mathbb{C}^{*}\)-action on \(X\) which is of Euler type at \(x\) and its inverse action is of Euler type at \(y\)._
### Outline of the proof
The main ingredient of the proof is the VMRT theory developed by Hwang and Mok. Let \(X\) be a Fano manifold of Picard number one and let \(\mathcal{K}\) be a fixed irreducible dominant family of minimal rational curves on \(X\). The variety of minimal rational tangents (VMRT for short)
at a general point \(x\in X\) is the closed subvariety \(\mathcal{C}_{x}\subseteq\mathbb{P}T_{x}X\) consisting of all tangent directions at \(x\) of curves in \(\mathcal{K}\) passing through \(x\). A large part of the global geometry of the manifold is controlled by the VMRT \(\mathcal{C}_{x}\subseteq\mathbb{P}T_{x}X\) at a general point \(x\). The first step is to show that \(X\) in Theorem 1.2 is an equivariant compactification of a vector group, hence \(X\) can be recovered from its VMRT by a Cartan-Fubini type theorem [14, Theorem 1.2]. Then we follow the methods developed in [14][15] to classify the projective subvariety \(\mathcal{C}_{x}\subseteq\mathbb{P}T_{x}X\) by studying its prolongation of infinitesimal linear automorphisms.
**Definition 1.3**.: _(1) Let \(\mathfrak{g}\subseteq\mathfrak{gl}(V)\) be a Lie subalgebra, then the k-th prolongation of \(\mathfrak{g}\) is the space of symmetric multi-linear homomorphisms \(A:Sym^{k+1}V\to V\) such that for any fixed \(v_{1},...,v_{k}\in V\), the homomorphism \(A_{v_{1},...,v_{k}}:V\to V\) defined by:_
\[v\in V\to A(v,v_{1},...,v_{k})\in V\]
_is in \(\mathfrak{g}\)._
_(2) Let \(S\subseteq\mathbb{P}V\) be a projective subvariety. Let \(\hat{S}\subseteq V\) be its affine cone and \(T_{\alpha}(\hat{S})\) the tangent space at a smooth point \(\alpha\in\hat{S}\). The Lie algebra of infinitesimal linear automorphisms of \(\hat{S}\) is_
\[\mathfrak{aut}(\hat{S})=\{g\in End(V)|g(\alpha)\in T_{\alpha}(\hat{S})\,\text{ for any smooth point }\alpha\in\hat{S}\}=\{g\in End(V)|exp(tg)\cdot\hat{S}\subset\hat{S},\,t\in \mathbb{C}\}.\]
_Its k-th prolongation \(\mathfrak{aut}(\hat{S})^{(k)}\) will be called the k-th prolongation of \(S\subseteq\mathbb{P}V\)._
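As an illustration (a classical computation, recalled here only as an example and not used in the proofs below), let \(S=\{[v]\in\mathbb{P}V:q(v,v)=0\}\) be a smooth quadric hypersurface defined by a non-degenerate symmetric form \(q\). Then \(\mathfrak{aut}(\hat{S})=\mathfrak{co}(q)=\{g\in End(V):q(gv,w)+q(v,gw)=\lambda_{g}\,q(v,w)\}\), and every \(\varphi\in V^{*}\) gives an element of the first prolongation via
\[A_{\varphi}(v,w)=\varphi(v)w+\varphi(w)v-q(v,w)\,\varphi^{\sharp}\,,\qquad q(\varphi^{\sharp},u)=\varphi(u)\ \text{ for all }u\in V\,,\]
since for fixed \(w\) one checks \(q(A_{\varphi}(v,w),u)+q(v,A_{\varphi}(u,w))=2\varphi(w)\,q(v,u)\), i.e. \(A_{\varphi}(\cdot,w)\in\mathfrak{co}(q)\). It is classical that these \(A_{\varphi}\) exhaust \(\mathfrak{aut}(\hat{S})^{(1)}\cong V^{*}\), which is consistent with Theorem 1.4: for \(X=\mathbb{Q}^{n}\) the VMRT at a point is such a quadric in \(\mathbb{P}T_{x}X\), and \(dim(V^{*})=n=dim(X)\).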
A crucial step of our proof is the following result.
**Theorem 1.4**.: _Let \(X\) be a smooth projective variety of Picard number one. Assume for a general pair of points \((x,y)\in X\times X\), there is a \(\mathbb{C}^{*}\)-action on \(X\) which is of Euler type at \(x\) and its inverse action is of Euler type at \(y\), then for the VMRT \(\mathcal{C}_{x}\subseteq\mathbb{P}T_{x}X\) we have:_
\[dim(\mathfrak{aut}(\hat{\mathcal{C}_{x}})^{(1)})=dim(X) \tag{1.1}\]
The identity (1.1) is known to be valid for any IHSS. Let \(X=G/P\) be an IHSS defined by a semi-simple algebraic group \(G\) and a maximal parabolic subgroup \(P\). Denote by \(P^{-}\) the opposite group of \(P\) and let \(x=eP\), \(y=\dot{w}_{0}x\), where \(w_{0}\) is the longest element in the Weyl group. Then there is an equalized \(\mathbb{C}^{*}\)-action on \(X\) such that \(x\) is an isolated extremal fixed point. Moreover, \(\mathfrak{aut}(\hat{\mathcal{C}_{x}})^{(1)}\) can be identified with the Lie algebra of \(R_{u}(P)\), induced by the \(\mathbb{Z}/2\mathbb{Z}\)-grading of the Lie algebra \(\mathfrak{g}=Lie(G)\). When \(X\) is of tube type, \(y\) is also an isolated extremal fixed point of the \(\mathbb{C}^{*}\)-action and \(X\) is an equivariant compactification of \(R_{u}(P)\) with open orbit \(R_{u}(P)\cdot y\). Thus in this case the nonzero prolongation of \(\mathfrak{aut}(\hat{\mathcal{C}_{x}})\) comes from the vector group compactification on \(X\) with origin \(y\).
Generally, assume that \(X\) satisfies the conditions of Theorem 1.2. Then the vector group actions of \(T_{y}X\) and \(T_{x}X\) on themselves can be extended to equivariant compactifications of vector groups on \(X\). Choose a suitable non-degenerate projective embedding \(X\subseteq\mathbb{P}V\); then the actions of \(T_{y}X\) and \(T_{x}X\) on \(X\) can be lifted to linear actions on \(V\). Identifying \(Lie(T_{y}X),Lie(T_{x}X)\) with their images in \(\mathfrak{gl}(V)\), the adjoint actions of the two Lie subalgebras inside \(\mathfrak{gl}(V)\) induce the identification \(Lie(T_{y}X)\cong\mathfrak{aut}(\hat{\mathcal{C}_{x}})^{(1)}\).
In [15] Fu and Hwang classified irreducible non-degenerate nonsingular projective subvarieties \(S\subseteq\mathbb{P}V\) with nonzero prolongations. Most of them are the VMRT of an IHSS or a nonsingular linear section of some IHSS. Using a case-by-case calculation of \(dim(\mathfrak{aut}(\hat{S})^{(1)})\), we show that \(\mathcal{C}_{x}\subseteq\mathbb{P}T_{x}X\) is projectively isomorphic to the VMRT of an IHSS. Then, by a \(\mathbb{C}^{*}\)-equivariant Cartan-Fubini extension theorem, we conclude that \(X\) is \(\mathbb{C}^{*}\)-isomorphic to an IHSS. The \(\mathbb{C}^{*}\)-action on an IHSS was studied in detail in [11]; as a corollary, it has two isolated extremal fixed points if and only if the IHSS is of tube type.
The article is organized as follows. In Section 2 we first review the approach in [15] to associate \(\mathbb{C}^{*}\)-actions with vector group actions, and then we apply it to the case when the \(\mathbb{C}^{*}\)-action has two isolated extremal fixed points. In Section 3 we study the prolongations of projective varieties: we first prove Theorem 1.4 and then calculate the dimension of the prolongation for certain projective subvarieties with nonzero prolongations. In Section 4 we finish the proof of our main result.
**Notations.** Throughout this article we work over the field of complex numbers. Given a line bundle \(\mathcal{L}\) on a variety \(X\), the principal open subset of a section \(s\in H^{0}(X,\mathcal{L})\) is denoted by \(D_{+}(s)=\{x\in X:s(x)\neq 0\}\), and the cycle-theoretic zero locus of \(s\) is denoted by \(Z(s)\). For a vector space \(V\) of dimension \(n\), we identify the regular functions on \(V\) with the total space of symmetric multilinear forms: \(\mathbb{C}[V]\cong\oplus_{k\geqslant 0}Sym^{k}(V^{*})\) by assigning a function \(f\) on \(V\) to \(P_{f}=\sum_{k\geqslant 0}P_{f,k}\) such that \(f(v)=\sum_{k\geqslant 0}P_{f,k}(v,v...,v)\).
## 2. From \(\mathbb{C}^{*}\)-action to vector group action
First we recall some basic facts and notions on \(\mathbb{C}^{*}\)-actions. Let \(X\) be a smooth projective variety with a \(\mathbb{C}^{*}\)-action. For a nontrivial \(\mathbb{C}^{*}\)-orbit of a point \(z\) on \(X\), the orbit map \(\psi_{z}:\mathbb{C}^{*}\to X\) extends uniquely to a morphism \(\Psi_{z}:\mathbb{P}^{1}\to X\). We denote the source (resp. sink) of the orbit to be \(\lim\limits_{t\to 0}tz=\Psi_{z}(0)\) (resp. \(\lim\limits_{t\to\infty}tz=\Psi_{z}(\infty)\)). Denote by \(\mathcal{Y}\) the set of irreducible components of \(X^{\mathbb{C}^{*}}\). Then for each \(Y\in\mathcal{Y}\), \(Y\) is a smooth closed subvariety. The isotropy action of \(\mathbb{C}^{*}\) on \(TX|_{Y}\) gives a decomposition of \(TX|_{Y}=T^{+}(Y)\oplus T^{-}(Y)\oplus TY\), where \(T^{+}(Y),T^{-}(Y)\) are the subbundles of \(TX|_{Y}\) on which \(\mathbb{C}^{*}\) acts with positive, negative weights. Denote \(C^{\pm}(Y)=\{x\in X:\lim\limits_{t^{\pm}\to 0}tx\in Y\}\). We recall the following theorem of Bialynicki-Birula [1].
**Theorem 2.1**.: _Assume that \(X\) is a smooth projective variety with a \(\mathbb{C}^{*}\)-action, then:_
_(1) For each \(Y\in\mathcal{Y}\), \(C^{\pm}(Y)\) are locally closed subsets and there are decompositions:_
\[X=\bigcup\limits_{Y\in\mathcal{Y}}C^{+}(Y)=\bigcup\limits_{Y\in\mathcal{Y}}C^{ -}(Y)\]
_(2) For each \(Y\in\mathcal{Y}\), there are \(\mathbb{C}^{*}\)-isomorphisms \(C^{+}(Y)\cong T^{+}(Y)\) and \(C^{-}(Y)\cong T^{-}(Y)\) lifting the natural map \(C^{\pm}(Y)\to Y\). The map \(C^{\pm}(Y)\to Y\) is algebraic and is a \(\mathbb{C}^{v^{\pm}(Y)}\)-fibration, where we set \(v^{\pm}(Y)=rank(T^{\pm}(Y))\)._
We call the unique \(Y\) such that \(C^{+}(Y)\) (resp. \(C^{-}(Y)\)) is dense the source (resp. sink) of the action. For any two components \(Y,Y^{\prime}\in\mathcal{Y}\), we write \(Y\prec Y^{\prime}\) if there exists a point \(x\in X\) such that \(lim_{t\to 0}t\cdot x\in Y\) and \(lim_{t\to\infty}t\cdot x\in Y^{\prime}\).
**Proposition 2.2**.: _[_10_, Lemma 2.8 ]_ _and [11, Lemma 3.5]_
_Let \(X\) be a smooth projective variety of Picard number one. Assume that there is a \(\mathbb{C}^{*}\)-action on \(X\) which is of Euler-type at \(x\), then there is a unique \(Y\in\mathcal{Y}\) such that_
\[\{Y^{\prime}\in\mathcal{Y}:Y^{\prime}\prec Y\}=\{x\}. \tag{2.1}\]
_Moreover \(C^{-}(Y)\) is a line bundle over \(Y\), i.e., \(v^{-}(Y)=1\)._
To associate \(\mathbb{C}^{*}\)-actions with vector group actions we recall Euler-symmetric varieties defined by Fu and Hwang in [10].
**Definition 2.3**.: _Let \(Z\subseteq\mathbb{P}V\) be a projective subvariety. A \(\mathbb{C}^{*}\)-action on \(Z\) is called of Euler type at a nonsingular point \(x\) if the isotropy action on the tangent space \(T_{x}Z\) is by scalar multiplication. We say \(Z\subseteq\mathbb{P}V\) is an Euler-symmetric variety if for a general point \(x\in Z\), there exists a \(\mathbb{C}^{*}\)-action which is of Euler type at \(x\), where the \(\mathbb{C}^{*}\)-action comes from a multiplicative subgroup of \(GL(V)\)._
From [10, Theorem 3.7] any Euler-symmetric variety is an equivariant compactification of vector group. Moreover we have the following by [10, Theorem 3.7].
**Proposition 2.4**.: _Assume \(X\) to be a normal projective variety and let \(\mathcal{L}\) be a very ample line bundle on \(X\), then the followings are equivalent:_
_(1) For a general point \(x\) on \(X\), there is a \(\mathbb{C}^{*}\)-action which of Euler type at \(x\)._
_(2) \(X\) is an equivariant compactification of vector group and the scalar multiplication of \(\mathbb{C}^{*}\) on the vector group can be extended to a \(\mathbb{C}^{*}\)-action on \(X\)._
_(3) The projective subvariety \(X\subset\mathbb{P}H^{0}(X,\mathcal{L})^{\vee}\) is Euler-symmetric._
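The simplest illustration of Proposition 2.4 (stated only as an example) is \(X=\mathbb{P}^{n}\) with \(\mathcal{L}=\mathcal{O}(1)\): the \(\mathbb{C}^{*}\)-action
\[t\cdot[x_{0}:x_{1}:\dots:x_{n}]=[x_{0}:tx_{1}:\dots:tx_{n}]\]
is of Euler type at \(x=[1:0:\dots:0]\), the vector group \(\mathbb{C}^{n}\) acts by translations on the affine chart \(\{x_{0}\neq 0\}\) and this action extends to all of \(\mathbb{P}^{n}\), and \(\mathbb{P}^{n}\subset\mathbb{P}H^{0}(X,\mathcal{L})^{\vee}\) is Euler-symmetric; this is exactly the \(r=1\) case of Corollary 2.8 below.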
In the following we always assume that \(X\) is a smooth projective variety of Picard number one. We study \(\mathbb{C}^{*}\)-actions on \(X\) from the perspective of local projective differential geometry following [12] and [FH20].
**Definition 2.5**.: _(1) Let \(x\in X\subseteq\mathbb{P}V\) be a nonsingular point of a nondegenerate projective variety, and let \(\mathcal{L}=\mathcal{O}_{\mathbb{P}V}(1)|_{X}\) be the line bundle on \(X\). For each nonnegative integer \(k\), let \(\mathfrak{m}_{x,X}^{k}\) be the \(k\)-th power of the maximal ideal \(\mathfrak{m}_{x,X}\). For a section \(s\in H^{0}(X,\mathcal{L})\), let \(j_{x}^{k}(s)\) be the \(k\)-jet of \(s\) at \(x\) such that \(j_{x}^{0}=s_{x}\in\mathcal{L}_{x}\). The induced homomorphism:_
\[(V^{*}\cap Ker(j_{x}^{k-1}))/(V^{*}\cap Ker(j_{x}^{k}))\to\mathcal{L}_{x} \otimes Sym^{k}T_{x}^{*}X\]
_is injective. For each \(k\geqslant 2\), the subspace \(\mathbb{F}_{x}^{k}\subseteq Sym^{k}T_{x}^{*}X\) defined by the image of this homomorphism is called the \(k\)-th fundamental form of \(X\) at \(x\). Set \(\mathbb{F}_{x}^{0}=\mathbb{C}\) and \(\mathbb{F}_{x}^{1}=T_{x}^{*}X\). The collection of subspaces \(\mathbb{F}_{x}=\oplus_{k\geqslant 0}\mathbb{F}_{x}^{k}\subset\oplus_{k\geqslant 0}Sym^{k}T_{x}^{*}X\) is called the system of fundamental forms of \(X\) at \(x\)._
_(2) Let \(W\) be a vector space. For \(w\in W\), the contraction homomorphism \(\iota_{w}:Sym^{k+1}W^{*}\to Sym^{k}W^{*}\) sending \(\phi\in Sym^{k+1}W^{*}\) to \(\iota_{w}\phi\in Sym^{k}W^{*}\) is defined by:_
\[\iota_{w}\phi(w_{1},...,w_{k})=\phi(w,w_{1},...,w_{k})\]
_for any \(w_{1},...,w_{k}\in W\). By convention we define \(\iota_{w}(Sym^{0}W^{*})=0\)._
_(3) A subspace \(\mathbb{F}=\oplus_{k\geqslant 0}F^{k}\subset\oplus_{k\geqslant 0}Sym^{k}W^{*}\) with \(F^{0}=\mathbb{C},F^{1}=W^{*},F^{r}\neq 0\), and \(F^{r+i}=0\) for all \(i\geqslant 1\) is called a symbol system of rank \(r\) if \(\iota_{w}F^{k+1}\subseteq F^{k}\) for any \(w\in W\) and any \(k\geqslant 0\)._
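A minimal example of a symbol system: for any non-zero \(q\in Sym^{2}W^{*}\), the subspace
\[\mathbb{F}=\mathbb{C}\oplus W^{*}\oplus\mathbb{C}q\subset\oplus_{k\geqslant 0}Sym^{k}W^{*}\]
is a symbol system of rank \(2\), since \(\iota_{w}(\mathbb{C}q)=\mathbb{C}\,q(w,\cdot)\subseteq W^{*}\) and \(\iota_{w}(W^{*})\subseteq\mathbb{C}\) for every \(w\in W\). When \(q\) is non-degenerate, this is (up to the identifications above) the system of fundamental forms of a smooth quadric hypersurface at any of its points.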
We recall the following theorem of Cartan (see for example [13, Section 2.1]).
**Theorem 2.6**.: _Let \(X\subseteq\mathbb{P}V\) be a nondegenerate subvariety, and let \(x\in X\) be a general point. Then the system of fundamental forms \(\mathbb{F}_{x}\subset\oplus_{k\geqslant 0}Sym^{k}T_{x}^{*}X\) is a symbol system._
Now assume that \(X\) admits a \(\mathbb{C}^{*}\)-action which is of Euler type at a fixed point \(x\). Denote by \(\mathcal{L}\) a very ample line bundle on \(X\) and \(V=H^{0}(X,\mathcal{L})^{\vee}\). Choose a \(\mathbb{C}^{*}\)-linearization on \(\mathcal{L}\) such that if we write \(H^{0}(X,\mathcal{L})=\oplus_{k=0}^{r}H^{0}(X,\mathcal{L})_{w_{k}}\) as the sum of nonzero weight subspaces with respect to the \(\mathbb{C}^{*}\)-action, then we have \(0=w_{0}>w_{1}>w_{2}>...>w_{r}\). The fundamental forms of \(X\subset\mathbb{P}V\) at \(x\) can be identified as follows.
**Lemma 2.7**.: _(1) The subspace \(\{s\in H^{0}(X,\mathcal{L})\,|\,D_{+}(s)=C^{+}(x)\}\) is of dimension one and it equals to \(H^{0}(X,\mathcal{L})_{0}\). Take a nonzero section \(s_{0}\) with \(D_{+}(s_{0})=C^{+}(x)\) and consider the linear map:_
\[\eta:H^{0}(X,\mathcal{L}) \to\mathbb{C}[C^{+}(x)]\] \[s \to\eta(s)\]
_defined by: \(s|_{C^{+}(x)}=\eta(s)s_{0}|_{C^{+}(x)}\), for any \(s\in H^{0}(X,\mathcal{L})\). Then:_
_(2) \(\eta\) is an injective \(\mathbb{C}^{*}\)-equivariant linear map, where the \(\mathbb{C}^{*}\)-action on \(\mathbb{C}[C^{+}(x)]\) is given by \(z\cdot f(u)=f(z^{-1}\cdot u)\), for any \(z\in\mathbb{C}^{*}\), \(u\in C^{+}(x)\) and \(f\in\mathbb{C}[C^{+}(x)]\)._
_(3) Identifying \(\mathbb{C}[C^{+}(x)]\cong\mathbb{C}[T_{x}X]\cong\oplus_{k\geqslant 0}Sym^{k}T_{x}^{*}X\), then the image of \(\eta\) is identified with \(\mathbb{F}_{x}\), under which \(\eta(H^{0}(X,\mathcal{L})_{w_{k}})=\mathbb{F}_{x}^{-w_{k}}\). Furthermore \(w_{1}=-1\) and \(\eta|_{H^{0}(X,\mathcal{L})_{w_{1}}}\) is an isomorphism._
Proof.: As \(C^{+}(x)\) is isomorphic to the affine space \(T_{x}X\), \(D_{x}:=X\backslash C^{+}(x)\) is a divisor by Hartogs' theorem and \(Cl(X)\) is freely generated by the irreducible components of \(D_{x}\). Then by our assumption that \(X\) is of Picard number one, \(D_{x}\) is the prime generator of \(Cl(X)\). Thus we can write \(\mathcal{L}\cong\mathcal{O}_{X}(r_{0}D_{x})\) for some positive integer \(r_{0}\). For any nonzero section \(s\) with \(D_{+}(s)=C^{+}(x)\), write \(Z(s)=rD_{x}\) for some positive integer \(r\). Then we have \(rD_{x}\sim r_{0}D_{x}\), implying that \(r=r_{0}\) and thus the subspace \(\{s\in H^{0}(X,\mathcal{L}):D_{+}(s)=C^{+}(x)\}\) is of dimension one. The subspace is \(\mathbb{C}^{*}\)-invariant and we denote its weight by \(w^{\prime}\). Take a nonzero section \(s_{0}\) in this subspace and define \(\eta\) as above, then \(\eta\) is injective as \(C^{+}(x)\) is open. For any \(s\in H^{0}(X,\mathcal{L})\), \(z\in\mathbb{C}^{*}\) and \(u\in C^{+}(x)\) we have:
\[(z\cdot s)(u)=z\cdot s(z^{-1}\cdot u)=z\cdot(\eta(s)(z^{-1}\cdot u)s_{0}(z^{- 1}\cdot u))=\eta(s)(z^{-1}u)\,(z\cdot s_{0})(u)=z^{w^{\prime}}(z\cdot\eta(s))(u )s_{0}(u),\]
implying that \(\eta(z\cdot s)=z^{w^{\prime}}(z\cdot\eta(s))\). For any \(s\in H^{0}(X,\mathcal{L})_{w_{k}}\), viewing \(\eta(s)\) as a regular function on \(T_{x}X\) via \(C^{+}(x)\cong T_{x}X\), then we have:
\[\eta(z\cdot s)(u)=z^{w_{k}}\eta(s)(u),\ z\cdot\eta(s)(u)=\eta(s)(z^{-1}\cdot u )=\eta(s)(z^{-1}u)\]
for any \(u\in T_{x}X\). Combined with \(\eta(z\cdot s)=z^{w^{\prime}}(z\cdot\eta(s))\) this shows that \(\eta(s)\) is a homogenous polynomial on \(T_{x}X\) with \(deg(\eta(s))=w^{\prime}-w_{k}\geqslant 0\). This implies that \(w^{\prime}=w_{0}=0\) and \(deg(\eta(s))=-w_{k}\). Thus \(\eta\) is \(\mathbb{C}^{*}\)-equivariant, and for any nonzero section \(s\in H^{0}(X,\mathcal{L})_{0}\), \(\eta(s)\) is a constant, which shows that \(C^{+}(x)=D_{+}(s)\), proving (1) and (2).
For a section \(s\in H^{0}(X,\mathcal{L})\),write \(\eta(s)=\sum_{k\geqslant 0}\eta_{k}(s)\) as the sum of homogenous functions. The \(k\)-th jet of \(s\) at \(x\) equals the class of \(\eta(s)|_{x}\) in \(\mathcal{O}_{x}/\mathfrak{m}_{x}^{k}\), whence under the identification it is represented by \(\sum_{i=0}^{k-1}\eta_{i}(s)\). On the other hand we have proved that \(\eta\) maps \(H^{0}(X,\mathcal{L})_{w_{k}}\) into the space of homogenous polynomials of degree \(-w_{k}\). Thus by the definition of fundamental forms we have \(\eta(H^{0}(X,\mathcal{L})_{w_{k}})=\mathbb{F}_{x}^{-w_{k}}\) and \(\operatorname{Im}(\eta)=\mathbb{F}_{x}\). Finally as \(\mathcal{L}\) is very ample, by [10, Proposition 7.2]\(\operatorname{Im}(\eta)\) generates \(\mathbb{C}[C^{+}(x)]\) as a \(\mathbb{C}-\)algebra. Thus \(\operatorname{Im}(\eta)\) contains the space of linear functions \(T_{x}^{*}X\), which has to be the image of \(H^{0}(X,\mathcal{L})_{w_{1}}\). It then follows that \(w_{1}=-1\) and \(\eta|_{H^{0}(X,\mathcal{L})_{w_{1}}}\) is an isomorphism.
From Lemma 2.7, the fundamental form at \(x\) is given by the collection of linear injection \(\eta|_{H^{0}(X,\mathcal{L})_{w_{k}}}\) for each \(k\). Dually it is given by the collection of surjective linear map \(\Pi_{k}:\,Sym^{-w_{k}}(T_{x}X)\to H^{0}(X,\mathcal{L})_{w_{k}}^{\vee}\) for each \(k\), where \(\Pi_{0}:\mathbb{C}\to H^{0}(X,\mathcal{L})_{0}^{\vee}\) maps \(1\) to the unique \(e_{0}\in H^{0}(X,\mathcal{L})_{0}^{\vee}\) such that \(e_{0}(s_{0})=1\) and \(\Pi_{1}\) is an isomorphism by Lemma 2.7 (3). Denote by \(f:X\to\mathbb{P}H^{0}(X,\mathcal{L})^{\vee}\) the projective embedding. Then under the identification \(C^{+}(x)\cong T_{x}X\), we have \(f(u)=[e_{0}+\sum_{k=1}^{r}\Pi_{k}(u,...,u)]\) for each \(u\in T_{x}X\) and \(f(x)=f(0)=[e_{0}]\). At this point we can deduce an easy corollary:
**Corollary 2.8**.: _Let \(X,\mathcal{L},x\) be as above, if \(r=1\) then \(X\) is isomorphic to a projective space._
Proof.: Write \(V=H^{0}(X,\mathcal{L})^{\vee}\) and \(W=H^{0}(X,\mathcal{L})_{w_{1}}^{\vee}\). Then we have \(V=\mathbb{C}e_{0}\oplus W\) and \(C^{+}(x)=f(T_{x}X)=\{[e_{0}+w]:w\in W\}\) as \(\Pi_{1}\) is surjective. Thus the sink of the action equals \(\mathbb{P}W\) and hence \(f\) is an isomorphism onto \(\mathbb{P}V\).
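For \(r=2\) the model example (included only as an illustration) is a smooth quadric hypersurface. Take \(V=\mathbb{C}e_{0}\oplus W\oplus\mathbb{C}e_{2}\) with \(dim(W)=n\) and a non-degenerate \(q\in Sym^{2}W^{*}\), with \(\Pi_{1}\) an isomorphism identifying \(T_{x}X\) with \(W\) and \(\Pi_{2}(u,u^{\prime})=q(u,u^{\prime})e_{2}\). Then
\[f(u)=[e_{0}+u+q(u,u)e_{2}]\,,\qquad X=\overline{f(W)}=\{[v_{0}e_{0}+w+v_{2}e_{2}]\,:\,q(w,w)=v_{0}v_{2}\}\subset\mathbb{P}V\,,\]
and the \(\mathbb{C}^{*}\)-action \(t\cdot(v_{0},v_{1},v_{2})=(v_{0},tv_{1},t^{2}v_{2})\) is of Euler type at \(x=[e_{0}]\), while its inverse action is of Euler type at \(y=[e_{2}]\), in agreement with the smooth hyperquadric being an IHSS of tube type in Theorem 1.1.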
Now assume that \(x\) is chosen as a general point. The contraction homomorphism in Definition 2.5(2) defines a locally nilpotent action of \(T_{x}X\) on \(\oplus_{k\geq 0}Sym^{k}T_{x}^{*}X\). Then by Theorem 2.6, \(\mathbb{F}_{x}\) is a finite dimensional \(T_{x}X\)-invariant subspace. Dually it defines a nilpotent action of \(T_{x}X\) on \(H^{0}(X,\mathcal{L})^{\vee}\) through each \(\Pi_{k}\). More precisely, for each \(0\leqslant k\leqslant r\) and for any \(v\in T_{x}X\), denote \(\Gamma_{v}|_{H^{0}(X,\mathcal{L})_{w_{k}}^{\vee}}:H^{0}(X,\mathcal{L})_{w_{k}}^{\vee}\to H^{0}(X,\mathcal{L})_{w_{k+1}}^{\vee}\) by:
\[\Gamma_{v}\circ\Pi_{k}(v_{1},...,v_{k})=\Pi_{k+1}(v,v_{1},...,v_{k}),\ \ \forall k \geqslant 1\ \ \text{and}\ \ \Gamma_{v}(e_{0})=\Pi_{1}(v),\Gamma_{v}(H^{0}(X,\mathcal{L})_{w_{r}}^{\vee})=0. \tag{2.2}\]
Then \(f|_{T_{x}X}\) can be also written as: \(f(w)=\sum_{k=0}^{r}\Gamma_{w}^{k}(e_{0})\) for any \(w\in T_{x}X\).
In the following we denote \(V_{k}=H^{0}(X,\mathcal{L})_{w_{k}}^{\vee}\) for each \(k\). It can easily be checked that the induced linear map \(\Gamma:T_{x}X\to\mathfrak{gl}(V)\) is a homomorphism of Lie algebras. As \(x\) is a general point, \(X\subset\mathbb{P}V\) is Euler-symmetric by Proposition 2.4, where the vector group action on \(X\) is given by a linear representation \(\rho_{x}:T_{x}X\to GL(V)\) defined as follows (see [10, Proof of Theorem 3.7]):
\[\rho_{x}(u)(v)=\sum_{l=0}^{r}\sum_{k=0}^{l}\binom{l}{k}\Gamma_{u}^{l-k}(v_{k}), \tag{2.3}\]
for any \(u\in T_{x}X\) and \(v=\sum_{k=0}^{r}v_{k}\in V\), where \(v_{k}\in V_{k}\) for each \(k\).
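The following is a minimal numerical check of the formula (2.3) in the simplest non-trivial case, the quadric surface with \(r=2\) (an illustrative setup of our own, not taken from the paper). It verifies that \(\rho_{x}\) is additive in \(u\), that it translates the orbit map \(f\), and that the orbit closure is a quadric, which are the properties established in general around Eq. (2.3) and in Proposition 2.9 below.

```python
import numpy as np
from math import comb

# Toy model (illustrative assumption): the quadric surface Q^2 in P^3, for which
# r = 2 and V = V_0 + V_1 + V_2 with dims (1, 2, 1). Basis order: e0 (V_0),
# e1, e2 (V_1 = T_x X), e3 (V_2); q is the standard dot product on V_1.
r = 2
dims = [1, 2, 1]
n = sum(dims)                       # dim V = 4
P = []                              # projectors onto V_0, V_1, V_2
start = 0
for d in dims:
    Pk = np.zeros((n, n))
    Pk[start:start + d, start:start + d] = np.eye(d)
    P.append(Pk)
    start += d

def Gamma(u):
    """Nilpotent operator Gamma_u of (2.2): e0 -> u, w -> q(u, w) e3, e3 -> 0."""
    G = np.zeros((n, n))
    G[1:3, 0] = u                   # Gamma_u(e0) = u in V_1
    G[3, 1:3] = u                   # Gamma_u(w)  = q(u, w) e3 for w in V_1
    return G

def rho(u):
    """The representation (2.3): rho_x(u) = sum_l sum_k C(l,k) Gamma_u^{l-k} P_k."""
    R = np.zeros((n, n))
    for l in range(r + 1):
        for k in range(l + 1):
            R += comb(l, k) * np.linalg.matrix_power(Gamma(u), l - k) @ P[k]
    return R

def f(u):
    """Orbit map f(u) = sum_k Gamma_u^k(e0) = e0 + u + q(u,u) e3."""
    e0 = np.array([1.0, 0, 0, 0])
    return sum(np.linalg.matrix_power(Gamma(u), k) @ e0 for k in range(r + 1))

rng = np.random.default_rng(0)
u, v = rng.normal(size=2), rng.normal(size=2)

# rho_x is an action of the vector group T_x X = C^2 ...
assert np.allclose(rho(u) @ rho(v), rho(u + v))
# ... it translates the open orbit: rho_x(u) f(v) = f(u + v) ...
assert np.allclose(rho(u) @ f(v), f(u + v))
# ... and the orbit closure is the quadric {x1^2 + x2^2 = x0 x3}.
x = f(u)
assert np.isclose(x[1]**2 + x[2]**2, x[0] * x[3])
print("checks passed")
```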
**Proposition 2.9**.: _Let \(X,\mathcal{L},\rho_{x},f\) be as above, where \(x\) is chosen as a general point. Then:_
_(1) For each \(0\leqslant k\leqslant r\), \(w_{k}=-k\)._
_(2) For each \(k\geqslant 1\) and for any \(u_{1},...,u_{k}\in T_{x}X\), \(\Pi_{k}(u_{1},...,u_{k})=\Gamma_{u_{1}}\circ...\circ\Gamma_{u_{k}}(e_{0})\)._
_(3) The induced action of \(T_{x}X\) on \(\mathbb{P}V\) leaves \(X\) invariant and lifts the action of \(T_{x}X\) on \(C^{+}(x)\). Moreover denote by \(d\rho_{x}:T_{x}X\to\mathfrak{gl}(V)\) the differential map of \(\rho_{x}\), then \(d\rho_{x}|_{V_{k}}=(k+1)\,\Gamma|_{V_{k}}\) for each \(k\). In particular we have_
\[d\rho_{x}(u)\cdot V_{k}\subseteq V_{k+1},\]
_for any \(u\in T_{x}X\), any \(k\geqslant 0\) and \(d\rho_{x}(u)(e_{0})=\Pi_{1}(u)=\Gamma_{u}(e_{0})\)._
Proof.: As \(x\) is a general point, \(\mathbb{F}_{x}\) is a symbol system. Thus for any \(l\) such that \(\mathbb{F}_{x}^{l+1}\neq 0\) we have \(\mathbb{F}_{x}^{l}\neq 0\) as well. Then from Lemma 2.7 (3), we conclude that \(\mathbb{F}_{x}=\oplus_{k=0}^{r}\mathbb{F}_{x}^{k}\), \(w_{k}=-k\) and \(\eta(H^{0}(X,\mathcal{L})_{w_{k}})=\mathbb{F}_{x}^{k}\) for each \(k\). (2) is a direct consequence of our definition in (2.2). For (3), to show the first assertion, it suffices to check \(\rho_{x}(u^{\prime})\cdot f(u)=f(u+u^{\prime})\), for any \(u,u^{\prime}\in T_{x}X\). For any \(u,u^{\prime}\in T_{x}X\), we have:
\[f(u+u^{\prime})=[\sum_{k=0}^{r}\Gamma_{u+u^{\prime}}^{k}(e_{0})]=[\sum_{l=0}^{r} \sum_{k=0}^{l}\binom{l}{k}\Gamma_{u}^{k}\circ\Gamma_{u^{\prime}}^{l-k}(e_{0})] =[\rho_{x}(u^{\prime})(\sum_{k=0}^{r}\Gamma_{u}^{k}(e_{0}))]=\rho_{x}(u^{\prime}) \cdot f(u).\]
Now for any \(u\in T_{x}X\) and \(v=\sum_{k=0}^{r}v_{k}\in V\):
\[d\rho_{x}(u)(v)=\frac{d}{d_{z}}_{|z=0}(\rho_{x}(zu)(v))=\frac{d}{d_{z}}_{|z=0}( \sum_{l=0}^{r}\sum_{k=0}^{l}\binom{l}{k}z^{l-k}\Gamma_{u}^{l-k}(v_{k}))=\sum_{l= 1}^{r}l\cdot\Gamma_{u}(v_{l-1}).\]
In other words we have \(d\rho_{x}|_{V_{k}}=(k+1)\,\Gamma|_{V_{k}}\), for any \(k\geqslant 0\). In particular, \(d\rho_{x}.V_{k}\subseteq V_{k+1}\) and \(d\rho_{x}(u)(e_{0})=\Gamma_{u}(e_{0})=\Pi_{1}(u)\) for any \(u\in T_{x}X\)
Next we apply it to the case when both the sink and the source are isolated. Assume that for a general pair of points \(x,y\) on \(X\), there is a \(\mathbb{C}^{*}\)-action on \(X\) which is of Euler type at the source \(x\) and its inverse action is of Euler type at \(y\). Take the linearization of the inverse action on \(\mathcal{L}\) such that the decomposition of the associated weight subspaces equals \(H^{0}(X,\mathcal{L})=\oplus_{k=0}^{r}H^{0}(X,\mathcal{L})^{\prime}_{w^{\prime} _{k}}\), where \(0=w^{\prime}_{0}>...>w^{\prime}_{r}\). Applying Proposition 2.9, we have the following:
**Corollary 2.10**.: _Let \(X,\mathcal{L},x\) and \(y\) be as above, then:_
_(1) We have \(H^{0}(X,\mathcal{L})_{w_{k}}=H^{0}(X,\mathcal{L})^{\prime}_{w^{\prime}_{-k}}\) and \(w_{k}=-r-w^{\prime}_{r-k}\)._
_(2) We have \(V_{r}=H^{0}(X,\mathcal{L})^{\prime}_{w^{\prime}_{0}}=\{s\in H^{0}(X,\mathcal{L}):D_{+}(s)=C^{-}(y)\}\) and \(dim(H^{0}(X,\mathcal{L})^{\prime}_{w^{\prime}_{0}})=1\). Furthermore \(f(y)=[e_{r}]\) where \(e_{r}\) is a nonzero element in \(V_{r}\)._
_(3) There is a vector group action of \(T_{y}X\) on \(V\) namely \(\rho_{y}:T_{y}X\to GL(V)\), such that the induced action on \(\mathbb{P}V\) leaves \(X\) invariant and lifts the action of \(T_{y}X\) on \(C^{-}(y)\). Moreover for the differential map \(d\rho_{y}\) we have \(d\rho_{y}\cdot V_{0}=0\) and:_
\[d\rho_{y}(w)\cdot V_{k+1}\subseteq V_{k},\]
_for any \(w\in T_{y}X\) and for any \(k\geqslant 0\)._
Proof.: (1) follows directly from the definition of \(w^{\prime}_{k}\). (2) follows by (1), Lemma 2.7 (1) and Proposition 2.9 (3). For (3) we dually denote by \(V^{\prime}_{k}=(H^{0}(X,\mathcal{L})^{\prime}_{w^{\prime}_{k}})^{\vee}\) for each \(k\). Then by (1) we have \(V^{\prime}_{k}=V_{r-k}\). And from Proposition 2.9 (3) there is a vector group action of \(T_{y}X\) on \(V\), such that the induced action on \(X\) extends the action of \(T_{y}X\) on \(C^{-}(y)\). Moreover \(d\rho_{y}\cdot V^{\prime}_{k}\subseteq V^{\prime}_{k+1}\), equivalently we have \(d\rho_{y}\cdot V_{k+1}\subseteq V_{k}\) for any \(k\geqslant 0\) and \(d\rho_{y}\cdot V_{0}=0\).
## 3. Prolongations of projective subvarieties
In this section, we study prolongation of projective subvarieties in two steps following [12][13]. Firstly we study the prolongation of the VMRT of Euler-symmetric varieties at a general point. Then we calculate the dimension of \(\mathfrak{aut}(\hat{S})^{(1)}\) for certain \(S\subset\mathbb{P}V\) with nonzero prolongation, which was explicitly formulated in [12].
### 3.1. Prolongation of the VMRT
In this section we apply the results built in Section 2 to prove Theorem 1.4. Throughout the section, \(X\) is assumed to be a Fano manifold of Picard number one. Let us recall the definition of VMRT.
**Definition 3.1**.: _An irreducible component \(\mathcal{K}\) of the space Ratcurves\({}^{n}(X)\) of rational curves on \(X\) is called a minimal rational component if the subvariety \(\mathcal{K}_{x}\) of \(\mathcal{K}\) parameterizing curves passing through a general point \(x\in X\) is non-empty and proper. Curves parameterized by \(\mathcal{K}\) will be called minimal rational curves. Let \(\rho:\mathcal{U}\to\mathcal{K}\) be the universal family and \(\mu:\mathcal{U}\to X\) the evaluation map. The tangent map \(\tau:\mathcal{U}\dashrightarrow\mathbb{P}T(X)\) is defined by \(\tau(u)=T_{\mu(u)}(\mu(\rho^{-1}(\rho(u))))\). The closure \(\mathcal{C}\subset\mathbb{P}T(X)\) of its image is the total space of variety of minimal rational tangents. The natural projection \(\mathcal{C}\to X\) is a proper surjective morphism and a general fiber \(\mathcal{C}_{x}\subset\mathbb{P}T_{x}X\) is called the variety of minimal rational tangents (VMRT for short) at the point \(x\in X\)._
The VMRT of an Euler-symmetric variety at a general point is related to its fundamental forms as follows.
**Definition 3.2**.: _Assume that for a general point \(x\) on \(X\), there exists a \(\mathbb{C}^{*}\)-action on \(X\) which is of Euler type at \(x\). Take a very ample line bundle \(\mathcal{L}\) on \(X\) and denote by \(f:X\to\mathbb{P}H^{0}(X,\mathcal{L})^{\vee}\) the projective embedding. Denote by \(\mathbb{F}_{x}\) the fundamental forms of \(X\) at \(x\) and take \(\eta\) as in Lemma 2.7. For each \(k\geqslant 2\) denote:_
\[Bs(\mathbb{F}_{x}^{k})=\{[w]\in\mathbb{P}T_{x}X:\phi(w,...,w)=0,\forall\phi\in \mathbb{F}_{x}^{k}\subset Sym^{k}T_{x}^{*}X\}\]
_Then \(Bs(\mathbb{F}_{x}^{k})=\{[w]\in\mathbb{P}T_{x}X:\eta(s)(w)=0,\forall s\in H^{0}(X,\mathcal{L})_{w_{k}}\}\) and we have the inclusions: \(Bs(\mathbb{F}_{x}^{2})\subset Bs(\mathbb{F}_{x}^{3})\subset...\subset Bs(\mathbb{F}_{x}^{r})\) as \(\mathbb{F}_{x}\) is a symbol system. Denote the base locus of fundamental forms at \(x\) to be \(Bs(\mathbb{F}_{x})=Bs(\mathbb{F}_{x}^{l_{0}})\), where \(l_{0}\) is the smallest integer \(l\) such that \(Bs(\mathbb{F}_{x}^{l})\) is non-empty._
**Proposition 3.3**.: _[_12_, Prop 4.4 and Prop 5.4(iii)]_
_Let \(X,x,\mathcal{L}\) be as above. Let \(\mathcal{K}\) be the family of minimal rational curves on \(X\) and \(\mathcal{C}\subset\mathbb{P}T(X)\) the VMRT-structure on \(X\). Then \(\mathcal{C}_{x}=Bs(\mathbb{F}_{x})\subset\mathbb{P}T_{x}X\), which is an irreducible, nonsingular and non-degenerate projective subvariety._
Next we aim to study the prolongation of the infinitesimal linear automorphisms of \(\mathcal{C}_{x}\subseteq\mathbb{P}T_{x}X\), the most important example of which is the case of IHSS.
_Example 1_.: Let \(G\) be a simple algebraic group, \(B\subset G\) a Borel subgroup, and \(T\subset B\) a maximal torus. Denote by \(\Phi\) the root system, \(\Delta\subset\Phi\) the simple roots and \(\mathcal{D}\) the Dynkin diagram. Let \(\mathfrak{g}=Lie(G)\) be the Lie algebra of \(G\) and let \(\mathfrak{h}\subset\mathfrak{g}\) be the Cartan subalgebra. For a subset \(I\subset\Delta\), denote by \(P_{I}\) the standard parabolic subgroup indexed by \(I\), and denote by \(P_{I}^{-}\subset G\) the opposite group of \(P_{I}\). We write the quotient \(G/P_{I}\) as \(\mathcal{D}(I)\), which is a smooth projective rational variety of Picard number \(|I|\). In the following we always assume \(I=\{\alpha\}\) for a simple root \(\alpha\in\Delta\). For any root \(\beta\in\Phi\), the multiplicity of the simple root \(\alpha\) in \(\beta\) is denoted by \(m_{\alpha}(\beta)\). Denote by \(y=eP\) and \(x=\hat{w}_{0}P\), where \(w_{0}\) is the longest element in the Weyl group of \(G\).
Write \(\mathfrak{g}=\underset{k\in\mathbb{Z}}{\oplus}\mathfrak{g}_{k}\), where \(\mathfrak{g}_{k}=\underset{m_{\alpha}(\beta)=k}{\oplus}\mathfrak{g}_{\beta}\) for each \(k\in\mathbb{Z}\). Then \(X=G/P_{\alpha}\) is called an IHSS if \(\mathfrak{g}\) admits a short grading: \(\mathfrak{g}=\mathfrak{g}_{-1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{1}\), or equivalently \(R_{u}(P_{\alpha}^{-})\) and \(R_{u}(P_{\alpha})\) are vector groups (see for example Ar11). In this case, \(G/P_{\alpha}\) is an equivariant compactification of the vector group \(R_{u}(P_{\alpha}^{-})\), with origin \(y\). Define a \(\mathbb{C}^{*}\)-action on \(\mathcal{D}(I)\) by the cocharacter \(\sigma_{\alpha}:\mathbb{G}_{m}\to T\), such that \(\sigma_{\alpha}(\beta)=\delta_{\alpha,\beta},\forall\beta\in\Delta\). The \(\mathbb{C}^{*}\)-action is equalized at the sink \(y\) and \(C^{-}(y)=R_{u}(P^{-})\cdot y\). This shows that \(\mathcal{D}(I)\) is Euler-symmetric by Proposition 2.4. Denote by \(H\subset P_{\alpha}\) the Levi-part of \(P\), then \(y\) is \(H\)-fixed. And \(\mathfrak{g}_{0}=Lie(H),\mathfrak{g}_{1}=Lie(R_{u}(P_{\alpha}))\), \(\mathfrak{g}_{-1}=Lie(R_{u}(P_{\alpha}^{-}))\). Finally the prolongation of \(\mathfrak{aut}(\hat{C_{y}})\) can be interpreted as adjoint actions inside the simple grading algebra \(\mathfrak{g}\) as follows.
(1) \(T_{y}X=\mathfrak{g}/\mathfrak{p}\cong\mathfrak{g}_{-1}\) and \(\mathcal{C}_{y}\subset\mathbb{P}T_{y}X\) is the unique closed orbit of the isotropy action of \(H\) on \(\mathbb{P}T_{y}X\cong\mathbb{P}\mathfrak{g}_{-1}\).
(2) \(\mathfrak{aut}(\hat{C_{y}})\cong\mathfrak{g}_{0}\) is given by the adjoint action of \(\mathfrak{g}_{0}\) on \(\mathfrak{g}_{-1}\).
(3) \(\mathfrak{aut}(\hat{C_{x}})^{(1)}\cong\mathfrak{g}_{1}\) is given by the adjoint action:
\[\mathfrak{g}_{1} \to Hom(\mathfrak{g}_{-1},\mathfrak{g}_{0}),\qquad\alpha \mapsto(\beta\mapsto[\alpha,\beta]), \tag{3.1}\]
where the image lies in \(Sym^{2}T_{x}^{*}X\otimes T_{x}(X)\) as \(\mathfrak{g}_{-1}\) is abelian. We list IHSS and their VMRTs in the following table.
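As a concrete illustration of the map (3.1), consider the Grassmannian row of Table 1. For \(X=Gr(a,a+b)\), write the underlying vector space as \(A\oplus B\) with \(dim(A)=a\) and \(dim(B)=b\); with one standard choice of identifications (sign conventions may differ) one has
\[\mathfrak{g}_{-1}\cong Hom(A,B),\qquad\mathfrak{g}_{0}\cong\mathfrak{s}(\mathfrak{gl}(A)\oplus\mathfrak{gl}(B)),\qquad\mathfrak{g}_{1}\cong Hom(B,A),\]
and the VMRT is the Segre variety of rank-one homomorphisms in \(\mathbb{P}Hom(A,B)\). For \(\varphi\in\mathfrak{g}_{1}\) and \(\psi,\psi^{\prime}\in\mathfrak{g}_{-1}\), the block-matrix commutators are
\[[\varphi,\psi]=\varphi\psi\oplus(-\psi\varphi)\in\mathfrak{g}_{0},\qquad[[\varphi,\psi],\psi^{\prime}]=-(\psi\varphi\psi^{\prime}+\psi^{\prime}\varphi\psi)\in\mathfrak{g}_{-1},\]
which is symmetric in \(\psi,\psi^{\prime}\); thus each \(\varphi\in Hom(B,A)\) gives an element of \(Sym^{2}\mathfrak{g}_{-1}^{*}\otimes\mathfrak{g}_{-1}\), illustrating the isomorphism in (3) (compare also Lemma 3.11 below).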
Now back to the general situation, assume that for a general pair of points \(x,y\) on \(X\), there is a \(\mathbb{C}^{*}\)-action which is of Euler type at \(x\) and its inverse action is of Euler type at \(y\). Take a very ample line bundle \(\mathcal{L}\) on \(X\) and denote \(V=H^{0}(X,\mathcal{L})^{\vee}\). Let \(f:X\to\mathbb{P}V\) be the projective embedding. Recall that \(\rho_{x},\rho_{y}\) are the linear actions of \(T_{x}X\) and \(T_{y}X\) on \(V\) respectively. We denote by \(\mathfrak{g}_{1}=\mathrm{Im}(d\rho_{x})\subset\mathfrak{gl}(V)\) and \(\mathfrak{g}_{-1}=\mathrm{Im}(d\rho_{y})\subset\mathfrak{gl}(V)\). By definition \(\mathfrak{g}_{1},\mathfrak{g}_{-1}\subset\mathfrak{aut}(\hat{X})\), thus for any \(\alpha\in\mathfrak{g}_{1}\) and \(\beta\in\mathfrak{g}_{-1}\), \(\gamma:=[\alpha,\beta]\) lies in \(\mathfrak{aut}(\hat{X})\) as well. Denote by \(G_{\gamma}:=\{exp(z\gamma)\,|\,z\in\mathbb{C}\}\subset GL(V)\) the one-parameter subgroup. Then the action of \(G_{\gamma}\) on \(\mathbb{P}V\) leaves \(X\) invariant. Moreover we have:
**Lemma 3.4**.: \(x\) _and \(y\) are fixed by \(G_{\gamma}\). Denote by \(\Phi_{\gamma}:G_{\gamma}\to GL(T_{x}X)\) the induced isotropy action of \(G_{\gamma}\) on \(T_{x}X\), then for any \(w\in T_{x}X\) we have:_
\[\Pi_{1}((d\Phi_{\gamma})(\gamma)(w))=[\gamma,d\phi_{x}(w)]\cdot e_{0}\in V_{1}. \tag{3.2}\]
Proof.: By Proposition 2.9(iii) and Corollary 2.10(3), \(\alpha\cdot V_{k}\subset V_{k+1}\), and \(\beta\cdot V_{k+1}\subset V_{k}\) for each \(0\leqslant k\leqslant r-1\). Thus \(\gamma\cdot V_{k}\subset V_{k}\) for any \(0\leqslant k\leqslant r\), whence \(G_{\gamma}\cdot V_{k}\subset V_{k}\) for each \(k\). This particularly shows that \(x\) and \(y\) are fixed by \(G_{\gamma}\) as \(x=[e_{0}],y=[e_{r}]\) and \(V_{0},V_{r}\) are both of dimension one. It also implies that the action of \(G_{\gamma}\) commutes with the \(\mathbb{C}^{*}\)-action. Now for any nonzero tangent vector \(w\in T_{x}X\), under the identification \(C^{+}(x)\cong T_{x}X\), consider the holomorphic arc \(\theta_{w}\) on X through \(x\):
\[\theta_{w}:\mathbb{C} \to X\subset\mathbb{P}V\] \[z \to f(zw)=[\sum_{k=0}^{r}z^{k}\Gamma_{w}^{k}(e_{0})]\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline IHSS \(X=G/P\) & \(\mathbb{Q}^{n}\) & \(Gr(a,a+b)\) & \(\mathbb{S}_{n}\) & \(Lag(n,2n)\) & \(E_{6}/P_{1}\) & \(E_{7}/P_{7}\) \\ \hline VMRT \(\mathcal{C}_{x}\) & \(\mathbb{Q}^{n-2}\) & \(\mathbb{P}^{a-1}\times\mathbb{P}^{b-1}\) & \(Gr(2,n)\) & \(\mathbb{P}^{n-1}\) & \(\mathbb{S}_{5}\) & \(E_{6}/P_{1}\) \\ \hline \(\mathcal{C}_{x}\subset\mathbb{P}T_{x}(X)\) & Hyperquadric & Segre & \(\mathrm{Pl}\ddot{u}\mathrm{cker}\) & second Veronese & Spinor & Severi \\ \hline \end{tabular}
\end{table}
Table 1. IHSS and their VMRTs
Denote the closure of its image to be \(C_{w}\). Then \(C_{w}\) is a non-trivial \(\mathbb{C}^{*}\)-orbit closure with source \(x\). As the action of \(G_{\gamma}\) commutes with the \(\mathbb{C}^{*}\)-action, \(g\cdot C_{w}\) is also a non-trivial \(\mathbb{C}^{*}\)-orbit closure through \(g\cdot x=x\) for any \(g\in G_{\gamma}\). This implies that \(g\cdot f(w)\in C^{+}(x)\) for any \(w\); in other words we have \(g\cdot C^{+}(x)\subset C^{+}(x)\).
Now we calculate the action of \(G_{\gamma}\) on \(T_{x}X\) via \(T_{x}X\cong C^{+}(x)\). Write \(\gamma\cdot e_{0}=ce_{0}\) for some \(c\in\mathbb{C}\), then for any \(g_{t}=exp(t\gamma)\) we have:
\[f(g_{t}\cdot w) =[\sum_{k=0}^{r}\Gamma_{g_{t}\cdot w}^{k}(e_{0})]=[e_{0}+\Gamma_{ g_{t}\cdot w}^{1}(e_{0})+\sum_{k=2}^{r}\Gamma_{g_{t}\cdot w}^{k}(e_{0})],\] \[g_{t}\cdot f(w) =g_{t}\cdot[\sum_{k=0}^{r}\Gamma_{w}^{k}(e_{0})]=[e_{0}+e^{-tc}g_ {t}\cdot\Gamma_{w}^{1}(e_{0})+\sum_{k=2}^{r}e^{-tc}g_{t}\cdot\Gamma_{w}^{k}(e _{0})].\]
Thus from \(f(g_{t}\cdot w)=g_{t}\cdot f(w)\) we conclude that:
\[\Pi_{1}(g_{t}\cdot w)=\Gamma_{g_{t}\cdot w}^{1}(e_{0})=exp(t\,\gamma|_{V_{1}}- tc\,\operatorname{Id}|_{V_{1}})\cdot\Pi_{1}(w), \tag{3.3}\]
from which we see that the action of \(G_{\gamma}\) on \(T_{x}X\) via \(T_{x}X\cong C^{+}(x)\) is linear. Then as the differential map of the isomorphism \(T_{x}X\cong C^{+}(x)\) at \(0\in T_{x}X\) equals \(\operatorname{Id}\nolimits_{|T_{x}X}\), we conclude that \(\Phi_{\gamma}(g)(w)=g\cdot w\). Thus:
\[\Pi_{1}(d\Phi_{\gamma}(\gamma)(w)) =\frac{d}{dt}_{|t=0}(\Pi_{1}(g_{t}\cdot w))=\frac{d}{dt}_{|t=0}(e ^{-tc}e^{t\gamma}\cdot\Pi_{1}(w))\] \[=-c\,\Pi_{1}(w)+\gamma.\Pi_{1}(w)=[\gamma,d\phi_{x}(w)].e_{0}\]
from \(\Pi_{1}(w)=\Gamma_{w}(e_{0})=d\phi_{x}(w)\cdot e_{0}\).
As \(G_{\gamma}\) fixes \(x\), it acts on the family of minimal rational curves through \(x\). Thus the image of \(\Phi_{\gamma}\) is contained in \(Aut^{0}(\hat{\mathcal{C}}_{x})\) and \(d\Phi_{\gamma}(\gamma)\in\mathfrak{aut}(\hat{\mathcal{C}}_{x})\). Then under the identification \(\Pi_{1}\), we can rewrite Lemma 3.4 as follows, which is a generalization of the map (3.1).
**Corollary 3.5**.: _Consider the linear map \(\lambda:\mathfrak{g}_{-1}\to Sym^{2}(V_{1}^{*})\otimes V_{1}\) given by:_
\[\lambda(d\phi_{y}(\beta)):\quad\quad V_{1}\times V_{1} \longrightarrow V_{1}\] \[(\Pi_{1}(\alpha),\Pi_{1}(\xi)) \longrightarrow[[d\phi_{y}(\beta),d\phi_{x}(\alpha)],d\phi_{x}(\xi) ].e_{0}, \tag{3.4}\]
_for any \(\beta\in T_{y}X\) and for any \(\alpha,\xi\in T_{x}X\). Then \(\text{Im}(\lambda)\subset\mathfrak{aut}(\hat{\mathcal{C}}_{x})^{(1)}\), under the identification \(\Pi_{1}\)._
Proof.: It suffices to check that \(\lambda(d\phi_{y}(\beta))\) is symmetric, which follows from the fact that \(\mathfrak{g}_{1}\) is abelian.
Now Theorem 1.4 is a corollary of the following proposition.
**Proposition 3.6**.: \(\lambda\) _induces an isomorphism from \(\mathfrak{g}_{-1}\) onto \(\mathfrak{aut}(\hat{\mathcal{C}}_{x})^{(1)}\)._
Proof.: By [13, Theorem 1.1.3] there is a natural inclusion: \(\mathfrak{aut}(\hat{\mathcal{C}}_{x})^{(1)}\hookrightarrow(T_{x}X)^{\vee}\), whence it suffices to show that \(\lambda\) is injective.
Assume otherwise that \(\lambda(d\phi_{y}(\beta))=0\) for some nonzero \(\beta\in T_{y}X\). For any \(\alpha\in T_{x}X\), denote by \(\gamma_{\alpha}=[d\phi_{y}(\beta),d\phi_{x}(\alpha)]\). We now show that \(\gamma_{\alpha}=l(\alpha)\operatorname{Id}_{V}\) for some \(l\in(T_{x}X)^{\vee}\). To do this it suffices to show that \(G_{\gamma_{\alpha}}\) acts on \(V\) by scalar multiplications. By Lemma 3.4 we have \(d\Phi_{\gamma_{\alpha}}(\gamma_{\alpha})=0\). From (3.3) this implies that \(G_{\gamma_{\alpha}}\) acts trivially on \(C^{+}(x)\), and consequently on \(X\) as \(C^{+}(x)\) is open dense in \(X\). Thus for any nonzero vector \(v\in\hat{X}\), \(v\) is an eigenvector of the linear action of \(G_{\gamma_{\alpha}}\) on \(V\). For any \(g\in G_{\gamma_{\alpha}}\) denote by \(\{V_{g(c)}:c\in J_{g}\}\) the eigenspaces of the action of \(g\) on \(V\); then \(X\subset\cup_{c\in J_{g}}\mathbb{P}V_{g(c)}\). As \(X\) is non-degenerate and irreducible, there exists some \(c\in J_{g}\) such that \(V_{g(c)}=V\), i.e., \(g\) acts on \(V\) by scalar multiplications.
Now denote by \(U_{\beta}=\{exp(d\phi_{y}(t\beta)):t\in\mathbb{C}\}\subset GL(V)\) the \(1\)-dimensional vector subgroup of \(\text{Im}(\phi_{y})\) and \(V_{l}=\{exp(d\phi_{x}(\alpha)):l(\alpha)=0\}\) the vector subgroup of \(\text{Im}(\phi_{x})\). Then by Corollary 2.10, \(x\) is fixed by \(U_{\beta}\) as \(x=[e_{0}]\) and \(d\phi_{y}(\beta)\cdot e_{0}=0\). Thus \(V_{l}\cdot x\) is also fixed by \(U_{\beta}\), as the actions of \(U_{\beta}\) and \(V_{l}\) on \(X\) commute by our definition of \(l\). On the other hand as \(\beta\neq 0\), \(U_{\beta}\) acts freely on \(C^{-}(y)\cong T_{y}X\). This implies that \(V_{l}\cdot x\subset C^{+}(x)\backslash C^{-}(y)\). We prove that this induces a contradiction:
If \(l=0\) then \(V_{l}=Im(\rho_{x})\) and \(V_{l}\cdot x=C^{+}(x)\). But \(C^{+}(x)\cap C^{-}(y)\) is non-empty as they are both open dense in \(X\).
If \(l\neq 0\) then \(V_{l}\cdot x\) is a hyperplane in \(C^{+}(x)\cong T_{x}X\). By the proof of Lemma 2.7, \(D_{y}:=X\backslash C^{-}(y)\) is an irreducible divisor of \(X\). So as an open subset of \(D_{y}\), \(C^{+}(x)\backslash C^{-}(y)\) is an irreducible divisor of \(C^{+}(x)\). This implies that \(C^{+}(x)\backslash C^{-}(y)\) equals the hyperplane \(V_{l}\cdot x\). On the other hand by Corollary 2.10 we can write \(C^{+}(x)\backslash C^{-}(y)=\{w\in T_{x}X:\eta(s_{r})(w)=0\}\) where \(s_{r}\) is a nonzero section in \(H^{0}(X,\mathcal{L})_{w_{r}}\). Thus
\(Bs(\mathbb{F}_{x}^{r})=\{[w]\in\mathbb{P}T_{x}X:\eta(s_{r})(w)=0\}\) is a hyperplane in \(\mathbb{P}T_{x}X\). But then \(\mathcal{C}_{x}=Bs(\mathbb{F}_{x})\subset Bs(\mathbb{F}_{x}^{r})\) is linearly degenerate in \(\mathbb{P}T_{x}X\), contradicting Proposition 3.3.
### 3.2. Projective subvarieties with nonzero prolongations
Let us recall the classification result of projective subvarieties with nonzero prolongations by Fu and Hwang as follows.
**Theorem 3.7**.: _[_10_, Main Theorem and Theorem 7.13]_ _Let \(S\subset\mathbb{P}V\) be an irreducible nonsingular nondegenerate variety such that \(\mathfrak{aut}(\hat{S})^{(1)}\neq 0\). Then \(S\subset\mathbb{P}V\) is projectively equivalent to one of the following:_
_(1) The VMRT of an IHSS._
_(2) The VMRT of a symplectic Grassmannian._
_(3) A nonsingular linear section of \(Gr(2,5)\subset\mathbb{P}^{9}\) of codimension \(\leqslant 2\)._
_(4) A nonsingular \(\mathbb{P}^{4}\)-general linear section of \(\mathbb{S}_{5}\subset\mathbb{P}^{15}\) of codimension \(\leqslant 3\)._
_(5) Biregular projections of (1) and (2) with nonzero prolongations, which are completely described in Section 4 of [10]._
_Remark 3.8_.: As noted in [10, Proposition 2.11], all nonsingular sections of \(Gr(2,5)\subset\mathbb{P}^{9}\) with codimension \(s\leqslant 3\) are projectively equivalent.
The main result of this subsection is the following result based on Theorem 3.7.
**Proposition 3.9**.: _Let \(S\subset\mathbb{P}V\) be one of the projective subvarieties in Theorem 3.7 (2)(3)(4)(5), then:_
\[dim(\mathfrak{aut}(\hat{S})^{(1)})<dim(V). \tag{3.5}\]
We will prove this proposition case by case based on Theorem 3.7.
#### 3.2.1. Case (2) and (5)
In these cases, the prolongation of \(\mathfrak{a}ut(\hat{S})\) was explicitly formulated in [10]. First we consider Case (2) and the case of biregular projections of (2):
**Lemma 3.10**.: _Let \(W\) and \(Q\) be vector spaces of dimensions \(k\geqslant 2\) and \(m\) respectively. Set \(L=Sym^{2}(Q)\subset V=Sym^{2}(W\oplus Q)\) and \(U=V/L\). For \(\phi\in Sym^{2}(W\oplus Q)\) denote by \(\phi^{\#}\in Hom(W^{\vee}\oplus Q^{\vee},W\oplus Q)\) the corresponding homomorphism via the natural inclusion \(Sym^{2}(W\oplus Q)\subset Hom(W^{\vee}\oplus Q^{\vee},W\oplus Q)\). For \(L_{2}\subset U\), let \(\text{Im}(L_{2})\) be the linear span of \(\{\text{Im}(\phi^{\#}):\widetilde{\phi}\in L_{2}\}\). Define \(\text{Im}_{W}(L_{2})=P_{Q}(Im(L_{2}))\subset W\), where \(P_{Q}:W\oplus Q\to W\) is the projection to the first factor, then:_
_(i) Denote by \(p_{L}:\mathbb{P}V\dashrightarrow\mathbb{P}(V/L)\) the projection from \(\mathbb{P}L\). Let \(v_{2}:\mathbb{P}(W\oplus Q)\rightarrow\mathbb{P}(Sym^{2}(W\oplus Q))\) be the second Veronese embedding, \(Z\) the proper image of \(Im(v_{2})\). Then \(Z\subset\mathbb{P}V/L=\mathbb{P}U\) is isomorphic to the VMRT of the symplectic Grassmannian \(Gr_{w}(k,\Sigma)\) at a general point and \(\mathfrak{aut}(\hat{Z})^{(1)}\cong Sym^{2}(W^{\vee})\)._
_(ii) If \(Z\cap\mathbb{P}L_{2}=\emptyset\), then \(\mathfrak{a}ut(\widehat{p_{L_{2}}(Z)})^{(1)}\cong Sym^{2}(W/Im_{W}(L_{2}))^{ \vee}\)._
_(iii) \(dim(\mathfrak{a}ut(\hat{Z})^{(1)})<dim(V/L)\). Let \(L_{2}\subset U\) be as in (ii), if \(\mathfrak{a}ut(\widehat{p_{L_{2}}(Z)})^{(1)}\neq 0\), then:_
\[dim(\mathfrak{a}ut(\widehat{p_{L_{2}}(Z)})^{(1)})<dim(U/L_{2}).\]
Proof.: (i) and (ii) are from [10, Proposition 4.18].
Under the identification: \(V=Sym^{2}(W\oplus Q)\subset Hom(W^{\vee}\oplus Q^{\vee},W\oplus Q)\), we write \(U=V/L=Sym^{2}(W)\oplus Hom(Q^{\vee},W)\), where we identify \(Sym^{2}(W)\) inside \(Hom(W^{\vee},W)\). Thus \(dim(V/L)>dim(\mathfrak{a}ut(\widehat{Z})^{(1)})\) as \(Hom(Q^{\vee},W)\neq 0\). Take a basis of \(Im_{W}(L_{2})\) to be \(e_{1},...,e_{t}\) and extend it to a basis of \(W\): \(e_{1},...,e_{t},e_{t+1},...,e_{k}\). Then by the definition of \(Im_{W}(L_{2})\), we have:
\[L_{2}\subset \{(\phi,\eta)\in Sym^{2}(W)\oplus Hom(Q^{\vee},W)\,|\,Im(\phi^{ \#})\subset Im_{W}(L_{2})\ and\ Im(\eta)\subset Im_{W}(L_{2})\}\] \[\cong Sym^{2}(Im_{W}(L_{2}))\oplus Hom(Q^{\vee},Im_{W}(L_{2})),\]
whence \(dim(L_{2})\leqslant\frac{t(t+1)}{2}+mt\). Then:
\[dim(U/L_{2})-dim(\mathfrak{a}ut(\widehat{p_{L_{2}}(Z)})^{(1)}) \geqslant\frac{k(k+1)}{2}+km-\frac{t(t+1)}{2}-tm-\frac{(k-t)(k-t+1)} {2}\] \[=(m+t)(k-t)>0,\]
where \(k-t>0\) as \(\mathfrak{a}ut(\widehat{p_{L_{2}}(Z)})^{(1)}\neq 0\).
By [10, Main Theorem (C)], the other cases of (5) are biregular projections of the VMRT of \(Gr(a,a+b),\mathbb{S}_{n}\) and \(Lag(n,2n)\) respectively. We prove these cases by the following three lemmas.
**Lemma 3.11**.: _Let \(A\) and \(B\) be vector spaces with \(a=dim(A)\geqslant b=dim(B)\geqslant 3\). Let \(V=Hom(A,B)\). For a subspace \(L\subset V\), set \(\text{Im}(L)=\{\text{Im}(\phi)\subset B:\phi\in L\}\). \(Ker(L)=\bigcap\limits_{\phi\in L}Ker(\phi)\). Then:_
_(i) \(S=\{[\phi]\in\mathbb{P}V:rank(\phi)\leqslant 1\}\subset\mathbb{P}V\) is projectively isomorphic to the VMRT of \(Gr(a,a+b)\)._
_(ii) Let \(L\subset V\) such that \(L\cap Sec(S)=\emptyset\), then \(\mathfrak{aut}(\widetilde{p_{L}}(S))^{(1)}\cong Hom(B/Im(L),Ker(L))\)._
_(iii) Let \(L\subset V\) be as in (ii), if \(\mathfrak{aut}(\widetilde{p_{L}}(S))^{(1)}\neq 0\) then \(dim(\mathfrak{aut}(\widetilde{p_{L}}(S))^{(1)})<dim(V/L)\)._
Proof.: (i) and (ii) are from [13, Proposition 4.10]. For (iii), denote \(s=dim(Ker(L))\) and \(t=dim(Im(L))\), by the definition we have
\[L\subset\{\phi\in Hom(A,B):\phi|_{Ker(L)}=0,Im(\phi)\subset Im(L)\}=Hom(A/Ker (L),Im(L)),\]
implying \(dim(L)\leqslant(a-s)t\). Thus
\[dim(V/L)-dim(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)})=ab-dim(L)-(b-t)s\geqslant ab-(a-s)t-(b-t)s=(a-s)(b-t)+st>0,\]
where \(s<a\) and \(t<b\) as \(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)}\neq 0\).
**Lemma 3.12**.: _Let \(W\) be a vector space of dimension \(n\geqslant 6\). \(V=\wedge^{2}W\). For each \(\phi\in\wedge^{2}W\), denote by \(\phi^{\#}\in Hom(W^{\vee},W)\) via the inclusion \(\wedge^{2}W\subset W\otimes W=Hom(W^{\vee},W)\). For a subspace \(L\subset V\), define \(Im(L)\subset W\) as the linear span of \(\{Im(\phi^{\#})\subset W,\phi\in L\}\). Then:_
_(i) \(S=\{[\phi]\in V:rk(\phi)\leqslant 2\}\subset\mathbb{P}V\) is isomorphic to the VMRT of \(\mathbb{S}_{n}\)._
_(ii) If \(L\subset V\) such that \(\mathbb{P}L\cap Sec(S)=\emptyset\), then \(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)}\cong\wedge^{2}(W/Im(L))^{\vee}\)._
_(iii) Let \(L\subset V\) be as in (ii), if \(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)}\neq 0\), then \(dim(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)})<dim(V/L)\)._
Proof.: (i) and (ii) are from [13, Proposition 4.11]. For (iii) take a basis of \(Im(L)\) to be \(e_{1},...,e_{t}\) and extend it to a basis of \(W\): \(e_{1},...,e_{t},e_{t+1},...,e_{n}\). Denote the dual basis of \(W^{\vee}\) to be \(f_{1},...,f_{n}\) such that \(f_{i}(e_{j})=\delta_{i,j}\) for any \(1\leqslant i,j\leqslant n\). Identify \(Hom(W^{\vee},W)\) with \(M_{n\times n}(\mathbb{C})\) through:
\[Hom(W^{\vee},W) \longrightarrow M_{n\times n}(\mathbb{C})\] \[\mathcal{A} \longrightarrow A=(a_{ij}:1\leqslant i,j\leqslant n) \tag{3.6}\]
such that \(\mathcal{A}(f_{i})=\sum\limits_{j=1}^{n}a_{ij}e_{j}\). Then \(V\) corresponds to all skew-symmetric matrices. Now
\[L\subset\{\mathcal{A}\in V:Im(\mathcal{A})\subset Im(L)\}=\{\mathcal{A}\in V:a_{ij}=0\,\,\,\text{if}\,\,\,i\geqslant t+1\,\,\,\text{or}\,\,\,j\geqslant t+1\},\]
thus \(dim(L)\leqslant\dfrac{t(t-1)}{2}\). Then:
\[dim(V/L)-dim(\mathfrak{aut}(\widehat{p_{L}(S)})^{(1)}) =\dfrac{n(n-1)}{2}-dim(L)-\dfrac{(n-t)(n-t-1)}{2}\] \[\geqslant\dfrac{n(n-1)}{2}-\dfrac{t(t-1)}{2}-\dfrac{(n-t)(n-t-1) }{2}=t(n-t)>0,\]
where \(t>0\) as \(L\neq 0\) and \(t<n\) as \(\mathfrak{aut}(\widehat{p_{L}(S)})^{(1)}\neq 0\).
**Lemma 3.13**.: _Let \(W\) be a vector space of dimension \(n\geqslant 3\). \(V=Sym^{2}W\). For each \(\phi\in Sym^{2}W\), denote by \(\phi^{\#}\in Hom(W^{\vee},W)\) its image via the inclusion \(Sym^{2}W\subset W\otimes W=Hom(W^{\vee},W)\). For a subspace \(L\subset V\), define \(Im(L)\subset W\) as the linear span of \(\{Im(\phi^{\#})\subset W,\phi\in L\}\). Then:_
_(i) \(S=\{[\phi]\in V:rk(\phi)\leqslant 1\}\subset\mathbb{P}V\) is isomorphic to the VMRT of \(Lag(n,2n)\)._
_(ii) If \(L\subset V\) such that \(\mathbb{P}L\cap Sec(S)=\emptyset\), then \(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)}\cong Sym^{2}(W/Im(L))^{\vee}\)._
_(iii) Let \(L\subset V\) be as in (ii), if \(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)}\neq 0\), then \(dim(\mathfrak{aut}(\widetilde{p_{L}(S)})^{(1)})<dim(V/L)\)._
Proof.: (i) and (ii) are from [13, Proposition 4.12]. For (iii), as in Lemma 3.12 we take a basis of \(Im(L)\) to be \(e_{1},...,e_{r}\) and extend it to a basis of \(W\): \(e_{1},...,e_{r},e_{r+1},...,e_{n}\). Denote the dual basis by \(f_{1},...,f_{n}\). Keeping the identification (3.6), \(V\) corresponds to all symmetric matrices and we have
\[L\subset\{\mathcal{A}\in V:Im(\mathcal{A})\subset Im(L)\}=\{\mathcal{A}\in V:a _{ij}=0\,\,\,\text{if}\,\,\,\,i\geqslant r+1\,\,\,\text{or}\,\,\,j\geqslant r +1\},\]
implying \(dim(L)\leqslant\frac{r(r+1)}{2}\) and thus
\[dim(V/L)-dim(\mathfrak{aut}(\widehat{p_{L}(S)})^{(1)}) =\frac{n(n+1)}{2}-dim(L)-\frac{(n-r)(n-r+1)}{2}\] \[\geqslant\frac{n(n+1)}{2}-\frac{r(r+1)}{2}-\frac{(n-r)(n-r+1)}{2}=r (n-r)>0,\]
where \(r>0\) as \(L\neq 0\) and \(r<n\) as \(\mathfrak{aut}(\widehat{p_{L}(S)})^{(1)}\neq 0\).
#### 3.2.2. Case (3)
Let \(X=Gr(2,5)\subset\mathbb{P}^{9}\). For each \(k=1,2\), denote by \(X_{k}\subset\mathbb{P}^{9-k}\) the nonsingular linear section of codimension \(k\). Then Case \(X_{1}\) follows from [14, Section 3.4] and Case \(X_{2}\) follows from [14, Lemma 4.6].
#### 3.2.3. Case (4)
Let \(S=\mathbb{S}_{5}\subset\mathbb{P}^{15}\). For each \(k=1,2,3\), denote by \(S_{k}\subset\mathbb{P}^{15-k}\) the nonsingular \(\mathbb{P}^{4}\)-general linear section of codimension \(k\) as described in [14, Proposition 2.12].
(i) Case \(S_{1}\) follows from [14, Section 3.3].
(ii) By [14, Proposition 7.6], \(S_{3}\) is quadratically symmetric. The VMRT of \(S_{3}\) at a general point is a nonsingular linear section of \(Gr(2,5)\subset\mathbb{P}^{9}\) of codimension \(3\), which has zero prolongations by Theorem 3.7. Then by the proof of [14, Theorem 6.15] we conclude that \(\mathfrak{aut}(\hat{S_{3}})\cong\mathbb{C}\).
(iii) To prove Case \(S_{2}\) we recall the following characterization of \(S_{2}\) proved by Kuznetsov.
**Theorem 3.14**.: _[_14_, Proposition 6.1 and Lemma 6.7]_ _Let \(S_{K}\subset S\) be a nonsingular linear section of \(S\) of codimension 2, then the followings are equivalent:_
_(a) \(S_{K}\) is projectively equivalent to \(S_{2}\);_
_(b) The Hilbert space \(F_{4}(S_{K})\) of linear 4-spaces on \(S_{K}\) is non-empty;_
_(c) There exists a line \(L\) in \(S_{K}\) such that_
\[\mathcal{N}_{L/S_{K}}\cong\mathcal{O}_{L}(-2)\oplus\mathcal{O}_{L}(1)^{\oplus 6} \tag{3.7}\]
_Moreover, such a line is unique and is equal to the intersection of all 4-spaces on \(S_{K}\)._
Now assume otherwise that \(dim(\mathfrak{aut}(\hat{S_{2}})^{(1)})=dim(V)\). Take \(L\) as the line defined in Theorem 3.14. For any point \(x=[\hat{x}]\in L\), take a linear function \(l\in V^{\vee}\) such that \(l(\hat{x})\neq 0\). Then by [14, Proposition 2.3.1] there exists a unique \(\mathcal{A}\in\mathfrak{aut}(\hat{S_{2}})^{(1)}\) such that \(\mathcal{A}_{\alpha,\alpha}=l(\alpha)\alpha\) for any \(\alpha\in\hat{S_{2}}\). Moreover denote by \(P_{\alpha}=T_{\alpha}(\hat{S_{2}})\) the tangent space of \(\hat{S_{2}}\) at \(\alpha\), then we have:
\[2\mathcal{A}_{\alpha,\beta}=l(\alpha)\beta+l(\beta)\alpha, \tag{3.8}\]
for any \(\alpha\in\hat{S_{2}}\) and any \(\beta\in P_{\alpha}\). Denote by \(s\) the semisimple part of \(\mathcal{A}_{\hat{x}}\). By the proof of [14, Theorem 1.1.3], the one parameter subgroup \(\{exp(2ts):t\in\mathbb{C}\}\) induces a \(\mathbb{C}^{*}\)-action on \(S_{2}\) which is of Euler type at \(x\). We shall deduce the contradiction from the \(\mathbb{C}^{*}\)-action.
First we claim that there are exactly three different weight subspaces of the \(\mathbb{C}^{*}\)-action on \(V\). In fact by \(\mathcal{A}_{\hat{x}}(V)\subset P_{\hat{x}}\) and by (3.8), the linear action of \(\mathbb{C}^{*}\) on \(V\) has at most three different weight subspaces. On the other hand denote by \(\mathcal{L}=\mathcal{O}_{\mathbb{P}V}(1)|_{S_{2}}\). As \(S_{2}\) is linear normal in \(\mathbb{P}V\), \((S_{2},\mathcal{L},x)\) satisfies the conditions in Lemma 2.7. Thus by Corollary 2.8 it has exactly three weighted subspaces. Under the setting of Section 2 we have \(r=2\). Denote by \(W=H^{0}(S_{2},\mathcal{L})^{\vee}_{-1}\), \(U=H^{0}(S_{2},\mathcal{L})^{\vee}_{w_{2}}\) and \(f:S_{2}\rightarrow\mathbb{P}V\) the projective embedding. Then we have: \(dim(V)=14\), \(dim(W)=dim(T_{x}S_{2})=8\) and \(dim(U)=dim(V)-1-dim(W)=5\). We now check that the \(\mathbb{C}^{*}\)-action on \(S_{2}\) satisfies the following two properties.
(a) There are exactly three irreducible components of \(X^{\mathbb{C}^{*}}\): the isolated source \(\{x\}\), the unique component \(Y_{1}\) contained in \(\mathbb{P}W\) and the unique component \(Y_{2}\) contained in \(\mathbb{P}U\). First note that for any \(Y\in\mathcal{Y}\), \(Y\subset\mathbb{P}V_{k}\) for some k. If \(Y\subset\mathbb{P}W\), then a \(\mathbb{C}^{*}\)-orbit whose sink lies in \(Y\) has its source equal to \(x\). Thus by Proposition 2.3 such \(Y\) is unique, and \(C^{-}(Y)\) is a line bundle over \(Y\). If \(Y\subset\mathbb{P}U\) then we see that \(v^{+}(Y)=0\) whence \(Y\) is the unique sink of the \(\mathbb{C}^{*}\)-action.
(b) We have \(dim(Y_{1})>0\). Otherwise assume that \(Y_{1}=\{y\}\) is a single point. If \(dim(Y_{2})=0\) then by Proposition 2.3 and (a), we have \(v^{+}(Y_{1})=v^{-}(Y_{1})=1\) and thus \(dim(S_{2})=dim(T_{y}S_{2})=v^{+}(Y_{1})+v^{-}(Y_{1})=2<8\), which is a contradiction. If \(dim(Y_{2})>0\) then we have \(D_{x}=S_{2}\backslash C^{+}(x)=C^{+}(y)\cup Y_{2}\). This implies that \(Y_{2}\), as a divisor of \(D_{x}\), is of dimension 6, contradicting the fact that \(Y_{2}\subset\mathbb{P}U\cong\mathbb{P}^{4}\).
Now as \(L\) is the intersection of all 4-spaces in \(S_{2}\), it is \(\mathbb{C}^{*}\)-invariant. Moreover as \(x\in L\) and the action is of Euler type at \(x\), we conclude that \(L\) is a non-trivial \(\mathbb{C}^{*}\)-orbit closure with source \(x=[e_{0}]\). Denote the orbit to be \(\mathbb{C}^{*}\cdot f(w)=\{[e_{0}+z\Pi_{1}(w)+z^{-w_{2}}\Pi_{2}(w,w)]:z\in\mathbb{C}^{*}\}\) for some nonzero \(w\in T_{x}S_{2}\) and denote by \(y\) the sink of the orbit. Then we must have \(\Pi_{2}(w,w)=0\) and \(y\in Y_{1}\). Otherwise the sink of
the orbit would be \([\Pi_{2}(w,w)]\in\mathbb{P}U\). Then the line \(L\) will be contained in \(\mathbb{P}(\mathbb{C}e_{0}\oplus U)\), contradicting the fact that \(\Pi_{1}(w)\neq 0\) as \(\Pi_{1}\) is injective by Lemma 2.7 (3). Now take any point \(y^{\prime}\in Y_{1}\), denote by \(L_{y^{\prime}}\) the unique non-trival \(\mathbb{C}^{*}\)-orbit closure with sink \(y^{\prime}\) and source \(x\). Then \(L_{y^{\prime}}\) is exactly the line connecting \(x\) and \(y^{\prime}\). By [13, Lemma 2.16] and by the proof of [13, Proposition 2.17], the splitting type of \(T_{S_{2}}|_{L_{y^{\prime}}}\) is determined by the weights of the isotropy action of \(\mathbb{C}^{*}\) on \(T_{y^{\prime}}S_{2}\). As \(Y_{1}\) is irreducible, the weights of \(\mathbb{C}^{*}\) on \(T_{y^{\prime}}S_{2}\) remain invariant as \(y^{\prime}\) varies in \(Y_{1}\). This implies that the splitting type of \(T_{S_{2}}|_{L_{y^{\prime}}}\) also remains invariant, contradicting the uniqueness of \(L\) as \(dim(Y_{1})>0\). Thus we conclude that \(dim(\mathfrak{a}ut(\hat{S_{2}})^{(1)})<dim(V)\). This completes the proof of Proposition 3.9.
## 4. Proof of main result
In this section we will prove Theorem 1.2. Let us first recall the Cartan-Fubini type extension theorem proved by Hwang and Mok [12]. We will use the following version taken from [11, Theorem 6.8].
**Theorem 4.1**.: _Let \(X_{1}\) and \(X_{2}\) be two Fano manifolds of Picard number 1, different from projective spaces. Let \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\) be families of minimal rational curves on \(X_{1}\) and \(X_{2}\) respectively. Assume that for a general point \(x\in X_{1}\), the VMRT \(\mathcal{C}_{x}\subset\mathbb{P}T_{x}(X_{1})\) is irreducible and nonsingular. Let \(U_{1}\subset X_{1}\) and \(U_{2}\subset X_{2}\) be connected analytical open subsets. Suppose that there exists a biholomorphic map \(\phi:U_{1}\to U_{2}\) such that for a general point \(x\in U_{1}\), the differential \(d\phi_{x}:\mathbb{P}T_{x}(U_{1})\to\mathbb{P}T_{\phi(x)}(U_{2})\) sends \(\mathcal{C}_{x}\) isomorphically to \(\mathcal{C}_{\phi(x)}\). Then there exists a biregular morphism \(\Phi:X_{1}\to X_{2}\) such that \(\phi=\Phi|U_{1}\)._
The extension theorem enables us to characterize an Euler-symmetric variety by its VMRT. We present a \(\mathbb{C}^{*}\)-equivariant version for our convenience.
**Corollary 4.2**.: _Let \(X,X^{\prime}\) be two Fano manifolds of Picard number 1, and \(\mathcal{K},\mathcal{K}^{\prime}\) families of minimal rational curves on \(X,X^{\prime}\) respectively. Let \(\mathcal{C}\subset\mathbb{P}T(X)\) and \(\mathcal{C}^{\prime}\subset\mathbb{P}T(X^{\prime})\) be the associated VMRT structures on \(X\) and \(X^{\prime}\). Assume that for a general point \(x\) on \(X\) and for a general point \(x^{\prime}\) on \(X^{\prime}\) there are \(\mathbb{C}^{*}\)-actions on \(X\) and \(X^{\prime}\) such that the actions are of Euler type at \(x\) and \(x^{\prime}\) respectively. If \(\mathcal{C}_{x}\) is projectively isomorphic to \(\mathcal{C}^{\prime}_{x^{\prime}}\), then there is a \(\mathbb{C}^{*}\)-equivariant isomorphism \(\Phi:X\to X^{\prime}\) that maps \(x\) to \(x^{\prime}\)._
Proof.: Assume the projective isomorphism between \(\mathcal{C}_{x}\subset\mathbb{P}T_{x}X\) and \(\mathcal{C}^{\prime}_{x^{\prime}}\subset\mathbb{P}T_{x^{\prime}}X^{\prime}\) is given by the linear isomorphism \(\phi:T_{x}X\to T_{x^{\prime}}X^{\prime}\). Then the identifications \(C^{+}(x)\cong T_{x}X\) and \(C^{+}(x^{\prime})\cong T_{x^{\prime}}X^{\prime}\) induce an isomorphism \(\overline{\phi}:C^{+}(x)\to C^{+}(x^{\prime})\). By Proposition 3.3, their VMRTs at general points are both irreducible and nonsingular. To extend \(\overline{\phi}\), it suffices to check that the differential map of \(\overline{\phi}\) preserves the VMRT at a general point. As \(x\) and \(x^{\prime}\) are both taken as general points, by Proposition 2.9 (3) the action of \(T_{x}X\) on \(C^{+}(x)\) and the action of \(T_{x^{\prime}}X^{\prime}\) on \(C^{+}(x^{\prime})\) can be extended to actions on \(X\) and \(X^{\prime}\) respectively. This shows the VMRT structures over \(C^{+}(x)\) and \(C^{+}(x^{\prime})\) are both locally flat. Thus by our definition of \(\overline{\phi}\), for any point \(y\in C^{+}(x)\), \(d\overline{\phi}_{y}\) must map \(\mathcal{C}_{y}\) isomorphically onto \(\mathcal{C}^{\prime}_{\overline{\phi}(y)}\). This yields the existence of \(\Phi\). Finally by our assumption \(\overline{\phi}\) is \(\mathbb{C}^{*}\)-equivariant, thus \(\Phi\) is also \(\mathbb{C}^{*}\)-equivariant as \(C^{+}(x)\) is open dense.
Now let \(X=G/P=\mathcal{D}(I)\) be a rational homogeneous space of Picard number one defined by a simple algebraic group \(G\) and a parabolic subgroup \(P\) given by \(I=\{\alpha\}\) for a simple positive root \(\alpha\in\Delta\). We review some facts about \(\mathbb{C}^{*}\)-actions on rational homogeneous spaces, most of which are taken from [13].
Up to composing with a character, a \(\mathbb{C}^{*}\)-action on \(X\) is given by a cocharacter \(\sigma:\mathbb{G}_{m}\to T\) and the left multiplication of the maximal torus \(T\) on \(G\). For a simple root \(\beta\), we define the cocharacter \(\sigma_{\beta}\) by \(\sigma_{\beta}(\eta)=\delta_{\eta,\beta}\) for any \(\eta\in\Delta\). For a cocharacter \(\sigma\), denote \(\mathfrak{g}_{k}=\bigoplus_{\theta\in\Phi:\sigma(\theta)=k}\mathfrak{g}_{\theta}\).
**Proposition 4.3**.: _Let \(X=G/P=\mathcal{D}(I)\) be a rational homogeneous space of Picard number one, then:_
_(i) A \(\mathbb{C}^{*}\)-action on \(X\) given by a cocharacter \(\sigma\in X_{*}(T)\) is equalized if and only if the grading of \(\mathfrak{g}\) is short, i.e., \(\mathfrak{g}=\mathfrak{g}_{1}\oplus\mathfrak{g}_{0}\oplus\mathfrak{g}_{-1}\). In this case, up to a conjugation we can assume \(\sigma=\sigma_{\beta}\) for some \(\beta\in\Delta\)._
_(ii) A \(\mathbb{C}^{*}\)-action on \(\mathcal{D}(I)\) given by the cocharacter \(\sigma_{\beta}\) is equalized with an isolated sink if and only if \(\alpha=\beta\) and \(\sigma_{\alpha}\) defines a short grading of \(\mathfrak{g}\), equivalently \(X\) is an IHSS and \(\sigma=\sigma_{\alpha}\). In this case, the \(\mathbb{C}^{*}\)-action is equalized of weight -1 at the sink \(x^{\prime}=eP\), and it has an isolated source if and only if \(X=\mathcal{D}(I)\) is an IHSS of tube type, where the isolated source is \(y^{\prime}=\dot{w_{0}}\cdot x^{\prime}\)._
_(iii) Assume that \(X=\mathcal{D}(I)\) is an IHSS, with the \(\mathbb{C}^{*}\)-action given by \(\sigma_{\alpha}\). The maximal dimensional Bruhat cell is \(C^{-}(x^{\prime})=R_{u}(P^{-})\cdot x^{\prime}\) and \(X\) is an equivariant compactification of \(R_{u}(P^{-})\). If \(X\) is of tube type, then \(C^{+}(y^{\prime})=R_{u}(P)\cdot y^{\prime}\) and \(X\) is an equivariant compactification of \(R_{u}(P)\)._
Now assume \(X=\mathcal{D}(I)\) to be an IHSS where \(I=\{\alpha\}\) for some simple root \(\alpha\) and consider the \(\mathbb{C}^{*}\)-action on \(X\) given by the cocharacter \(-\sigma_{\alpha}\). Then the action is of Euler type at \(x=eP\) and its inverse action is of Euler type at \(y=\dot{w_{0}}\cdot x\). By (iii) the \(\mathbb{C}^{*}\)-action satisfies the condition of Proposition 2.4 (ii).
We are ready to prove our main result.
Proof of Theorem 1.2.: If \(X=\mathcal{D}(I)\) is an IHSS of tube type, denote the \(\mathbb{C}^{*}\)-action on \(X\) by \(-\sigma_{\alpha}\). Then \(X\) is the equivariant compactification of \(R_{u}(P^{-})\) and \(R_{u}(P)\) respectively, where \(x=eP\) is fixed by \(R_{u}(P)\) and \(y=\dot{w_{0}}x\) is fixed by \(R_{u}(P^{-})\). Consider the morphism \(R_{u}(P)\times R_{u}(P^{-})\to X\times X:(u,v)\to(uv\cdot x,u\cdot y)\); its image is dense and constructible, whence it contains a dense open subset. Moreover for any element \((u\cdot y,uv\cdot x)\) in the image we can define a \(\mathbb{C}^{*}\)-action on \(X\) by:
\[\mathbb{C}^{*}\times X\longrightarrow X,\qquad(t,x)\longmapsto uv\,t\,(uv)^{-1}\cdot x,\]
such that the \(\mathbb{C}^{*}\)-action is of Euler type at the source \(\tilde{y}=uv\cdot y=u\cdot y\) and its inverse action is of Euler type at \(\tilde{x}=uv\cdot x\).
Conversely assume that for a general pair of points \(x,y\) on \(X\), there is a \(\mathbb{C}^{*}\)-action which is of Euler type at \(x\) and the inverse action is of Euler type at \(y\). Take \(\mathcal{K}\) to be a family of minimal rational curves on \(X\), and let \(\mathcal{C}\subset\mathbb{P}T(X)\) be the VMRT structure. By Theorem 1.4, for the projective embedding \(\mathcal{C}_{x}\subset\mathbb{P}T_{x}X\) we have \(dim(\mathfrak{aut}(\hat{\mathcal{C}}_{x})^{(1)})=dim(T_{x}X)\). Thus from Theorem 3.7 and Proposition 3.9, \(\mathcal{C}_{x}\) is projectively isomorphic to the VMRT of an IHSS. Denote the IHSS by \(X^{\prime}=\mathcal{D}(I)\) and consider the \(\mathbb{C}^{*}\)-action on \(X^{\prime}\) given by \(-\sigma_{\alpha}\) with the isolated source \(x^{\prime}\). Then \((X,x,X^{\prime},x^{\prime})\) satisfies the condition of Corollary 4.2 and hence \(X\) is \(\mathbb{C}^{*}\)-equivariantly isomorphic to \(X^{\prime}\). As the \(\mathbb{C}^{*}\)-action on \(X\) has isolated sink and source, \(X^{\prime}\) must be an IHSS of tube type by Proposition 4.3 (ii).
## Acknowledgements
I am very grateful to my advisor Baohua Fu for sharing valuable ideas on Euler-symmetric varieties with me. I am also indebted to him for many helpful suggestions during the preparation of the paper. I would like to thank Cong Ding for reading the first draft of this paper and for useful discussions and suggestions. I am grateful to Zhijun Luo for helpful discussions.
|
2308.11742
|
Linear Programming based Reductions for Multiple Visit TSP and Vehicle
Routing Problems
|
Multiple TSP ($\mathrm{mTSP}$) is an important variant of $\mathrm{TSP}$ where
a set of $k$ salesperson together visit a set of $n$ cities. The
$\mathrm{mTSP}$ problem has many real-life applications such as
vehicle routing. Rothkopf introduced another variant of $\mathrm{TSP}$ called
many-visits TSP ($\mathrm{MV\mbox{-}TSP}$) where a request $r(v)\in
\mathbb{Z}_+$ is given for each city $v$ and a single salesperson needs to
visit each city $r(v)$ times and return back to his starting point. A
combination of $\mathrm{mTSP}$ and $\mathrm{MV\mbox{-}TSP}$ called many-visits
multiple TSP $(\mathrm{MV\mbox{-}mTSP})$ was studied by B\'erczi, Mnich, and
Vincze where the authors give approximation algorithms for various variants of
$\mathrm{MV\mbox{-}mTSP}$.
In this work, we show a simple linear programming (LP) based reduction that
converts an $\mathrm{mTSP}$ LP-based algorithm to an LP-based algorithm for
$\mathrm{MV\mbox{-}mTSP}$ with the same approximation factor. We apply this
reduction to improve or match the current best approximation factors of several
variants of the $\mathrm{MV\mbox{-}mTSP}$. Our reduction shows that the
addition of visit requests $r(v)$ to $\mathrm{mTSP}$ does $\textit{not}$ make
the problem harder to approximate even when $r(v)$ is exponential in number of
vertices.
To apply our reduction, we either use existing LP-based algorithms for
$\mathrm{mTSP}$ variants or show that several existing combinatorial algorithms
for $\mathrm{mTSP}$ variants can be interpreted as LP-based algorithms. This
allows us to apply our reduction to these combinatorial algorithms as well
achieving the improved guarantees.
|
Aditya Pillai, Mohit Singh
|
2023-08-22T19:05:25Z
|
http://arxiv.org/abs/2308.11742v1
|
# Linear Programming based Reductions for Multiple Visit TSP and Vehicle Routing Problems
###### Abstract
Multiple TSP (mTSP) is an important variant of TSP where a set of \(k\) salesperson together visit a set of \(n\) cities. The mTSP problem has many real-life applications such as vehicle routing. Rothkopf [1] introduced another variant of TSP called many-visits TSP (MV-TSP) where a request \(r(v)\in\mathbb{Z}_{+}\) is given for each city \(v\) and a single salesperson needs to visit each city \(r(v)\) times and return to the starting point. A combination of mTSP and MV-TSP called many-visits multiple TSP (MV-mTSP) was studied by Berczi, Mnich, and Vincze [2], where the authors give approximation algorithms for various variants of MV-mTSP.
In this work, we show a simple linear programming (LP) based reduction that converts an mTSP LP-based algorithm to an LP-based algorithm for MV-mTSP with the same approximation factor. We apply this reduction to improve or match the current best approximation factors of several variants of the MV-mTSP. Our reduction shows that the addition of visit requests \(r(v)\) to mTSP does _not_ make the problem harder to approximate even when \(r(v)\) is exponential in the number of vertices. To apply our reduction, we either use existing LP-based algorithms for mTSP variants or show that several existing combinatorial algorithms for mTSP variants can be interpreted as LP-based algorithms. This allows us to apply our reduction to these combinatorial algorithms as well, achieving the improved guarantees.
## 1 Introduction
The traveling salesperson problem (TSP) is a fundamental problem in combinatorial optimization. Given a complete graph on \(n\) vertices and non-negative edge costs that satisfy the triangle inequality, the goal is to find a Hamiltonian cycle of minimum cost that visits all vertices. TSP and its variants have been at the forefront of the development of algorithms, in theory as well as practice. From an approximation algorithmic perspective, Christofides [3] and Serdyukov [4] gave a \(3/2\)-approximation algorithm for TSP, which was recently improved to roughly \(3/2-10^{-36}\) by Karlin, Klein, and Oveis-Gharan [5].
In this work, we aim to consider the multiple visit versions of TSP as well as many of its variants. In the multiple visit version of TSP, which we call MV-TSP, we are given a requirement \(r(v)\in\mathbb{Z}_{+}\) for each vertex \(v\in V\) and the goal is to find a closed walk that visits each vertex exactly \(r(v)\) times. Simply, introducing \(r(v)\)_copies_ of each vertex \(v\in V\) and solving the TSP instance in the corresponding semi-metric, it is easy to see that any \(\rho\)-approximation for TSP gives a \(\rho\)-approximation algorithm for MV-TSP. Unfortunately, this reduction is not polynomial time since the input size is logarithmic in \(\max_{v}r(v)\) while the algorithm takes time polynomial in \(\max_{v}r(v)\). This raises an important question:
Is there a polynomial-time reduction that implies that a \(\rho\)-approximation for TSP gives a \(\rho\)-approximation for MV-TSP?
We ask the same question for variants of the TSP problem, in particular, for the variants inspired by the classical vehicle routing problem. An extension of TSP is multiple TSP, which we call mTSP, where there is a specified number of salespersons \(k\) and the goal is to find \(k\) minimum cost cycles such that all vertices are visited by one salesperson. There are several variations depending on whether the salesperson start at a fixed set of depot vertices \(D\) and whether or not all salesperson need to be used. We refer the reader to a survey by Bektas [6] detailing different variants, applications, and several algorithms for mTSP. The multi-visit version of mTSP that we call MV-mTSP is again defined similarly: we are given a graph, edge costs, a visit function \(r:V\rightarrow\mathbb{Z}_{+}\), an integer \(k\), and possibly a set of \(k\) depots and the goal is to find \(k\) minimum cost closed walks so that each vertex is visited \(r(v)\) times. There are several variants depending on whether there are depots, if all salesperson need be used, and if the demands for each vertex can be satisfied by multiple salesperson. Approximation algorithms for many of these variants were studied recently [2]. We give a detailed description of the different variants in Section 3.
### 1.1 Our Results and Contributions
Our main result is to show that there is a polynomial time reduction implying that any \(\rho\)-approximation algorithm for TSP that is _linear programming based_ yields a \(\rho\)-approximation algorithm for MV-TSP. By LP-based algorithms, we mean any algorithm that ensures that the objective value of the output solution is at most \(\rho\) times the objective of the classical Held-Karp LP relaxation for the TSP.
**Theorem 1**.: _If there is a \(\rho\)-approximation algorithm for TSP where the \(\rho\) guarantee is towards the optimum solution of LP (1) then there is a \(\rho\)-approximation algorithm for the MV-TSP problem._
We also show that the above reduction holds for various MV-mTSP variants as well. This allows us to either obtain improved approximation algorithms or match the best known approximation for many of these variants. We outline the improved approximations in Table 1. For many variants of
mTSP, previously only combinatorial algorithms were known. We first interpret these combinatorial algorithms as LP-based algorithms by showing that they also give a bound on the integrality gap of the standard Held-Karp style relaxations for these variants. For TSP, this part is analogous to showing the classical Christofides' \(\frac{3}{2}\)-approximation algorithm also bounds the integrality gap of the Held-Karp relaxation to within the bound of \(\frac{3}{2}\) as was done by Wolsey [7] and Shmoys and Williamson [8].
There is another variant that includes an additional constraint which requires each vertex to be visited by exactly one salesperson. This means that all vertices are visited \(r(v)\) times and all tours are vertex disjoint. We cannot apply our technique to the vertex-disjoint tours variants; however, we are able to get an approximation for the single depot multi-visit multiple TSP problem using ideas from [2].
**Theorem 6**.: _There is a polynomial time algorithm for the single depot multi-visit mTSP (SD-MV-mTSP) problem with vertex disjoint tours with an approximation factor of \(\frac{7}{2}\)._
Next we list a result for one non-depot mTSP variant where we are required to use all \(k\) salespersons, which we call the unrestricted mTSP\({}_{+}\) problem. Here we only get an improved result for a single-visit variant and are not able to apply the reduction to its multi-visit variant. The previous best approximation factor was \(4\), which was shown by Berczi et al. [2].
**Theorem 4**.: _There is a polynomial time algorithm for the unrestricted mTSP\({}_{+}\) problem with an approximation factor of \(2\)._
#### 1.1.1 Overview of Technique and Results
We now give an overview of the main technique using the example of MV-TSP. The same techniques apply to the rest of the variants with some additional modifications. We assume that there is a \(\rho\)-approximation for TSP which is LP-based. As previously mentioned, the running time of the algorithm for MV-TSP needs to be polynomial in \(\max_{v\in V}\log r(v)\) and \(n\). A simple exponential time approximation algorithm for MV-TSP is to make \(r(v)\) copies of each vertex \(v\) and apply a TSP \(\rho\)-approximation algorithm to this graph. On the other hand, if \(\max_{v\in V}r(v)\) were polynomial in \(n\) then we would get a polynomial time \(\rho\)-approximation algorithm. Our main technique is to use an LP relaxation of MV-TSP to fix certain edges in our solution (without taking a loss in the objective) and construct a new instance where the visit requirement of each vertex is polynomial. We then apply the simple reduction to TSP we described above. We note that our reduction relies on the connection between the LP relaxations of MV-TSP and TSP: the LP relaxations only differ
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Depot Restriction & SV (previous work) & MV Problem Name & MV (previous work) & MV (this work) \\ \hline \hline \(\leq\) & \(3/2+\varepsilon\) [9] & MV-mTSP\({}_{0}\) & \(2\) [2] & \(\mathbf{2}\) \\ \(=\) & \(2\) [2] & MV-mTSP\({}_{+}\) & \(3\) [2] & \(\mathbf{2}\) \\ One Depot, \(k\) salesperson & \(3/2\) [10] & SD-MV-mTSP\({}_{+}\) & \(3\) [2] & \(\mathbf{3/2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Variants where all \(k\) depot/salesperson must be used are marked with \(=\) in the first column and variants where some tours may be empty are marked with \(\leq\). MV stands for multi-visit and SV stands for single-visit. The numbers in the table are all the best approximation factors.
in that TSP requires every vertex to have degree 2 while MV-TSP requires degree \(2r(v)\). As a result, our reduction is limited: we cannot use an arbitrary algorithm for TSP, but only an algorithm that has a guarantee towards the LP relaxation of TSP.
For many variants that we consider, a challenge then arises. Several of the existing algorithms give approximation guarantees as compared to the integral solution and do not compare the algorithm's solution to the cost of the linear programming relaxation. For these problems, we first formulate a Held-Karp style LP relaxation and either show that an existing algorithm has an approximation guarantee relative to the LP value or give a new algorithm which has a guarantee towards the LP value. For this we use characterizations of matroid intersection polytope which we apply to constrained spanning trees and related problems in Section 5.
To illustrate our technique in more detail, let us return to the MV-TSP problem. All missing proofs for this result will appear in Appendix A and also appear more generally in our general framework in Section 4. We use the following standard Held-Karp LP relaxation for TSP which we call LP (1),
minimize \[\sum_{e\in E}c_{e}x_{e}\] (1) s.t. \[x(\delta(v))=2 \forall v\in V\] \[x(\delta(S))\geq 2 \forall S\subset V\] \[0\leq x_{e}\leq 1 \forall e\in E.\]
The following linear program is a relaxation for MV-TSP that generalizes the Held-Karp LP,
minimize \[\sum_{e\in E}c_{e}x_{e}\] (2) s.t. \[x(\delta(v))=2r(v) \forall v\in V\] \[x(\delta(S))\geq 2 \forall S\subset V\] \[x_{e}\geq 0 \forall e\in E.\]
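Both LP (1) and LP (2) have exponentially many cut constraints, but they can be solved in polynomial time by the ellipsoid method with a separation oracle: given a candidate \(x\geq 0\) satisfying the degree constraints, some constraint \(x(\delta(S))\geq 2\) is violated if and only if the global minimum cut of the graph weighted by \(x\) has value below \(2\). The snippet below is a minimal illustrative sketch of such an oracle (it is not the procedure described in this paper); it assumes Python with `networkx`, whose `stoer_wagner` routine computes a global minimum cut, and represents the LP values as a dictionary over edges.

```python
import networkx as nx

def violated_subtour_constraint(n, x, tol=1e-9):
    """Separation oracle sketch for the constraints x(delta(S)) >= 2.

    n -- number of vertices, labeled 0..n-1
    x -- dict mapping edges (u, v) with u < v to nonnegative LP values
    Returns a proper nonempty vertex set S with x(delta(S)) < 2, or None.
    """
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for (u, v), val in x.items():
        if val > tol:
            G.add_edge(u, v, weight=val)
    # A disconnected support immediately yields a cut of value 0.
    if not nx.is_connected(G):
        return set(next(nx.connected_components(G)))
    # Otherwise compute a global minimum cut of the x-weighted graph.
    cut_value, (S, _) = nx.stoer_wagner(G, weight="weight")
    if cut_value < 2 - tol:
        return set(S)
    return None
```

In a cutting-plane loop one would re-solve the LP with the constraint for every returned set \(S\) added, and stop once the oracle returns `None`.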
We need the following lemma which shows that the simple reduction from MV-TSP to TSP is polynomial time when the visit requests \(r(v)\) are polynomial in \(n\). Moreover, the reduction maintains the approximation factor of the LP relaxation based algorithm used for TSP. The reduction basically relies on replacing each vertex with \(r(v)\) copies and then applying the LP-based algorithm.
**Lemma 1.1**.: _If there is a \(\rho\)-approximation algorithm for TSP where the \(\rho\) guarantee is towards the optimum solution of LP (1) then there exists an algorithm that given a solution \(y\) to LP (2) outputs a closed walk \(T:E\rightarrow\mathbb{Z}\) satisfying \(\sum_{e\in E}T(e)c_{e}\leq\rho\sum_{e\in E}c_{e}y_{e}\) with a run-time polynomial in \(\max_{v\in V}r(v)\) and \(n\)._
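The copy-expansion behind Lemma 1.1 can be sketched as follows; this is an illustrative sketch only (the function name, interface, and the black-box `tsp_approx` routine are ours, not the paper's), and it is polynomial only under the lemma's assumption that \(\max_{v}r(v)\) is polynomially bounded. Each vertex \(v\) is replaced by \(r(v)\) copies at mutual distance zero, the LP-based TSP algorithm is run on the expanded semi-metric, and the tour is projected back to a closed walk; consecutive copies of the same vertex are recorded as zero-cost loops, which keeps the degree of \(v\) equal to \(2r(v)\) under the convention that a loop contributes \(2\) to the degree.

```python
from collections import Counter

def mv_tsp_via_copies(vertices, cost, r, tsp_approx):
    """Reduce MV-TSP to TSP by vertex copying (polynomial only for small r).

    vertices   -- list of vertex labels
    cost       -- cost[u][v], a semi-metric on the original vertices
    r          -- dict of visit requests, r[v] >= 1
    tsp_approx -- black-box TSP approximation taking (nodes, dist) and
                  returning a Hamiltonian cycle as a list of nodes
    Returns a Counter mapping edges (u, v) with u <= v (loops allowed)
    to the number of times the walk uses them.
    """
    # r(v) co-located copies of every vertex v.
    copies = [(v, i) for v in vertices for i in range(r[v])]

    def dist(a, b):
        # Copies of the same vertex are at distance zero.
        return 0 if a[0] == b[0] else cost[a[0]][b[0]]

    tour = tsp_approx(copies, dist)

    # Project the expanded tour back to a closed walk on the original graph.
    walk = Counter()
    m = len(tour)
    for i in range(m):
        u, v = tour[i][0], tour[(i + 1) % m][0]
        walk[(min(u, v), max(u, v))] += 1  # (v, v) is a zero-cost loop at v
    return walk
```

The returned walk has the same cost as the expanded tour; the \(\rho\) guarantee in the lemma then follows by comparing with LP (1) on the expanded instance, as described in the surrounding text.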
Now we show how to use the algorithm in Lemma 1.1 for a general instance where \(r\) is not polynomially bounded. This algorithm (Algorithm 1) solves LP (2), fixes edges in the solution that are integrally set, and reduces the visit requests accordingly. Finally, the reduced visits are polynomial in size so we can then apply Lemma 1.1. One has to carefully verify that the _reduced_ linear programming solution is a feasible solution to the LP relaxation for the reduced instance, which can be done by verifying the constraints carefully.
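Algorithm 1 itself is not reproduced here, but the following is a minimal sketch of the fix-and-reduce step, written to be consistent with the quantities \(k_{e}\), \(\tilde{x}\) and \(\tilde{r}\) used in the claims below (\(\tilde{x}_{e}=x_{e}-2k_{e}\) and \(\tilde{r}(v)=r(v)-\sum_{e\in\delta(v)}k_{e}\)). The concrete rounding rule, peeling an even amount \(2k_{e}\) off every edge with \(x_{e}>4\) so that \(2<\tilde{x}_{e}\leq 4\), is our illustrative choice and may differ in detail from the paper's algorithm.

```python
import math

def fix_and_reduce(x, r):
    """Sketch of the LP fix-and-reduce step for MV-TSP.

    x -- dict mapping edges e = (u, v) to an optimal solution of LP (2)
    r -- dict of visit requests r[v]
    Returns (k, x_tilde, r_tilde): integral edge multiplicities fixed into
    the solution, the residual LP values, and the reduced visit requests.
    """
    k, x_tilde = {}, {}
    for e, val in x.items():
        if val > 4:
            # Remove an even integer 2*k_e so the residual lies in (2, 4].
            k[e] = math.ceil((val - 4) / 2)
            x_tilde[e] = val - 2 * k[e]
        else:
            k[e] = 0
            x_tilde[e] = val

    r_tilde = {}
    for v in r:
        incident = [e for e in x if v in e]  # edges in delta(v)
        r_tilde[v] = r[v] - sum(k[e] for e in incident)
    return k, x_tilde, r_tilde
```

After this step the requests \(\tilde{r}\) are polynomially bounded (Claim 1.2), so the routine of Lemma 1.1 applies to the reduced instance; the fixed multiplicities, i.e., \(2k_{e}\) extra traversals of each edge \(e\), can then be added back to the walk it returns, which keeps all degrees equal to \(2r(v)\) and costs exactly the LP mass that was removed.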
The next two claims show that Step 3 of Algorithm 1 is valid.
**Claim 1.2**.: _The new visit function \(\tilde{r}\) satisfies \(1\leq\tilde{r}(v)\leq 2n\) for all \(v\in V\)._
Proof.: For \(v\in V\) we have,
\[\tilde{r}(v) =r(v)-\sum_{e\in\delta(v)}k_{e}\] \[=r(v)-\sum_{e\in\delta(v)}\frac{x_{e}-\tilde{x}_{e}}{2}=\frac{1} {2}\sum_{e\in\delta(v)}\tilde{x}_{e}.\]
If all \(e\in\delta(v)\) satisfy \(x_{e}\leq 4\) then \(\tilde{r}(v)=r(v)\geq 1\). Otherwise, the lower bound follows since for \(x_{e}>4\) we have \(\tilde{x}_{e}\geq 2\). The upper bound follows since \(\tilde{x}_{e}\leq 4\) for all \(e\in E\).
**Claim 1.3**.: _The solution \(\tilde{x}\) is feasible for LP (2) with graph \(G\) and \(\tilde{r}\)._
Proof.: We have \(\tilde{x}\geq 0\) and \(\sum_{e\in\delta(v)}\tilde{x}_{e}=\sum_{e\in\delta(v)}x_{e}-2k_{e}=2(r(v)- \sum_{e\in\delta(v)}k_{e})=2\tilde{r}(v)\). For any set \(S\subset V\), if \(x_{e}\leq 4\) for all \(e\in\delta(S)\) then \(\tilde{x}(\delta(S))=x(\delta(S))\geq 2\). Otherwise if there is an edge \(e\in\delta(S)\) such that \(x_{e}>4\) we get \(\tilde{x}(\delta(S))\geq\tilde{x}_{e}\geq 2\) by definition of \(\tilde{x}\).
Now we get the following theorem which is our main result.
**Theorem 1**.: _If there is a \(\rho\)-approximation algorithm for TSP where the \(\rho\) guarantee is towards the optimum solution of LP (1) then there is a \(\rho\)-approximation algorithm for the MV-TSP problem._
As a corollary of Theorem 1 we get the following by applying the work of Karlin, Klein, and Oveis-Gharan [11].
**Corollary 1.1**.: _There is an approximation algorithm for the MV-TSP problem with an approximation factor less than \(\frac{3}{2}-10^{-36}\)._
We also apply this reduction to reduce different variants of MV-mTSP to mTSP. One variation of mTSP is single depot multiple TSP, SD-mTSP, which was studied by Frieze [10]. For SD-mTSP we are given a graph with edge costs, one depot vertex, and an integer \(k\) and the goal is to find \(k\) cycles that contain the depot so that all non-depot vertices are contained in exactly one cycle. In this paper we also introduce the multi-visit version of this problem called SD-MV-mTSP which is defined similarly as SD-mTSP except each non-depot vertex needs to be visited \(r(v)\) times. We also apply our technique to reduce SD-MV-mTSP to SD-mTSP.
### 1.2 Related Work
The variant of TSP when a visit request \(r(v)\) is given for each vertex \(v\) is called MV-TSP. There is also a variant of MV-TSP called path MV-TSP where instead of a closed walk the goal is to find a walk between given vertices \(s\neq t\) so that vertices \(v\notin\{s,t\}\) are visited \(r(v)\) times. Berczi, Mnich, Vincze [12] gave a \(\frac{3}{2}\)-approximation for both path MV-TSP and MV-TSP.
The variant of TSP where multiple salesperson are used is usually referred to as mTSP. Frieze shows a \(\frac{3}{2}\)-approximation for a variant where \(k\) salesperson are required to start at a fixed depot vertex \(v_{1}\). Frieze's algorithm generalizes the Christofides-Serdyukov algorithm [3, 4] for TSP. The mTSP problem is a relaxation of the vehicle routing problem (VRP). In VRP a set of \(k\) vehicles need to visit a set of customers with known demands while starting and ending at a fixed depot vertex. Further, the set of vehicles have a vehicle capacity which limits the total demand each vehicle can serve. If the vehicle capacity is sufficiently large so that the vehicles are not restricted by the demands then VRP is equivalent to mTSP. Thus there are several works for VRP that apply ideas from TSP algorithms, such as a paper by Christofides, Mingozzi, and Toth [13] where the authors give exact VRP algorithms based on finding min cost trees.
A different version of mTSP is when the different salespersons are required to start from different depot vertices. Given a set of \(k\) depot vertices the goal is to find at most \(k\) minimum cost cycles such that each cycle contains exactly one depot and all vertices are contained in exactly one cycle. Rathinam, Sengupta, and Darbha [14] showed a \(2\)-approximation algorithm for this problem which was then improved to \(2-\frac{1}{k}\) by Xu, Xu, and Rodrigues [15]. Xu and Rodrigues [15] showed a \(\frac{3}{2}\)-approximation when the number of depots \(k\) is constant and very recently Deppert, Kaul, and Mnich [9] showed a \(\frac{3}{2}\)-approximation for arbitrary \(k\).
The mTSP problem with depots can be generalized further when \(m\) depots are available and there are \(k\) salespersons satisfying \(k\leq m\). Both Malik, Rathinam, and Darbha [16] and Carnes and Shmoys [17] gave \(2\)-approximations for this problem. Later Xu and Rodrigues [18] gave a \((2-\frac{1}{2k})\)-approximation. The algorithm in [15] can be adapted to this case to get a \(\frac{3}{2}\)-approximation when \(m\) is constant.
Berczi, Mnich, and Vincze [2] considered various problems that have both the constraints of mTSP and MV-TSP, which are referred to as MV-mTSP. They consider \(8\) different variants of MV-mTSP and show equivalences among some of the \(8\) variants. Additionally, they give constant factor approximations for the different variants using many ideas from previous TSP algorithms such as tree doubling.
## 2 Preliminaries
A graph \(G=(V,E)\) is defined on vertex set \(V\) and edge set \(E\), which in this paper we always take to be the edge set of the complete graph. For sets \(A,B\subseteq V\) we denote by \(E(A,B)\subseteq E\) edges that have one endpoint in \(A\) and one endpoint in \(B\). We use \(E(A)\) as a shorthand for \(E(A,A)\) and \(\delta(A)=E(A,V-A)\), meaning \(\delta(A)\) is the set of edges with exactly one endpoint in \(A\). For a single vertex \(v\) we write \(\delta(v)\) instead of \(\delta(\{v\})\) to denote the set of edges incident to it. The degree of a vertex \(v\) is denoted by \(d(v)\) which is the number of edges incident to that vertex meaning \(d(v)=|\delta(v)|\). We note that any loops on a vertex contribute \(2\) to the degree. Additionally for a set of edges \(T\subseteq E\), we use \(d_{T}(v)\) to denote the number of edges in \(T\) that contain \(v\). Throughout the paper we use LPs whose variables correspond to edges of the graph and for an LP variable \(x\in\mathbb{R}^{|E|}\) we use \(x(T)=\sum_{e\in T}x_{e}\) for all \(T\subseteq E\).
We also use the notion of a matroid in this paper. A matroid \(\mathcal{M}\) is defined by a ground set \(E\) and a collection of independent sets \(\mathcal{I}\subseteq 2^{E}\) satisfying three properties
1. \(\emptyset\in\mathcal{I}\)
2. If \(A\in\mathcal{I}\) then \(B\in\mathcal{I}\) for all \(B\subseteq A\)
3. If \(A,B\in\mathcal{I}\) with \(|A|<|B|\) then there exists \(x\in B-A\) so that \(A\cup\{x\}\in\mathcal{I}\).
An independent set of maximum cardinality is called a base. We use two specific matroids in this paper: partition matroids and graphic matroids. A partition matroid is defined by a partition of the ground set \(E=P_{1}\dot{\cup}\ldots\dot{\cup}P_{k}\) each with a capacity \(c_{i}\leq|P_{i}|\) and a set \(S\in\mathcal{I}\) if \(|S\cap P_{i}|\leq c_{i}\) for all \(i=1,\ldots,k\). A graphic matroid is defined on a graph \(G\) with the set of edges as the ground set and a set \(T\subseteq E\) is independent if the set of edges \(T\) is acyclic in \(G\). All matroids \(\mathcal{M}\) have a rank function \(r:2^{E}\rightarrow\mathbb{Z}\) which is defined as \(r(S)=\max_{A\subseteq S}\{|A|\mid A\in\mathcal{I}\}\). It is well known that the convex hull of indicator vectors of independent sets in a matroid is described by \(\{\mathbf{x}\in\mathbb{R}^{|E|}_{\geq 0}\mid x(S)\leq r(S)\ \forall S\subseteq E\}\) and for matroids \(\mathcal{M}_{1}=(E,\mathcal{I}_{1}),\mathcal{M}_{2}=(E,\mathcal{I}_{2})\) with rank functions \(r_{1},r_{2}\) the convex hull of the indicator vectors of \(\mathcal{I}_{1}\cap\mathcal{I}_{2}\) is given by \(\{\mathbf{x}\in\mathbb{R}^{|E|}_{\geq 0}\mid x(S)\leq\min(r_{1}(S),r_{2}(S))\ \forall S\subseteq E\}\). Moreover, both the matroid and matroid intersection polytopes are TDI (totally dual integral). We refer the reader to [19] for more details on matroids and matroid polytopes.
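As a concrete illustration of the two matroids used throughout the paper, the following Python sketch (ours, not part of the original text) checks independence directly from the definitions; the graphic-matroid check detects cycles with a simple union-find. Names and data formats are illustrative.

```python
def partition_matroid_independent(S, part_of, capacity):
    """S: iterable of ground-set elements; part_of: dict element -> part index;
    capacity: dict part index -> capacity c_i. Independent iff no part is overfull."""
    counts = {}
    for e in S:
        p = part_of[e]
        counts[p] = counts.get(p, 0) + 1
        if counts[p] > capacity[p]:
            return False
    return True


def graphic_matroid_independent(S):
    """S: iterable of edges (u, v); independent iff the edge set is acyclic (a forest)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in S:
        ru, rv = find(u), find(v)
        if ru == rv:  # adding (u, v) would close a cycle (or (u, v) is a loop)
            return False
        parent[ru] = rv
    return True


# Any two edges of a triangle are independent in the graphic matroid; all three are not.
assert graphic_matroid_independent([(1, 2), (2, 3)])
assert not graphic_matroid_independent([(1, 2), (2, 3), (1, 3)])
```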
## 3 Problem Description
In this section we formally describe each of the problems and the requirements of feasibility. We use the same names and notation for the problems as [2]. Throughout the paper we will use \(n\) to denote the number of vertices in the graph that is the input to each problem and \(c:V\times V\rightarrow\mathbb{R}_{\geq 0}\) to denote the cost function. The cost function \(c\) is a semi-metric, meaning it is symmetric and satisfies the triangle inequality, but need not satisfy \(c_{vv}=0\). In particular, we have the following
1. Symmetry: \(c_{uv}=c_{vu}\) for all \(u,v\in V\)
2. Triangle Inequality: \(c_{uv}\leq c_{ux}+c_{xv}\) for all \(u,v,x\in V\).
We observe that the triangle inequality implies that \(c_{vv}\leq 2c_{uv}\) for all \(u,v\in V\). This means our algorithms are allowed to use loops to satisfy the visit requirement and will pay for those loops. We recall that a loop is counted twice in the degree of a vertex so taking a loop on a vertex counts as one visit to that vertex. In many single visit variants such as TSP using loops violates feasibility so for most single visit variants we may assume \(c_{vv}=0\) and \(c\) is a metric. There are a few single visit variants that use loops and this will be specified below.
First we describe the simpler variants.
1. We call the standard traveling salesperson problem TSP. For TSP, we are given a complete graph \(G\) on a vertex set \(V\) and the goal is to find a Hamiltonian cycle of minimum cost.
2. We call the multi-visit TSP problem MV-TSP. In MV-TSP, we are given a complete graph \(G\) on a vertex set \(V\) and a visit function \(r:V\rightarrow\mathbb{Z}\) satisfying \(r(v)\geq 1\) for all \(v\in V\). The goal is to find a minimum cost closed walk such that every vertex \(v\) is visited exactly \(r(v)\) times.
3. We call the multiple TSP problem mTSP. There are 4 variants of mTSP depending on 2 parameters.
3.1. The first is whether or not all salespersons are used. In mTSP\({}_{+}\), we are given a complete graph \(G\) on a vertex set \(V\) and a number \(k\geq 1\) and the goal is to find exactly \(k\) cycles such that every vertex is contained in exactly one cycle. In mTSP\({}_{0}\) we have the same inputs and the goal is to find at most \(k\) cycles so that every vertex is contained in exactly one cycle.
3.2. The second is whether or not we have depot vertices. If there are depot vertices then any pair of depots cannot be in the same cycle. If there are no depots then the problem is called _unrestricted_. From these two parameters we get the following 4 problems: unrestricted mTSP\({}_{+}\), unrestricted mTSP\({}_{0}\), mTSP\({}_{+}\), and mTSP\({}_{0}\). Both mTSP\({}_{+}\) and mTSP\({}_{0}\) are allowed to use loops.
In this paper we mainly study variants that are a mix of the multiple TSP and multi-visit TSP problems which we call MV-mTSP. For MV-mTSP, there are 8 variants of problems that arise from three parameters. The two parameters from mTSP carry over, which are whether or not all salespersons are used and whether or not there are depot vertices. In addition to these two parameters, there is also a parameter that imposes a restriction on whether the visit requirements for a vertex need to be satisfied by one salesperson or if different salespersons together can satisfy the visit requirements. Variants where the different tours are required to be vertex disjoint are called _vertex disjoint_ tours and variants where different tours are allowed to intersect are called _arbitrary_. Using these 3 parameters we get 8 problems some of which include unrestricted mTSP\({}_{+}\) with arbitrary tours, mTSP\({}_{+}\) with vertex disjoint tours, and mTSP\({}_{0}\) with vertex disjoint tours.
We note that some of the variants among all 8 possibilities are equivalent, some can be reduced to another in only one direction, and others cannot be reduced to each other in either direction. We refer the reader to [2] which explains the connections between the problems in detail and gives examples. We now describe the problems that we consider in this paper.
1. In MV-mTSP\({}_{+}\) with arbitrary tours, we are given a complete graph \(G\) on a vertex set \(V\) and a subset of depots \(D\subseteq V\) with \(|D|=k\). The goal is to find exactly \(k\) closed walks such that all non-depot vertices \(v\) are visited \(r(v)\) times and each closed walk contains exactly one depot.
2. In unrestricted mTSP\({}_{+}\) with arbitrary tours, we are given a complete graph \(G\) on a vertex set \(V\) and an integer \(k\). The goal is to find \(k\) cycles so that every vertex is contained in one cycle. We note that here a single loop on a vertex is a valid cycle.
Next we describe variants where there is one depot, but we have \(1\leq k\leq n-1\) salespersons available.
1. In SD-mTSP\({}_{+}\), we are given a complete graph and an integer \(1\leq k\leq n-1\). The goal is to find exactly \(k\) cycles containing at least 3 vertices so that all cycles contain the depot vertex \(v_{1}\) and every other vertex \(v\neq v_{1}\) is contained in exactly one cycle. We note that if cycles with two vertices were allowed, meaning \(v_{1},v,v_{1}\) is valid, then Frieze [10] shows a reduction to the problem where each cycle has at least 3 vertices.
2. In SD-MV-mTSP\({}_{+}\) with arbitrary tours, we are given a complete graph, an integer \(1\leq k\leq n-1\), and a visit function \(r:V-v_{1}\rightarrow\mathbb{Z}\). The goal is to find exactly \(k\) closed walks starting at the depot vertex \(v_{1}\) so that all non-depot vertices \(v\) are visited \(r(v)\) times.
3. In SD-MV-mTSP\({}_{+}\) with vertex disjoint tours, we are given a complete graph, an integer \(1\leq k\leq n-1\), and a visit function \(r:V-v_{1}\to\mathbb{Z}\). The goal is to find exactly \(k\) closed walks starting at the depot vertex so that all non-depot vertices \(v\) are visited \(r(v)\) times and any two closed walks only intersect at the depot vertex.
## 4 General Framework
In this section we outline a general framework that we will apply to the various variants of multi-visit TSP problems. The following LP will be our template for the LP relaxation of the single visit variants of the problems we will consider. We note that MV-TSP and MV-mTSP\({}_{+}\) with arbitrary tours are the only problems that fall _exactly_ into the framework. Other problems fall very close to the framework, and for those problems we follow the template given in this section and reuse parts of the proofs given here. For \(D\subseteq V\) we write the following LP
minimize \[\sum_{e\in E}c_{e}x_{e}\] (3) s.t. \[x(\delta(v))=2 \forall v\in V\] \[x(\delta(S\cup D))\geq 2 \forall S\subseteq V-D\] \[x(E(D,D))=0\] \[0\leq x_{e}\leq 2 \forall e\in E.\]
We now show its corresponding multi-visit variant for a visit function \(r:V-D\to\mathbb{Z}\)
minimize \[\sum_{e\in E}c_{e}x_{e}\] (4) s.t. \[x(\delta(v))=2r(v) \forall v\in V-D\] \[x(\delta(v))=2 \forall v\in D\] \[x(\delta(S\cup D))\geq 2 \forall S\subset V-D\] \[x(E(D,D))=0\] \[x_{e}\geq 0 \forall e\in E.\]
We note that we can set \(D=\emptyset\) in the above linear programs which gives us the result for MV-TSP shown in Section 1.1.1.
The following lemma allows us to relate the LP relaxation of the multi-visit problem to its corresponding single visit variant.
**Lemma 4.1**.: _Let \(G\) be a complete graph with edge set \(E\), \(c\) be a cost function on its edges, and \(r:V-D\to\mathbb{Z}\) be an integer-valued function on the vertex set. Let \(G^{r}=(V^{r},E^{r})\) be a complete graph on vertex set \(V^{r}\) that has \(r(v)\) copies of each vertex \(v\in V-D\) and one copy of each vertex \(v\in D\). We extend the cost function \(c\) by assigning the cost \(c_{e}\) of each \(e=\{u,v\}\in E\) to all edges \(\{u_{i},v_{j}\}\in E^{r}\). Given a solution \(x\) to LP (4) we can construct a feasible solution \(z\) to LP (3) on graph \(G^{r}\) satisfying \(c^{T}x=c^{T}z\)._
Proof.: Let \(e^{\prime}=\{u_{i},v_{j}\}\in E^{r}\) and let \(e=\{u,v\}\in E\) be the edge between the original vertices of which \(u_{i},v_{j}\) are copies; we set \(z_{e^{\prime}}=\frac{x_{e}}{r(u)r(v)}\), and if \(u\) or \(v\) is in \(D\) we take \(r(u)=1\) or \(r(v)=1\), respectively, while defining \(z\). For the last property we have \(\sum_{e^{\prime}\in E^{r}}c_{e^{\prime}}z_{e^{\prime}}=\sum_{e=\{u,v\}\in E}r(u)r(v)c_{e}\frac{x_{e}}{r(u)r(v)}=\sum_{e\in E}c_{e}x_{e}\). We observe that the solution \(z\) is constructed by distributing the degree of each vertex \(v\in V-D\) uniformly to all \(r(v)\) copies, so we get that \(z(\delta(v_{i}))=\frac{x(\delta(v))}{r(v)}=2\) for every copy \(v_{i}\), and for \(v\in D\) that \(z(\delta(v))=x(\delta(v))=2\). We get that for any \(e\in E(D,D)\) we have \(z_{e}=x_{e}=0\) so \(z(E(D,D))=0\). Finally \(0\leq z\leq 2\) follows since \(z_{e^{\prime}}\leq 2\) if and only if \(x_{e}\leq 2r(u)r(v)\), which follows since \(x_{e}\leq 2\min(r(u),r(v))\) by the degree constraints.

We show that \(z\) satisfies \(z(\delta(D\cup T))\geq 2\) for all \(T\subset V^{r}-D\). Let \(k\) be the number of vertices \(v\in V\) for which there exist copies \(v_{i},v_{j}\) of \(v\) such that \(T\) contains exactly one of \(v_{i},v_{j}\). We show \(z(\delta(D\cup T))\geq 2\) by induction on \(k\). If \(k=0\), then we take \(S\subset V-D\) to be the set acquired by taking the original copy \(v\) of each vertex \(v_{i}\in T\) and we get that \(z(\delta(T\cup D))=x(\delta(S\cup D))\geq 2\). If \(k>0\) there exists a vertex \(v\in V-D\) such that \(T\) contains some but not all of the \(r(v)\) copies of \(v\). Let \(B=V^{r}-(T\cup D)\), \(T(v)=T\cap\{v_{1},\ldots,v_{r(v)}\}\), \(B(v)=\{v_{1},\ldots,v_{r(v)}\}-T(v)\). For any \(v_{i}\in T(v)\) let \(Z_{1}=z(E(v_{i},D\cup T-T(v)))\) and \(Z_{2}=z(E(v_{i},B))\), and we note that the values of \(Z_{1},Z_{2}\) do not change depending on the choice of \(v_{i}\) since all copies of vertex \(v\) have the same set of neighbors and the same \(z\) values assigned to their edges. Thus we have that \(z(E(T(v),D\cup T-T(v)))=|T(v)|Z_{1}\) and \(z(E(T(v),B))=|T(v)|Z_{2}\). In the case that \(Z_{2}\geq Z_{1}\), we observe that \(z(\delta(D\cup T-T(v)))=z(\delta(D\cup T))+z(E(T(v),D\cup T-T(v)))-z(E(T(v),B))=z(\delta(D\cup T))+|T(v)|(Z_{1}-Z_{2})\leq z(\delta(D\cup T))\). Thus we get \(z(\delta(D\cup T-T(v)))\geq 2\) by applying induction to \(T-T(v)\), implying \(z(\delta(D\cup T))\geq 2\). Similarly, if \(Z_{1}\geq Z_{2}\), we get that \(2\leq z(\delta(D\cup T\cup B(v)))=z(\delta(D\cup T))-z(E(B(v),T\cup D))+z(E(B(v),B-B(v)))\leq z(\delta(D\cup T))+|B(v)|(Z_{2}-Z_{1})\leq z(\delta(D\cup T))\), where the first inequality follows by applying induction to \(T\cup B(v)\).
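To make the construction concrete, here is a small Python sketch (ours) of the splitting used in Lemma 4.1: every vertex outside \(D\) is replaced by \(r(v)\) copies and the LP value of an edge is spread uniformly over all pairs of copies. For simplicity the sketch ignores loops; the copy names and the toy instance are illustrative only.

```python
def expand_solution(x, r, D=frozenset()):
    """x: dict mapping frozenset({u, v}) with u != v to its LP value; r: visit requests
    for vertices outside D (depots keep a single copy). Loops are not handled here."""
    def mult(v):
        return 1 if v in D else r[v]

    z = {}
    for e, val in x.items():
        u, v = tuple(e)
        for i in range(mult(u)):
            for j in range(mult(v)):
                z[frozenset({f"{u}#{i}", f"{v}#{j}"})] = val / (mult(u) * mult(v))
    return z


# Toy check: vertex b with r(b) = 2 has LP degree 4, and each of its two copies
# receives degree 2 in the expanded solution, as in the proof of Lemma 4.1.
x = {frozenset({"a", "b"}): 2.0, frozenset({"b", "c"}): 2.0, frozenset({"a", "c"}): 0.0}
r = {"a": 1, "b": 2, "c": 1}
z = expand_solution(x, r)
deg_b0 = sum(val for edge, val in z.items() if "b#0" in edge)
assert abs(deg_b0 - 2.0) < 1e-9
```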
Using these LP relaxations and Lemma 4.1, we can apply an approximation algorithm for the single visit case to the multi-visit case, at the price of a run-time depending on \(n\) and the visit function \(r\).
**Lemma 4.2**.: _Let \(\mathcal{A}\) be an approximation algorithm that takes a solution \(x\) to LP (3) and outputs an integral solution \(C\subseteq E\) to LP (3) satisfying \(\sum_{e\in C}c_{e}\leq\rho\sum_{e\in E}c_{e}x_{e}\) with a run-time polynomial in \(n\). There exists an algorithm that given a solution \(y\) to LP (4) outputs an integral solution \(T:E\rightarrow\mathbb{Z}\) satisfying \(\sum_{e\in E}c_{e}T(e)\leq\rho\sum_{e\in E}c_{e}y_{e}\) with a run-time polynomial in \(\max_{v\in V}r(v)\) and \(n\)._
Proof.: We will convert the solution \(y\) to a solution \(x\) to LP (3) on the graph \(G^{r}\) by applying Lemma 4.1 to \(y\). Thus we can apply Algorithm \(\mathcal{A}\) to \(x\) as a solution on \(G^{r}\), which we convert to a solution on \(G\) by replacing every edge in the solution with its original corresponding edge in \(G\). Let \(T:E\rightarrow\mathbb{Z}\) be the solution we get. We now show \(T\) is feasible for LP (4). First for every \(v\in V\) we get \(\sum_{e\in\delta(v)}T(e)=2r(v)\) since the solution in \(G^{r}\) contained \(r(v)\) copies of \(v\), each with degree \(2\). Moreover, for all \(S\subset V-D\) we have \(\sum_{e\in\delta(S\cup D)}T(e)\geq 2\). For the approximation factor we have \(\sum_{e\in E}T(e)c_{e}\leq\rho\sum_{e\in E}c_{e}x_{e}=\rho\sum_{e\in E}c_{e}y_{e}\), where the inequality follows since \(\mathcal{A}\) is a \(\rho\)-approximation and the equality follows by Lemma 4.1. Finally, the run-time follows since \(G^{r}\) has at most \(n\max_{v\in V}r(v)\) vertices and \(\mathcal{A}\) is a polynomial time algorithm.
Using this lemma we get the following polynomial time algorithm.
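Since the algorithm box itself is not reproduced here, the following Python sketch (ours) reconstructs its steps from the proof of Theorem 2 and the analogous Algorithm 6. Step 1 solves LP (4) (the solution is passed in as `x`), step 2 is the edge reduction validated by the claims below, step 3 applies the single-visit algorithm \(\mathcal{A}\) via Lemma 4.2 (passed in as a black box), and step 4 adds the removed edge copies back. Names are illustrative and loops are ignored for simplicity.

```python
import math


def mv_reduction(x, r, single_visit_algorithm, D=frozenset()):
    """x: optimal solution of LP (4) as a dict frozenset({u, v}) -> value (no loops);
    r: visit requests for v not in D; single_visit_algorithm: realizes Lemma 4.2.
    Returns an integral multi-visit solution T as a dict edge -> multiplicity."""
    # Step 2: reduce every large edge value by an even integer 2*k_e so that the
    # residual value lies in [2, 4), and reduce the visit function accordingly.
    k, x_tilde = {}, {}
    for e, val in x.items():
        k[e] = 0 if val <= 4 else math.floor((val - 2) / 2)
        x_tilde[e] = val - 2 * k[e]
    r_tilde = {v: req - sum(k[e] for e in x if v in e) for v, req in r.items()}

    # Step 3: run the single-visit algorithm on the expanded graph G^{r~} (Lemma 4.2).
    T = single_visit_algorithm(x_tilde, r_tilde)

    # Step 4: put the removed copies back.
    for e in x:
        T[e] = T.get(e, 0) + 2 * k[e]
    return T
```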
The next two claims show that Step 3 of the algorithm is valid.
**Claim 4.3**.: _The new visit function \(\tilde{r}\) satisfies \(1\leq\tilde{r}(v)\leq 2n\) for all \(v\in V\)._
Proof.: For \(v\in V\) we have,
\[\tilde{r}(v) =r(v)-\sum_{e\in\delta(v)}k_{e}\] \[=r(v)-\sum_{e\in\delta(v)}\frac{x_{e}-\tilde{x}_{e}}{2}=\frac{1}{ 2}\sum_{e\in\delta(v)}\tilde{x}_{e}.\]
If all \(e\in\delta(v)\) satisfy \(x_{e}\leq 4\) then \(\tilde{r}(v)=r(v)\geq 1\). Otherwise the lower bound follows since for \(x_{e}>4\) we have \(\tilde{x}_{e}\geq 2\). The upper bound follows since \(\tilde{x}_{e}\leq 4\) for all \(e\in E\).
**Claim 4.4**.: _The solution \(\tilde{x}\) is feasible for LP (4) with graph \(G\) and \(\tilde{r}\)._
Proof.: We have \(\tilde{x}\geq 0\) and \(\sum_{e\in\delta(v)}\tilde{x}_{e}=\sum_{e\in\delta(v)}x_{e}-2k_{e}=2(r(v)- \sum_{e\in\delta(v)}k_{e})=2\tilde{r}(v)\). For any set \(S\subset V-D\), if \(x_{e}\leq 4\) for all \(e\in\delta(S\cup D)\) then \(\tilde{x}(\delta(S\cup D))=x(\delta(S\cup D))\geq 2\). Otherwise if there is an edge \(e\in\delta(S\cup D)\) such that \(x_{e}>4\) we get \(\tilde{x}(\delta(S\cup D))\geq\tilde{x}_{e}\geq 2\) by definition of \(\tilde{x}\). Finally we have \(\tilde{x}_{e}=x_{e}=0\) for all \(e\in E(D,D)\) implying \(\tilde{x}(E(D,D))=0\). Moreover, \(T\) contains no edges between vertices in \(D\) since \(k_{e}=0\) for all \(e\in E(D,D)\) and \(T^{\prime}(e)=0\) for all \(e\in E(D,D)\).
**Theorem 2**.: _Let \(x^{*}\) be the optimal LP solution to LP (4) and \(\rho\) be the approximation factor of algorithm \(\mathcal{A}\) whose guarantee is relative the value of LP (3). Then Algorithm (2) outputs a feasible integral solution to LP (4), \(T:E\to\mathbb{Z}\) satisfying \(\sum_{e\in E}T(e)c_{e}\leq\rho c^{T}x^{*}\) and runs in time polynomial in \(n\)._
Proof.: Let \(T^{\prime}\) be the solution we get from the third step before we increase each edge by \(2k_{e}\). By Claims 4.3 and 4.4 and Lemma 4.2 we have that \(T^{\prime}\) satisfies \(\sum_{e\in E}T^{\prime}(e)c_{e}\leq\rho\sum_{e\in E}c_{e}\tilde{x}_{e}\). Let \(S=\{e\in E\mid k_{e}>0\}\); then we have that \(\sum_{e\in E}T(e)c_{e}=\sum_{e\in E}T^{\prime}(e)c_{e}+2\sum_{e\in S}k_{e}c_{e}\leq\rho\sum_{e\in E}c_{e}\tilde{x}_{e}+2\sum_{e\in S}k_{e}c_{e}\leq\rho c^{T}x^{*}\). For the run-time, we have that Step 3 runs in time polynomial in \(n\) by Claim 4.3 and Lemma 4.2 and the rest of the steps are clearly polynomial time in \(n\). Finally, \(T\) is a feasible integral solution to LP (4) since \(T^{\prime}\) satisfies the cut constraints, which implies \(T\) also satisfies the cut constraints since \(T(e)\geq T^{\prime}(e)\) for all \(e\), and by definition \(T\) satisfies the degree constraints.
## 5 Tree Characterizations
Many algorithms for TSP variants work with a tree with some specified structure, and an LP characterization of the tree is needed to bound the approximation guarantee. In this section we characterize the polytopes of three different families of trees. We first characterize connected graphs that have fixed degree \(2k\) on vertex \(v_{1}\in V\). We define \(\kappa(S)\) as the number of components in the graph \((V,S)\) for all \(S\subseteq E\),
minimize \[\sum_{e\in E}c_{e}z_{e}\] (5) s.t. \[z(S)\geq\kappa(\overline{S})-1 \forall S\subseteq E\] \[z(\delta(v_{1}))=2k\] \[z_{e}\geq 0 \forall e\in E.\]
**Claim 5.1**.: _Let \(T_{2k}^{*}\) be a minimum cost spanning tree among all spanning trees in \(G\) that have degree \(2k\) on vertex \(v_{1}\). Then we have that the indicator vector \(T_{2k}^{*}\) is an optimal solution to LP (5)._
Proof.: First we define matroid \(\mathcal{M}_{1}\) as the dual of the graphic matroid on graph \(G\). Next we define \(\mathcal{M}_{2}\) as a partition matroid with parts \(P_{1},P_{2},\ldots,P_{|E|}\) where \(P_{1}\) contains edges incident to \(v_{1}\) and has capacity \(d(v_{1})-2k\), and the remaining edges go in a unique \(P_{j}\) with capacity \(1\). Thus common independent sets of \(\mathcal{M}_{1},\mathcal{M}_{2}\) are sets \(R\subseteq E\) such that \(R\) has at most \(d(v_{1})-2k\) edges incident to \(v_{1}\) and \(E-R\) contains a spanning tree. Moreover, an optimal solution must also satisfy that \(E-R\) is a spanning tree. By the matroid intersection theorem, the polytope given by the constraints \(\{x\geq 0\mid x(S)\leq r_{\mathcal{M}_{i}}(S)\ \forall i\in[2],\ \forall S\subseteq E\}\) is totally dual integral and therefore integral. Turning an inequality into an equality in a TDI system maintains the TDI property, so we can restrict our polytope to common independent sets of \(\mathcal{M}_{1},\mathcal{M}_{2}\) that have degree exactly \(d(v_{1})-2k\) on \(v_{1}\). Then we observe that \(E-T_{2k}^{*}\) is an optimal solution to \(\max_{I\in\mathcal{I}_{1}\cap\mathcal{I}_{2}}c(I)\) and is also an optimal solution to the following LP
maximize \[\sum_{e\in E}c_{e}x_{e}\] (6) s.t. \[x(\delta(v_{1}))=d(v_{1})-2k\] \[x(S)\leq r_{\mathcal{M}_{1}}(S) \forall S\subset E\] \[x_{e}\leq 1 \forall e\in E.\]
Here we have that \(r_{\mathcal{M}_{1}}(S)=|S|-\kappa(\overline{S})+1\) since \(\mathcal{M}_{1}\) is the dual matroid to the graphic matroid. If we negate the objective of LP (6), change it to a minimization problem, add \(\sum_{e\in E}c_{e}\) to the objective, and make the variable change \(z_{e}=1-x_{e}\) we get LP (5). This is true since the following hold for all \(S\subseteq E\)
1. \(\sum_{e\in E}c_{e}-\sum_{e\in E}c_{e}x_{e}=\sum_{e\in E}c_{e}z_{e}\)
2. \(x(\delta(v_{1}))=d(v_{1})-2k\iff 2k=d(v_{1})-x(\delta(v_{1}))\iff 2k=z( \delta(v_{1}))\)
3. \(x(S)\leq|S|-\kappa(E-S)+1\iff\kappa(\overline{S})-1\leq|S|-x(S)\iff\kappa( \overline{S})-1\leq z(S)\).
If \(x^{*}\) is an optimal solution to LP (6) then \(z^{*}=\mathbf{1}-x^{*}\) is an optimal solution to LP (5), so \(T^{*}_{2k}\) is an optimal solution to LP (5) since \(E-T^{*}_{2k}\) is an optimal solution to LP (6).
Similar to the previous LP, we now give the LP formulation of spanning trees that have fixed degree \(2k\leq n-1\) on a fixed vertex \(v_{1}\). We use the following LP where \(n\) is the number of vertices in the graph
minimize \[\sum_{e\in E}c_{e}z_{e}\] (7) s.t. \[z(E(S))\leq|S|-1 \forall S\subseteq V\] \[z(\delta(v_{1}))=2k\] \[z(E)=n-1\] \[0\leq z_{e}\leq 1 \forall e\in E.\]
**Claim 5.2**.: _Let \(T^{*}_{2k}\) be a minimum cost spanning tree among all spanning trees in \(G\) that have degree \(2k\) on vertex \(v_{1}\). Then we have that the indicator vector \(T^{*}_{2k}\) is an optimal solution to LP (7)._
Proof.: Let \(\mathcal{M}_{1}\) be the graphic matroid on \(G\) and \(\mathcal{M}_{2}\) be a partition matroid with one part containing all edges incident to \(v_{1}\) with capacity \(2k\) and another part containing the remaining edges with capacity \(n-1-2k\). Then \(T^{*}_{2k}\) is an optimal common base in \(\mathcal{M}_{1},\mathcal{M}_{2}\) and the polytope of common bases is given by the constraints of LP (7).
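The following Python sketch (ours) illustrates Claim 5.2 numerically on a tiny random instance: the optimum of LP (7), solved with scipy, is compared against the cheapest spanning tree with degree exactly \(2k\) at \(v_{1}\), found by brute-force enumeration; the two values should coincide. This is only a sanity check for small \(n\), not the polynomial-time method implied by the matroid intersection argument.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k = 5, 1                                   # need 2k <= n - 1
pts = rng.random((n, 2))
V = list(range(n))
E = [(i, j) for i in V for j in V if i < j]
cost = {e: float(np.linalg.norm(pts[e[0]] - pts[e[1]])) for e in E}


def is_spanning_tree(T):
    """A set of n-1 acyclic edges on n vertices is a spanning tree."""
    if len(T) != n - 1:
        return False
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in T:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True


# Brute force: cheapest spanning tree with degree exactly 2k at vertex v_1 = 0.
best = min(
    sum(cost[e] for e in T)
    for T in itertools.combinations(E, n - 1)
    if is_spanning_tree(T) and sum(1 for e in T if 0 in e) == 2 * k
)

# LP (7): z(E(S)) <= |S|-1 for all S, z(delta(v_1)) = 2k, z(E) = n-1, 0 <= z <= 1.
idx = {e: i for i, e in enumerate(E)}
A_ub, b_ub = [], []
for size in range(2, n + 1):
    for S in itertools.combinations(V, size):
        row = np.zeros(len(E))
        for e in E:
            if e[0] in S and e[1] in S:
                row[idx[e]] = 1.0
        A_ub.append(row)
        b_ub.append(size - 1)
A_eq = [np.array([1.0 if 0 in e else 0.0 for e in E]), np.ones(len(E))]
b_eq = [2.0 * k, n - 1.0]
res = linprog(c=[cost[e] for e in E], A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print(f"LP (7) optimum: {res.fun:.6f}   brute-force tree optimum: {best:.6f}")
```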
Next we show another LP which will be used for an MV-mTSP problem with depots. For a set of vertices \(D\), let \(\hat{G}\) be the graph with all vertices of \(D\) contracted into a single vertex \(\hat{d}\) and for \(S\subseteq E-E(D,D)\) let \(\kappa_{\hat{G}}(S)\) be the number of components in the graph \(\hat{G}\) with edges \(S\).
minimize \[\sum_{e\in E-E(D,D)}c_{e}x_{e}\] (8) s.t. \[x(S)\geq\kappa_{\hat{G}}(\overline{S}-E(D,D))-1 \forall S\subseteq E-E(D,D)\] \[x(E(d,V-D))\geq 1 \forall d\in D\] \[0\leq x_{e}\leq 1 \forall e\in E-E(D,D).\]
We show the following claim, which follows from ideas in [20].
**Claim 5.3**.: _A \(D\)-forest cover is a forest cover such that each component includes exactly one vertex from \(D\) and each component has at least two vertices. Then the value of LP (8) is the cost of the min cost \(D\)-forest cover._
Proof.: For a \(D\)-forest cover \(F\) we observe that \(\overline{F}\) is an edge set that satisfies \(|\{\{v,d\}\in\overline{F}\mid v\notin D\}|\leq n-|D|-1\) for all \(d\in D\) and that \(F\) is a spanning tree in the graph \(\hat{G}\). We now define matroids \(\mathcal{M}_{1},\mathcal{M}_{2}\) on the ground set \(E-E(D,D)\) with independent sets \(\mathcal{I}_{1},\mathcal{I}_{2}\) whose common independent sets correspond to complements of \(D\)-forest covers. We define \(\mathcal{M}_{1}\) as a partition matroid which has parts \(P_{d}\) for each \(d\in D\) that contain edges \(\{\{d,v\}\mid v\notin D\}\) with capacity \(n-|D|-1\) and all other edges \(e\notin E(D,D)\) go in a unique part \(P_{e}\) with capacity \(1\). We define \(\mathcal{M}_{2}\) as the dual of the graphic
matroid on graph \(\hat{G}\). Then the complements of \(D\)-forest covers are common independent sets of \(\mathcal{M}_{1},\mathcal{M}_{2}\) and the complements of all common independent sets of \(\mathcal{M}_{1},\mathcal{M}_{2}\) contain \(D\)-forest covers. This implies that the cost of a min cost \(D\)-forest cover is \(c(E-E(D,D))-\max_{F\in\mathcal{I}_{1}\cap\mathcal{I}_{2}}c(F)\). By the characterization of the matroid intersection polytope the value of \(\max_{F\in\mathcal{I}_{1}\cap\mathcal{I}_{2}}c(F)\) is
\[\max\sum_{e\in E-E(D,D)}c_{e}x_{e}\] s.t. \[\sum_{e\in E(d,V-D)}x_{e}\leq n-|D|-1 \forall d\in D\] \[x(S)\leq r_{\mathcal{M}_{2}}(S) \forall S\subseteq E-E(D,D)\] \[0\leq x_{e}\leq 1 \forall e\in E-E(D,D).\]
Similar to Claim 5.1 we make the variable change \(z_{e}=1-x_{e}\), add \(\sum_{e\in E-E(D,D)}c_{e}\), negate the objective, and make it minimization to get LP (8). This follows since the following hold
1. \(\sum_{e\in E-E(D,D)}c_{e}-\sum_{e\in E-E(D,D)}c_{e}x_{e}=\sum_{e\in E-E(D,D)}c_ {e}z_{e}\)
2. \(\sum_{e\in E(d,V-D)}x_{e}\leq n-|D|-1\iff\sum_{e\in E(d,V-D)}z_{e}\geq 1\)
3. \(x(S)\leq|S|-\kappa_{\hat{G}}(E-E(D,D)-S)+1\iff z(S)\geq\kappa_{\hat{G}}(E-E(D,D )-S)-1\iff z(S)\geq\kappa_{\hat{G}}(\overline{S}-E(D,D))-1\).
Thus \(x^{*}\) is an optimal solution if and only if \(z^{*}=\mathbf{1}-x^{*}\) is an optimal solution to LP (8).
## 6 MV-mTSP Arbitrary Tours
### Approximation Algorithm with Depots
In this subsection, we give a 2-approximation for the MV-mTSP\({}_{+}\) with arbitrary tours problem. Here we are given a set of depot vertices \(D\) with \(|D|=k\) and the goal is to find \(k\) closed walks such that each closed walk uses exactly one depot. We note that \(r(v)=1\) for all \(v\in D\) since we require all walks to be non-empty. For the single visit case of the problem, a 2-approximation was given by [2]. This algorithm is a simple combinatorial tree-doubling algorithm. To allow us to use this algorithm in our reduction, we first show that the tree doubling algorithm achieves a 2-approximation relative to its LP relaxation for the single visit case in Lemma 6.2. We use the following LP which comes from LP (3) by setting \(D\) to the set of depot vertices
\[\text{minimize} \sum_{e\in E}c_{e}x_{e}\] (9) s.t. \[x(\delta(v))=2 \forall v\in V\] \[x(\delta(S\cup D))\geq 2 \forall S\subset V-D\] \[x(E(D,D))=0\] \[0\leq x_{e}\leq 2 \forall e\in E.\]
We recall the definition of a \(D\)-forest cover.
**Definition 1**.: _A \(D\)-forest cover is a forest cover such that each component includes exactly one vertex from \(D\) and each component has at least two vertices._
In the following claim, we show that the cost of min-cost \(D\)-forest cover as given by LP (8) is at most the cost of the optimal solution to LP (9).
**Claim 6.1**.: _Let \(z^{*}\) be an optimal solution to LP (9) and \(x^{*}\) be an optimal solution to LP (8) then we have \(c^{T}x^{*}\leq c^{T}z^{*}\)._
Proof.: Let \(z\) be a feasible solution to LP (9); we will show that \(z\) is a feasible solution to LP (8). We note that LP (8) is defined on edges \(E-E(D,D)\) and \(z\) is defined on edges \(E\), but \(z\) satisfies \(z_{e}=0\) for all \(e\in E(D,D)\). For all \(d\in D\) we have that \(z(E(d,V-D))=z(\delta(d))-z(E(d,D))=z(\delta(d))=2>1\) where the second equality follows since \(0\leq z(E(d,D))\leq z(E(D,D))=0\). We recall that we use the graph \(\hat{G}\) in LP (8) which we get by contracting all vertices in \(D\) to a single vertex. Let \(\hat{d}\) be the contracted depot vertex in \(\hat{G}\) and for \(S\subseteq E-E(D,D)\) let \(C_{1},\ldots,C_{p}\) be the components of the sub-graph of \(\hat{G}\) with edges \(E-S-E(D,D)\) such that \(\hat{d}\in C_{1}\). Then we have that \(z(S)\geq\sum_{i<j\leq p}z(E(C_{i},C_{j}))=\frac{1}{2}\left(z(\delta((C_{1}- \hat{d})\cup D))+\sum_{i=2}^{p}z(\delta(C_{i}))\right)\geq p=\kappa_{\hat{G}} (\overline{S}-E(D,D))>\kappa_{\hat{G}}(\overline{S}-E(D,D))-1\). The second to last inequality follows since for \(i>1\) we have \(C_{i}\subseteq V-D\) so \(z(\delta(C_{i}))=z(\delta(D\cup C_{i}^{\prime}))\geq 2\) for some \(C_{i}^{\prime}\subseteq V-D\).
The following lemma now follows straightforwardly about the LP-based guarantee for Tree Doubling Algorithm.
**Lemma 6.2**.: _Let \(z^{*}\) be an optimal solution to the linear programming relaxation for \(\mathrm{mTSP}_{+}\), LP (9). Then the Tree Doubling Algorithm returns a solution whose cost is at most twice the objective value of \(z^{*}\)._
We now present the reduction to achieve a \(2\)-approximation for the \(\mathrm{MV}\)-\(\mathrm{mTSP}_{+}\) with arbitrary tours using Lemma 6.2 and our general reduction in Theorem 2.
**Theorem 3**.: _There is a polynomial time \(2\)-approximation algorithm for the \(\mathrm{MV}\)-\(\mathrm{mTSP}_{+}\) with arbitrary tours problem._
Proof.: The following linear program is a relaxation for the \(\mathrm{MV}\)-\(\mathrm{mTSP}_{+}\) problem.
\[\text{minimize }\sum_{e\in E}c_{e}x_{e} \tag{10}\] \[\text{s.t. }x(\delta(v))=2 \forall v\in D\] \[x(\delta(v))=2r(v) \forall v\in V-D\] \[x(\delta(S\cup D))\geq 2 \forall S\subset V-D\] \[x(E(D,D))=0\] \[0\leq x_{e}\leq 2 \forall e\in E.\]
Observe that LP (10) is exactly the same as LP (4) in the general framework. Moreover, LP (9) is exactly the same as LP (3). From Lemma 6.2, we obtain that the Tree Doubling Algorithm satisfies the condition needed for Theorem 2. Thus applying Theorem 2, we obtain an integral solution whose cost is at most \(2\) times the cost of the optimal solution to LP (4), which is at most the cost of the optimal solution to the MV-mTSP\({}_{+}\) with arbitrary tours problem, as claimed.
### Approximation for Unrestricted Variant
In this subsection we show there is a \(2\)-approximation for unrestricted mTSP\({}_{+}\) with arbitrary tours. We note that we allow using loops for the single visit version here. Our algorithm is the following.
```
Input:\(G,k\in\mathbb{Z},1\leq k\leq n\) Output:\(k\) cycles that cover all vertices in the graph
1 Add a new vertex \(d\) that has all edges to vertices of \(G\) to get a new graph \(G^{\prime}\). Extend the cost function of the graph by setting \(c_{dv}=\frac{c_{vv}}{2}\).
2 Find a spanning tree of \(G^{\prime}\) of minimum cost that has degree \(k\) on the vertex \(d\).
3 Remove the vertex \(d\) and all edges incident to it. Among the remaining \(k\) components, if a component is a singleton then add a loop at that vertex. Otherwise double the edges of the tree within the component and shortcut so that the component becomes a cycle. Return the resulting \(k\) cycles.
```
**Algorithm 4** Unrestricted mTSP\({}_{+}\)
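Below is a runnable Python sketch (ours) of Algorithm 4 for tiny instances. The degree-constrained minimum spanning tree of step 2 is found here by brute force purely for illustration, whereas the paper obtains it in polynomial time (e.g. via the matroid intersection characterization of Section 5); the doubling-and-shortcutting of step 3 is realized as a DFS preorder of each component. Names and the toy instance are illustrative.

```python
import itertools
import math


def unrestricted_mtsp_plus(V, c, k):
    """V: list of vertices; c: dict with c[(u, v)] for u < v and loop costs c[(v, v)];
    k: number of cycles. Brute-forces step 2, so only suitable for very small instances.
    Returns a list of k cycles as vertex lists; a singleton [v] stands for a loop at v."""
    d = "dummy"                       # the added vertex of step 1
    Vp = V + [d]
    n = len(Vp)

    def cost(u, v):
        if d in (u, v):
            w = u if v == d else v
            return c[(w, w)] / 2.0    # c_{dv} = c_{vv} / 2
        return c[(u, v)] if (u, v) in c else c[(v, u)]

    E = [(u, v) for i, u in enumerate(Vp) for v in Vp[i + 1:]]

    def acyclic(T):
        parent = {x: x for x in Vp}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for u, v in T:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True

    # Step 2 (brute force): cheapest spanning tree of G' with degree exactly k at d.
    best_T, best_cost = None, math.inf
    for T in itertools.combinations(E, n - 1):
        if sum(1 for e in T if d in e) != k or not acyclic(T):
            continue
        w = sum(cost(u, v) for u, v in T)
        if w < best_cost:
            best_T, best_cost = T, w
    assert best_T is not None, "no spanning tree with the required degree at d"

    # Step 3: remove d, then double-and-shortcut every component (DFS preorder).
    adj = {v: [] for v in V}
    roots = []
    for u, v in best_T:
        if d in (u, v):
            roots.append(u if v == d else v)
        else:
            adj[u].append(v)
            adj[v].append(u)
    cycles, seen = [], set()
    for root in roots:
        order, stack = [], [root]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            order.append(x)
            stack.extend(adj[x])
        cycles.append(order)
    return cycles


# Tiny demo: four points on a line, two salespersons.
V = [0, 1, 2, 3]
c = {(i, j): float(j - i) for i in V for j in V if i < j}
c.update({(v, v): 0.5 for v in V})    # loop costs need not vanish for a semi-metric
print(unrestricted_mtsp_plus(V, c, k=2))
```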
Now we describe our LP relaxation for this problem. We note that our LP is just the LP relaxation of the trees that integral solutions must contain. For the previous problem this was implicitly true as we showed a fractional solution \(x\) was in the up-hull of the tree polytope. In this case the tree is defined on a different graph \(G^{\prime}\) than \(x\) is defined on. We denote by \(E^{\prime}=E\cup\{\{d,v\}\mid v\in V\}\) the edge set of \(G^{\prime}\), where \(d\) is a new dummy vertex that we added to the graph. We extend the cost function by setting \(c_{dv}=\frac{c_{vv}}{2}\) for all \(v\in V\).
\[\text{minimize} \sum_{e\in E^{\prime}}c_{e}z_{e}\] (11) s.t. \[z(E^{\prime}(S))\leq|S|-1 \forall S\subseteq V\cup\{d\} \tag{12}\] \[z(\delta(d))=k\] \[z(E^{\prime})=n\] \[0\leq z_{e}\leq 1 \forall e\in E^{\prime}\]
**Claim 6.3**.: _Let OPT be the value of an optimal solution to the unrestricted mTSP problem with \(k\) salespersons and \(z^{*}\) be an optimal solution to LP (11). Then we have \(c^{T}z^{*}\leq\text{\rm OPT}\)._
Proof.: Let \(C_{1},C_{2},\ldots,C_{k}\) be a feasible solution; we will construct a feasible solution \(z\) to LP (11) whose cost (in the extended graph \(G^{\prime}\)) is at most the cost of \(C_{1},\ldots,C_{k}\). To get \(z\), we construct a spanning tree in the graph \(G^{\prime}\) such that the dummy vertex \(d\) has degree \(k\). To construct the tree \(T\), for each \(C_{i}\) we arbitrarily remove one edge from \(C_{i}\) and add an edge between one of the endpoints of the missing edge and the dummy vertex \(d\). For each \(C_{i}\), if we remove edge \(e=\{u,v\}\) with cost \(c_{e}\) we add either \(\{d,v\}\) or \(\{d,u\}\), which only decreases the cost since \(c_{du}=\frac{c_{uu}}{2}\leq c_{e}\) and \(c_{dv}=\frac{c_{vv}}{2}\leq c_{e}\). The resulting tree \(T\) is a spanning tree of \(G^{\prime}\) with degree \(k\) on \(d\), so its indicator vector \(z\) is feasible for LP (11) and \(c^{T}z^{*}\leq c^{T}z\leq\sum_{i=1}^{k}c(C_{i})\); applying this to an optimal solution gives \(c^{T}z^{*}\leq\mathrm{OPT}\).
**Theorem 4**.: _There is a polynomial time algorithm for the unrestricted \(\mathrm{mTSP}_{+}\) problem with an approximation factor of \(2\)._
Proof.: Let \(M_{1},\ldots,M_{k}\) be the output of Algorithm (4) and \(z^{*}\) be an optimal solution to LP (11). We will show that \(\sum_{i=1}^{k}c(M_{i})\leq 2c^{T}z^{*}\). Let \(T^{*}\) be the tree from the second step of Algorithm (4). First we show that \(\sum_{i=1}^{k}c(M_{i})\leq 2c(T^{*})\). If we acquire \(M_{i}\) by adding a loop to a singleton vertex \(v\) then the cost of \(M_{i}\) is \(c_{vv}\) while the cost of the edge adjacent to \(M_{i}\) in \(T^{*}\) is \(\frac{c_{vv}}{2}\), so the algorithm pays exactly twice the tree cost for this component. Now we consider when \(M_{i}\) is a non-singleton component; let \(\{d,v\}\) be the edge in \(T^{*}\) such that \(v\in M_{i}\) and let \(u\in V\) be such that \(\{u,v\}\) is an edge in \(T^{*}\). Then we have that \(c_{uv}+\frac{c_{vv}}{2}\leq 2c_{uv}\). For any other edge \(e\in M_{i}\), the algorithm pays at most \(2c_{e}\) while the cost in \(T^{*}\) is \(c_{e}\). Then summing up the cost of all cycles and applying these bounds gives \(\sum_{i=1}^{k}c(M_{i})\leq 2c(T^{*})\). Then the proof is concluded by observing \(c(T^{*})=c^{T}z^{*}\) since \(T^{*}\) is an optimal solution to LP (11) by Claim 5.2.
## 7 Single Depot Multi-Visit mTSP
In this section we give a \(\frac{3}{2}\)-approximation for the SD-MV-\(\mathrm{mTSP}_{+}\) with arbitrary tours problem and a \(\frac{7}{2}\)-approximation for the SD-MV-\(\mathrm{mTSP}_{+}\) with vertex disjoint tours problem.
### Approximation Algorithm for Arbitrary Tours
First we convert the algorithm for SD-mTSP by Frieze [10] to an LP-based analysis, since Frieze shows that this algorithm achieves a \(3/2\)-approximation relative to the optimal integral solution.
```
Input:\(G=(V=\{v_{1},\ldots,v_{n}\}),c:V\times V\rightarrow\mathbb{R}_{\geq 0},k\in\mathbb{N}\) Output:\(k\) cycles that contain \(v_{1}\) and together span the graph such that every vertex not equal to \(v_{1}\) is visited exactly once
1 Among spanning trees \(T\) such that \(d_{T}(v_{1})=2k\) find a min cost tree \(T^{*}\).
2 Find a min cost perfect matching \(M^{*}\) on the vertices with odd degree in \(T^{*}\).
3 Add \(M^{*}\) to \(T^{*}\) which is now an Eulerian graph. Let \(w_{1}=v_{1},\ldots,w_{s}=v_{1}\) be the Eulerian tour and let \(U\) be the neighbors of \(v_{1}\) in \(T^{*}\). Delete a node \(w_{i}\) in the sequence if 1. \(w_{i}\) has appeared before and \(w_{i}\neq v_{1}\) or 2. \(w_{i}\in U\) and \(v_{1}\notin\{w_{i-1},w_{i+1}\}\). Return the sequence obtained after short cutting.
```
**Algorithm 5** Single Depot mTSP
We use the following LP for SD-mTSP. We note that the LP does not exactly fit the LP in the general framework (LP (3)) since there is a different constraint on the degree of vertex \(v_{1}\). We still use the general framework in this section, but we show that each part of the framework still holds
with the additional degree constraint.
\[\text{minimize }\sum_{e\in E}c_{e}x_{e} \tag{13}\] \[\text{s.t. }x(\delta(v))=2 \forall v\in V-v_{1}\] \[x(\delta(v_{1}))=2k\] \[x(\delta(S))\geq 2 \forall\emptyset\neq S\subset V\] \[x_{e}\geq 0 \forall e\in E.\]
We first show the cost of \(T^{*}\) is at most the cost of the LP relaxation for the problem.
**Lemma 7.1**.: _Let \(x^{*}\) be an optimal solution to LP (13). Then we have,_
\[c(T^{*})\leq c^{T}x^{*}.\]
Proof.: We show that any solution \(x\) to LP (13) is feasible for LP (5), which will conclude the proof. The solution \(x\) clearly satisfies \(x(\delta(v_{1}))=2k\) and \(0\leq x\) by the constraints of LP (13). Let \(S\subset E\) and let \(C_{1},\ldots,C_{m}\) be the connected components of the graph \((V,\overline{S})\). Then we have that,
\[x(S) \geq\sum_{i<j}x(E(C_{i},C_{j}))=\frac{1}{2}\sum_{i=1}^{m}x(\delta (C_{i}))\] \[\geq m=\kappa(\overline{S})>\kappa(\overline{S})-1.\]
The first inequality follows since \(E(C_{i},C_{j})\subseteq S\) because \(C_{1},\ldots,C_{m}\) are components in the graph with edges \(\overline{S}\), and the second inequality follows since \(x\) is feasible for LP (13).
We can show the cost of the matching is at most \(1/2\) the cost of the LP optimum.
**Claim 7.2**.: _Let \(x^{*}\) be an optimal solution to LP (13). Then we have \(c(M^{*})\leq\frac{c^{T}x^{*}}{2}\)._
Proof.: Let \(S\) be the set of odd degree vertices in \(T^{*}\); then \(M^{*}\) is a min-cost \(S\)-join in \(G\). The polytope for \(S\)-joins is given by \(\{x\geq 0|x(\delta(P))\geq 1,\forall P\text{ such that }|P\cap S|\text{ is odd}\}\). Then the claim follows since \(x^{*}/2\) is a feasible solution for the \(S\)-join polytope.
Then the above lemma and claim imply the following.
**Lemma 7.3**.: _Let \(x^{*}\) be an optimal solution to LP (13). Then Algorithm (5) returns a solution \(C\) satisfying \(\sum_{e\in C}c_{e}\leq\frac{3}{2}c^{T}x^{*}\)._
Proof.: This follows since by the triangle inequality \(\sum_{e\in C}c_{e}\leq c(M^{*})+c(T^{*})\leq\frac{3}{2}c^{T}x^{*}\).
Now we are ready to get a \(3/2\)-approximation algorithm for the multi-visit variant. We need the following lemma to characterize solutions to the problem.
**Lemma 7.4**.: _Given a connected graph with edge set \(T\) such that \(d_{T}(v_{1})=2k\) and \(d_{T}(v)=2r(v)\) for all \(v\neq v_{1}\), we can decompose the edges of \(T\) into \(k\) closed walks containing \(v_{1}\)._
Proof.: The graph \((V,T)\) is Eulerian, so there exists an Eulerian circuit \(C\) starting at \(v_{1}\), and the circuit is given by a sequence of vertices \(w_{1}=v_{1},\ldots,w_{s}=v_{1}\). Let \(w_{1},w_{2},\ldots,w_{j}\) be a prefix of the sequence such that \(j\) is the smallest index greater than \(1\) such that \(w_{j}=v_{1}\). We will use \(w_{1},\ldots,w_{j}\) as the first closed walk. Next we reduce the graph by deleting all edges used by the first closed walk and then by removing any isolated vertices. We now show this graph is still Eulerian. Clearly all vertices have even degree since we removed an even number of edges from each vertex. The graph remains connected since the remainder of the Eulerian circuit \(C\) visits every remaining vertex without using any of the removed edges. Thus we can inductively repeat this process to get \(k\) closed walks containing \(v_{1}\) so that each vertex \(v\) is visited a total of \(r(v)\) times.
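A short Python sketch (ours) of this decomposition, using networkx to produce the Eulerian circuit and cutting it at every return to the depot; the toy graph is illustrative.

```python
import networkx as nx


def decompose_into_closed_walks(G, depot):
    """G: an Eulerian networkx MultiGraph; returns a list of closed walks (vertex lists),
    one per return of the Eulerian circuit to the depot, i.e. d(depot)/2 walks."""
    walks, current = [], [depot]
    for u, v in nx.eulerian_circuit(G, source=depot):
        current.append(v)
        if v == depot:
            walks.append(current)
            current = [depot]
    return walks


# Toy example: d(v1) = 4, so we obtain two closed walks through v1.
G = nx.MultiGraph()
G.add_edges_from([("v1", "a"), ("a", "b"), ("b", "v1"), ("v1", "c"), ("c", "v1")])
print(decompose_into_closed_walks(G, "v1"))
```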
We use the following LP for the multi-visit version of the problem.
\[\text{minimize} \sum_{e\in E}c_{e}x_{e} \tag{14}\] \[\text{s.t.} x(\delta(v_{1}))=2k\] \[x(\delta(v))=2r(v) \forall v\in V-v_{1}\] \[x(\delta(S))\geq 2 \forall\emptyset\neq S\subset V\] \[x_{e}\geq 0 \forall e\in E.\]
The proof of the following claim is nearly identical to that of Lemma 4.2.
**Claim 7.5**.: _Let \(\mathcal{A}\) be an algorithm that takes a solution \(z\) to LP (13) and outputs \(k\) cycles \(t_{1},\ldots,t_{k}\) satisfying \(\sum_{i=1}^{k}c(t_{i})\leq\rho c^{T}z\). Then given a solution \(x\) to LP (14) there is an algorithm that outputs a feasible solution to \(\mathrm{SD}\)-\(\mathrm{MV}\)-\(\mathrm{mTSP}\), \(T:E\to\mathbb{Z}\) satisfying \(\sum_{e\in E}c_{e}T(e)\leq\rho c^{T}x\) in time polynomial in \(\max_{v\neq v_{1}}r(v)\) and \(n\)._
Proof.: We follow the proof of Lemma 4.2. We construct the graph \(G^{r}\) identically to Lemma 4.2 for the non-depot vertices, meaning \(G^{r}\) has \(r(v)\) copies of each vertex \(v\in V-v_{1}\) and for \(v_{1}\) the graph \(G^{r}\) has only one copy. We also extend the cost function \(c\) so that an edge \(\{u_{i},v_{j}\}\) has cost \(c_{e}\) where \(e=\{u,v\}\) is the edge between the original vertices that \(u_{i},v_{j}\) are copies of. For each edge \(e=\{u_{i},v_{j}\}\in E^{r}\) where \(u_{i},v_{j}\) are copies of \(u,v\in V\) we construct a solution \(z_{e}=\frac{x_{uv}}{r(u)r(v)}\) where in this context we set \(r(v_{1})=1\).
We show that \(z\) is a feasible solution to LP (13) for graph \(G^{r}\) and satisfies \(c^{T}x=c^{T}z\), which follows from Lemma 4.1 with one slight addition. We use Lemma 4.1 by setting \(D=\{v_{1}\}\) and observe that the graph \(G^{r}\) and cost function \(c\) in Lemma 4.1 are the same as the \(G^{r},c\) given in this claim. Then in Lemma 4.1 we showed that \(z(\delta(v^{\prime}))=\frac{x(\delta(v))}{r(v)}\) where \(v^{\prime}\in V^{r}\) is a copy of vertex \(v\in V\), which implies for \(v\neq v_{1}\) that \(z(\delta(v^{\prime}))=2\) for every copy \(v^{\prime}\) of \(v\), and \(z(\delta(v_{1}))=2k\). Finally in Lemma 4.1 we showed that \(z(\delta(S\cup d))\geq 2\) for all \(S\subset V^{r}-\{d\}\), which is equivalent to \(z(\delta(T))\geq 2\) for all \(T\subset V^{r}\), since if \(d\notin T\) then we have that \(d\in V^{r}-T\), implying \(z(\delta(T))=z(\delta(V^{r}-T))\geq 2\).
Thus by applying algorithm \(\mathcal{A}\) to \(G^{r}\) we get \(k\) cycles containing \(v_{1}\) that visit all vertices \(v\neq v_{1}\) once. These \(k\) cycles correspond to \(k\) closed walks \(t_{1},\ldots,t_{k}\) in \(G\) starting at \(v_{1}\) that together satisfy the visit requirements for all vertices. We also have that \(\sum_{i=1}^{k}c(t_{i})\leq\rho c^{T}x\) since the closed walks \(t_{1},\ldots,t_{k}\) have the same cost as the cycles in \(G^{r}\) and the cycles in \(G^{r}\) have cost at most \(\rho c^{T}z=\rho c^{T}x\). The run-time follows since the maximum number of vertices in \(G^{r}\) is \(n\max_{v\neq v_{1}}r(v)\).
The following algorithm is nearly identical to Algorithm (2) when we set \(D=\{v_{1}\}\).
```
**Input:**\(G=(V=\{v_{1},\ldots,v_{n}\}),c:V\times V\rightarrow\mathbb{R}_{\geq 0},k\in \mathbb{N},r:V-v_{1}\rightarrow\mathbb{Z}\)
**Output:**\(k\) tours that contain \(v_{1}\) such that vertices \(v\neq v_{1}\) are visited \(r(v)\) times
1 Solve LP (14) to get solution \(x^{*}\).
2 For all edges \(e\) let \(\tilde{x}_{e}=x_{e}-2k_{e}\) such that \(k_{e}=0\) if \(x_{e}\leq 4\) and otherwise \(k_{e}\) is set so that \(2\leq\tilde{x}_{e}<4\) and \(k_{e}\in\mathbb{Z}\). Define a function \(\tilde{r}:V-v_{1}\rightarrow\mathbb{Z}\) where \(\tilde{r}(v)=r(v)-\sum_{e\in\delta(v)}k_{e}\) and \(\tilde{k}=\frac{1}{2}\tilde{x}(\delta(v_{1}))\).
3 Use Claim 7.5 with solution \(\tilde{x}\) on instance \(G,\tilde{r},\tilde{k}\).
4 Increase the number of times each edge is used in the previous step by \(2k_{e}\) and return the resulting solution.
```
**Algorithm 6** Multi-Visit TSP Single Depot Arbitrary
As in the previous section we show the following claims to show that this algorithm achieves a \(\rho\)-approximation.
The proof of the following claim is nearly identical to that of Claim 4.3.
**Claim 7.6**.: _For all \(v\in V-v_{1}\) we have \(1\leq\tilde{r}(v)\leq 2n\) and \(\tilde{k}\geq 1\)._
Proof.: The proof of Claim 4.3 shows \(1\leq\tilde{r}(v)\leq 2n\) for all \(v\in V-v_{1}\). Now we show \(\tilde{k}\geq 1\). If \(x_{e}\leq 4\) for all \(e\in\delta(v_{1})\) then \(\tilde{k}=k\geq 1\); otherwise, if there exists \(e\in\delta(v_{1})\) such that \(x_{e}>4\), then \(\tilde{k}=\frac{1}{2}\tilde{x}(\delta(v_{1}))\geq\frac{\tilde{x}_{e}}{2}\geq 1\).
**Claim 7.7**.: _The solution \(\tilde{x}\) is a feasible solution for LP (14) with graph \(G\) and \(\tilde{r},\tilde{k}\)._
Proof.: By definition we have \(\tilde{x}(\delta(v_{1}))=2\tilde{k}\) and the rest of the claim follows from the proof of Claim 4.4.
Then we get the following theorem whose proof is identical to the proof of Theorem 2.
**Theorem 5**.: _Let \(x^{*}\) be an optimal solution to LP (14) and \(\rho\) be the approximation factor of algorithm \(\mathcal{A}\) for SD-mTSP whose guarantee is relative to the value of LP (13). Then Algorithm (6) outputs a solution to SD-MV-mTSP, \(T:E\rightarrow\mathbb{Z}\) satisfying \(\sum_{e\in E}c_{e}T(e)\leq\rho c^{T}x^{*}\) and runs in time polynomial in \(n\)._
This gives the following corollary which we get by using the analysis of the Frieze algorithm we showed at the beginning of the section.
**Corollary 5.1**.: _There is an approximation algorithm for the SD-MV-mTSP problem with an approximation factor of \(\frac{3}{2}\)._
### Approximation Algorithm for Vertex Disjoint Tours
Here we show an algorithm for the vertex disjoint variant that achieves a \(7/2\)-approximation. We note that this result does not follow the general framework; it follows from a simple use of the single visit algorithm. The idea for this algorithm is from [2]: we first find an mTSP solution that visits all vertices once, and then add loops to the different tours to satisfy the visit requirements; adding loops allows us to maintain the vertex disjoint property that all solutions to single visit variants have.
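Since the corresponding algorithm box (Algorithm 7) is not reproduced here, the following Python sketch (ours) shows the loop-adding step: given the cycles returned by a single-visit SD-mTSP algorithm, the remaining \(r(v)-1\) visits at each non-depot vertex are served by loops, which keeps the tours vertex disjoint. The input format is illustrative, not the paper's notation.

```python
def add_loops(cycles, r, c, depot):
    """cycles: list of vertex lists, each implicitly closed through `depot`;
    r: visit requests for non-depot vertices; c: dict of edge/loop costs c[(u, v)], u <= v.
    Returns (cycles_with_loops, total_loop_cost)."""
    loop_cost = 0.0
    augmented = []
    for cycle in cycles:
        walk = []
        for v in cycle:
            walk.append(v)
            if v != depot:
                extra = r[v] - 1
                walk.extend([v] * extra)        # each repetition stands for one loop at v
                loop_cost += extra * c[(v, v)]
        augmented.append(walk)
    return augmented, loop_cost
```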
We need the following claim.
**Claim 7.8**.: _Let \(\mathrm{OPT}\) be the value of the optimum solution. Then we have \(\sum_{v\in V-v_{1}}r(v)c_{vv}\leq 2\mathrm{OPT}\)._
Proof.: Let \(T:E\to\mathbb{Z}\) be an optimal solution. Then we have
\[\mathrm{OPT} =\frac{1}{2}\sum_{e\in\delta(v_{1})}c_{e}T(e)+\frac{1}{2}\sum_{v \in V-v_{1}}\sum_{e\in\delta(v)}T(e)c_{e}\] \[\geq\frac{1}{2}\sum_{e\in\delta(v_{1})}c_{e}T(e)+\frac{1}{2}\sum_{ v\in V-v_{1}}2r(v)\min_{e\in\delta(v)}c_{e}\] \[\geq\frac{1}{2}\sum_{e\in\delta(v_{1})}c_{e}T(e)+\frac{1}{2}\sum_{ v\in V-v_{1}}r(v)c_{vv}\] \[\geq\frac{1}{2}\sum_{v\in V-v_{1}}r(v)c_{vv}.\]
The first inequality follows since for all \(v\in V-v_{1}\) we have \(\sum_{e\in\delta(v)}T(e)=2r(v)\) and the second inequality follows by the triangle inequality since for any edge \(e\in\delta(v)\) we have \(c_{vv}\leq 2c_{e}\).
Then showing the following claim will imply that we get a \(7/2\)-approximation.
**Claim 7.9**.: _Let \(c_{1},\ldots,c_{k}\) be the \(k\) cycles returned in the first step of the algorithm. Then we have that \(\sum_{i=1}^{k}c(c_{i})\leq\frac{3}{2}\mathrm{OPT}\)._
Proof.: Let \(p_{1},\ldots,p_{k}\) be an optimal solution with value \(\mathrm{OPT}\). We have that any two cycles \(p_{i},p_{j}\) only intersect at the depot vertex \(v_{1}\) since we are in the vertex disjoint tours setting. For each \(p_{i}\) we can shortcut to get a cycle \(r_{i}\) so that all vertices in \(p_{i}\) are visited once, and by the triangle inequality we have that \(c(r_{i})\leq c(p_{i})\), implying \(\sum_{i=1}^{k}c(r_{i})\leq\mathrm{OPT}\). Thus we get that \(\sum_{i=1}^{k}c(c_{i})\leq\frac{3}{2}\sum_{i=1}^{k}c(r_{i})\leq\frac{3}{2}\mathrm{OPT}\), where the first inequality follows since Algorithm (5) is a \(3/2\)-approximation.
Thus the above two claims imply the following theorem.
**Theorem 6**.: _There is a polynomial time algorithm for the single depot multi-visit \(\mathrm{mTSP}\) (SD-MV-mTSP) problem with vertex disjoint tours with an approximation factor of \(\frac{7}{2}\)._
Proof.: Let \(c_{1},\ldots,c_{k}\) be the cycles acquired in the first step of the algorithm. The cycles \(c_{1},\ldots,c_{k}\) only intersect at the depot vertex \(v_{1}\) since they are a feasible solution to \(\mathrm{SD-mTSP}\). Adding the loops to the cycles keeps this property so Algorithm 7 outputs a feasible solution. Finally, the cost of the solution is \(\sum_{i=1}^{k}c(c_{i})+\sum_{v\in V-v_{1}}(r(v)-1)c_{vv}\leq\frac{7}{2}\mathrm{OPT}\) where the last inequality follows by Claim 7.9 and Claim 7.8. The run-time follows immediately since both steps of the algorithm are polynomial time.
## 8 Further Directions
In this paper we gave a reduction from various multi-visit TSP problems to their respective single visit versions. Our reduction relies on the connection between the LP relaxations of multi-visit variants and their respective single visit variants. There are two open questions that follow naturally.
**Get a \(3/2\)-approximation for \(\mathrm{MV}\)-\(\mathrm{mTSP}_{0}\) with arbitrary tours.** For the \(\mathrm{MV}\)-\(\mathrm{mTSP}_{0}\) with arbitrary tours problem, we are given \(k\) depots and the visit function \(r\) and the goal is to find at most \(k\) closed walks so that all non-depot vertices \(v\) are visited \(r(v)\) times and each closed walk contains exactly one depot. Very recently Deppert, Kaul, and Mnich [9] showed the following LP for \(\mathrm{mTSP}_{0}\) has an integrality gap of \(2\) and gave a \(3/2\)-approximation for \(\mathrm{mTSP}_{0}\)
\[\mathrm{minimize} \sum_{e\in E}c_{e}x_{e}\] (15) s.t. \[x(\delta(v))=2 \forall v\in V-D\] \[x(\delta(S\cup D))\geq 2 \forall S\subset V-D\] \[x(E(D,D))=0\] \[0\leq x_{e}\leq 2 \forall e\in E.\]
This means we cannot apply our reduction to \(\mathrm{MV}\)-\(\mathrm{mTSP}_{0}\) with arbitrary tours by using LP (15) for \(\mathrm{mTSP}_{0}\). One direction is to get a reduction from \(\mathrm{MV}\)-\(\mathrm{mTSP}_{0}\) with arbitrary tours to \(\mathrm{mTSP}_{0}\) that does not use LPs.
**Apply the reduction to the unrestricted \(\mathrm{MV}\)-\(\mathrm{mTSP}_{+}\) with arbitrary tours.** In Section 6 we gave a \(2\)-approximation for the \(\mathrm{mTSP}_{+}\) problem and the approximation factor was with respect to the value of LP (11). We recall that LP (11) was not an LP relaxation in which the characteristic vectors of integral solutions to the problem are feasible; instead it described the convex hull of certain trees that all integral solutions contain. We are not able to apply the reduction described in Section 4 as the LP does not follow the structure of the LP described in the general framework. In particular it is difficult to find a feasible solution \(\tilde{x}\) for the reduced visit function \(\tilde{r}\). Either finding a different LP relaxation or finding a different way to apply the reduction to LP (11) would improve the approximation factor of unrestricted \(\mathrm{MV}\)-\(\mathrm{mTSP}_{+}\) from \(4\) to \(2\).
|
2305.11966
|
Constraints on the ultra-fast outflows in the narrow-line Seyfert 1
galaxy Mrk 1044 from high-resolution time- and flux-resolved spectroscopy
|
Ultra-fast outflows (UFOs) have been revealed in a large number of active
galactic nuclei (AGN) and are regarded as promising candidates for AGN feedback
on the host galaxy. The nature and launching mechanism of UFOs are not yet
fully understood. Here we perform a time- and flux-resolved X-ray spectroscopy
on four XMM-Newton observations of a highly accreting narrow-line Seyfert 1
(NLS1) galaxy, Mrk 1044, to study the dependence of the outflow properties on
the source luminosity. We find that the UFO in Mrk 1044 responds to the source
variability quickly and its velocity increases with the X-ray flux, suggesting
a high-density ($10^{9}-4.5\times10^{12}\,\mathrm{cm}^{-3}$) and radiatively
driven outflow, launched from the region within a distance of $98-6600\,
R_\mathrm{g}$ from the black hole. The kinetic energy of the UFO is
conservatively estimated ($L_\mathrm{UFO}\sim4.4\%L_\mathrm{Edd}$), reaching
the theoretical criterion to affect the evolution of the host galaxy. We also
find emission lines, from a large-scale region, have a blueshift of $2700-4500$
km/s in the spectra of Mrk 1044, which is rarely observed in AGN. By comparing
with other sources, we propose a correlation between the blueshift of emission
lines and the source accretion rate, which can be verified by a future sample
study.
|
Yerong Xu, Ciro Pinto, Daniele Rogantini, Stefano Bianchi, Matteo Guainazzi, Erin Kara, Chichuan Jin, Giancarlo CUsumano
|
2023-05-19T19:26:48Z
|
http://arxiv.org/abs/2305.11966v1
|
Constraints on the ultra-fast outflows in the narrow-line Seyfert 1 galaxy Mrk 1044 from high-resolution time- and flux-resolved spectroscopy
###### Abstract
Ultra-fast outflows (UFOs) have been revealed in a large number of active galactic nuclei (AGN) and are regarded as promising candidates for AGN feedback on the host galaxy. The nature and launching mechanism of UFOs are not yet fully understood. Here we perform a time- and flux-resolved X-ray spectroscopy on four _XMM-Newton_ observations of a highly accreting narrow-line Seyfert 1 (NLS1) galaxy, Mrk 1044, to study the dependence of the outflow properties on the source luminosity. We find that the UFO in Mrk 1044 responds to the source variability quickly and its velocity increases with the X-ray flux, suggesting a high-density (\(10^{9}\)-\(4.5\times 10^{12}\) cm\({}^{-3}\)) and radiatively driven outflow, launched from the region within a distance of 98-6600 \(R_{\rm g}\) from the black hole. The kinetic energy of the UFO is conservatively estimated (\(L_{\rm UFO}\sim 4.4\%L_{\rm Edd}\)), reaching the theoretical criterion to affect the evolution of the host galaxy. We also find emission lines, from a large-scale region, have a blueshift of 2700-4500 km/s in the spectra of Mrk 1044, which is rarely observed in AGN. By comparing with other sources, we propose a correlation between the blueshift of emission lines and the source accretion rate, which can be verified by a future sample study.
keywords: accretion, accretion discs - black hole physics - galaxies: Seyfert - X-rays: individual: Mrk 1044
## 1 Introduction
It is well accepted that active galactic nuclei (AGN) are powered by the accretion of matter onto supermassive black holes (SMBHs) in the hearts of galaxies. The energetic output of AGN can impact the evolution of their host galaxies, an effect that is referred to as AGN feedback (e.g. Fabian, 2012, and references therein). The enormous amount of energy and momentum, released in the form of matter and radiation, can expel or heat the surrounding interstellar medium (ISM). This may delay the gas cooling and further lead to star formation (SF) quenching (Zubovas and King, 2012). In the early phases of feedback, AGN outflows can also trigger star formation within the compressed gas (e.g. Maiolino et al., 2017). Ultra-fast outflows (UFOs) with a wide solid angle are now considered one of the main mechanisms of AGN feedback owing to their mildly relativistic speeds (\(\geq 10000\) km/s or \(0.03c\)) and powerful kinetic energy (\(\geq 0.05L_{\rm Edd}\)). Such a huge kinetic output matches the theoretical predictions of effective AGN feedback models (e.g. Di Matteo et al., 2005; Hopkins and Elvis, 2010), offering an interpretation of the observed AGN-host galaxy relations (e.g. \(M_{\rm BH}-\sigma\), Kormendy and Ho, 2013, and references therein).
UFOs are commonly detected through blueshifted Fe xxv/xxvi absorption lines identified above 7 keV in the X-ray band (e.g. Chartas et al., 2002; Cappi, 2006; Tombesi et al., 2010, 2013; Gofford et al., 2013; Matzeu et al., 2022). The measured velocities of UFOs can reach up to \(\sim 0.3c\) (e.g. APM 08279+5255 and PDS 456, Chartas et al., 2002; Reeves et al., 2003), implying that they likely originate from the inner region of the accretion disk within several hundred gravitational radii from the black hole. Thanks to the high spectral resolution of the Reflection Grating Spectrometer (RGS, Den Herder et al., 2001) onboard _XMM-Newton_(Jansen et al., 2001) and the High Energy Transmission Gratings (HETG, Canizares et al., 2005) onboard _Chandra_(Weisskopf et al., 2002), UFOs are also
detectable in soft X-ray bands and distinguishable from slow, ionized outflows, the so-called warm absorbers.
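As an illustration of how such velocities are inferred (our sketch, not part of the analysis presented below), the outflow velocity follows from the blueshift of an absorption line once the observed energy is corrected to the source rest frame and compared with the laboratory energy through the special-relativistic Doppler formula for purely radial motion; the line energies in the example are illustrative only.

```python
def outflow_velocity(E_obs_keV, E_lab_keV, z_cosmo):
    """Returns v/c (> 0 for a blueshifted, outflowing absorber)."""
    # Correct the observed energy to the source rest frame, then apply the
    # radial relativistic Doppler relation E_rest/E_lab = sqrt((1+b)/(1-b)).
    ratio = E_obs_keV * (1.0 + z_cosmo) / E_lab_keV
    return (ratio**2 - 1.0) / (ratio**2 + 1.0)


# Example: an Fe XXVI Ly-alpha line (6.97 keV in the lab) observed at 7.8 keV in a source
# at z = 0.016 would imply v ~ 0.13c; the numbers are purely illustrative.
print(f"v/c = {outflow_velocity(7.8, 6.97, 0.016):.3f}")
```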
Over the past two decades of investigation, UFOs have shown variable signatures, i.e., variable velocities and transient features, based on multi-epoch deep observations (e.g. Dauser et al., 2012; Matzeu et al., 2017; Igo et al., 2020). However, the exact nature of UFO variability and their launching mechanisms are not well understood. They could be driven either by the radiation pressure (e.g. Proga et al., 2000; Sim et al., 2010; Hagino et al., 2016) or by magneto-rotational forces (MHD models, e.g. Kato et al., 2004; Fukumura et al., 2010, 2015) or a combination of both. Variability might be key to determining UFO launching mechanisms. It has been found that the Fe K absorption features in the spectra of IRAS 13224-3809 and 1H 0707-495 weaken with increasing X-ray luminosity, implying an over-ionization of the gas (Parker et al., 2017; Pinto et al., 2018; Xu et al., 2012). The velocity of the UFO in PDS 456 and IRAS 13224-3809 increases with the source luminosity (Matzeu et al., 2017; Pinto et al., 2018). The above discoveries support that UFOs in high-accretion systems are mainly accelerated by the strong radiation field. Interestingly, Xu et al. (2021) found, instead, an anti-correlation between the UFO velocity and X-ray luminosity in 1H 0707-495, challenging our understanding of the UFO driving mechanism. It was explained by the supercritical flow expanding at high-accretion states, resulting in larger launching radii (i.e. at lower velocities) within the disk. Therefore, it is worth investigating the dependence of UFOs on the source luminosity and accretion rate in other sources to better understand the nature of UFOs.
Mrk 1044 is a nearby (\(z=0.016\)) and luminous (\(L_{\rm 1\mu m-2keV}=1.4\times 10^{44}\) erg/s, Grupe et al., 2010) narrow-line Seyfert 1 AGN, hosting a central SMBH with a reverberation-mapped mass of \(M_{\rm BH}=2.8\times 10^{6}\,M_{\odot}\)(Du et al., 2015), or a mass of \(M_{\rm BH}=2.1\times 10^{6}\,M_{\odot}\) determined through the FWHM(H\(\beta\)) and \(L_{\rm 5100\AA}\)(Grupe et al., 2010). Mrk 1044 shows a soft X-ray excess in its spectrum (Dewangan et al., 2007), which was interpreted as relativistic reflection from a high-density accretion disk in Mallick et al. (2018), although in general a warm corona model also provides a statistically acceptable description of the soft excess below 2 keV (e.g. Petrucci et al., 2018; Garcia et al., 2019; Petrucci et al., 2020; Xu et al., 2021). In the _XMM-Newton_/RGS spectrum, based on a series of narrow absorption lines, Krongold et al. (2021) found four distinct UFOs, explained by a shocked-outflow scenario. From multi-wavelength observations, Mrk 1044 was also reported to host multi-phase outflows in the optical and UV bands, including two unresolved and one resolved ionized gas outflows traced by [O iii] in the optical band, as well as two Ly-\(\alpha\) absorbing components in the ultraviolet (UV) energy range (Fields et al., 2005; Winkel et al., 2022).
In this paper, we present a high-resolution spectroscopic analysis of four _XMM-Newton_/RGS observations of Mrk 1044 (PI: C. Jin). In section 2, we describe the four _XMM-Newton_ observations and our data reduction process. Details of our analysis and results are presented in section 3, where we expand the work of Krongold et al. (2021), find an additional blueshifted photoionized emission component, and further study the relation between the wind properties and the source luminosity. We discuss the results and provide our conclusions in section 4 and section 5, respectively.
## 2 Data reduction and products
Mrk 1044 has been observed with a large _XMM-Newton_ program (PI: C. Jin) over three orbits in 2018 and one orbit in 2019. The details of the observations analyzed in this work are listed in Tab.1. _XMM-Newton_ consists of the European Photon Imaging Camera (EPIC), including two EPIC-MOS CCDs (Turner et al., 2001) and an EPIC-pn (Struder et al., 2001), the RGS, and the Optical Monitor (OM, Jansen et al., 2001). This work focuses on the RGS, and we use the EPIC and OM data mainly to determine the shape of the broadband spectral energy distribution (SED), for which the MOS data are redundant since pn has a significantly higher effective area in the hard band.
### Data reduction
The data sets are processed with the _XMM-Newton_ Science Analysis System (SAS v20.0.0) and the calibration files available by September 2022, following the standard SAS threads. We reduced the EPIC-pn data using the epproc package and produced calibrated photon event files. Background flares are filtered out above a threshold of 0.5 counts/s in the 10-12 keV band. We extracted the source spectra from a circular region of radius 30 arcsec, and the background spectra from a nearby source-free circular region with the same radius. No significant pile-up effect is found with the task epatplot. The EPIC-pn spectra are grouped to over-sample the instrumental resolution by at least a factor of 3, and each energy bin has a minimum of 25 counts to maximize the S/N. We employed the rgsproc package to process the RGS data, with a threshold of 0.3 counts/s to exclude the background flares. The first-order RGS spectra are extracted from a cross-dispersion region of 1 arcmin width. The background spectra are selected from photons beyond 98% of the source point-spread function. The RGS1 and RGS2 spectra are combined and grouped to over-sample the resolution by at least a factor of 3. During the observations, Mrk 1044 was also monitored by the OM in the UVW1 (2910 Å) filter. We reduced the OM data with the omichain tool, including all necessary calibration processes. The response file is retrieved from the ESA webpage1. The UVW1 flux is less variable than the X-ray flux, i.e., almost stable in 2018 and dropping by 13% in 2019.
Footnote 1: [https://www.cosmos.esa.int/web/xmm-newton/om-response-files](https://www.cosmos.esa.int/web/xmm-newton/om-response-files)
### Light curve
By using the task epiclccorr, we present the background-subtracted and deadtime-corrected light curves extracted from the
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Obs. ID & Date & Instrument & Net exp. & Net count rate \\ & & & (ks) & (cts/s) \\ \hline 0824080301 & 2018-08-03 & EPIC-pn & 95 & 32 \\ & & RGS & 134 & 1.07 \\ \hline & & EPIC-pn & 97 & 24 \\ & & RGS & 133 & 0.79 \\ \hline & & EPIC-pn & 93 & 25 \\ & & RGS & 131 & 0.84 \\ \hline & & EPIC-pn & 90 & 20 \\ & & RGS & 126 & 0.63 \\ \hline \end{tabular}
\end{table}
Table 1: General overview of the analyzed observations on Mrk 1044
EPIC-pn (0.3-10 keV) data in Fig.1. It reveals that Mrk 1044 is bright and variable during the observations. The corresponding hardness ratio (HR = H/(H+S), H: 2-10 keV; S: 0.3-2 keV), plotted in the bottom panel, shows a softer-when-brighter behavior. To investigate the variability of the UFO with the luminosity, we divide the three consecutive observations in 2018 into three flux levels, marked by different colors. We exclude the 2019 observation to ensure the causality between the variations of the UFO and the luminosity, i.e., we are studying the response of the same absorber to the source. The thresholds are set such that the number of counts in each level is comparable. The good time interval (GTI) files for each level are generated with the tabgtigen task. The flux-resolved EPIC-pn and RGS spectra at the same flux level are extracted and stacked following the steps described in Sec.2.1. The observations in 2018 are also stacked into one single spectrum, named 2018. In this work, we perform flux-/time-resolved spectroscopy on a total of 8 spectra, where the time-resolved spectra are referred to as T1...T4 chronologically (e.g. T1 refers to Obs. 0824080301) and the flux-resolved spectra are referred to as F1, F2, F3 from the lowest to the highest state.
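As an illustration of this flux-slicing step, the count-rate thresholds can be derived from the binned light curve as sketched below. This is a minimal sketch with a synthetic light curve; in practice the EPIC-pn 0.3-10 keV light curve produced by epiclccorr would be read in, and the bin size and mock rates here are assumptions.

```python
import numpy as np

# Sketch of the flux-slicing step: choose two count-rate thresholds such that
# each flux level (F1, F2, F3) contains a comparable number of counts. The
# light curve below is synthetic; real use would read the EPIC-pn light curve.
rng = np.random.default_rng(42)
dt = 100.0                                           # s, light-curve bin size (assumed)
rate = rng.gamma(shape=20.0, scale=1.3, size=2700)   # cts/s, mock 2018 light curve

counts = rate * dt
order = np.argsort(rate)                             # sort bins by count rate
cum = np.cumsum(counts[order]) / counts.sum()
thresholds = [rate[order][np.searchsorted(cum, q)] for q in (1/3, 2/3)]
print("rate thresholds F1|F2 and F2|F3:", np.round(thresholds, 2), "cts/s")

levels = np.digitize(rate, thresholds)               # 0, 1, 2 -> F1, F2, F3
print("counts per level:", [int(counts[levels == i].sum()) for i in range(3)])
```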
## 3 Results
### Continuum Modelling
We start the broadband X-ray spectroscopy with the stacked 2018 EPIC-pn and RGS spectra, due to their high statistics, using the XSPEC (v12.12.1) package (Arnaud, 1996). The instrumental differences are accounted for by adopting a variable cross-calibration factor. In this paper, we use the \(\chi^{2}\) statistics and estimate the uncertainties of all parameters at the default 90% confidence level (i.e. \(\Delta\chi^{2}=2.71\)), but \(1\sigma\) (\(\Delta\chi^{2}=1\)) error bars are shown in the plots. We consider the RGS spectra between 0.4-1.77 keV and the EPIC-pn spectra between 1.77-10 keV in our analysis, not only because of their consistency in the soft X-ray band, but also because the lower resolution and higher count rate of EPIC-pn would otherwise affect the detection of atomic features. The luminosity calculations in this paper are based on the assumptions of \(H_{0}=70\,\rm{km/s/Mpc}\), \(\Omega_{\Lambda}=0.73\) and \(\Omega_{M}=0.27\).
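For reference, the confidence thresholds quoted above follow directly from the chi-square distribution, as the short check below illustrates (a sketch; scipy is assumed to be available).

```python
from scipy.stats import chi2, norm

# Quick check of the Delta chi^2 thresholds used in this work (a sketch).
print(chi2.ppf(0.90, df=1))                          # ~2.71: 90% c.l. for one parameter
print(chi2.ppf(norm.cdf(1) - norm.cdf(-1), df=1))    # ~1.00: 1-sigma for one parameter
print(chi2.ppf(norm.cdf(4) - norm.cdf(-4), df=4))    # ~24.5: 4-sigma for 4 d.o.f. (Sec. 3.3)
```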
The broadband X-ray model for Mrk 1044 was proposed by Mallick et al. (2018) based on the archival _XMM-Newton_ data in 2013. In this work, we adopt a similar model combination, tbabs*zashift*(nthComp+relxilllpCp), to explain those spectral components. The model takes into account the Galactic hydrogen absorption (tbabs) with the solar abundances calculated by Lodders et al. (2009), the redshift of Mrk 1044 (zashift), the soft excess in the form of a warm Comptonization component (nthComp), and the hot coronal continuum in the form of a power law plus a lamppost-geometry relativistic reflection (relxilllpCp, RELXILL v1.4.3, Garcia et al., 2014). The Galactic column density, \(N_{\rm H}^{\rm Gal}\), is allowed to vary due to the discrepancy between \(N_{\rm H}^{\rm Gal}=2.9\times 10^{20}\rm{cm}^{-2}\)(HI4PI Collaboration et al., 2016) and \(N_{\rm H}^{\rm Gal}=3.6\times 10^{20}\rm{cm}^{-2}\)(NHtot tool, Willingale et al., 2013). The solar abundances of Lodders et al. (2009) are chosen instead of those of Wilms et al. (2000) to remain consistent with the photoionization models used in Sec.3.3, although this choice does not affect our conclusions (only a \(\Delta\chi^{2}\sim 10\) difference around the O K-edge, \(\sim 0.53\) keV, region). Instead of the high-density relativistic reflection model adopted in Mallick et al. (2018), here we choose the warm Comptonization model plus a standard relativistic reflection component, for which the disk density is fixed at \(\log(n_{\rm e}/\rm{cm}^{-3})=15\). This is because the fit of the relativistic reflection model is much poorer (\(\Delta\chi^{2}\sim 670\)) than the warm Comptonization scenario when we include the RGS data, probably due to a thick inner disk distorted by strong radiation pressure, breaking the thin-disk assumption of the reflection model. The seed photon temperature of the warm Comptonization is fixed at a disk temperature of 10 eV, which is the value obtained by including the OM data (see Sec.3.3).
The fitted parameters of the stacked 2018 spectrum are listed in the third column of Tab. 2. The data/model ratio in the RGS band is shown in the upper panel of Fig.2, featuring a broad absorption feature
Figure 1: The EPIC-pn (0.3–10 keV) light curve (_upper_) and corresponding hardness ratio (_lower_) of the observations of Mrk 1044, where the observation dates (T1:T4) are marked. The colors represent the different flux intervals (F1-F3) with comparable counts.
above 1 keV. The results reveal a primary continuum with a slope of \(\Gamma=2.26^{+0.01}_{-0.01}\) and a plasma temperature \(>196\) keV, a warm Comptonization component characterized by a temperature of \(0.23^{+0.01}_{-0.01}\) keV and a soft photon index \(\Gamma_{\rm WC}=2.52^{+0.06}_{-0.06}\), and a relativistic reflection component with a reflection fraction of \(0.19^{+0.03}_{-0.02}\). The corresponding optical depth of the warm corona is \(\tau_{\rm e}^{\rm WC}\sim 30\)(Zdziarski, 1985). The spin of the black hole cannot be constrained and is thus fixed at \(a_{\star}=0.998\). The inner radius of the disk is constrained to \(R_{\rm in}<23\,R_{\rm ISCO}\), where \(R_{\rm ISCO}\) is the innermost stable circular orbit (ISCO). The inclination angle, ionization parameter, and iron abundance of the accretion disk are \(i=34^{+1}_{-2}\) deg, \(\log(\xi/{\rm erg\,cm\,s^{-1}})=3.4^{+0.2}_{-0.1}\), and \(A_{\rm Fe}=3.6^{+0.5}_{-0.6}\) (in units of the solar abundance), respectively. The hot corona, if assumed to be in a lamppost geometry, is located at a height of \(h=47^{+26}_{-25}\,R_{\rm Horizon}\) above the accretion disk, where \(R_{\rm Horizon}\) is the vertical event horizon of the Kerr black hole. The marginal differences between our results and those of Mallick et al. (2018), based on the archival 2013 observation (\(i=46.4^{+1.9}_{-5.0}\) deg, \(\log(\xi/{\rm erg\,cm\,s^{-1}})=2.96^{+0.04}_{-0.11}\), \(A_{\rm Fe}=2.2^{+0.5}_{-0.6}\) in their fit), may come from the different explanations for the soft excess and the intrinsic variability of the source. We apply the best-fit model to the time-/flux-resolved spectra as well, with several properties that are not expected to vary on short timescales (i.e. \(N_{\rm H}^{\rm Gal}\), \(i\) and \(A_{\rm Fe}\)) linked to the values of the 2018 fit. The results are listed in Tab.2. There is no significant change in the broadband continuum during the 2018 observations, T1, T2, T3, within their uncertainties, confirming the prerequisite of the flux-resolved spectroscopy. The spectral slopes derived from the flux-resolved spectra verify the softer-when-brighter behavior observed in Fig.1.
### Gaussian Line Scan
To better visualize and identify the atomic features on top of the continuum, we launch a blind Gaussian line scan over the spectra. We add a Gaussian line to the continuum model on a logarithmic grid of energy steps over the 0.4-10 keV band and record the \(\Delta\chi^{2}\) improvements. The energy centroid and the line width are fixed at each step, while the normalization is free. We adopt three line widths \(\sigma_{v}\) of 500, 1500, and 4500 km/s, with corresponding numbers of energy steps of 2000, 700, and 300, respectively, in order to match the RGS resolving power (\(R_{\rm RGS}\sim 150\)-800).
The scan results provide a rough estimate of the single trial detection significance of each Gaussian line, in the form of the square root of \(\Delta\chi^{2}\) times the sign of the normalization (Cash, 1979). The scan results over the 2018 spectrum in the RGS band are shown in the bottom panel of Fig.2. The rest-frame energies of the known strong ionic transition lines in the soft X-ray band are marked by the vertical blue _dashed_ lines. We identify the O vii and O viii emission lines close to their rest-frame positions as well as several emission features in the Ne ix/x and Fe xvii-xx region. No absorption features are found at/close to their rest frames. The strongest absorption feature is located around 1.2 keV with a broad line width, likely from blueshifted Fe and Ne ionic absorption lines.
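The logic of the scan can be illustrated with a self-contained toy example (a sketch, not the actual XSPEC implementation used here): with the centroid and width frozen and only the normalisation free, the chi-square improvement of the added Gaussian has a closed form. The synthetic continuum and the injected 1.2 keV trough below are purely illustrative.

```python
import numpy as np

# Toy version of the blind Gaussian line scan (illustrative sketch only).
rng = np.random.default_rng(1)
E = np.geomspace(0.4, 10.0, 5000)                        # keV
continuum = 10.0 * E**-2.26                              # illustrative continuum
trough = 1.0 - 0.3 * np.exp(-0.5 * ((E - 1.2) / 0.03)**2)  # fake ~1.2 keV feature
sigma_d = np.sqrt(continuum)                             # Gaussian-equivalent errors
data = continuum * trough + rng.normal(0.0, sigma_d)

def scan_point(E0, sigma_v):
    """Signed single-trial significance of a Gaussian frozen at (E0, sigma_v)."""
    width = E0 * sigma_v / 3e5                           # km/s -> keV
    g = np.exp(-0.5 * ((E - E0) / width)**2)
    num = np.sum(g * (data - continuum) / sigma_d**2)
    den = np.sum(g**2 / sigma_d**2)
    dchi2 = num**2 / den                                 # chi^2 improvement
    return np.sign(num) * np.sqrt(dchi2)                 # sign of the normalisation

for sigma_v, nstep in [(500, 2000), (1500, 700), (4500, 300)]:
    grid = np.geomspace(0.4, 10.0, nstep)
    sig = np.array([scan_point(E0, sigma_v) for E0 in grid])
    k = np.argmin(sig)                                   # strongest absorption feature
    print(f"sigma_v={sigma_v:>4} km/s: deepest trough at {grid[k]:.2f} keV, "
          f"significance {sig[k]:.1f}")
```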
The same approach is then applied to the time-/flux-resolved spectra with the same primary settings, of which results are shown in Fig.3 and Fig.4, separately. There are no significant differences among the results of time-resolved spectra, except for the T4 spectrum. Due to its low flux, the T4 result has a weaker detection significance of lines and the strongest absorption feature becomes around 0.7 keV, suggesting a different ionization state of the absorber in T4 observation. In the other three spectra, the emission (O vii, O viii and 0.8-1 keV emission) and absorption (\(\sim 1.2\) keV line) features observed in the 2018 spectrum are all obvious. The absorption feature in T1 spectrum, the brightest observation, seems
Figure 3: Similar to the bottom panel of Fig.2 but the scan is performed on the time-resolved spectra.
Figure 2: The data/model ratio (_upper_) and single trial significance (_lower_) obtained from the Gaussian line scan with different line widths (500, 1500, 4500 km/s) over the rest-frame stacked 2018 spectrum in RGS band. The vertical dashed lines represent the rest-frame positions of the known ion transitions as a reference. The grey region marks the significance of \(3\sigma\).
to be more blueshifted than the others. In addition, the line width of the 1.2 keV trough is not as broad as that of the 2018 spectrum, because stacking a variable feature broadens it. Comparing the flux-resolved results, we notice a decreasing significance of the O vii and O viii lines and an increasing blueshift of the absorption feature with the source luminosity, implying the existence of possible wind-luminosity relations. We therefore fit the absorption feature with a Gaussian model and list the parameters in Tab.3. The energy centroid of the absorption feature increases with the flux and is highlighted in _red_ in Fig.4. The best-fit parameters for the O viii emission line are also listed in Tab.3 and depicted in _purple_ in Fig.4, indicating a slightly increasing blueshift.
### Search for outflows
To study the emission/absorption lines discovered in Sec.3.2, we employ the physical photoionization model, pion, in the SPEX package (Kaastra et al. 1996). This code self-consistently calculates the photoionization equilibrium and synthetic spectra of the gas irradiated by a given radiation field.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Description & Parameter & 2018 & T1 & T2 & T3 & T4 & F1 & F2 & F3 \\ \hline tbabs & \(N_{\rm H}^{\rm Gal}\) (\(10^{20}\) cm\({}^{-2}\)) & & & & \(4.09^{+0.03}_{-0.05}\) & & & & \\ \hline zashift & \multicolumn{6}{c}{0.016\({}^{*}\)} \\ \hline nthComp & \(\Gamma_{\rm WC}\) & \(2.52^{+0.06}_{-0.06}\) & \(2.49^{+0.04}_{-0.05}\) & \(2.51^{+0.05}_{-0.04}\) & \(2.51^{+0.07}_{-0.06}\) & \(2.42^{+0.13}_{-0.26}\) & \(2.57^{+0.05}_{-0.05}\) & \(2.50^{+0.05}_{-0.06}\) & \(2.56^{+0.05}_{-0.02}\) \\ & \(kT_{\rm e}\) (keV) & \(0.23^{+0.01}_{-0.01}\) & \(0.23^{+0.01}_{-0.01}\) & \(0.22^{+0.01}_{-0.01}\) & \(0.22^{+0.01}_{-0.01}\) & \(0.20^{+0.02}_{-0.02}\) & \(0.24^{+0.01}_{-0.01}\) & \(0.22^{+0.01}_{-0.01}\) & \(0.25^{+0.01}_{-0.01}\) \\ & \(N_{\rm WC}\) (\(10^{-3}\)) & \(6.0^{+0.4}_{-0.5}\) & \(7.5^{+0.7}_{-0.4}\) & \(5.5^{+0.3}_{-0.4}\) & \(5.3^{+0.4}_{-0.5}\) & \(3.7^{+0.4}_{-0.5}\) & \(4.6^{+0.4}_{-0.4}\) & \(5.9^{+0.3}_{-0.5}\) & \(9.3^{+0.5}_{-0.6}\) \\ \hline relxill1pCp & \(h\) (\(R_{\rm H_{\rm location}}\)) & \(47^{+26}_{-25}\) & \(>46\) & \(>40\) & \(33^{+21}_{-28}\) & \(-17^{+18}_{-12}\) & \(>37\) & \(26^{+28}_{-20}\) & \(>24\) \\ & \(a_{\star}\) (\(cJ/GM^{2}\)) & & & & \(0.998\)* & & & & \\ & \(i\) (deg) & & & & \(34^{+1}_{-2}\) & & & & \\ & \(R_{\rm in}\) (\(R_{\rm ISCO}\)) & \(<23\) & \(<82\) & \(<97\) & \(<30\) & \(15^{+11}_{-11}\) & \(<49\) & \(<29\) & \(<42\) \\ & \(\Gamma\) & \(2.26^{+0.01}_{-0.01}\) & \(2.29^{+0.01}_{-0.03}\) & \(2.23^{+0.01}_{-0.02}\) & \(2.23^{+0.02}_{-0.02}\) & \(2.22^{+0.03}_{-0.03}\) & \(2.18^{+0.01}_{-0.02}\) & \(2.27^{+0.02}_{-0.02}\) & \(2.31^{+0.02}_{-0.02}\) \\ & \(\log(\xi/\rm{erg\,cm\,s^{-1}})\) & \(3.4^{+0.2}_{-0.1}\) & \(3.6^{+0.2}_{-0.3}\) & \(3.4^{+0.1}_{-0.2}\) & \(3.3^{+0.1}_{-0.1}\) & \(3.2^{+0.2}_{-0.2}\) & \(3.3^{+0.1}_{-0.1}\) & \(3.3^{+0.2}_{-0.1}\) & \(3.2^{+0.2}_{-0.2}\) \\ & \(A_{\rm Fe}\) & & & & \(3.6^{+0.5}_{-0.6}\) & & & & \\ & \(kT_{\rm e}\) (keV) & \(>196\) & \(>51\) & \(>30\) & \(>23\) & \(>18\) & \(>38\) & \(>25\) & \(>21\) \\ & \(R_{\rm FeII}\) & \(0.19^{+0.03}_{-0.02}\) & \(0.23^{+0.05}_{-0.05}\) & \(0.20^{+0.04}_{-0.08}\) & \(0.29^{+0.06}_{-0.06}\) & \(0.35^{+0.14}_{-0.09}\) & \(0.25^{+0.03}_{-0.05}\) & \(0.28^{+0.05}_{-0.05}\) & \(0.32^{+0.10}_{-0.09}\) \\ & \(N_{\rm ref}\) (\(10^{-5}\)) & \(9.4^{+0.2}_{-0.2}\) & \(9.9^{+1.0}_{-1.1}\) & \(7.8^{+0.3}_{-0.3}\) & \(8.8^{+1.0}_{-0.8}\) & \(8^{+1.0}_{-1}\) & \(6.9^{+0.5}_{-0.3}\) & \(10^{+8}_{-9}\) & \(13^{+17}_{-1}\) \\ \hline broadband & \(\chi^{2}\)/d.o.f. 
& 1319/733 & 987/731 & 922/731 & 956/731 & 750/730 & 950/734 & 933/734 & 939/733 \\ \hline zabs\_xs & \(N_{\rm H}\) (\(10^{21}\) cm\({}^{-2}\)) & \(2.3^{+0.5}_{-0.4}\) & \(2.2^{+3.3}_{-0.4}\) & \(2.0^{+1.2}_{-0.4}\) & \(1.9^{+1.1}_{-0.4}\) & \(0.04^{+0.02}_{-0.02}\) & \(1.8^{+1.0}_{-0.3}\) & \(2.1^{+1.7}_{-0.3}\) & \(5.4^{+4.0}_{-3.2}\) \\ & \(\log(\xi/\rm{erg\,cm\,s^{-1}})\) & \(3.72^{+0.8}_{-0.0}\) & \(3.73^{+0.23}_{-0.09}\) & \(3.74^{+0.18}_{-0.14}\) & \(3.74^{+0.30}_{-0.09}\) & \(2.01^{+0.31}_{-0.25}\) & \(3.74^{+1.7}_{-0.11}\) & \(3.75^{+0.2}_{-0.2}\) & \(4.0^{+0.2}_{-0.2}\) \\ & \(\sigma_{\rm v}\) (km/s) & \(11800^{+0.0000}_{-0.000}\) & \(86000^{+0.0000}_{-0.000}\) & \(95000^{+0.0000}_{-0.000}\) & \(1000^{+0.0000}_{-0.000}\) & \(9000^{+0.0000}\) & \(9000^{+0.0000}\) & \(90000^{+0.000}\) \\ & \(z_{\rm LOS}\) & \(-0.153^{+0.00}_{-0.016}\) & \(-0.181^{+0.00}_{-0.007}\) & \(-0.145^{+0.015}_{-0.021}\) & \(-0.143^{+0.01}_{-0.01}\) & \(-0.082^{+0.002}_{-0.002}\) & \(-0.146^{+0.013}_{-0.01
The intrinsic spectral energy distribution (SED) of Mrk 1044 input to pion is derived from the UV to hard X-ray energies. Due to the stability of the OM flux, we stack the OM spectra and model them with an additional diskbb component, characterized by a temperature of \(10^{+21}_{-6}\) eV. Such a temperature is relatively low for the accretion disk around an SMBH with a mass of \(\sim 3\times 10^{6}\,M_{\odot}\)(Shakura & Sunyaev, 1973), and might be explained by a truncated disk (suggested by the inner radius \(R_{\rm in}\) in relxilllpCp, see Tab.2). The interstellar extinction (\(E_{\rm B-V}=0.031\), Marinello et al., 2016) is also considered. The SED of Mrk 1044 in 2018 is shown in Fig.5 and compared with other Seyfert galaxies; it shares a similarly soft SED with 1H 1934-063. The observed data are shown on top of the SED, where the deviations from the best-fit SED come from the removal of the Galactic absorption, redshift, and dust-reddening components. By measuring the bolometric luminosity (\(10^{-3}\)-\(10^{3}\) keV) predicted by the model, \(L_{\rm Bol}\sim 1.4\times 10^{44}\) erg/s, we estimate the Eddington ratio of Mrk 1044 at \(\lambda_{\rm Edd}=L_{\rm Bol}/L_{\rm Edd}\sim 0.4\), adopting an SMBH mass of \(2.8\times 10^{6}\,M_{\odot}\)(Du et al., 2015), where \(L_{\rm Edd}=4\pi G\,M_{\rm BH}m_{\rm p}c/\sigma_{\rm T}\) is the Eddington luminosity. Although our estimated Eddington ratio differs slightly from the literature value (\(\lambda_{\rm Edd}=0.59\), Grupe et al., 2010), due to the different masses adopted (\(M_{\rm BH}=2.1\times 10^{6}\,M_{\odot}\), Grupe et al., 2010), it still implies a high-accretion system, and the value is comparable to that of 1H 1934-063 (\(\lambda_{\rm Edd}=0.40^{+0.91}_{-0.27}\), Xu et al., 2022) calculated with the same approach.
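The Eddington-ratio estimate quoted above can be reproduced with a few lines (a sketch using cgs constants; the black-hole mass and bolometric luminosity are the values given in the text).

```python
import numpy as np

# Cross-check of the Eddington ratio quoted in the text (a sketch).
G       = 6.674e-8      # cm^3 g^-1 s^-2
c       = 2.998e10      # cm s^-1
m_p     = 1.673e-24     # g
sigma_T = 6.652e-25     # cm^2, Thomson cross-section
M_sun   = 1.989e33      # g

M_BH  = 2.8e6 * M_sun                       # Du et al. (2015)
L_bol = 1.4e44                              # erg/s, from the 2018 SED
L_Edd = 4.0 * np.pi * G * M_BH * m_p * c / sigma_T
print(f"L_Edd ~ {L_Edd:.2e} erg/s, lambda_Edd ~ {L_bol / L_Edd:.2f}")
# expected output: lambda_Edd ~ 0.40
```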
To take advantage of both the advanced reflection model (RELXILL), implemented in XSPEC, and the pion model in SPEX, we adopt the code used in Parker et al. (2019) to construct the tabulated model, which is an XSPEC version of pion, called pion_xs. In this paper, we only make use of the emission component of pion (i.e. solid angle \(\Omega=1\), and covering fraction \(C_{\rm F}=0\)), while the absorption component is explained by xabs\(\_\)xs, transferred from xabs in SPEX (\(C_{\rm F}=1\)). The pion and xabs models are characterized by four main parameters, including the column density \(N_{\rm H}\), the ionization parameter \(\log\xi\), the turbulence velocity \(\sigma_{\rm v}\) and the line-of-sight (LOS) redshift of gas \(z_{\rm LOS}\).
#### 3.3.1 Absorption
To locate the globally best-fit solution of the absorbing gas, we launch a systematic scan over a multi-dimensional grid of the parameters (\(\log\xi,z_{\rm LOS},\sigma_{\rm v}\)) of xabs\_xs, following Xu et al. (2021, 2022). The range of \(\log\xi\) is 0-5 with a step of \(\Delta\log\xi=0.1\). The grid of the turbulent velocity \(\sigma_{\rm v}\) is the same as that of the Gaussian line scan (\(\sigma_{\rm v}=500,1500,4500\) km/s). The LOS redshift, \(z_{\rm LOS}\), ranges from -0.35 to 0, with an increment depending on the choice of \(\sigma_{\rm v}\) (\(\Delta z_{\rm LOS}=500,700,1500\) km/s for \(\sigma_{\rm v}=500,1500,4500\) km/s, respectively). The scan is performed on top of the best-fit model obtained in Sec.3.1. The column density, \(N_{\rm H}\), and the continuum parameters are left free. The \(\Delta\chi^{2}\)-statistics improvement is recorded at each grid point to reveal the detection significance of the absorbing gas. One advantage of the scan is that it shows the location of all possible solutions in the parameter space, potentially revealing multiphase outflows.
The scan result of the 2018 spectrum is shown in the left panel of Fig.6, where the best solution is marked with a red cross. Because the solutions with different turbulent velocities are consistent, we only present the result with \(\sigma_{\rm v}=4500\) km/s in this paper, which has the largest detection significance. The velocity on the X-axis is the relativistically corrected velocity according to the equation: \(v/c=\sqrt{(1+z_{\rm LOS})/(1-z_{\rm LOS})}-1\). It reveals a strong detection (\(\Delta\chi^{2}=103\)) of a highly ionized (\(\log\xi=3.72\)) and ultra-fast (\(z_{\rm LOS}=-0.15\)) absorber. If we allow the line width to vary, the solution of the direct fit (\(\sigma_{\rm v}\sim 12000\) km/s, \(N_{\rm H}=2.3\times 10^{21}\) cm\({}^{-2}\)) is listed in Tab.2, consistent with our scan result. The contribution of this absorber to the model is visible in the top panel of Fig.7, mainly around 1.2 keV from blueshifted Fe xxii-xxiv and Ne x; no absorption features are detected in the EPIC band, probably due to its relatively low column density and the soft SED.
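The relativistic correction used for the velocity axis can be evaluated as follows (a minimal sketch; the \(z_{\rm LOS}\) values are representative best-fit numbers from Tab.2).

```python
import numpy as np

# Conversion between the fitted line-of-sight redshift and the outflow
# velocity shown on the x-axis of the scan plots.
def z_to_v(z_los):
    return np.sqrt((1.0 + z_los) / (1.0 - z_los)) - 1.0

for z in (-0.153, -0.181, -0.082):
    print(f"z_LOS = {z:+.3f}  ->  v/c = {z_to_v(z):+.3f}")
# z_LOS = -0.153 gives v ~ -0.14c, i.e. the ~0.15c UFO discussed in the text
```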
The same scan is also performed for the time-/flux-resolved spectra, shown in Fig.8, and their best-fit solutions are summarized
Figure 4: Similar to the bottom panel of Fig.2 but the scan is performed on the flux-resolved spectra. The vertical dashed red/purple lines and the red/purple shadowed areas indicate the position of the centroid of the absorption/emission feature and the corresponding uncertainty. (see Tab.3).
Figure 5: The averaged spectral energy distribution of Mrk 1044 in 2018 compared with other Seyfert galaxies (NGC 5548, Mehdipour et al., 2015; IRAS 13224-3809, Jiang et al., 2018; 1H 0707-495, Xu et al., 2021; 1H 1934-063, Xu et al., 2022). The EPIC-pn, RGS, and OM data are shown as well, where the deviations from the best-fit SED come from the removal of the Galactic absorption, redshift, and dust-reddening components.
in Tab.2. The UFO detection significance in each spectrum is at least \(4\sigma\), i.e. \(\Delta\chi^{2}=24.5\) for 4 degrees of freedom (d.o.f.). We also calculate the X-ray flux between 0.4 and 10 keV with the cflux model, presented in Tab.2. The best-fit velocity of T3 is around \(-0.2c\) with a narrow line width of \(<178\) km/s, which is quite different from the absorber in the other two consecutive observations. The scan plot reveals another degenerate region below \(-0.15c\) with a broad line width of \(\sim 8500\) km/s and comparable statistics (\(\Delta\chi^{2}\sim 3\)). Therefore, to ensure we are tracing the same absorber, we adopt this slow and broad solution in our analysis.
Among the time-resolved spectra, apart from T4, the ionization state and column density of the UFO are consistent within their uncertainties. The velocity of the UFO shows an indicative increasing trend with the source flux. As for T4, instead of the 1.2 keV feature, the best-fit solution explains the blueshifted O viii line around 0.7 keV (see the bottom panel of Fig.3). This means that a completely different absorber dominates the T4 spectrum, which was observed one year after the others. Although the T4 scan plot shows a similar high-ionization, fast region with a lower significance than the best fit, that solution is only weakly detected (\(\Delta\chi^{2}/\mathrm{d.o.f.}=13/4\)) after including a primary absorber and an emitter (see Sec.3.3.2). Therefore, we do not consider that secondary absorption component further, as the constraints on its parameters are too loose for a meaningful discussion.
Flux-resolved spectroscopy is likely to smear and broaden variable line features, which may lead to degenerate solutions. To reduce this effect, we fix the line width of xabs_xs at 9000 km/s, the average value of the time-resolved results, although the trend discovered below remains the same with a free line width. Among the flux-resolved spectra, we find that a faster, more ionized, and Compton-thicker plasma tends to appear in brighter states. The corresponding contribution of the UFO to the model (\(\sim 1.2\) keV) is shown in Fig.7.
#### 3.3.2 Emission
The same systematic scan is applied to the pion_xs model over the continuum model to study the photoionization emission component. The only difference is the searched velocity grid, ranging from 0.1 to \(-0.1\), as we do not find strongly shifted emission lines in the bottom panel of Fig.2. The scan result of the 2018 spectrum is shown in the right panel of Fig.6 with a fixed line width of 1500 km/s. It reveals the highly significant detection (\(\Delta\chi^{2}=69\)) of a blueshifted (\(z_{\mathrm{LOS}}=-0.011\)) photoionized emitter with a modest ionization
Figure 6: Photoionization absorption (_left_) /emission (_right_) model search for the stacked 2018 spectrum of Mrk 1044 over the broadband model. The color illustrates the statistical improvement after adding an absorption/emission component. The best-fit solution is marked by a red cross.
Figure 7: The stacked 2018 (_first_) and flux-resolved (from _second_ to _fourth_) RGS spectra (_black_ dots with errors) of Mrk 1044. Each panel contains the fits with the baseline continuum model (_blue_) and the continuum plus a pion_xs and xabs_xs model (_red_). The rest-frame energies of the relevant ion transitions are marked by the vertical dashed lines.
Figure 8: Similar to the left panel of Fig.6 but the scan is performed on the time- (T, _left_) and flux- (F, _right_) resolved spectra.
state (\(\log\xi=2.5\)). The statistical improvement shown in Tab.2 is smaller (\(\Delta\chi^{2}=48\)) than that of the scan because some residuals fitted by pion_xs in the scan over the continuum model are already explained by the model including xabs_xs. The primary emitter is not as highly ionized as the absorption component (perhaps related to the UFO in T4), since, unlike the absorption component, the photoionized emission is expected to originate from gas located over a wide range of distances from the ionizing source, possibly at large distances. The potential secondary solution (\(\log\xi>3.5\)) is discussed in Sec.4.
We do not perform the scan over the time-/flux-resolved spectra as the velocity of the emission component is generally not as variable as that of the absorption (e.g. Kaspi et al., 2001; Reeves et al., 2016; Kosec et al., 2021). The best-fit parameters and the contributions of pion_xs to the model are shown in Tab.2 and Fig.7, respectively. Each solution, except T1, has at least a \(3.5\sigma\) (i.e. \(\Delta\chi^{2}=20\)) detection significance. The unconstrained line widths in T1 and T4 are fixed at 1500 km/s. In general, the line width of the emission component is narrower than that of the absorption, consistent with the expectation of a less variable velocity and a larger distance. The column density, ionization state, and velocity of the emitter are stable within their uncertainties among the time-resolved spectra, while these parameters are tentatively correlated with the source luminosity in the flux-resolved results. The velocities are all blueshifted by at least 2700 km/s. Apart from F1, the pion_xs model mainly explains the O viii and Fe/Ne lines around 1 keV, while it models the O vii and O viii lines in the F1 spectrum (see Fig.7).
## 4 Discussion
By analyzing the RGS data of a large _XMM-Newton_ campaign on Mrk 1044 in 2018 and 2019, we find a highly ionized UFO and a blueshifted photoionized emitter in the spectra. The UFO detection confirms the existence of the UFO1 component reported in Krongold et al. (2021) from the 2013 _XMM-Newton_ observation, sharing a similar ionization state and velocity, although the UFO in their paper was modeled with a fixed narrow profile (\(\sigma_{v}=10\) km/s). The reported multi-phase outflow is also marginally supported by the UFO detected in T4, which has parameters similar to their UFO2 component, although we do not find cold UFOs like their UFO3/4 phases in Mrk 1044. The emitter shows a much lower ionization state and column density than the UFO, implying that the average photoionized emission originates from a different gas with respect to the absorption, while the UFO in T4 is perhaps related to the emitter, given their similar column density, ionization state, and turbulent velocity.
In the scan of the photoionization emission model (see the right panel of Fig.6), we also discover a potential secondary emitter, which attempts to fit the blue wing of the Fe K emission. The best fit (\(\Delta\chi^{2}=37\)) of the secondary emitter requires an ultrafast (\(z_{\rm LOS}=-0.12^{+0.02}_{-0.02}\)) and highly ionized (\(\log\xi=3.7^{+0.1}_{-0.2}\)) plasma with a column density of \(N_{\rm H}=3.6^{+1.8}_{-1.5}\times 10^{21}\) cm\({}^{-2}\) and an unconstrained turbulent velocity fixed at \(\sigma_{v}=9000\) km/s. This emitter shares common properties with the absorber, suggesting the same origin as the absorption. We plot the stacked 2018 EPIC spectra and the best-fit model including two emitters in the top panel of Fig.9, compared with the continuum plus one emitter. The corresponding data-to-model ratios are shown in the second and third panels. However, we are cautious about the requirement for this secondary emitter, as the poorly explained Fe K profile might result from the imbalance between the statistics of the RGS and EPIC data: the grating data have more bins (by a factor of 4) than the CCD data, so the model is mainly adjusted to fit the soft X-ray residuals. To test this possibility, we fit the continuum model to the EPIC data alone and show the result, together with the corresponding ratio, in Fig.9. Compared with the results in Tab.2, this continuum model only requires a harder spectral slope (\(\Gamma=2.16^{+0.03}_{-0.02}\)) and explains the blue wing of the Fe emission well, while the other parameters remain unchanged within uncertainties. In terms of fitting the EPIC data, this continuum model is much better (\(\Delta\chi^{2}=86\)) than the continuum plus two emitters fitted to the RGS+EPIC data. This suggests that the additional emission component is spurious, although we cannot exclude the possibility of an intervening outflow contributing to part of the Fe K profile. Therefore, we do not discuss the evolution of the secondary emitter in the following.
### Evolution of the wind components
In Sec.3.3, we have measured the properties of the absorption and emission components at different flux levels. To further investigate the relations between the wind properties and the source luminosity, we plot their column density, ionization parameter, and velocity versus the calculated fluxes in Fig.10. The blueshifts of the absorption and emission features, measured with the Gaussian model, are included as well. The absorption line is assumed to come from Ne x
Figure 9: The stacked 2018 EPIC spectra of Mrk 1044. The fits of the continuum plus one or two emitter(s) to the RGS+EPIC data are shown in _red_ dashed and _green_ solid lines, respectively. The fit of the continuum model to only EPIC data (1.77–10 keV for consistency with other fits) is shown in the _blue_ line. The corresponding data/model ratios are shown in the following panels.
(\(\rm E_{\rm rest}=1.022\,\rm keV\)), while the emission line is O viii (\(\rm E_{\rm rest}=0.6535\,\rm keV\)). We fit these parameters with a linear function in logarithmic space. The same fit with the slope fixed at unity is also performed on the ionization parameter to show the expected behavior in photoionization equilibrium, according to the definition of the ionization parameter (\(\xi\equiv L_{\rm ion}/n_{\rm H}R^{2}\propto F_{\rm ion}\), where \(R\) is the distance from the ionizing source to the plasma and \(n_{\rm H}\) is the hydrogen volume density). All of the fits yield positive correlations between the wind properties and the source luminosity.
#### 4.1.1 Absorbing gas
For the absorption component, the Pearson correlation coefficients of the best-fit values of the \((N_{\rm H},F)\), \((\log\xi,F)\), \((v,F)\), and \((v_{\rm gauss},F)\) points, accounting for their uncertainties (Curran, 2014), are 0.76, 0.86, -0.86, and -0.96, respectively, suggesting a moderate correlation for \((N_{\rm H},F)\) and strong correlations for the others. The Log/Log fits give:
\[\log\frac{N_{\rm H}}{10^{21}\,\rm cm^{-2}}=(-0.9\pm 0.8)+(1.59\pm 0.98)\log( \frac{F_{0.4-10}}{10^{-11}}), \tag{1}\]
\[\log\frac{\xi}{\rm erg\,cm/s}=(2.96\pm 0.33)+(1.08\pm 0.41)\log(\frac{F_{0.4-10 }}{10^{-11}}), \tag{2}\]
\[\log\frac{|v|}{\rm c}=(-1.12\pm 0.14)+(0.39\pm 0.16)\log(\frac{F_{0.4-10 }}{10^{-11}}), \tag{3}\]
\[\log\frac{|v_{\rm gauss}|}{\rm c}=(-1.40\pm 0.16)+(0.73\pm 0.19)\log(\frac{F_{0. 4-10}}{10^{-11}}). \tag{4}\]
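Fits of this kind can be reproduced, for instance, with a simple Monte-Carlo propagation of the measurement uncertainties. The sketch below shows one way to do this; it is not necessarily the exact fitting procedure used here, and the flux and \(\log\xi\) values are illustrative placeholders rather than the Tab.2 measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: log10(F_0.4-10 / 1e-11 erg/cm^2/s) for F1-F3 and the
# corresponding log(xi) of the absorber with 1-sigma errors (illustrative).
logF  = np.array([-0.05, 0.05, 0.15])
logxi = np.array([3.74, 3.75, 4.0])
err   = np.array([0.15, 0.20, 0.20])

slopes, intercepts = [], []
for _ in range(10000):
    y = logxi + rng.normal(0.0, err)     # perturb each point within its error
    b, a = np.polyfit(logF, y, 1)        # slope, intercept of the log-log fit
    slopes.append(b)
    intercepts.append(a)
print(f"slope     = {np.mean(slopes):.2f} +/- {np.std(slopes):.2f}")
print(f"intercept = {np.mean(intercepts):.2f} +/- {np.std(intercepts):.2f}")
```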
The best-fit value of the slope in Eq.2 is consistent with unity, the value expected from the definition, in spite of the large uncertainty, indicating that the absorbing gas responds to the variability of the source radiation instantaneously, which implies a high volume density. The weakly increasing trend of the column density is opposite to the relation shown in IRAS 13224-3809 (see Fig.7 in Pinto et al., 2018), where the column density slightly decreases with increasing ionization parameter (\(\log\xi\) up to 6) and luminosity. This means the UFO in Mrk 1044 may not yet be over-ionized, as suggested by its modest ionization state (\(\log\xi\sim 3.7\)-\(4.0\)), and a larger column density is still required for the absorption features to remain visible at a higher ionization state (e.g. see Fig. 10 in Pinto et al., 2020).
The positive correlation between the velocity and the X-ray flux suggests that the wind is radiatively driven, as also observed in other high-accretion systems, IRAS 13224-3809 (Pinto et al., 2018) and PDS 456 (Matzeu et al., 2017). According to Eq.4 in Matzeu et al. (2017), a net radiatively driven (i.e. radiative minus gravitational force) outflow should show a dependence of the velocity on the luminosity \(L_{\rm ion}\) and the launching radius \(R_{\rm w}\),
\[v/c\propto k_{0.4-10}^{1/2}L_{0.4-10}^{1/2}R_{\rm w}^{-1/2}, \tag{5}\]
where \(k_{0.4-10}=L_{\rm bol}/L_{0.4-10}\) is the bolometric correction factor. The relation observed in Mrk 1044 (from xabs_xs rather than the phenomenological model) is consistent with the power-law index (0.5) in Eq.5 within uncertainties, at variance with the results derived for IRAS 13224-3809 (\(0.05\pm 0.02\)) and PDS 456 (\(0.22\pm 0.04\)).
#### 4.1.2 Emitting gas
For the emission component, the Pearson correlation coefficients of the best-fit values of the \((N_{\rm H},F)\), \((\log\xi,F)\), \((v,F)\), and \((v_{\rm gauss},F)\) points, accounting for their uncertainties, are 0.73, 0.68, -0.83, and -0.69, respectively, suggesting moderate correlations, except for a strong correlation for \((v,F)\). The fits provide:
\[\log\frac{N_{\rm H}}{10^{20}\,\rm cm^{-2}}=(-2.8\pm 2.3)+(3.68\pm 2.79)\log( \frac{F_{0.4-10}}{10^{-11}}), \tag{6}\]
\[\log\frac{\xi}{\rm erg\,cm/s}=(-0.49\pm 2.16)+(3.27\pm 2.64)\log(\frac{F_{0.4-1 0}}{10^{-11}}), \tag{7}\]
\[\log\frac{|v|}{0.01\rm c}=(-0.68\pm 0.14)+(0.92\pm 0.17)\log(\frac{F_{0.4-1 0}}{10^{-11}}), \tag{8}\]
\[\log\frac{|v_{\rm gauss}|}{0.01\rm c}=(-0.57\pm 0.17)+(0.80\pm 0.22)\log( \frac{F_{0.4-10}}{10^{-11}}). \tag{9}\]
However, the extremely large uncertainties on the fitted parameters, except for the velocity-related fits, preclude any robust conclusions on the emitting gas. It is also noted that the timescale of the segments of the flux-resolved spectra, around 3 ks, corresponds to a maximal distance of \(\sim 220\,R_{\rm g}\) for causally linked correlations, unless the emitting gas is mainly located along our LOS, which is rather unlikely but still possible (given its blueshift). Moreover, the wide range of locations of the emission, suggested by the moderate ionization state, the low column density, and the narrow line width, will result in a low coherence between the plasma and the source (Juranova et al., 2022), impeding the discovery of correlations. The observed variation of the velocity is therefore probably contributed only by the portion of the emitting gas near the central region.
### Outflow properties
Outflows are expected to carry sufficient power to quench or trigger star formation in their hosts and thus to affect the evolution of galaxies (e.g. Di Matteo et al., 2005; Hopkins & Elvis, 2010; Maiolino et al., 2017; Chen et al., 2022). According to simulations, the deposition of kinetic energy exceeding 0.5% of the Eddington luminosity into the ISM is sufficient to produce considerable feedback on the host galaxy. The kinetic power of the UFO can be expressed as:
\[L_{\rm UFO}=\frac{1}{2}\dot{M}_{\rm out}v_{\rm UFO}^{2}=\frac{1}{2}\Omega R^{2}\rho v_{\rm UFO}^{3}C_{\rm V}, \tag{10}\]
where \(\dot{M}_{\rm out}=\Omega R^{2}\rho v_{\rm UFO}C_{\rm V}\) is the mass outflow rate, \(\Omega\) the opening angle, \(R\) the distance between the ionizing source and the UFO, \(\rho\) the outflow mass density, and \(C_{\rm V}\) the volume filling factor. The mass density is defined as \(\rho=n_{\rm H}m_{\rm p}\mu\), where \(n_{\rm H}\) is the hydrogen number density, \(m_{\rm p}\) the proton mass, and \(\mu=1.2\) the mean atomic mass assuming solar abundances. \(n_{\rm H}R^{2}\) can be replaced by the measured quantity \(L_{\rm ion}/\xi\), according to the definition of the ionization parameter (\(\xi\equiv L_{\rm ion}/n_{\rm H}R^{2}\)). We estimate the ionizing luminosity (1-1000 Rydberg) from the SED presented in Fig.5 at \(L_{\rm ion}\sim 3.9\times 10^{43}\,\rm erg/s\) and find,
\[L_{\rm UFO}=0.5v_{\rm UFO}^{3}m_{\rm p}\mu L_{\rm ion}\Omega C_{\rm V}/\xi\sim 5.89\times 10^{44}\,\Omega C_{\rm V}\,\rm erg/s \tag{11}\]
by inputting the results of the UFO obtained in the 2018 spectrum. Here we adopt a conservative value of the opening angle \(\Omega=0.3\) from the GRMHD simulations of radiative-driven outflows in high-accretion systems (Takeuchi et al., 2013). The filling factor \(C_{\rm V}=7\times 10^{-3}\) is derived from Eq.23 in Kobayashi et al. (2018) assuming
that the outflow mass rate is comparable with the accretion mass rate and the accretion efficiency is \(\eta=0.1\). The conservative value of the UFO kinetic energy is thus \(L_{\rm UFO}\sim 1.54\times 10^{43}\,{\rm erg/s}\sim 4.4\%L_{\rm Edd}\), surpassing the theoretical criterion, suggesting that the UFO in Mrk 1044 is very likely to influence the evolution of the host galaxy.
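Equation (11) can be checked numerically with the 2018 UFO parameters quoted above (a sketch; the velocity, ionization parameter, and ionizing luminosity are taken from the text, while \(\Omega\) and \(C_{\rm V}\) are left symbolic since they dominate the uncertainty of the final number).

```python
# Numerical check of Eq. (11) (sketch; cgs units, parameters from the text).
c     = 2.998e10                 # cm/s
m_p   = 1.673e-24                # g
mu    = 1.2                      # mean atomic mass per proton
v_ufo = 0.143 * c                # from z_LOS ~ -0.153
L_ion = 3.9e43                   # erg/s, 1-1000 Ryd
xi    = 10**3.72                 # erg cm/s

prefactor = 0.5 * v_ufo**3 * m_p * mu * L_ion / xi
print(f"L_UFO ~ {prefactor:.2e} * Omega * C_V erg/s")   # ~5.9e44, as in Eq. (11)
```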
Based on the hypothesis that the UFO velocity is at least larger than its escape velocity, we can estimate a lower limit on the outflow location, \(R\geq 2GM_{\rm BH}/v_{\rm UFO}^{2}\geq 98\,R_{\rm g}\). This provides an upper limit on the outflow density, \(n_{\rm H}=L_{\rm ion}/\xi R^{2}<4.5\times 10^{12}\,{\rm cm^{-3}}\). On the other hand, by using the time-dependent photoionization model tpho (Rogantini et al., 2022), we simulate the response of plasma with different densities to the source variability to estimate a lower limit on the plasma density and hence an upper limit on the outflow location. The durations of the low, middle, and high states of the source are 3 ks, 1.5 ks, and 3 ks, respectively, given by the timescales of the segments of the flux-resolved spectra, shown in the top panel of Fig.11. The time-dependent evolution of the ionic concentrations of the predominant absorption lines, i.e., Ne x and Fe xxi-xxiv, is shown in the lower four panels of Fig.11. Gases with a density above \(10^{9}\,{\rm cm^{-3}}\) respond quickly to the luminosity. If we assume the UFO in Mrk 1044 responds to the source instantaneously, as suggested by Eq.2, the lower limit on the density is \(10^{9}\,{\rm cm^{-3}}\). The recombination timescale of the plasma at \(\log\xi=3.7\) can be evaluated with the rec_time code in SPEX. For example, the recombination time of the Fe xxiv line, the predominant line of the UFO, is \(t_{\rm rec}<19\,{\rm s}\), consistent with our assumption. The volume density of the UFO is thus estimated between \(n_{\rm H}=10^{9}\)-\(4.5\times 10^{12}\,{\rm cm^{-3}}\) and the corresponding location is \(R=\sqrt{L_{\rm ion}/n_{\rm H}\xi}=98\)-\(6600\,R_{\rm g}\).
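These order-of-magnitude limits can be reproduced as follows (a sketch with cgs constants and the UFO parameters quoted in the text).

```python
import numpy as np

# Limits on the UFO location and density (sketch reproducing the estimates above).
G, c   = 6.674e-8, 2.998e10
M_sun  = 1.989e33
M_BH   = 2.8e6 * M_sun
R_g    = G * M_BH / c**2                     # gravitational radius, cm
v_ufo  = 0.143 * c
L_ion, xi = 3.9e43, 10**3.72

R_min = 2.0 * G * M_BH / v_ufo**2            # escape-velocity argument
n_max = L_ion / (xi * R_min**2)              # density upper limit at R_min
R_max = np.sqrt(L_ion / (1e9 * xi))          # radius if n_H = 1e9 cm^-3 (tpho limit)
print(f"R_min ~ {R_min / R_g:.0f} R_g, n_H < {n_max:.1e} cm^-3")
print(f"R_max ~ {R_max / R_g:.0f} R_g for n_H > 1e9 cm^-3")
# expected: ~98 R_g, ~4.5e12 cm^-3, ~6600 R_g
```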
For the emitting gas, we have to adopt another method to derive the upper limit on its location, since its response to the source is unconstrained. Assuming that the plasma thickness is smaller than its distance to the source, \(\Delta R=N_{\rm H}/C_{\rm V}n_{\rm H}\leq R\), the upper limit on the location is \(R\leq C_{\rm V}L_{\rm ion}/\xi N_{\rm H}\leq 7.8\times 10^{6}\,R_{\rm g}\). The lower limit could be calculated with the same method as for the absorbing gas, based on the same assumption of \(v_{\rm LOS}\geq v_{\rm esc}\), giving \(>1.2\times 10^{4}\,R_{\rm g}\). However, since the emission comes from a wide range of distances, the observed velocity is an averaged value and may not be representative of the escape velocity of the emitting gas close to the center. If we assume the emitting gas shares the same origin as the UFO detected in T4, the lower limit on the location can be estimated at \(>320\,R_{\rm g}\), close to the maximal distance for a causal connection. Our constraints on the location of the emission component are therefore rather loose and span the whole range of distances, \(3\times 10^{2}\)-\(7.8\times 10^{6}\,R_{\rm g}\), from the accretion disk to the interface between the outer disk and the broad line region (BLR), which scales with the black hole mass or Eddington ratio. The range of the corresponding density is \(1.2\times 10^{4}\)-\(7\times 10^{11}\,{\rm cm^{-3}}\).
### Comparison with other AGN
By comparison with UFOs discovered in other AGN, the ionization state (\(\log\xi\sim 3.7\)) and the velocity (\(v\sim 0.15\)c) of the UFO in Mrk 1044 are typical (\(\log\xi\sim 3\)-\(6\), \(v\sim 0.08\)-\(0.3\)c, e.g. Nardini et al., 2015; Kosec et al., 2020; Parker et al., 2021; Xu et al., 2021; Matzeu et al., 2022). The column density (\(N_{\rm H}\sim 2.3\times 10^{21}\,{\rm cm^{-2}}\)) is not as high as in typical UFOs discovered through Fe K absorption features (\(\log(N_{\rm H}/{\rm cm^{-2}})\sim 22\)-\(24\)). However, a low column density is common in UFOs detected in the soft X-ray band (e.g. Longinotti et al., 2015; Pounds et al., 2016; Xu et al., 2022). Alternatively, another potential explanation is the relatively low
Figure 10: The column density (_top_), ionization parameter (_middle_) and velocity (_bottom_) of the photoionized absorbing (_left_) and emitting (_right_) plasmas versus the unabsorbed X-ray flux for the flux-resolved spectra. The blueshift of the main absorption/emission feature (i.e. Ne x/ O viii) measured by Gaussian is also included, where the corresponding flux is manually shifted for clarity. The linear function fits with (1:1 Log) and without (LogLog) a slope fixed at unity are performed in a logarithmic space. See details in Sec.4.1.
inclination angle of Mrk 1044 (\(i\sim 34^{\circ}\)), such that we are viewing a narrower wind region.
The correlation between the velocity of the UFO and the source luminosity is consistent with the behavior observed in PDS 456 (Matzeu et al., 2017) and IRAS 13224-3809 (Pinto et al., 2018), but differs from that of 1H 0707-495 (Xu et al., 2021). The reason might be their different Eddington ratios, as the former three (PDS 456, \(\lambda_{\rm Edd}\sim 0.77\), Nardini et al., 2015; IRAS 13224-3809, \(\lambda_{\rm Edd}=1\)-3, Alston 2019; Mrk 1044, \(\lambda_{\rm Edd}\sim 0.4\)) are not as highly accreting as 1H 0707-495 (\(\lambda_{\rm Edd}>0.7\), Xu et al., 2021, or \(\lambda_{\rm Edd}=140\)-260, Done & Jin, 2016), so that the structure of their accretion flows does not significantly deviate from the standard thin disk model (Shakura & Sunyaev, 1973).
Blueshifted emission lines are rarely observed in AGN. To our knowledge, among Type 1 AGN, only four sources (1H 0707-495, Xu et al., 2021; ESO 323-G077, Jimenez-Bailon et al., 2008; NGC 4151, Armentrout et al., 2007; NGC 7469, Grafton-Waters et al., 2020) reveal blueshifted emission lines, in addition to the partially absorbed emission lines in NGC 4051 (Pounds & Vaughan, 2011). Given that blueshifted emission lines have also been found in some ultra-luminous X-ray (ULX) sources (e.g., NGC 55 ULX and NGC 247 X-1, Pinto et al., 2017, 2021; Kosec et al., 2021), we propose that blueshifted emission lines are related to high accretion rates and plot the blueshift of the emission lines versus the Eddington ratios in Fig.12 (Jimenez-Bailon et al., 2008; Edelson et al., 2017; Mehdipour et al., 2018; Xu et al., 2021; Yuan et al., 2021). The Eddington ratio of 1H 0707-495 is assumed to be at its lower limit, 0.7, as we cannot constrain the upper limit and only know that it is a super-Eddington AGN. The Pearson coefficient is 0.87, suggesting a strong correlation. The linear fit gives:
\[v_{\rm EM}({\rm km/s})=(11560\pm 1756)\lambda_{\rm Edd}+(170\pm 240). \tag{12}\]
Although the fit seems to strongly support our hypothesis, the small size of the sample and the uncertainties on \(\lambda_{\rm Edd}\) prevent us from confirming this correlation. The absence of blueshifted emission lines in other high-Eddington AGN is probably due either to a continuum that is too strong, washing out the lines, or to a small, close to face-on viewing angle. The validation of this correlation requires a systematic analysis, like the one performed in this paper, of a large sample of AGN at different accretion rates.
Figure 11: _Top_ panel: The input light curve that we expect in Mrk 1044 (an approximation) for the tpho model. The low state corresponds to the luminosity of F1 and the high state to the luminosity of F3. The durations of the low, middle, and high states are 3 ks, 1.5 ks, and 3 ks, respectively, which are the average timescales of the segments of the flux-resolved spectra. _Middle_ and _Bottom_ panels: The time-dependent evolution of the concentration relative to hydrogen of Ne x and Fe xxi-xxiv for different gas densities, compared with the ionic concentrations for a plasma in photoionization equilibrium (black stars).
Figure 12: The velocity of the emission lines versus the estimated Eddington ratio of Type 1 AGN. See more details in Sec.4.3.
### Future missions
Future missions with unprecedented spectral resolution and effective area will provide much tighter constraints on the nature of UFOs. Their large effective area will collect sufficient photons for spectroscopy within a short timescale, thus avoiding the risk of spectral broadening inherent to flux-resolved spectroscopy. We will be able to trace the variability of UFO properties at different flux levels and put tighter constraints on the location of the outflows through their variability (\(\Delta R=c\Delta t\)).
We therefore simulate spectra for the X-Ray Imaging and Spectroscopy Mission (_XRISM_, Tashiro et al., 2018) and the Advanced Telescope for High-Energy Astrophysics (_ATHENA_, Nandra et al., 2013) based on the best-fit model obtained in Sec.3.3. The data/model ratios with respect to the continuum model are shown in the middle and bottom panels of Fig.13, and that of the stacked _XMM-Newton_ 2018 spectrum is presented in the top panel for comparison. Compared with the detection significance of an outflow and an emitter in the _XMM-Newton_ spectrum (\(\Delta\chi^{2}=203\)), we find that _XRISM_ provides a comparable statistical improvement (\(\Delta\chi^{2}=205\)) with a quarter of the _XMM-Newton_ exposure time (100 ks), while _ATHENA_ can reach a much stronger detection (\(\Delta\chi^{2}=1020\)) with more than an order of magnitude less exposure time (10 ks).
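In practice, such simulations can be set up with PyXspec's fakeit interface along the lines sketched below. This is a hedged sketch: it assumes a working HEASOFT/PyXspec installation, that the best-fit model of Sec.3.3 has already been defined, and the response and ancillary file names are placeholders rather than the actual XRISM/Resolve or ATHENA/X-IFU files.

```python
from xspec import AllData, FakeitSettings

# Placeholder response/ARF names; the real instrument files must be
# substituted before running.
fs_xrism  = FakeitSettings(response="resolve.rmf", arf="resolve.arf",
                           exposure="100000")        # 100 ks XRISM simulation
fs_athena = FakeitSettings(response="xifu.rmf", arf="xifu.arf",
                           exposure="10000")         # 10 ks ATHENA simulation

AllData.fakeit(1, fs_xrism)    # writes a simulated spectrum for the loaded model
# Repeat with fs_athena, then re-fit with and without the wind components
# and compare the chi^2 improvements.
```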
Furthermore, the high spectral resolution is likely to resolve the line profile, which encodes information about the launching mechanism. According to Fukumura et al. (2022), outflows driven by radiative lines have an asymmetric line shape with an extended red wing, while those driven by magnetic fields have an extended blue wing. Such a difference can be distinguished with high-resolution missions. Therefore, given these two benefits, it is promising to deepen our understanding of AGN outflows and identify the UFO launching mechanism with future missions. Moreover, the emulation method recently developed by Matzeu et al. (2022b) can be applied to efficiently model the spectral data from future missions, which will collect orders of magnitude more photons than current facilities.
## 5 Conclusions
In this work, through time- and flux-resolved X-ray spectroscopy of four _XMM-Newton_ observations of Mrk 1044, we investigate the dependence of the wind properties on the source luminosity. We find that the absorbing gas responds quickly to the source variability, suggesting a high-density plasma (\(n_{\rm H}\sim 10^{9}\)-\(4.5\times 10^{12}\) cm\({}^{-3}\)). Furthermore, the UFO velocity is correlated with the X-ray luminosity, suggesting that the UFO in Mrk 1044 is accelerated by the radiation field. The emitting gas is located over a large range of distances from the SMBH and shows a blueshift of 2700-4500 km/s. By comparison with the blueshifted emission lines discovered in other AGN, we propose that the blueshift of emission lines is probably correlated with the source accretion rate, which can be verified with a large sample study. Our simulations demonstrate that the nature of AGN winds will likely be unveiled by future missions thanks to their large effective area and high spectral resolution.
## Acknowledgements
D.R. is supported by NASA through the Smithsonian Astrophysical Observatory (SAO) contract SV3-73016 to MIT for Support of the Chandra X-Ray Center (CXC) and Science Instruments. S.B. acknowledges financial support from the Italian Space Agency under the grant ASI-INAF 2017-14-H.O. E.K. acknowledges XRISM Participating Scientist Program for support under NASA grant 80NSSC20K0733. C.J. acknowledges the National Natural Science Foundation of China through grant 11873054, and the support by the Strategic Pioneer Program on Space Science, Chinese Academy of Sciences through grant XDA15052100.
## Data Availability
The _XMM-Newton_ data in this article are available in ESA's XMM-Newton Science Archive ([https://www.cosmos.esa.int/web/xmm-newton/xsa](https://www.cosmos.esa.int/web/xmm-newton/xsa)).
|
2301.03895
|
The generalized Clapeyron equation and its application to confined ice
growth
|
Most theoretical descriptions of stresses induced by freezing are rooted in
the (generalized) Clapeyron equation, which predicts the pressure that a solid
can exert as it cools below its melting temperature. This equation is central
for topics ranging beyond glaciology to geomorphology, civil engineering, food
storage, and cryopreservation. However, it has inherent limitations, requiring
isotropic solid stresses and conditions near bulk equilibrium. Here, we examine
when the Clapeyron equation is applicable by providing a rigorous derivation
that details all assumptions. We demonstrate the natural extension for
anisotropic stress states, and we show how the temperature and pressure ranges
for validity depend on well-defined material properties. Finally, we
demonstrate how the range of applicability of the (linear) Clapeyron equation
can be extended by adding higher-order terms, yielding results that are in good
agreement with experimental data for the pressure melting of ice.
|
Robert W. Style, Dominic Gerber, Alan W. Rempel, Eric R. Dufresne
|
2023-01-10T10:44:35Z
|
http://arxiv.org/abs/2301.03895v1
|
# The generalized Clapeyron equation and its application to confined ice growth
###### Abstract
Most theoretical descriptions of stresses induced by freezing are rooted in the (generalized) Clapeyron equation, which predicts the pressure that a solid can exert as it cools below its melting temperature. This equation is central for topics ranging beyond glaciology to geomorphology, civil engineering, food storage, and cryopreservation. However, it has inherent limitations, requiring isotropic solid stresses and conditions near bulk equilibrium. Here, we examine when the Clapeyron equation is applicable by providing a rigorous derivation that details all assumptions. We demonstrate the natural extension for anisotropic stress states, and we show how the temperature and pressure ranges for validity depend on well-defined material properties. Finally, we demonstrate how the range of applicability of the (linear) Clapeyron equation can be extended by adding higher-order terms, yielding results that are in good agreement with experimental data for the pressure melting of ice.
When water freezes in confined spaces, it can generate large stresses, often resulting in material damage. This is important across fields ranging from glaciology to geomorphology, food science, civil engineering and cryopreservation [1; 2; 3; 4; 5; 6; 7]. Broadly speaking, ice can generate stresses via two different mechanisms [8; 9]. The first is due to the expansion of water as it freezes: in a closed cavity, freezing will generate pressure (Figure 1a). The second is unrelated to the expansion of water and often dominates in porous materials [9]. Here, ice forms in open pores of a wet material (Figure 1b), but no pressure builds up during the initial ice-formation process (any pressure is relieved by water flow away from the growing ice). However, after the initial ice formation, unfrozen water is sucked back towards the ice crystals. When this water freezes onto the existing ice, it causes the ice to wedge open its confining pore. This _cryosuction_ process is aided by the presence of thin, mobile layers of water at the surface of ice (known as premelted films) [10]. These allow growth of the ice in all directions. In both cases ice will continue to grow, building up pressure, until the pressure reaches a maximum value given by a temperature-dependent stall pressure, \(P_{st}\)[9; 11]. \(P_{st}\) is very similar to the concept of crystallization pressure, found when confined crystals grow from supersaturated solutions [12; 13; 14], and to the concept of condensation pressure, when phase separation occurs in confinement [15; 16].
Theoretical descriptions of these stress-generation mechanisms are rooted in the (generalized) Clapeyron equation, a fundamental equation that describes equilibrium between a solid (ice) at pressure \(P_{s}\), and a reservoir of liquid
Figure 1: (i) Ice generates pressure as it grows in a closed cavity, due to the expansion of water upon freezing. (ii) Ice growing in an open pore is fed by nearby water, and this growth wedges open the cavity, generating stresses.
(water) at a different pressure \(P_{l}\)[17; 8; 18]:
\[\frac{(P_{s}-P_{0})}{\rho_{s}}-\frac{(P_{l}-P_{0})}{\rho_{l}}=\frac{q_{m}(T_{m}-T )}{T_{m}}. \tag{1}\]
Here, \(\rho_{l}\) and \(\rho_{s}\) are the densities of water and ice respectively, \(q_{m}\) is the specific latent heat of freezing of ice, and \(T_{m}\) is the melting temperature at a reference pressure, \(P_{0}\) (often taken as atmospheric pressure). For the freezing mechanisms described above, this equation can be used to predict \(P_{st}\) as a function of the temperature, \(T\), as at this point, the ice is in equilibrium with the nearby water. For case (i) with ice growing in a closed cavity, the ice and water are both at the same pressure (\(P_{s}=P_{l}=P_{st}\)), so
\[(P_{st}-P_{0})\left(\frac{1}{\rho_{s}}-\frac{1}{\rho_{l}}\right)=\frac{q_{m}(T _{m}-T)}{T_{m}}. \tag{2}\]
Using values from Table 1, we find that ice can exert pressures of \(\sim 11\,\)MPa per degree of undercooling (\(T_{m}-T\)).
For case (ii), the ice and water need no longer have the same pressure. If the water reservoir is held at the reference pressure \(P_{l}=P_{0}\), then \(P_{st}=P_{s}\) and
\[\frac{(P_{st}-P_{0})}{\rho_{s}}=\frac{q_{m}(T_{m}-T)}{T_{m}}. \tag{3}\]
In this case, ice can exert pressures of \(\sim 1\,\)MPa per degree of undercooling.
Even when ice is not in equilibrium (e.g. it is growing), the Clapeyron equation gives us useful information. During growth, there is no macroscopic equilibrium, but water immediately adjacent to the ice/water interface can often be considered to be in equilibrium with the ice [8]. Then, the Clapeyron equation relates the local hydrodynamic pressure in the water, \(P_{l}\), to the local pressure that has been built up in the ice (\(P_{l}\) is just the pressure of a hypothetical reservoir of water in thermodynamic equilibrium with the ice). Water flows along nonhydrostatic gradients in \(P_{l}\), so the Clapeyron equation allows us to predict how water is transported towards (or away from) ice, and thus gives ice growth/melting rates [20; 21; 22; 23; 8].
The various applications of the Clapeyron equation make it a key tool for understanding freezing processes [e.g. 1; 6; 8; 11]. However, it makes a number of assumptions. For example, it assumes that ice can be described by an isotropic pressure, whereas ice is often characterized by an anisotropic stress state, \(\sigma_{ij}\)[24]. It also uses linear approximations that are valid only near the bulk melting point of ice (see later). Thus, several key questions arise. In particular: What is the appropriate extension of the Clapeyron equation for anisotropically stressed ice? Over what range of conditions should the Clapeyron equation be applicable?
Surprisingly, we are not aware of a systematic derivation of the Clapeyron equation that would allow us to address these questions. However, there are several related works. For example, several authors have established the thermodynamic relations that govern the dissolution of anisotropically stressed solids into adjacent fluids [25; 26; 27], with notable applications to recrystallization and pressure solution processes [e.g. 28]. Although melting was not a focus of these works, some of the consequences for ice melting were recognized by Nye [29]. He argued that the phenomenon of wire regelation requires a generalization of equation (2) where \(P_{s}\) is replaced by the normal stress \(-\sigma_{nn}\), and not by the mean of the principal stresses \(-\mathrm{Tr}(\sigma)/3\), as had been argued by others. A further, unflinching critique of alternative incorrect theories is given by Kamb [27]. Finally, Sekerka and Cahn [30] examined the special case of a solid with \(\sigma_{nn}=-P_{l}\), to show that anisotropically stressed solids in equilibrium with their melt will recrystallize to form an isotropically-stressed state.
\begin{table}
\begin{tabular}{l l l} Density of ice & \(\rho_{s}\) & 917 kg m\({}^{-3}\) \\ Density of water & \(\rho_{l}\) & 997 kg m\({}^{-3}\) \\ Latent heat of fusion & \(q_{m}\) & 334 kJ kg\({}^{-1}\) \\ Melting temperature & \(T_{m}\) & 273.15 K \\ Heat capacity of ice & \(c_{s}^{p}\) & 2093 J (kg K)\({}^{-1}\) \\ Heat capacity of water & \(c_{l}^{p}\) & 4184 J (kg K)\({}^{-1}\) \\ Bulk modulus of ice & \(K_{s}\) & 11.33 GPa \\ Bulk modulus of water & \(K_{l}\) & 1.96 GPa \\ Coeff. thermal expansion, ice & \(\alpha_{s}\) & \(51\times 10^{-6}\) K\({}^{-1}\) \\ Coeff. thermal expansion, water & \(\alpha_{l}\) & \(-50\times 10^{-6}\) K\({}^{-1}\) \\ \end{tabular}
\end{table}
Table 1: Ice/water parameter values at atmospheric pressure and 273.15K [19].
Here, we provide a first-principles derivation of the generalized Clapeyron equation, along similar lines to Paterson [28]. We clearly lay out all the underlying assumptions, and present the appropriate extension for the melting behavior of anisotropically stressed ice.
### Deriving the Generalized Clapeyron Equation
We consider thermodynamic equilibrium for the two scenarios shown in Figure 1, in both of which the temperature is held fixed at \(T<T_{m}\). In case (i), water freezes in a closed cavity, so that the ice and water both have the same pressure, \(P_{s}=P_{l}\). In case (ii), ice has frozen in an open cavity, and is in equilibrium with neighboring bulk water, which has pressure \(P_{l}\). At the same time, the ice exerts a normal stress \(-\sigma_{nn}\) on the walls of the cavity, but negligible shear forces, due to the presence of premelted films which lubricate the ice/cavity interface [11]. The ice cannot grow through the small, connecting pore throat into the neighboring water due to capillarity (i.e. the Gibbs-Thomson effect [31; 32]).
For each scenario, we establish equilibrium behavior by minimizing the relevant free energy of the ice/water system. The relevant free energy, \(G_{\rm sys}\) satisfies \(\Delta G_{\rm sys}=\Delta U_{\rm sys}-T\Delta S_{\rm sys}+W\), where \(U_{\rm sys}\) is the internal energy of the ice/water system, \(S_{\rm sys}\) is its entropy, and \(W\) is the work done by the system on its surroundings. For case (i),
\[\Delta G_{\rm sys}=0=\Delta U_{\rm sys}-T\Delta S_{\rm sys}+P_{l}(\Delta V_{s }+\Delta V_{l}), \tag{4}\]
for case (ii),
\[\Delta G_{\rm sys}=0=\Delta U_{\rm sys}-T\Delta S_{\rm sys}-\sigma_{nn}\Delta V _{s}+P_{l}\Delta V_{l}, \tag{5}\]
where \(V_{s}\) and \(V_{l}\) are the volumes of ice and water, respectively. The first case is just a specialized version of the second, where \(-\sigma_{nn}=P_{l}\). Thus, without loss of generality, we can proceed with equation (5), and the result will describe both cases.
We consider a small perturbation in the system in Figure 1b, where a small mass of ice, \(\Delta m\), melts and flows into the reservoir. Thus, the volumes of ice and water change as \(\Delta V_{s}=-v_{s}\Delta m\), and \(\Delta V_{l}=v_{l}\Delta m\), where \(v_{s}(\sigma_{ij},T)\) and \(v_{l}(P_{l},T)\) are the specific volumes of the ice and water, respectively. Then equation (5) becomes
\[u_{l}\Delta m-u_{s}\Delta m-T(s_{l}-s_{s})\Delta m+\sigma_{nn}v_{s}\Delta m+P_ {l}v_{l}\Delta m=0. \tag{6}\]
Here, \(u_{s}(\sigma_{ij},T)\) and \(u_{l}(P_{l},T)\) are the specific internal energies of the ice and water, respectively, and \(s_{s}(\sigma_{ij},T)\) and \(s_{l}(P_{l},T)\) are the respective specific entropies. Dividing through by \(\Delta m\), we obtain
\[-(\sigma_{nn}v_{s}+P_{l}v_{l})=(u_{l}-u_{s})-T(s_{l}-s_{s}). \tag{7}\]
In principle, equation (7) completely describes equilibrium between ice and water. _i.e._ one could use tabulated values of \(u,v\), and \(s\) to find \(-\sigma_{nn}(P_{l},T)\). However, a more convenient form is found by expressing the equation relative to the pressure and temperature under bulk melting reference conditions, \((P_{0},T_{m})\). Here, \(-\sigma_{nn}=P_{l}=P_{0}\) so equation (7) becomes
\[P_{0}(v_{s}^{o}-v_{l}^{o})=(u_{l}^{o}-u_{s}^{o})-T_{m}(s_{l}^{o}-s_{s}^{o}). \tag{8}\]
The superscript \({}^{o}\) indicates reference conditions. Subtracting equations 7 and 8, we find
\[g_{l}-g_{l}^{o}=g_{s}-g_{s}^{o}, \tag{9}\]
where the specific free energies \(g_{l}(T,P_{l})=u_{l}-Ts_{l}+P_{l}v_{l}\), and \(g_{s}(T,\sigma_{ij})=u_{s}-Ts_{s}-\sigma_{nn}v_{s}\). These can be Taylor-expanded to obtain the Clapeyron equation (e.g. [1; 33]):
\[g_{l}(T,P_{l})=g_{l}^{o}(T_{m},P_{0})+\left(\frac{\partial g_{l}}{\partial T}\right)_{P_{l}}(T-T_{m})+\left(\frac{\partial g_{l}}{\partial P_{l}}\right)_{T}(P_{l}-P_{0}) \tag{10}\]
and
\[g_{s}(T,\sigma_{ij})=g_{s}^{o}(T_{m},P_{0})+\left(\frac{\partial g_{s}}{ \partial T}\right)_{\sigma_{ij}}(T-T_{m})+\left(\frac{\partial g_{s}}{\partial \sigma_{ij}}\right)_{T}(\sigma_{ij}+P_{0}\delta_{ij}), \tag{11}\]
where \(\delta_{ij}\) is the identity matrix. To evaluate the derivatives, we note that \(\Delta g_{l}=-s_{l}\Delta T+v_{l}\Delta P_{l}.\) Thus, at reference conditions,
\[\left(\frac{\partial g_{l}}{\partial T}\right)_{P_{l}}=-s_{l}^{o},\quad\left( \frac{\partial g_{l}}{\partial P_{l}}\right)_{T}=v_{l}^{o}, \tag{12}\]
and similarly in the solid at reference conditions (\(\sigma_{ij}=-P_{0}\delta_{ij}\))
\[\left(\frac{\partial g_{s}}{\partial T}\right)_{\sigma_{ij}}=-s_{s}^{o}. \tag{13}\]
To calculate the final derivative, we notice that \(g_{s}=(f_{s}+P_{0}v_{s})-\bar{\sigma}_{nn}v_{s}\), where \(f_{s}\) is the specific Helmholtz free energy of the solid, and \(\bar{\sigma}_{ij}=\sigma_{ij}+P_{0}\delta_{ij}\). Here, \((f_{s}+P_{0}v_{s})/v_{s}^{o}\) is the free-energy per unit volume of deformations in an atmosphere at constant pressure, \(P_{0}\), and thus is the elastic energy per volume of ice in the reference state. As such, we can use linear elasticity to write
\[g_{s}=\frac{1}{2}\bar{\sigma}_{ij}\epsilon_{ij}v_{s}^{o}-\bar{\sigma}_{nn}v_{ s}^{o}\left(1+\frac{\text{Tr}(\bar{\sigma})}{3K_{s}}\right), \tag{14}\]
where \(K_{s}\) is now the bulk modulus of the solid. \(\epsilon_{ij}\) is the strain of the ice relative to its shape in the reference state \((T_{m},P_{0})\), and satisfies the linear-elastic constitutive relationship:
\[\epsilon_{ij}=\frac{1}{E_{s}}\left[(1+\nu_{s})\bar{\sigma}_{ij}-\nu_{s}\delta _{ij}\text{Tr}(\bar{\sigma})\right], \tag{15}\]
where \(E_{s}=3K_{s}(1-2\nu_{s})\) is the Young's modulus of the ice and \(\nu_{s}\) is its Poisson ratio. For small strains, \(v_{s}=v_{s}^{o}(1+\text{Tr}(\epsilon))\), and we use this in the second term of equation (14).
With the two equations above, we can evaluate the remaining derivative at \((P_{0},T_{m})\):
\[\left(\frac{\partial g_{s}}{\partial\sigma_{ij}}\right)_{T}(\bar{\sigma}_{ij} =0)=-n_{i}n_{j}v_{s}^{o}. \tag{16}\]
Here, \(n_{i}\) is the normal vector to the surface of the ice, so that \(\bar{\sigma}_{nn}=n_{i}\bar{\sigma}_{ij}n_{j}\).
Finally, we can insert these first-derivative expressions into equations (9-11) to obtain the Clapeyron equation for anisotropically stressed solids:
\[-\frac{(\sigma_{nn}+P_{0})}{\rho_{s}^{o}}-\frac{(P_{l}-P_{0})}{\rho_{l}^{o}}= \frac{q_{m}(T_{m}-T)}{T_{m}}. \tag{17}\]
Here, \(\rho_{l}^{o}=1/v_{l}^{o}\), and \(\rho_{s}^{o}=1/v_{s}^{o}\) are the densities of water and ice respectively at the bulk melting point, and \(q_{m}\equiv(s_{l}^{o}-s_{s}^{o})T_{m}\). Consistent with the regelation analysis of Nye [29], this version of the Clapeyron equation is identical to equation (1), but with \(P_{s}\) replaced by \(-\sigma_{nn}\), and not \(-\text{Tr}(\sigma)/3\), as one might naively assume.
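As a quick numerical illustration (not part of the derivation itself), the sketch below evaluates Eq. (17) for the open-pore case of Figure 1b, where the water reservoir stays at the reference pressure (\(P_{l}=P_{0}\)) and the stall stress is \(-\sigma_{nn}\); with the assumed Table 1 values it gives roughly 1.1 MPa of sustainable normal stress per kelvin of undercooling, consistent with the order-of-magnitude estimate quoted earlier.

```python
# Minimal numerical check of Eq. (17) for the open-pore case (P_l = P_0).
# All property values are assumptions copied from Table 1.
rho_s = 917.0      # ice density, kg/m^3
q_m   = 334e3      # latent heat of fusion, J/kg
T_m   = 273.15     # bulk melting temperature, K

def stall_stress(undercooling_K):
    """-(sigma_nn + P_0) from Eq. (17) with P_l = P_0, in Pa."""
    return rho_s * q_m * undercooling_K / T_m

print(f"{stall_stress(1.0)/1e6:.2f} MPa per K of undercooling")  # ~1.12 MPa
```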
### Field data supporting the anisotropic Clapeyron equation
Glaciological field data supporting the form of equation (17) comes from simultaneous measurements of temperatures and liquid pressures in glacier boreholes. These measurements show that temperatures increase when changes in the hydrologic system cause borehole pressures, \(P_{l}\), to decrease (e.g. [34; 35]).
The anisotropic Clapeyron equation indeed recovers this correlation. Along borehole walls, \(\sigma_{nn}=-P_{l}\). Inserting this into equation (17), we find that changes in temperature are correlated with changes in borehole pressure by:
\[\Delta T=-\frac{T_{m}}{q_{m}}\left(\frac{1}{\rho_{s}^{o}}-\frac{1}{\rho_{l}^{ o}}\right)\Delta P_{l}\approx\left(-7.16\times 10^{-8}\,\text{K}\,\text{Pa}^{-1} \right)\Delta P_{l}\;, \tag{18}\]
in agreement with the field data.
By contrast, naively extending the isotropic Clapeyron equation (1), by replacing \(-P_{s}=\text{Tr}(\sigma)/3\), does not match the experimental data. The classic analysis of Nye [36] gives the complete stress tensor at the surface of an idealized cylindrical borehole containing liquid at pressure \(P_{l}\). Far from the borehole, the ice has a far-field isotropic ice pressure \(P_{\infty}\), and creeps according to Glen's flow law with exponent \(n=3\)[37; 38]. In this case, \(-\text{Tr}(\sigma)/3=P_{l}+\left(P_{\infty}-P_{l}\right)/n\)
Substituting \(P_{s}=-\text{Tr}(\sigma)/3\) into the isotropic Clapeyron equation (1) and treating the far-field ice pressure as constant leads to
\[\Delta T=-\frac{T_{m}}{q_{m}}\left(\frac{1}{\rho_{s}^{0}}-\frac{1}{\rho_{l}^{0}}- \frac{1}{n\rho_{s}^{0}}\right)\Delta P_{l}\approx\left(2.26\times 10^{-7}\,\text{K }\,\text{Pa}^{-1}\right)\Delta P_{l}\;. \tag{19}\]
This predicts the opposite of the correlation seen in the field data.
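Both slope estimates follow directly from the Table 1 property values; the short sketch below is an illustrative check (the inputs are the assumed table entries, not new data) that reproduces the coefficients in Eqs. (18) and (19).

```python
# Reproduce the borehole temperature-pressure coefficients of Eqs. (18)-(19)
# from the assumed Table 1 values.
rho_s, rho_l = 917.0, 997.0   # kg/m^3
q_m, T_m = 334e3, 273.15      # J/kg, K
n = 3                         # Glen's flow-law exponent

# Eq. (18): anisotropic Clapeyron with sigma_nn = -P_l at the borehole wall
c_aniso = -(T_m / q_m) * (1.0/rho_s - 1.0/rho_l)
# Eq. (19): naive isotropic form with -P_s = Tr(sigma)/3 from Nye's borehole solution
c_iso = -(T_m / q_m) * (1.0/rho_s - 1.0/rho_l - 1.0/(n*rho_s))

print(f"Eq. (18): dT/dP_l = {c_aniso:.3e} K/Pa")   # ~ -7.16e-8, matches the field-data trend
print(f"Eq. (19): dT/dP_l = {c_iso:.3e} K/Pa")     # ~ +2.26e-7, opposite sign
```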
### Errors in the Clapeyron equation
In deriving this version of the Clapeyron equation, we have had to make two main assumptions. Firstly, strains in the ice are small, so we can use linear elasticity [30]. This is reasonable as the stresses in the ice (which are \(O\)(MPa) - see introduction) are much less than the ice's elastic moduli \(E_{s},K_{s}=O\)(GPa), so strains will be small.
Secondly, we assume that higher-order terms in the expansions of \(g_{l}\) and \(g_{s}\) are negligible. We can test this by reverting to the case of isotropically-stressed ice (\(\sigma_{ij}=-P_{s}\delta_{ij}\)). Then, we Taylor-expand equation (9) in \(T,P_{l}\) and \(P_{s}\) to obtain the second-order version of the Clapeyron equation:
\[\begin{split}\frac{(P_{s}-P_{0})}{\rho_{s}^{o}}-\frac{(P_{l}-P_{0})}{\rho_{l}^{o}}=&\ \frac{q_{m}(T_{m}-T)}{T_{m}}\\ &-\frac{c_{l}^{p}-c_{s}^{p}}{2T_{m}}(T_{m}-T)^{2}-\frac{1}{2\rho_{l}^{o}K_{l}}(P_{l}-P_{0})^{2}+\frac{1}{2\rho_{s}^{o}K_{s}}(P_{s}-P_{0})^{2}\\ &-\frac{\alpha_{l}}{\rho_{l}^{o}}(T_{m}-T)(P_{l}-P_{0})+\frac{\alpha_{s}}{\rho_{s}^{o}}(T_{m}-T)(P_{s}-P_{0}).\end{split}\tag{20}\]
Here, we use the following identities [39]:
\[\frac{\partial^{2}g}{\partial T^{2}}=-\frac{c^{p}}{T_{m}},\quad\frac{\partial ^{2}g}{\partial P^{2}}=-\frac{1}{K\rho^{o}},\quad\frac{\partial^{2}g}{ \partial P\partial T}=\frac{\alpha}{\rho^{o}}. \tag{21}\]
\(c^{p}\) is the heat capacity at constant pressure, \(K\) is again the isothermal bulk modulus, and \(\alpha\) is the coefficient of thermal expansion.
We can now predict the pressure-melting curve for different freezing scenarios. For bulk ice/water equilibrium (Figure 1a), \(P_{s}=P_{l}\), and we take atmospheric pressure, \(P_{a}\), as the reference pressure. Figure 2a compares the isotropic Clapeyron equation (1) (red, dashed) with experimental data (black, dotted) [40]. There is a significant error between the two results for an undercooling of more than \(\sim 3\,^{\circ}\)C. However, when we use the full, second-order Clapeyron equation (20) (blue), we find good agreement down to an undercooling of at least \(15\,^{\circ}\)C. In this situation, the terms that are quadratic in pressure dominate the error, and to excellent approximation (Figure 2a, orange dash-dotted):
\[\left(\frac{1}{\rho_{s}^{o}}-\frac{1}{\rho_{l}^{o}}\right)(P_{s}-P_{0})+ \left(\frac{1}{2\rho_{l}^{o}K_{l}}-\frac{1}{2\rho_{s}^{o}K_{s}}\right)(P_{s}-P _{0})^{2}=\frac{q_{m}(T_{m}-T)}{T_{m}}. \tag{22}\]
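For illustration, the quadratic form (22) can be solved for \(P_{s}-P_{0}\) at a given undercooling and compared with the linear prediction of Eq. (2); the sketch below does so with the assumed Table 1 values and reproduces the growing gap between the linear and second-order curves of Figure 2a.

```python
import numpy as np

# Compare the linear Clapeyron prediction (Eq. 2) with the simplified
# second-order form (Eq. 22) for bulk ice/water equilibrium (P_s = P_l).
# Property values are the assumed Table 1 values.
rho_s, rho_l = 917.0, 997.0        # kg/m^3
K_s, K_l = 11.33e9, 1.96e9         # Pa
q_m, T_m = 334e3, 273.15           # J/kg, K

A = 1.0/rho_s - 1.0/rho_l                      # linear coefficient
B = 1.0/(2*rho_l*K_l) - 1.0/(2*rho_s*K_s)      # quadratic coefficient

for dT in (3.0, 5.0, 10.0, 15.0):
    C = q_m * dT / T_m
    P_lin = C / A                                     # Eq. (2)
    P_quad = (-A + np.sqrt(A**2 + 4*B*C)) / (2*B)     # positive root of Eq. (22)
    print(f"dT = {dT:4.1f} K:  linear {P_lin/1e6:6.1f} MPa,  2nd order {P_quad/1e6:6.1f} MPa")
```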
Comparing the first two terms in this equation, we see that the linear equation (17) is only appropriate when:
\[|P_{s}-P_{a}|\ll\Delta P^{*}=\left|\left(\frac{1}{\rho_{s}^{o}}-\frac{1}{ \rho_{l}^{o}}\right)\left(\frac{1}{2\rho_{s}^{o}K_{s}}-\frac{1}{2\rho_{l}^{o} K_{l}}\right)^{-1}\right|\approx 420\text{MPa}. \tag{23}\]
We can perform a similar analysis for freezing in an open system (Figure 1b). We let \(P_{l}=P_{0}=P_{a}\), and assume that the ice exerts an isotropic pressure, \(P_{s}\) on its surroundings. Figure 2b compares the prediction of the Clapeyron equation (1) (red, dashed) with that obtained when we keep the extra quadratic terms (20) (blue). We are not aware of any experimental data precise enough to validate the theory [11]. However, here, the higher-order theory agrees well with the linear Clapeyron equation down to large undercoolings. The difference is dominated by the term in equation (20) that is quadratic in undercooling. Thus, to excellent approximation (Figure 2b, orange dash-dotted):
\[P_{s}-P_{a}=\frac{\rho_{s}^{o}q_{m}(T_{m}-T)}{T_{m}}-\frac{\rho_{s}^{o}(c_{l}^ {p}-c_{s}^{p})}{2T_{m}}(T_{m}-T)^{2}. \tag{24}\]
Comparing terms on the right-hand side, shows that we only recover the linear Clapeyron equation (1) if
\[|T_{m}-T|\ll\Delta T^{*}=\left|\frac{2q_{m}}{c_{l}^{p}-c_{s}^{p}}\right|\approx 320\,\text{K}. \tag{25}\]
This requirement is certainly satisfied for most terrestrial temperatures. Thus, there is some justification for use of the linear Clapeyron equation down to relatively large undercoolings to model this type of freezing scenario.
To summarize, our results suggest that the linearized Clapeyron equation will be valid, provided that \(|P_{l}-P_{0}|\) and \(|P_{s}-P_{0}|\) are both small relative to \(\Delta P^{*}\), while \(|T_{m}-T|\ll\Delta T^{*}\). At larger pressures/undercoolings, the quadratic terms in equation (20) should be included.
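Both validity scales follow from Table 1; the following sketch (an illustrative check using those assumed values) reproduces \(\Delta P^{*}\approx 420\) MPa and \(\Delta T^{*}\approx 320\) K.

```python
# Evaluate the validity scales of Eqs. (23) and (25) from the assumed Table 1 values.
rho_s, rho_l = 917.0, 997.0        # kg/m^3
K_s, K_l = 11.33e9, 1.96e9         # Pa
q_m, T_m = 334e3, 273.15           # J/kg, K
cp_s, cp_l = 2093.0, 4184.0        # J/(kg K)

dP_star = abs((1/rho_s - 1/rho_l) / (1/(2*rho_s*K_s) - 1/(2*rho_l*K_l)))
dT_star = abs(2*q_m / (cp_l - cp_s))

print(f"Delta P* = {dP_star/1e6:.0f} MPa")   # ~420 MPa
print(f"Delta T* = {dT_star:.0f} K")         # ~320 K
```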
### Conclusions
In conclusion, we have derived the linear Clapeyron equation describing equilibrium between water and ice, clearly laying out all the assumptions involved. In particular, this equation is derived using a Taylor expansion around a reference temperature and pressure, and ignoring higher-order terms. Thus, it is only valid for a range of pressures and temperatures around the reference conditions. Fortunately, for most naturally-occurring terrestrial freezing scenarios, the linear form of the Clapeyron equation should be adequate. For example, at the base of a glacier, pressures are typically close to hydrostatic, and thus \(O\)(MPa) [41] - this is small enough to lie within the range of applicability of the Clapeyron equation. However, more extreme conditions are expected in extraterrestrial settings [e.g. 40; 42]. There, the linearized Clapeyron equation will not accurately predict melting temperatures, which could
Figure 2: Evaluating the accuracy of the Clapeyron Equation. A) The pressure of ice in bulk ice/water equilibrium in a closed cavity (\(P_{l}=P_{s}=-\sigma_{nn}\)), as a function of undercooling. The black, dotted curve shows experimental data [40]. B) The stress exerted by ice in an open pore, as a function of undercooling. Both figures show the linear Clapeyron equation (dashed red), full 2nd-order theory (Eq. 20, blue), and simplified 2nd-order theory (Eqs. 22,24, orange dash-dotted).
lead to significant errors in models of ice dynamics (as predicted flow rates are typically based on the departure from bulk melting conditions [24]). In this case, the accuracy of the Clapeyron equation can be improved by retaining higher-order terms in the Taylor expansion.
We have also demonstrated the correct form of the Clapeyron equation for the case where ice is anisotropically stressed. This is identical to the isotropic form of the Clapeyron equation, but with the ice pressure, \(P_{s}\), replaced by the normal stress exerted by ice on its surroundings, \(-\sigma_{nn}\). One consequence of this is that differently stressed faces of ice (for example in a polycrystal) will have different melting temperatures.
While our analysis has focused on ice and water, the results should apply to any processes involving solid/liquid equilibrium, for example in the melting and deformation of rocks in geological processes (e.g. [43]). Note, however, that there are two key further effects that will likely be important to include in real-world applications. Firstly, we have neglected the presence of solutes, which are known to strongly affect the solid/liquid equilibrium [44; 45; 46; 47]. Secondly, we have ignored the surface energy of the ice [8; 48]. However, we anticipate that both of these effects can be incorporated into the results presented here, by including colligative and capillary effects in the analysis above.
###### Acknowledgements.
RWS and DG acknowledge support from an ETH Research Grant (Grant No. ETH-38 18-2), and from the Swiss National Science Foundation (Grant No. 200021-212066); AWR received funding from NSF-2012468 and a UO Faculty Research Award.
|
2307.09950
|
Prompting for Automatic Log Template Extraction
|
Log parsing, which involves log template extraction from semi-structured logs
to produce structured logs, is the first and the most critical step in
automated log analysis. However, current log parsers suffer from limited
effectiveness for two reasons. First, traditional data-driven log parsers
solely rely on heuristics or handcrafted features designed by domain experts,
which may not consistently perform well on logs from diverse systems. Second,
existing supervised log parsers require model tuning, which is often limited to
fixed training samples and causes sub-optimal performance across the entire log
source. To address this limitation, we propose DivLog, an effective log parsing
framework based on the in-context learning (ICL) ability of large language
models (LLMs). Specifically, before log parsing, DivLog samples a small amount
of offline logs as candidates by maximizing their diversity. Then, during log
parsing, DivLog selects five appropriate labeled candidates as examples for
each target log and constructs them into a prompt. By mining the semantics of
examples in the prompt, DivLog generates a target log template in a
training-free manner. In addition, we design a straightforward yet effective
prompt format to extract the output and enhance the quality of the generated
log templates. We conducted experiments on 16 widely-used public datasets. The
results show that DivLog achieves (1) 98.1% Parsing Accuracy, (2) 92.1%
Precision Template Accuracy, and (3) 92.9% Recall Template Accuracy on average,
exhibiting state-of-the-art performance.
|
Junjielong Xu, Ruichun Yang, Yintong Huo, Chengyu Zhang, Pinjia He
|
2023-07-19T12:44:59Z
|
http://arxiv.org/abs/2307.09950v3
|
# Prompting for Automatic Log Template Extraction
###### Abstract
Log parsing, the initial and vital stage in automated log analysis, involves extracting log templates from semi-structured logs to generate structured logs. Nonetheless, current log parsers are limited in effectiveness due to two primary reasons. Firstly, traditional data-driven log parsers heavily rely on heuristics or manually crafted features provided by domain experts, which may not consistently yield optimal performance when applied to diverse log systems. Secondly, existing deep learning-based log parsers necessitate model tuning, which is typically confined to training samples and leads to suboptimal performance across the entire log source. To overcome these limitations, we propose a precise log parsing framework named LogDiv, which leverages the in-context inference capability of large language models. Specifically, LogDiv extracts the hidden semantics from multiple log examples through prompt demonstrations. Without the need for model tuning, LogDiv can directly generate a log template for the target log message by leveraging the semantics provided in the prompt context. Additionally, we introduce a simple yet effective prompt format for extracting the output and enhancing the quality of the generated log templates. To validate the performance of LogDiv, we conducted experiments using 16 widely-used public datasets. The results show that LogDiv achieves state-of-the-art performance with an average parsing accuracy of 97.7%, precision template accuracy of 88.1%, and recall template accuracy of 90.8%.
## I Introduction
Modern software systems, including online services such as Google Search and Bing Search, and system software such as Android and Windows, have become an essential part of our lives, serving millions of users globally. These systems produce valuable software logs continuously, providing a rich resource for maintainers to perform downstream tasks, such as anomaly detection [1, 2, 3, 4], root cause analysis [5, 6, 7], and program verification [8, 9]. The first step of log analysis is log parsing, _i.e._, converting semi-structured log messages into structured log messages. Manual log parsing is impractical due to the enormous volume of logs generated [10]. Therefore, numerous data-driven automatic log parsers have been proposed, including traditional unsupervised parsers [11, 12, 13, 14] and DL-based supervised parsers [15, 16, 17]. Existing log parsers primarily distinguish between _constants_ and _variables_ in a log message without the guide of logging statements. As shown in Fig. 1, constants are the tokens written by developers in the logging statements (_e.g._, a description of a software operation), while variables are tokens that record run-time environments (_e.g._, a directory path). These constants make up a _log template_, while variables are treated as _parameters_.
However, existing log parsers still suffer from limited robustness [17, 18], leading to unsatisfactory accuracy on diverse logs for two reasons: _First_, unsupervised log parsers (_e.g._, Logram [13] and Drain [14]) utilize specially designed features or heuristics (_e.g._, n-gram, prefix tree) based on domain knowledge to extract common patterns for log parsing. As a result, they often fall short in log sources whose template design does not match well with the handcrafted features. For example, Drain [14] assumes that leading tokens are constants, and logs from the same template share the same length. However, these assumptions are inadequate for logs in Proxifier [19], _e.g._, "ss.bdimg.com:80 close, 89652 bytes (87.5 KB) sent, 599249 bytes (585 KB) received, lifetime 37:51", where the prefix tokens can be variables and log length is flexible due to the optional tokens in brackets. _Second_, existing supervised log parsers (_e.g._, LogPPT [17]) typically need to train or tune models to mine the data characteristics in target log samples. This process may restrict parsers to the training data, leading to sub-optimal performance across the overall dataset, especially when there is a large sample diversity in the log source. For instance, LogPPT requires sampling logs to mine semantics within their variable tokens, and then conducting supervised model tuning to learn how to extract variables from logs. However, for datasets such as Mac [19], with large variations between templates (_e.g._, token lengths ranging from 1 to 78, and the number of variables ranging from 0 to 36) and unbalanced distribution (_e.g._, log frequency
Fig. 1: A simple example of log parsing. The logging statements are typically not accessible in industrial scenarios.
ranging from 1 to 166, and more than half of the log templates contain only one log message), the tuned model might be constrained to the features of the sampled logs and fail to generalize across the entire log source. In addition, several recent studies have indicated that logging statements are ever-changing during software development, resulting in unstable log templates [20, 4, 21]; this instability amplifies the data-diversity problem of training-based methods and further degrades their effectiveness. Therefore, there is a pressing need for more effective log parsing techniques.
To this end, this paper proposes LogDiv, an effective and tuning-free log parsing framework that precisely divides constants and variables in log messages. LogDiv adopts in-context learning (ICL), which utilizes the analogy ability [22] of large language models (LLMs) to learn to generate the expected log templates from related log examples in the prompt demonstration. Specifically, LogDiv employs GPT-3 [23], a pre-trained LLM that first exhibited ICL ability, as the backbone. During log parsing, LogDiv first selects a small fraction of logs by explicitly maximizing their diversity and constructs a candidate set of potential prompt demonstration examples. When presented with a log message, it retrieves the five most similar log samples from the set and organizes them in a special prompt format, which is designed to enhance the LLM's ICL performance on the log parsing task. By incorporating both the log examples and their corresponding templates in the prompt, LogDiv generates the relevant log template for the query through LLM inference, without requiring model tuning. As LogDiv tailors the demonstration example selection to each log message and directly extracts log semantics from the prompt context in a tuning-free manner, it can generate templates based on the data's characteristics without being limited to a training set, ensuring high-quality log parsing.
LogDiv has been evaluated on 16 public datasets from LogPAI [18]. The results show that LogDiv achieves (1) 97.7% Parsing Accuracy, (2) 88.1% Precision Template Accuracy, and (3) 90.8% Recall Template Accuracy when using prompt examples from the same log source as the target log message, outperforming the current best method by 6.1%, 20.7%, and 16.7%, respectively. Moreover, LogDiv has a low standard deviation of 3.86%, 12.24%, and 9.16% on the three metrics, exhibiting stable performance across all benchmarks.
This paper makes the following main contributions:
* It proposes LogDiv, the first general log parsing framework that exploits the in-context learning ability of an LLM to perform high-quality and robust log parsing by mining common patterns in prompt examples.
* It designs a general prompt format to explicitly control the output structure and ensure the quality of the generated log template from LLMs, which can be generalized to enhance other approaches with in-context learning.
* It presents the evaluation of LogDiv on 16 public datasets using three different metrics. The results show that LogDiv outperforms all existing log parsing tools.
## II Motivation And Background
### _Log Parsing_
Log parsing is the initial and most critical step in automated log analysis [10, 18], which aims to extract log templates from semi-structured logs to produce structured logs. Specifically, as shown in Fig. 1, in log parsing, a log parser is required to first extract the log headers, typically consisting of timestamps (_i.e._, specific date and time) and verbosity level (_e.g._, _ERROR_, _INFO_, and _DEBUG_). Since log headers are automatically generated by the logger, their format is fixed and can usually be extracted easily. Therefore, log parsers primarily focus on extracting constants from log messages as log templates. An intuitive method to extract constants from log messages is to manually design regular expressions for each log template. However, this method is impractical for modern software systems due to the continuously increasing volume of log templates in industrial scenarios [20, 4], resulting in an unaffordable effort to maintain log parsing rules [10]. Therefore, numerous data-driven log parsers have been proposed, including unsupervised parsers [11, 12, 13, 14] and supervised parsers [15, 16, 17]. However, they suffer from unsatisfactory effectiveness in diverse log sources due to two reasons in design:
_First_, unsupervised parsers utilize specially designed features or heuristics based on domain knowledge for pattern extraction, leading to failure on logs that do not match well with the features. For example, He _et al._[14] designed Drain, an advanced unsupervised log parser, based on two assumptions: (1) logs from the same template probably share the same length, and (2) the prefix tokens are likely to be constants. However, for logs in Proxifier [18] such as "ss.bdimg.com:80 close, 89652 bytes (87.5 KB) sent, 599249 bytes (585 KB) received, lifetime 37:51" from the E8 event, neither assumption holds (_i.e._, the prefix tokens are variables, and the tokens in brackets are optional). Consequently, Drain only achieves a Grouping Accuracy (GA) of 52.7% on Proxifier, much lower than its average GA of 86.5% [20].
_Second_, supervised parsers typically require model tuning on target log samples to learn features, making them prone to being limited by the training data and resulting in sub-optimal performance on logs with large sample diversity. For instance, Le _et al._[17] proposed LogPPT, a log parser obtained by tuning a pre-trained language model, RoBERTa. LogPPT samples several logs to mine semantics within their variable tokens and conducts model tuning to learn how to extract variables from logs in a fully supervised manner. However, for a dataset like Mac, which has large variation between templates (_e.g._, the E253 event contains only one token without variables, while the E277 event contains 78 tokens with 36 variables) and an unbalanced log distribution (_e.g._, log frequency ranging from 1 to 166, with over 50% of templates containing only one log sample), the learned semantics are constrained to the training data and might not be representative of the entire dataset.
Therefore, LogPPT achieves a Parsing Accuracy (PA) of 67.3% on Mac, lower than its average PA of 91.6%. Additionally, recent studies have demonstrated that the logging statements in open-source software are ever-changing [21], and the log templates in industrial services may be unstable [4], which amplifies the data-diversity issue of training-based approaches and further challenges their effectiveness. Therefore, we believe an effective log parsing methodology suitable for diverse logs is needed.
### _Large Language Model and In-Context Learning_
A large language model (LLM) is a large-scale deep learning model consisting of a neural network with numerous parameters (_e.g._, billions of weights or more) and trained on large quantities of unlabelled text corpora via self-supervised learning [24]. LLMs typically adopt the Transformer [25] architecture or its sub-structures, such as the _encoder_ or _decoder_ structure. Common LLMs include BERT [26], GPT-1,2,3 [23, 27, 28], T5 [29], _etc_. They are highly valued in natural language processing (NLP) and are also widely used in multiple domain areas. Since the advent of LLMs, NLP has entered a new learning paradigm: _pre-training & fine-tuning_. People can efficiently fine-tune a pre-trained LLM with task-specific data for multiple downstream tasks, rather than training a language model from scratch at much higher cost [23, 28, 29]. Fine-tuning is far cheaper than pre-training, requiring much fewer computational resources. In the software engineering area, LLMs [23, 30, 31] together with fine-tuning techniques have made tremendous achievements in tasks such as log parsing [17], code review generation [32], fuzzing test case generation [33], automated program repair [34, 35], and root cause analysis [36].
With the significant growth in parameter scale and corpus size, LLMs have become much more powerful. For example, some researchers [37] have found that an LLM's capacity can be further unleashed with the help of a textual prompt. This prompt-based approach can fully exploit the knowledge from the large-scale pre-training corpus to complete various downstream tasks without model fine-tuning. The approach is known as _pre-training, prompting, and predicting_ [37]. For example, when classifying the sentiment of the sentence "I feel good.", we can append a prompt "I am experiencing \([MASK]\) emotion." and ask the LLM to fill the blank \([MASK]\) with a polar word.
Recently, LLMs have demonstrated a new ability to learn contextual information provided in the prompt demonstrations, _i.e._, _in-context learning_ (ICL) ability [22, 23]. Specifically, if a prompt contains (1) a clear instruction specifying a particular task, (2) a few examples with ground-truth labels providing task-specific knowledge, and (3) a query from the same task, the LLM can generate an answer to the query by mining the semantics between the examples and the query based on the instruction. For instance, Fig. 2 illustrates the basic workflow of ICL on sentiment classification. Multiple studies have demonstrated that LLMs perform well on various complex problems under the ICL paradigm, such as fact retrieval [38] and mathematical reasoning [39]. People typically believe that the essence of ICL is that LLMs have gained the ability to perform complex reasoning from a large amount of pre-training data, which enables LLMs to generate the expected answer to the query based on the demonstration examples in the prompt [22].
Since ICL directly uses the knowledge obtained from pre-trained LLMs along with the information in the instruction and prompt demonstration, there is no significant gap between pre-training and downstream tasks [40], making the output more consistent with human expectations. It avoids over-reliance on specific data (_e.g._, overfitting the pattern of dominant log templates in the training set) and enables high-quality predictions on the overall target log sources. Moreover, log parsers constantly face unstable log templates in industrial development [4, 21], which forces training-based parsers to be retrained periodically on new log data. However, LogDiv only needs to maintain a candidate set of log examples for ICL inference, without requiring retraining. To this end, we propose LogDiv, the first attempt to complete the log parsing task with the help of ICL.
## III LogDiv
### _Overview_
Our method, LogDiv, performs high-quality log parsing under the in-context learning paradigm. Specifically, it uses GPT-3 [23], a pre-trained LLM that excels in multiple natural language tasks, as the backbone. During log parsing, LogDiv first samples a small fraction of logs by maximizing their diversity and attaches a ground-truth label to each sample to construct a candidate set of prompt demonstration examples. When given a new log message as a query, LogDiv extracts the five most similar examples from the candidate set, arranges them in ascending order of similarity, and assembles them into a prompt in a special format for LLM inference. With the guidance of the log examples along with their log templates provided in the prompt, LogDiv generates the query's corresponding log template without requiring model tuning. The workflow of LogDiv is illustrated in Fig. 3, and we will introduce the details in the following sections.
Fig. 2: The workflow of ICL: using a sentiment classification case as example.
### _Problem Definition_
This paper explicitly defines the log parsing task as a log template generation task for log messages. Specifically, for a raw log message \(x\) containing \(n\) tokens after tokenization (denoted as \(x=[t_{1}^{r},t_{2}^{r},...,t_{n}^{r}]\)) as input, our proposed method, LogDiv, is required to generate a sequence \(y\) consisting of \(n\) tokens (denoted as \(y=[t_{1}^{p},t_{2}^{p},...,t_{n}^{p}]\)) within the locator pair (defined in Sec. III-G) as the log template of \(x\). The difference between \(x\) and \(y\) is that all variables in \(x\) are substituted by the wildcard \(\langle*\rangle\) in \(y\). The relationship \(\mathcal{F}\) between \(t_{i}^{r}\) and \(t_{i}^{p}\) can be written as follows:
\[t_{i}^{p}=\mathcal{F}(t_{i}^{r})=\left\{\begin{array}{ll}\text{``}\langle* \rangle\text{''}&\text{if }t_{i}^{r}\text{ is variable}\\ t_{i}^{r}&\text{if }t_{i}^{r}\text{ is constant}\end{array}\right. \tag{1}\]
For example, as shown in Fig. 1, for a raw log message "Setting block size to 1919810", the generated log template is "Setting block size to \(\langle*\rangle\)". In the online parsing (inference) stage, the generated sequence will be considered as the log template, and the different tokens between the raw log and the generated sequence will be served as parameters.
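To make the relationship in Eq. (1) concrete, the following sketch (illustrative only, and assuming whitespace tokenization with one token per wildcard, consistent with the equal-length definition above) checks a raw log message against a generated template and recovers its parameters.

```python
def match_template(raw_log: str, template: str):
    """Token-by-token check of Eq. (1): constant tokens must match exactly,
    while each '<*>' wildcard absorbs one variable token.
    Returns the extracted parameters, or None if the template does not parse the log."""
    raw_tokens, tpl_tokens = raw_log.split(), template.split()
    if len(raw_tokens) != len(tpl_tokens):
        return None
    params = []
    for raw, tpl in zip(raw_tokens, tpl_tokens):
        if tpl == "<*>":
            params.append(raw)          # variable position
        elif tpl != raw:
            return None                 # constant mismatch: not "correctly parsed"
    return params

print(match_template("Setting block size to 1919810", "Setting block size to <*>"))
# ['1919810']
```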
### _Model Backbone_
The performance of the large language model is a key factor in the success of in-context learning. Considering that log messages are semi-structured sentences that are mainly composed of natural language descriptions (_i.e._, the log template), we chose GPT-3 [23], an LLM that learned an extremely large amount of semantic information from the open-source corpus, as the backbone for LogDiv. We will not delve into the architectural details of GPT-3 in this paper. However, it is worth noting that from GPT-3 onwards, it was gradually discovered that LLMs emerged with the ability to perform ICL, which enables them to extract new knowledge from a small number of demonstration examples and exploit it to generate an appropriate answer to the query in the prompt [22]. This is the reason that motivates us to adopt GPT-3 as our backbone, as stated in Sec. II-B. Since LogDiv utilizes the LLM in a black-box manner, the backbone can be replaced at will as long as the model or a relevant API is accessible.
### _Candidate Sampling_
Before log parsing starts, LogDiv first needs to sample a small fraction of log messages from the target log source as candidate demonstration examples. To enhance the robustness of LogDiv, we use DPP [41], a classic algorithm for improving sampling diversity, to select a small, fixed fraction of logs from the raw log dataset that achieves maximum diversity for constructing the candidate set. By explicitly optimizing the diversity of the candidate set, LogDiv can achieve an equitable selection of log message samples from different templates, thereby reducing the potential risk of inductive bias that may arise from unbalanced candidate samples for the LLM. Moreover, using DPP for sampling yields stable results for each given log source, which mitigates the impact of randomness in the entire LogDiv workflow.
Fig. 3: The basic workflow of LogDiv framework. The candidate set needs to be constructed before log parsing.
The detailed algorithm is shown in Algo. 1. Specifically, LogDiv first encodes the log samples in the provided log dataset \(\mathcal{X}=\{x_{i}\}_{i=1}^{N}\) from text sequences \(x_{i}\) to vectors \(v_{i}\) via the OpenAI embedding model, and then concatenates them into an embedding matrix \(V=\epsilon(\mathcal{X})=[v_{1},v_{2},...,v_{N}]\). Then, LogDiv calculates the similarity matrix \(L=\text{sim}(V,V)\) (_i.e._, the _kernel matrix_ in the original paper [41]). Based on the similarity matrix \(L\), LogDiv can iteratively select samples that _maximize the marginal gain in total dissimilarity_ for the current candidate set in a greedy manner until the maximum candidate-set capacity \(K\) is reached. In our implementation, we use the cosine similarity to compare any two vectors \(\mathbf{v}_{i}\) and \(\mathbf{v}_{j}\), which is shown in Eq. 2
\[\text{sim}(\mathbf{v}_{i},\mathbf{v}_{j}):=\cos(\mathbf{v}_{i},\mathbf{v}_{j})=\frac{\mathbf{v}_{i}^ {T}\mathbf{v}_{j}}{\|\mathbf{v}_{i}\|_{2}\|\mathbf{v}_{j}\|_{2}}, \tag{2}\]
Therefore, we can obtain a stable candidate set \(C\) for the following in-context inference stage.
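The sketch below is a simplified, greedy stand-in for the candidate construction of Algo. 1 (it is not the exact DPP MAP routine of [41]): given precomputed embeddings, it builds the cosine-similarity matrix of Eq. 2 and repeatedly adds the log that is least similar to the already-selected candidates.

```python
import numpy as np

def cosine_similarity_matrix(V):
    """Pairwise cosine similarity (Eq. 2) for a row-wise embedding matrix V."""
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    return Vn @ Vn.T

def greedy_diverse_candidates(V, K):
    """Greedy stand-in for DPP sampling: at each step add the log whose similarity
    to the already-selected candidates is smallest, i.e. the sample giving the
    largest marginal gain in dissimilarity. Returns indices into the raw log list."""
    L = cosine_similarity_matrix(V)
    n = len(V)
    selected = [int(np.argmin(L.sum(axis=1)))]        # start from the most isolated sample
    while len(selected) < min(K, n):
        remaining = [i for i in range(n) if i not in selected]
        # diversity score: distance to the closest already-selected candidate
        scores = [1.0 - max(L[i, j] for j in selected) for i in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected
```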
### _Example Selection_
During log parsing, LogDiv is designed to select several log examples from the candidate set as the prompt demonstration for each target log message (_i.e._, query). To strengthen the accuracy of LogDiv, we adopt \(k\)NN, a simple nearest-neighbor algorithm, to select the five log examples most similar to the query as demonstration examples. By deliberately selecting log samples that are more similar to the query as examples, LogDiv can learn the semantics and patterns of the query from data that is much closer to the query's characteristics. Moreover, the examples selected by \(k\)NN are more similar to each other, resulting in a more compact data distribution in the prompt, which makes it easier for the LLM to learn the semantics of the logs in the context.
The detailed algorithm is shown in Algo. 2. Specifically, LogDiv also begins by encoding all log candidates \(x_{i}\in\mathcal{C}\) as well as the query \(x_{q}\) into embedded vectors \(v_{i}\) and \(v_{q}\). Then, for each vectorized query \(v_{q}\), it calculates the similarity \(\text{sim}(v_{q},v_{i})\) between it and all candidates \(v_{i}\). After iterating over the entire candidate set, LogDiv generates a distance map \(\mathcal{D}\) to record the similarity between the query vector \(v_{q}\) and all candidate vectors \(v_{i}\). Subsequently, by querying the distance map \(\mathcal{D}\), the top-\(k\) most similar log samples can be extracted as demonstration examples \(\mathcal{E}\). In our implementation, the similarity metric \(\text{sim}(v_{i},v_{j})\) is also the cosine similarity shown in Eq. 2.
### _Example Permutation_
To further strengthen the parsing performance, LogDiv deliberately arranges the selected examples \(\mathcal{E}\) in _ascending order_ of their similarity to the query \(\text{sim}(v_{q},v_{i})\). This is because recent research [38] suggests that ICL exhibits a _recency bias_, whereby the LLM is more susceptible to the inductive bias of the examples closer to the query when generating an answer prediction. By ordering examples based on similarity, the example closer to the query is more similar to the query, making it easier for the LLM to learn the hidden relations between the last example and its label, and to generate a prediction closer to the query's ground-truth label. The advantages of this approach will be further elaborated in the experimental results in Sec. V-B.
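A compact sketch of the selection and permutation steps is given below (illustrative only; the embeddings are assumed to come from the same encoder used for candidate sampling): it retrieves the \(k\) nearest candidates as in Algo. 2 and returns them in ascending order of similarity, so the most similar example ends up closest to the query in the prompt.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors (Eq. 2)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_and_order_examples(v_query, candidate_vecs, candidates, k=5):
    """kNN retrieval (Algo. 2) followed by the ascending-similarity permutation."""
    scored = sorted(
        zip((cosine(v_query, v) for v in candidate_vecs), candidates),
        key=lambda pair: pair[0],          # ascending similarity
    )
    return [example for _, example in scored[-k:]]   # k most similar, least similar first
```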
### _Prompt Format_
To aid the LLM in comprehending the log parsing task, a prompt consisting of a clear and concise instruction along with several relevant demonstration examples is necessary. However, providing a simple prompt without output restriction could be impractical for log parsing since the LLM may produce extraneous text beyond the query answer, potentially compromising parsing quality. For instance, in the sentiment classification task depicted in Fig. 2, the output might include explanatory content, such as "It is negative. Because awful is not a positive word", whereas only the word "negative" is relevant to the task. Thus, even if the anticipated response is present in the generated text, extracting it precisely is still challenging.
To address this, we include a locator pair ("\(\langle\)START\(\rangle\)" and "\(\langle\)END\(\rangle\)") in the prompt demonstration to limit the answer we require within the locators. Specifically, in LogDiv, each example's log template is enclosed by the locator pair. With the
Fig. 4: An example of the complete prompt. The yellow block is fixed instruction, while the blue and green block represent demonstration examples and the query, respectively.
guidance of the restricted demonstration format, the LLM is inclined to generate the log template within the same locator pair via analogy [22]. Then, LogDiv can simply use regular expressions to extract the expected log template from the output. An example of a complete prompt is presented in Fig. 4, where the last example is the one most similar to the query, and the expected raw output is "\(\langle\)START\(\rangle\) JVM with ID: jvm_\(\langle*\rangle\) given task: attempt_\(\langle*\rangle\) \(\langle\)END\(\rangle\)". Notably, even with locators, the model often generates redundant text after the expected answer. However, we can easily extract the expected answer with the help of the locators and simply ignore the rest of the redundant text.
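The prompt assembly and answer extraction can be sketched as follows; the instruction wording and tag strings are illustrative assumptions loosely based on Fig. 4, not the verbatim prompt used by LogDiv.

```python
import re

# Instruction wording is assumed for illustration, cf. Fig. 4.
INSTRUCTION = ("For each log after the <prompt> tag, extract one log template "
               "between the <START> and <END> tags.")

def build_prompt(examples, query):
    """examples: list of (log, template) pairs in ascending similarity order."""
    lines = [INSTRUCTION]
    for log, template in examples:
        lines.append(f"<prompt>: {log}\n<extraction>: <START> {template} <END>")
    lines.append(f"<prompt>: {query}\n<extraction>: ")
    return "\n".join(lines)

def extract_template(raw_output):
    """Keep only the text enclosed by the locator pair; ignore any trailing text."""
    match = re.search(r"<START>(.*?)<END>", raw_output, re.DOTALL)
    return match.group(1).strip() if match else raw_output.strip()

print(extract_template("<START> JVM with ID: jvm_<*> given task: attempt_<*> <END> extra text"))
```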
## IV Experiment Setup
### _Research Question Design_
We conducted extensive experiments on 16 public datasets to answer the following research questions:
* RQ1: How effective is LogDiv?
* RQ2: How do different settings affect LogDiv?
Specifically, RQ1 investigates LogDiv's parsing effectiveness (_i.e._, accuracy and robustness [17, 18]) on various log sources through comparison to other cutting-edge log parsers. RQ2 further discusses the contribution of each component of LogDiv through ablation study. Furthermore, it also explores the impact of different configurations through comparative experiments.
### _Environment and Implementation_
In our experiments, we utilized HTTP requests to invoke the OpenAI APIs and interact with the LLM (_i.e._, GPT-3 curie) and the encoder (_i.e._, text-search-babbage-query-001) to obtain the raw output response for the original prompt. Additionally, we employed Python 3.9 to implement Algo. 1, Algo. 2, the complete prompt construction, and various evaluation scripts on a local machine with Ubuntu 20.04.5 LTS.
### _Datasets_
Our experiments are conducted on 16 public log datasets from LogPAI [18], the most widely-used benchmark in log analysis. The LogPAI datasets cover a variety of log types, including distributed systems, standalone software, supercomputers, PC operating systems, mobile systems, and microservices. In each dataset, all log messages are labeled with ground-truth log templates, each with a unique Event ID. However, Kan _et al._[42] recently pointed out that there are multiple labeling errors in the LogPAI datasets.
### _Metrics_
Following recent studies [17, 42], our evaluation uses three metrics: Parsing Accuracy (PA), Precision Template Accuracy (PTA), and Recall Template Accuracy (RTA), where the last two metrics are also known as Template Accuracy (TA) [42]. Specially, we use PA to evaluate the parsing performance at the message level and use PTA and RTA to evaluate the parsing performance at the template level. The concepts of these metrics are as follows:
#### Iv-D1 Parsing Accuracy (PA)
Parsing Accuracy (PA), also known as Message Level Accuracy (MLA) [15], is defined as the ratio of "correctly parsed" log messages over the total number of log messages, where a log is considered to be "correctly parsed" if and only if all constants and variables are exactly divided in the log template. Since PA is not related to the number of templates, but only to the total number of parsed log messages, we use PA as a message-level evaluation metric to assess the most basic parsing capability of the parsers.
#### Iv-D2 Template Accuracy (TA)
Template Accuracy (TA) is proposed by Kan _et al._[42], which gives template-level guidance of parsing quality. Specifically, TA comprises two metrics, Precision Template Accuracy (PTA) and Recall Template Accuracy (RTA). PTA measures the ratio of correctly identified templates to the total number of identified templates, while RTA measures the ratio of correctly identified templates to the total number of ground-truth templates. The concept of "correctly identified" templates means that all log messages from this log template are "correctly parsed" (defined in the PA context). Therefore, it is obvious that PTA and RTA are more stringent metrics than PA, and we aim to use them to showcase the strong parsing ability of LogDiv.
Additionally, we do not continue to adopt Grouping Accuracy (GA), which was widely used for unsupervised parser evaluation in the past. This is mainly due to two reasons. _Firstly_, GA has a broader definition of "correct parsing," which reduces its reference value for supervised parsers. Specifically, GA only requires that log messages belonging to the same template be correctly clustered, without considering whether constants and variables in the logs are distinguished correctly. _Secondly_, although GA aims to evaluate the clustering ability of logs at the _template_ level, its value is affected by the number of _messages_ in each cluster, rather than the total number of templates [42]. Compared to GA, TA provides more effective evaluation capabilities at the template level.
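A simplified reference implementation of the three metrics, following the definitions above (it assumes one predicted template string per log message and counts a template as correctly identified only when all of its messages are correctly parsed), is sketched below.

```python
from collections import defaultdict

def evaluate(preds, truths):
    """Simplified reference implementation of PA / PTA / RTA as defined above.
    preds/truths: parallel lists with one template string per log message."""
    # PA: fraction of messages whose predicted template equals the ground truth
    pa = sum(p == t for p, t in zip(preds, truths)) / len(truths)

    by_truth, identified = defaultdict(list), set(preds)
    for p, t in zip(preds, truths):
        by_truth[t].append(p == t)

    # a template is "correctly identified" iff all of its messages are correctly parsed
    correct = {t for t, oks in by_truth.items() if all(oks)}
    pta = len(correct) / len(identified)   # over all identified (predicted) templates
    rta = len(correct) / len(by_truth)     # over all ground-truth templates
    return pa, pta, rta
```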
### _Baselines_
We selected LenMa [43], Spell [12], Drain [14], and Logram [13], the top-performing unsupervised log parsers, and LogPPT [17], the current state-of-the-art supervised log parser, as our baselines for comparison. Specifically, these unsupervised log parsers are based on handcrafted features, _i.e._, log length, longest common subsequence, prefix tree, and n-gram, while LogPPT fine-tunes a pre-trained language model, RoBERTa [44], and adapts it to log parsing. Notably, supervised parsers typically perform better than unsupervised parsers, and the only parser with which LogDiv can be compared in an _apples-to-apples_ manner is LogPPT. For fairness, we adopt all baselines' implementations from their public repositories [13, 17, 18] without changing any settings or hyperparameters. Consequently, we have reproduced all baselines' results, which are consistent with previous research [17].
### _RQ1: How effective is LogDiv?_
This section compares the accuracy and robustness of LogDiv with other state-of-the-art log parsers on 16 publicly available log datasets. The study evaluates Parsing Accuracy (PA), Precision Template Accuracy (PTA), and Recall Template Accuracy (RTA) to demonstrate the accuracy of the parsers. Following the previous research [17, 18], the distribution of metrics across the 16 datasets is also reported to assess the methods' robustness.
It is also worth noting that LogDiv and LogPPT [17] require labeled log samples for supervision. Specifically, LogDiv uses DPP to extract 200 labeled logs as candidates and selects five candidates for prompting, and LogPPT uses the Adaptive Random Testing algorithm to extract 32 labeled logs as training samples. Therefore, they were evaluated on a test set consisting of the remaining data after excluding these samples. For the unsupervised parsers, we use the complete set of each dataset as the test set.
#### Iv-A1 Accuracy
Accuracy is the most critical characteristic for evaluating the effectiveness of a log parser [18]. Even minor parsing errors can have a significant impact on the performance of downstream log analysis tasks [45]. In this part, we evaluate the accuracy metrics of each log parser on each dataset, as well as its overall average. These results show the global strengths and weaknesses of each parser on every metric, as well as their relative performance on each dataset.
The results are shown in Table I. To visualize the advantage of LogDiv, we mark in bold the best result for each evaluation metric on each dataset, as well as the best average value across all datasets. It is clear that LogDiv demonstrates state-of-the-art Parsing Accuracy (PA), Precision Template Accuracy (PTA), and Recall Template Accuracy (RTA) on all 16 LogPAI log datasets, outperforming the recently proposed best DL-based parser, LogPPT [17]. Specifically, regarding the widely used PA metric, LogDiv achieved an average PA of 97.7% across 16 datasets, with 100% PA on 4 datasets and PA exceeding 95% on 13 datasets, and no dataset had a PA below 80%. This outperforms any previous log parser by far. For instance, LogDiv outperformed the current best log parser LogPPT on 14 out of 16 datasets in terms of PA, with a PA gap of less than 4% on the only dataset where it was not superior, and another dataset tied at 100% accuracy. Additionally, LogDiv's advantage is more evident in the PTA and RTA metrics. It achieved an average PTA and RTA of 88.1% and 90.8% across 16 datasets, with 9 and 10 datasets having PTA and RTA not lower than 90%, respectively. This performance significantly exceeds that of LogPPT by 21.0% and 16.5%, respectively, as LogPPT only achieved an average PTA and RTA of 67.4% and 74.1%. These findings demonstrate the significant accuracy advantage of LogDiv over current parsers.
Furthermore, LogDiv's accuracy advantage over other traditional log parsers is even more apparent. For instance, compared to the strongest traditional log parser, Drain, LogDiv exceeded its average PA, PTA, and RTA by 49.8%, 41.9%, and 50.3%, respectively. We believe that this phenomenon is due to the fact that traditional log parsers, which are based on expert-designed heuristic rules or handcrafted features, struggle to correctly distinguish between variables and constants in different types of log data, and can only coarsely cluster log messages with the same template. Therefore, to fairly demonstrate LogDiv's accuracy, we primarily compared it with the current SOTA supervised log parser, LogPPT.
#### V-A2 Robustness
Robustness is an important characteristic for measuring the versatility of a log parser [18]. A practical log parser should perform consistently on logs generated in multiple production environments [18]. In this part, we evaluate the robustness of each log parser by analyzing the distribution of each metric across the 16 log datasets, which reveals how stable its parsing ability is on different datasets.
The results are shown in Fig. 5. LogDiv outperforms existing log parsers by exhibiting the narrowest distribution range for the PA, PTA, and RTA metrics, indicating strong robustness across various datasets. Specifically, LogDiv has standard deviations of only 3.86%, 12.24%, and 9.16% on the three metrics, respectively, outperforming the currently most robust log parser, LogPPT, whose standard deviations are 9.2%, 16.56%, and 14.57%. Compared to traditional unsupervised log parsers, LogDiv's advantage is even more pronounced. We attribute this robustness to two factors: (1) LogDiv learns the characteristics of each log source from the related log examples in the prompt and generates the expected template from them, without manual feature selection or heuristic design; and (2) since LogDiv neither trains nor fine-tunes a model but leverages the LLM's ICL capability, it is not tied to a particular training set, whose composition could otherwise destabilize performance across log datasets.
### _RQ2: How do different settings affect LogDiv?_
This section conducts ablation experiments to discuss the contributions of each key component and the impact of different configurations in LogDiv. Specifically, we use the Mac dataset as the benchmark for the following experiments, as its complex and diverse data (as illustrated in Sec. II-A) illustrates the contributions of different settings to log parsing better than simpler datasets do.
#### V-B1 Components
As shown in Fig. 3, LogDiv comprises three main components: DPP sampling, \(k\)NN comparison, and prompt locators. To evaluate the individual contribution of each component to LogDiv's log parsing capability, we removed each component in turn and measured its necessity and importance by the resulting accuracy drop. Specifically, we (1) replaced the DPP algorithm with random sampling to construct the candidate set, (2) replaced the \(k\)NN algorithm with random selection to choose demonstration examples, and (3) removed the locators from the prompt demonstrations, relying solely on the first line of the LLM's output as the predicted log template. To mitigate random bias, we repeated each experimental configuration five times and report the mean as the final result.
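To make the roles of the first two components concrete, the sketch below shows how a diversity-promoting candidate set could be drawn with a greedy MAP approximation to DPP inference over log-message embeddings. The function name, the cosine kernel and the embedding shape are illustrative assumptions for this sketch, not LogDiv's actual implementation.

```python
import numpy as np

def greedy_dpp_map(embeddings: np.ndarray, k: int) -> list:
    """Greedy MAP approximation to DPP sampling: repeatedly add the item that
    maximizes the log-determinant of the kernel submatrix, i.e. the diversity
    of the selected set. `embeddings` is an (N, d) array of L2-normalized
    log-message embeddings."""
    kernel = embeddings @ embeddings.T          # cosine-similarity kernel
    n = kernel.shape[0]
    selected = []
    for _ in range(min(k, n)):
        best_idx, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sub = kernel[np.ix_(idx, idx)]
            # A small ridge keeps the determinant numerically stable.
            _, logdet = np.linalg.slogdet(sub + 1e-6 * np.eye(len(idx)))
            if logdet > best_logdet:
                best_idx, best_logdet = i, logdet
        selected.append(best_idx)
    return selected
```

The indices returned by such a routine would then be labeled with their templates to form the candidate set \(\mathcal{C}\), from which \(k\)NN picks the demonstrations for each query.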
The results are shown in Table II. It is clear that removing DPP sampling, \(k\)NN selection, or prompt locators from LogDiv leads to a significant decrease in all three accuracy metrics. For instance, without DPP sampling or prompt locators, LogDiv achieves 9.7% and 19.0% lower PA than the full version, respectively. The reason is that DPP sampling enhances the diversity of the candidate set, providing more related examples to assist template generation, while prompt locators help extract the expected log template from the raw output. Notably, \(k\)NN selection plays the most crucial role in LogDiv; without it, LogDiv can hardly parse at all, achieving only 16.8% PA, 1.8% PTA, and 6.8% RTA. This is because the templates in the Mac dataset are highly dissimilar to one another and each template has only a few log samples.
Fig. 5: Robustness comparison with the cutting-edge log parsers on LogPAI dataset (%).
Fig. 6: Analysis of example number on Mac dataset (%).
Without similar log messages in the prompt context, LLMs can hardly mine valid semantics or common patterns from unrelated log examples by analogy. Thus, we conclude that these components are indispensable for the effective functioning of LogDiv.
#### V-B2 Configurations
In addition to the above-mentioned three key components, there are several configurations that significantly affect LogDiv's parsing performance, _i.e._, (1) the number of examples, (2) the permutation method, and (3) the model backbone. Specifically, we first analyze the impact of LogDiv's performance with different numbers of examples ranging from 1 to 9. Then, we compare the effectiveness of different permutation methods, _i.e._, ascending, descending, and random order. Finally, we discuss how different model backbones in GPT-3 [23] series (_i.e._, Ada, Babbage, and Curie) affect parsing performance. We also repeat each experiment five times and report their mean as the final result.
**Example Number:** As shown in Fig. 6, all metrics show an increasing trend when the number of demonstration examples is less than 5. However, when the number exceeds 5, PA slightly decreases and then stabilizes, while PTA and RTA keep increasing and decreasing, respectively. The reason is that too few examples do not provide enough in-context samples for the LLM to learn log patterns, while too many examples introduce irrelevant log messages that provide no effective guidance for parsing the target log message. Thus, a small number of demonstration examples is sufficient for LogDiv, and adding more does not bring further improvement.
**Example Permutation:** As shown in Fig. 7, ascending order performs better on all metrics than descending and random order. This is consistent with the analysis in Sec. III-E, which suggests that, due to _recency bias_[38], LLMs are more likely to learn from examples that are closer to the query. Hence, placing the examples most similar to the query closest to it enhances parsing accuracy. It is also worth noting that although random order yields a slightly higher PA than descending order, its PTA and RTA are significantly lower. This is because random order occasionally coincides with an ascending order, producing better message-level metrics than descending order; however, owing to its unstable randomness, incorrectly parsed log messages may be spread evenly over many templates, degrading template-level performance. Conversely, the stable ordering of descending order leaves the parsing of log messages belonging to some templates largely unaffected, resulting in slightly higher PTA and RTA.
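As an illustration of how this ordering could be realized, the following sketch selects the \(k\) nearest labeled candidates and lays them out in ascending similarity, so that the most similar demonstration sits right next to the query; the prompt wording and the `<TPL>`/`</TPL>` locator tokens are assumptions made for this sketch rather than LogDiv's exact prompt format.

```python
import numpy as np

def build_prompt(query_emb, query_log, cand_embs, cand_logs, cand_templates, k=5):
    """Order the k most similar demonstrations ascending by cosine similarity,
    so the closest example is adjacent to the query (exploiting recency bias).
    All embeddings are assumed to be L2-normalized numpy arrays."""
    sims = cand_embs @ query_emb                 # cosine similarities to the query
    top = np.argsort(sims)[-k:]                  # k nearest, in ascending similarity
    lines = ["Extract the template of the last log message."]
    for i in top:                                # least similar first, most similar last
        lines.append(f"Log: {cand_logs[i]}")
        lines.append(f"Template: <TPL>{cand_templates[i]}</TPL>")
    lines.append(f"Log: {query_log}")
    lines.append("Template:")
    return "\n".join(lines)
```

Reversing the slice (or shuffling it) would reproduce the descending and random permutations compared in Fig. 7.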
**Model Backbone:** As shown in Table III, accuracy increases with the scale of the LLM used in the LogDiv framework. Specifically, GPT-3 consists of four versions: Ada-350M, Babbage-3B, Curie-13B, and Davinci-175B. Due to limited research resources, we were only able to test LogDiv with the first three versions as backbones. Nevertheless, we believe that with the emergence of more powerful LLMs, LogDiv's log parsing capability will improve further.
## VI Discussion
### _Efficiency and Practicality_
Efficiency is critical for log parsers, especially when dealing with large-scale log data. Typically, researchers measure a log parser's efficiency by the execution time required to finish parsing. However, in this paper we adopt OpenAI's API to interact with the LLM and the encoder, so efficiency cannot be measured reliably because of network latency (_i.e._, the parsing time for each log message is almost exactly twice the network latency). Thus, we can only qualitatively discuss potential efficiency issues in deployment scenarios.
Specifically, LogDiv's time cost consists of (1) constructing the candidate set, (2) selecting demonstration examples, (3) encoding log messages, and (4) LLM inference. The cost of (1) and (2) comes mainly from DPP and \(k\)NN, which are efficient and classic algorithms with time complexities of \(O(K^{2}N)\) and \(O(KM)\), respectively, where \(K\), \(N\), and \(M\) are the sizes of (i) the candidate set \(\mathcal{C}\), (ii) the initial logs from the target source \(\mathcal{X}\), and (iii) the incoming logs in the online parsing stage. The cost of (3) and (4) depends mainly on model inference speed. For large enterprises and organizations that face large volumes of log data, distilling a smaller model from the original LLM or increasing parallel computing resources are potential ways to mitigate the efficiency threat of LLM inference. Furthermore, in industrial development scenarios, LogDiv only needs to collect newly produced logs as candidates for ICL inference; it does not require additional resources to retrain a model on the new log data. Therefore, we believe that the LogDiv framework can still be efficient and practical.
### _Threat to Validity_
We identified the following major threats to validity:
Fig. 7: Analysis of permutation method on Mac dataset (%).
* **Randomness:** Randomness may affect performance in two ways: (1) the randomness of LLMs and (2) the randomness introduced during the experiments in Sec. V-B, including the random sampling of candidates, selection of examples, and permutation of examples. To mitigate the former threat, we configured the LLM to generate consistent outputs for the same input text (_i.e._, setting temperature=0). To mitigate the latter threat, we conducted five independent experiments for each experimental setting and used the mean of the results as the final outcome.
* **Baselines:** To ensure a fair evaluation of the effectiveness of LogDiv, we selected multiple top-performing open-source log parsers as baselines for comparison. However, these parsers were implemented differently. To preserve their original performance reported in their respective papers, we used the data and hyperparameters provided in their publicly available replication packages without any modifications. The final results were consistent with the data reported in their original papers [17] or code repositories [46].
## VII Related Works
Log parsing methods can be categorized into unsupervised and supervised methods according to the parsing algorithms.
### _Unsupervised log parsers_
Unsupervised log parsers leverage manually designed features to extract log templates, which have been widely explored in the past. There are three main categories of unsupervised log parsers: frequent pattern mining-based, clustering-based, and heuristic-based methods. (1) _Frequent pattern mining-based_ methods regard the mined frequent patterns (_e.g._, n-grams) as log templates. For example, SLCT [47], LFA [48], LogCluster [49], and Logram [13] try to use different methods to extract frequent patterns in logs. (2) _Clustering-based_ methods aim to group similar logs first, assuming that logs in the same cluster share the same template, and then extract the common tokens as the template in each cluster. Some clustering-based methods can perform in an online manner because they adopt an online grouping strategy rather than clustering all the offline logs at once. Specifically, LKE [50], LogSig [51], LogMine [52] are offline methods, SHISO [53], and LenMa [43] are online methods. (3) _Heuristic-based_ methods encode expert domain knowledge into general and effective heuristic rules. For example, AEL [11], IPLoM [54], Spell [12], and Drain [14] utilize different heuristic rules to extract templates from logs. In particular, Drain [14] achieved SOTA in all open-source traditional unsupervised parsers with a parse tree structure to perform log parsing in an online manner. Based on Drain's architecture, there are multiple updated log parsers: POP [55] improves Drain and provides a parallel implementation on Spark for distributed deployment. SPINE [20] improves Drain and proposes a progressive clustering step for log template correction via human feedback.
### _Supervised log parsers_
Supervised log parsers are a recently emerging class of parsers. They typically use deep learning, with data sampled from a target log source serving as supervision for mining patterns in log templates. For example, UniParser [15] utilizes a contrastive learning strategy built on a BiLSTM [56] model to overcome pattern differences across heterogeneous log sources. SemParser [16] utilizes a dedicated semantic miner, also based on a BiLSTM [56] model, to extract templates together with concept-instance pairs. Both of these log parsers require training a neural network from scratch on labeled training data. Recently, LogPPT [17] proposed a new paradigm for log parsing, which uses template-free prompt-tuning [57] to fine-tune the pre-trained language model RoBERTa [44]. By mining the semantic information contained in the pre-trained model itself and fine-tuning it with a small amount of log data (_i.e._, 32 log samples per dataset), LogPPT achieves the current SOTA performance on multiple datasets and metrics, outperforming all existing log parsers in accuracy and robustness. These results show that mining pre-trained models for log parsing is feasible and effective.
Unlike all of the above log parsers, LogDiv does not require heuristic rules or handcrafted features designed from domain knowledge, nor does it fine-tune a model on specific training data; instead, it directly generates the target log's template by mining semantics from a small number of related log examples provided as demonstrations.
## VIII Conclusion
In conclusion, our paper introduces a new log parsing framework, named LogDiv, that utilizes the in-context learning capability of large language models to enhance log parsing effectiveness. Our method mines semantics from several log examples provided in a prompt, enabling precise log template generation without requiring model tuning. Additionally, we design a prompt format to constrain the output and ensure the quality of the generated log templates from LLMs, which we believe can generalize to other ICL applications. Through experiments on 16 public datasets using three different metrics, we demonstrate that LogDiv outperforms current log parsers by 3.86%, 20.4%, and 16.7%, in terms of PA, PTA, and RTA, respectively. In a broader sense, we believe this framework has significant potential to improve the effectiveness and practicality of various downstream automated log analysis applications.
|
2304.01684
|
Mass spectra of hidden heavy-flavor tetraquarks with two and four heavy
quarks
|
Inspired by the observation of the $X(6900)$ by LHCb and the $X(6600)$ (with
mass $6552\pm 10$ $\pm 12$ MeV) recently by CMS and ATLAS experiments of the
LHC in the di-$J/\Psi$ invariant mass spectrum, we systematically study masses
of all ground-state configurations of the hidden heavy-flavor tetraquarks
$q_{1}Q_{2}\bar{q}_{3}\bar{Q}_{4}$ and $Q_{1}Q_{2}\bar{Q}_{3}\bar{Q}_{4}$
($Q=c,b$; $q=u,d,s$) containing two and four heavy quarks in the MIT bag model
with chromomagnetic interaction and enhanced binding energy. Considering
color-spin mixing due to chromomagnetic interaction, our mass computation
indicates that the observed $X(6600)$ is likely to be the $0^{++}$ ground
states of hidden-charm tetraquark $cc\bar{c}\bar{c}$ with computed masses
$6572$ MeV, which has a $0^{++}$ color partner around $6469$ MeV. The fully
bottom system of tetraquark $bb\bar{b}\bar{b}$ has masses of 19685 MeV and
19717 MeV for the $0^{++}$ ground states. Further computation is given to
the tetraquark systems $sc\bar{s}\bar{c}$, $sb\bar{s}\bar{b}$,
$cb\bar{c}\bar{b}$, $nc\bar{n}\bar{c}$ and $nb\bar{n}\bar{b}$, suggesting that
the $Z_{c}(4200)$ is the tetraquark $nc\bar{n}\bar{c}$ with $J^{PC}=1^{+-}$.
All of these tetraquarks are above their lowest thresholds of two mesons and
unstable against the strong decays.
|
Ting-Qi Yan, Wen-Xuan Zhang, Duojie Jia
|
2023-04-04T10:21:25Z
|
http://arxiv.org/abs/2304.01684v1
|
# Mass spectra of hidden heavy-flavor tetraquarks with two and four heavy quarks
###### Abstract
Inspired by the observation of the \(X(6900)\) by LHCb and the \(X(6600)\) (with mass \(6552\pm 10\pm 12\) MeV) recently by the CMS and ATLAS experiments of the LHC in the di-\(J/\Psi\) invariant mass spectrum, we systematically study the masses of all ground-state configurations of the hidden heavy-flavor tetraquarks \(q_{1}Q_{2}\bar{q}_{3}\bar{Q}_{4}\) and \(Q_{1}Q_{2}\bar{Q}_{3}\bar{Q}_{4}\) (\(Q=c,b\); \(q=u,d,s\)) containing two and four heavy quarks in the MIT bag model with chromomagnetic interaction and enhanced binding energy. Considering color-spin mixing due to the chromomagnetic interaction, our mass computation indicates that the observed \(X(6600)\) is likely to be the \(0^{++}\) ground state of the hidden-charm tetraquark \(cc\bar{c}\bar{c}\) with computed mass 6572 MeV, which has a \(0^{++}\) color partner around 6469 MeV. The fully bottom tetraquark system \(bb\bar{b}\bar{b}\) has masses of 19685 MeV and 19717 MeV for the \(0^{++}\) ground states. Further computations are given for the tetraquark systems \(sc\bar{s}\bar{c}\), \(sb\bar{s}\bar{b}\), \(cb\bar{c}\bar{b}\), \(nc\bar{n}\bar{c}\) and \(nb\bar{n}\bar{b}\), suggesting that the \(Z_{c}(4200)\) is the tetraquark \(nc\bar{n}\bar{c}\) with \(J^{PC}=1^{+-}\). All of these tetraquarks are above their lowest two-meson thresholds and are unstable against strong decays.
PACS number(s):12.39Jh, 12.40.Yx, 12.40.Nn
Key Words: Heavy pentaquark, Spectroscopy, Quantum number
## I Introduction
For a long time, all known strongly interacting particles (mesons and baryons) could be classified, in the conventional scheme of the quark model by Gell-Mann[1] and Zweig[2], as bound states made of a quark-antiquark pair or of three quarks. Meanwhile, they also suggested the possible existence of multiquark hadron states such as tetraquarks (with quark configuration \(q^{2}\bar{q}^{2}\)) and pentaquarks (\(q^{4}\bar{q}\)). In the 1970s, multiquark states (the exotic light mesons like the \(a_{0}\) and \(f_{0}\)) were calculated by Jaffe within the dynamical framework of the MIT bag model [3; 4]. Although multiquarks are considered exotic in the sense that they go beyond the conventional quark-model scheme, they are, in principle, allowed by quantum chromodynamics (QCD), the theory of the strong force that binds quarks into hadrons.
Since the observation of the first exotic hadron, the \(X(3872)\)[5], in 2003 by Belle, many (more than 20) tetraquark candidates have been observed among the charmonium-like or bottomonium-like \(XYZ\) states, including the charmonium-like states \(Z_{c}(3900)\)[6], \(Z_{c}(4200)\)[7], and \(Z_{c}(4430)\)[8; 9; 10; 11]. Some of the observed \(XYZ\) states, like the charged state \(Z_{c}(3900)\)[6], are undoubtedly exotic. In 2020, a candidate fully charmed tetraquark, the \(X(6900)\), was observed by LHCb in the di-\(J/\Psi\) invariant mass spectrum around a mass of 6905 MeV, and was later confirmed by CMS and ATLAS at the LHC at CERN [12; 13; 14]. Meanwhile, in the same di-\(J/\Psi\) invariant mass spectrum, a new structure, the \(X(6600)\), was also found by CMS with a mass of \(6552\pm 10\pm 12\) MeV, which is very likely to be a fully charmed tetraquark.
The purpose of this work is to use the MIT bag model with enhanced binding energy to systematically study the ground-state masses of the hidden heavy-flavor tetraquarks containing two or four heavy quarks. Based on the color-spin wavefunctions constructed for the hidden heavy-flavor tetraquarks, we solve the bag model and diagonalize the chromomagnetic interaction (CMI) to take into account the possible color-spin mixing of states with the same quantum numbers. We find that the computed mass of the fully charmed tetraquark \(cc\bar{c}\bar{c}\) is in good agreement with the mass measurement by the CMS experiment[13]. Further mass computations are performed for the hidden heavy-flavor tetraquark systems \(bb\bar{b}\bar{b}\), \(cb\bar{c}\bar{b}\), \(sc\bar{s}\bar{c}\), \(sb\bar{s}\bar{b}\), \(nc\bar{n}\bar{c}\), \(nb\bar{n}\bar{b}\), with a suggestion that the particle \(Z_{c}(4200)\) reported in [7] is likely to be the hidden-charm tetraquark \(nc\bar{n}\bar{c}\) with \(J^{PC}=1^{+-}\).
In Section 2, we present the allowed wavefunctions of hidden heavy-flavor tetraquarks with two or four heavy quarks. In Section 3, we describe the framework of the MIT bag model used in this work. The evaluation of the CMI mass matrix and its diagonalization are detailed in Section 4. The masses of the hidden heavy-flavor tetraquarks are computed numerically for the systems (\(cc\bar{c}\bar{c}\), \(bb\bar{b}\bar{b}\)), \(cb\bar{c}\bar{b}\), (\(sc\bar{s}\bar{c}\), \(sb\bar{s}\bar{b}\)) and (\(nc\bar{n}\bar{c}\), \(nb\bar{n}\bar{b}\)) in Section 5. We end with conclusions and remarks in Section 6.
## II Wavefunctions of Hidden-Flavor Tetraquarks
We consider hidden heavy-flavor tetraquarks containing two or four heavy quarks(\(q_{1}Q_{2}\bar{q}_{3}\bar{Q}_{4}\) and \(Q_{1}Q_{2}\bar{Q}_{3}\bar{Q}_{4}\), \(Q=c,b\), \(q=u,d,s\)), which include seven flavor combinations of four quark systems: \(cc\bar{c}\bar{c}\), \(bb\bar{b}\bar{b}\), \(sc\bar{s}\bar{c}\), \(sb\bar{s}\bar{b}\), \(cb\bar{c}\bar{b}\), \(nc\bar{n}\bar{c}\), \(nb\bar{n}\bar{b}\), with \(n=u\), \(d\). In this section, we describe the wavefunctions of the hidden-flavor tetraquarks in the flavor and the color-spin space.
In flavor space, we use \(\delta^{S}_{12}\) if \(q_{1}q_{2}\) is symmetric and \(\delta^{A}_{12}\equiv 1-\delta^{S}_{12}\) if \(q_{1}q_{2}\) is antisymmetric to restrict the wavefunction. If the wavefunction has no flavor symmetry (beyond the isospin symmetry \(SU(2)_{I}\)) under the exchange of \(q_{1}\) and \(q_{2}\), then \(\delta^{S}_{12}=\delta^{A}_{12}=1\).
In color space, the hidden heavy-flavor tetraquark \(q_{1}q_{2}\bar{q}_{3}\bar{q}_{4}\) can be in two color states: \(6_{c}\otimes\bar{6}_{c}\) and \(\bar{3}_{c}\otimes 3_{c}\), with the respective wave functions (superscript stands for color representation),
\[\phi^{T}_{1}=\left|(q_{1}q_{2})^{6}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}\right\rangle,\quad\phi^{T}_{2}=\left|(q_{1}q_{2})^{\bar{3}}(\bar{q}_{3}\bar{q}_{4})^{3} \right\rangle. \tag{1}\]
With the help of the color \(SU(3)_{c}\) symmetry, one can write the two configurations \(\phi^{T}_{1,2}\) here in terms of the fundamental representations, i.e., of the color bases \(c_{n}=|r\rangle\), \(|b\rangle\) and \(|g\rangle\) of the \(SU(3)_{c}\) group (see Appendix A).
In spin space, six spin states of a tetraquark are allowed (Appendix A), with the wavefunctions (the subscripts denote spin),
\[\begin{array}{ll}\chi^{T}_{1}=\left|(q_{1}q_{2})_{1}(\bar{q}_{3}\bar{q}_{4})_{1}\right\rangle_{2},&\chi^{T}_{2}=\left|(q_{1}q_{2})_{1}(\bar{q}_{3}\bar{q}_{4})_{1}\right\rangle_{1},\\ \chi^{T}_{3}=\left|(q_{1}q_{2})_{1}(\bar{q}_{3}\bar{q}_{4})_{1}\right\rangle_{0},&\chi^{T}_{4}=\left|(q_{1}q_{2})_{1}(\bar{q}_{3}\bar{q}_{4})_{0}\right\rangle_{1},\\ \chi^{T}_{5}=\left|(q_{1}q_{2})_{0}(\bar{q}_{3}\bar{q}_{4})_{1}\right\rangle_{1},&\chi^{T}_{6}=\left|(q_{1}q_{2})_{0}(\bar{q}_{3}\bar{q}_{4})_{0}\right\rangle_{0}.\end{array} \tag{2}\]
Based on the Pauli's principle, one can construct twelve color-spin wavefunctions for the lowest S-wave (in coordinate space) tetraquarks:
\[\begin{array}{l}\phi^{T}_{1}\chi^{T}_{1}=\left|(q_{1}q_{2})^{6}_{1}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}_{1}\right\rangle_{2}\delta^{A}_{12}\delta^{A}_{34},\\ \phi^{T}_{2}\chi^{T}_{1}=\left|(q_{1}q_{2})^{\bar{3}}_{1}(\bar{q}_{3}\bar{q}_{4})^{3}_{1}\right\rangle_{2}\delta^{S}_{12}\delta^{S}_{34},\\ \phi^{T}_{1}\chi^{T}_{2}=\left|(q_{1}q_{2})^{6}_{1}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}_{1}\right\rangle_{1}\delta^{A}_{12}\delta^{A}_{34},\\ \phi^{T}_{2}\chi^{T}_{2}=\left|(q_{1}q_{2})^{\bar{3}}_{1}(\bar{q}_{3}\bar{q}_{4})^{3}_{1}\right\rangle_{1}\delta^{S}_{12}\delta^{S}_{34},\\ \phi^{T}_{1}\chi^{T}_{3}=\left|(q_{1}q_{2})^{6}_{1}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}_{1}\right\rangle_{0}\delta^{A}_{12}\delta^{A}_{34},\\ \phi^{T}_{2}\chi^{T}_{3}=\left|(q_{1}q_{2})^{\bar{3}}_{1}(\bar{q}_{3}\bar{q}_{4})^{3}_{1}\right\rangle_{0}\delta^{S}_{12}\delta^{S}_{34},\\ \phi^{T}_{1}\chi^{T}_{4}=\left|(q_{1}q_{2})^{6}_{1}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}_{0}\right\rangle_{1}\delta^{A}_{12}\delta^{S}_{34},\\ \phi^{T}_{2}\chi^{T}_{4}=\left|(q_{1}q_{2})^{\bar{3}}_{1}(\bar{q}_{3}\bar{q}_{4})^{3}_{0}\right\rangle_{1}\delta^{S}_{12}\delta^{A}_{34},\\ \phi^{T}_{1}\chi^{T}_{5}=\left|(q_{1}q_{2})^{6}_{0}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}_{1}\right\rangle_{1}\delta^{S}_{12}\delta^{A}_{34},\\ \phi^{T}_{2}\chi^{T}_{5}=\left|(q_{1}q_{2})^{\bar{3}}_{0}(\bar{q}_{3}\bar{q}_{4})^{3}_{1}\right\rangle_{1}\delta^{A}_{12}\delta^{S}_{34},\\ \phi^{T}_{1}\chi^{T}_{6}=\left|(q_{1}q_{2})^{6}_{0}(\bar{q}_{3}\bar{q}_{4})^{\bar{6}}_{0}\right\rangle_{0}\delta^{S}_{12}\delta^{S}_{34},\\ \phi^{T}_{2}\chi^{T}_{6}=\left|(q_{1}q_{2})^{\bar{3}}_{0}(\bar{q}_{3}\bar{q}_{4})^{3}_{0}\right\rangle_{0}\delta^{A}_{12}\delta^{A}_{34}.\end{array} \tag{3}\]
We choose these wavefunctions to be the bases (the first approximation) of the tetraquark eigenstates when the chromomagnetic interaction (CMI) is ignored. We then employ these bases to take into account the chromomagnetic mixing due to the CMI. For example, for the \(J^{PC}=0^{++}\) state of the \(cc\bar{c}\bar{c}\) tetraquark, one can take the two basis wavefunctions \(\phi^{T}_{2}\chi^{T}_{3}\) and \(\phi^{T}_{1}\chi^{T}_{6}\) in Eq. (3) as the zeroth-order approximation; they satisfy the required symmetry in color-spin space and mix once the CMI is added.
For given flavor compositions of the tetraquarks, the allowed color-spin states that may mix due to the CMI are listed for each choice of the quantum number \(J^{PC}\) in Table 1, where \(Q^{\prime}\) denotes a heavy quark different from \(Q\). Note that for the flavor composition \(QQ\bar{Q}\bar{Q}\) with quantum numbers \(J^{PC}=1^{+-}\) and \(2^{++}\), there is only one color-spin state each, namely \(\phi^{T}_{2}\chi^{T}_{2}\) for \(1^{+-}\) and \(\phi^{T}_{2}\chi^{T}_{1}\) for \(2^{++}\), so no mixing occurs in these cases.
## III The MIT bag model
We use the MIT bag model which includes enhanced binding energy and the CMI in the interaction correction \(\Delta M\). The mass formula for the MIT bag model is[15]
\[M\left(R\right)=\sum_{i}\omega_{i}+\frac{4}{3}\pi R^{3}B-\frac{Z_{0}}{R}+ \Delta M, \tag{4}\]
\[\omega_{i}=\left(m_{i}^{2}+\frac{x_{i}^{2}}{R^{2}}\right)^{1/2}, \tag{5}\]
where the first term describes the (relativistic) kinetic energy of each quark \(i\) in the tetraquark, the second is the volume energy of the bag with bag constant \(B\), the third is the zero-point energy with coefficient \(Z_{0}\), and \(R\) is the bag radius, to be determined variationally. In Eq. (5), the dimensionless parameters \(x_{i}=x_{i}(m_{i}R)\) are related to \(R\) through the transcendental equation
\[\tan x_{i}=\frac{x_{i}}{1-m_{i}R-\left(m_{i}^{2}R^{2}+x_{i}^{2}\right)^{1/2}}. \tag{6}\]
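Eq. (6) has to be solved numerically for each quark mass and trial radius. A minimal numerical sketch is given below, assuming that the lowest (ground-state) root lies in the interval \((\pi/2,\pi)\); the bracketing root finder and the example radii are choices made for this sketch.

```python
import numpy as np
from scipy.optimize import brentq

def lowest_mode(m: float, R: float) -> float:
    """Lowest root of tan(x) = x / (1 - m*R - sqrt(m^2 R^2 + x^2)),
    with m in GeV and R in GeV^-1; the root is bracketed in (pi/2, pi)."""
    lam = m * R
    f = lambda x: np.tan(x) - x / (1.0 - lam - np.sqrt(lam**2 + x**2))
    return brentq(f, np.pi / 2 + 1e-9, np.pi - 1e-9)

# A massless quark reproduces the familiar MIT-bag value x ~= 2.043,
# independently of R; a charm quark at R = 4.74 GeV^-1 gives a larger root.
print(lowest_mode(0.0, 5.0))
print(lowest_mode(1.641, 4.74))
```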
In Eq. (4), we denote the sum of the first three terms by \(M_{bag}\). The interaction correction \(\Delta M\) includes the enhanced binding energy \(M_{B}\) among the quarks in the tetraquark and the mass splitting \(M_{CMI}\) corresponding to the CMI:
\[\Delta M=M_{B}+M_{CMI}=\sum_{i<j}B_{ij}+\left\langle H_{CMI}\right\rangle, \tag{7}\]
where \(B_{ij}\) stands for the binding energy[16; 17] between quarks \(i\) and \(j\), described below at the end of this section, and the chromomagnetic interaction \(H_{CMI}\) is given by
\[H_{CMI}=-\sum_{i<j}\left(\lambda_{i}\cdot\lambda_{j}\right)\left(\sigma_{i}\cdot\sigma_{j}\right)C_{ij}, \tag{8}\]
where \(\lambda_{i}\) and \(\sigma_{i}\) are the Gell-Mann and Pauli matrices of the quark \(i\), respectively, and \(C_{ij}\) the CMI coupling parameters, given by[18]
\[C_{ij}=3\frac{\alpha_{s}\left(R\right)}{R^{3}}\bar{\mu}_{i}\bar{\mu}_{j}I_{ij}, \tag{9}\]
with \(\alpha_{s}(R)\) the running coupling given in Ref. [15] and \(\bar{\mu}_{i}\) the reduced magnetic moment of quark \(i\),
\[\alpha_{s}(R)=\frac{0.296}{ln\left[1+\left(0.281R\right)^{-1}\right]}. \tag{10}\]
\[\bar{\mu}_{i}=\frac{R}{6}\frac{4\omega_{i}R+2\lambda_{i}-3}{2\omega_{i}R\left( \omega_{i}R-1\right)+\lambda_{i}}, \tag{11}\]
and
\[I_{ij}=1+2\int_{0}^{R}\frac{dr}{r^{4}}\bar{\mu}_{i}\bar{\mu}_{j}=1+F\left(x_{i},x_{j}\right), \tag{12}\]
where \(\lambda_{i}\equiv m_{i}R\). The function \(F\left(x_{i},x_{j}\right)\) is given by
\[\begin{split}F\left(x_{i},x_{j}\right)=&\left(x_{i}\sin^{2}x_{i}-\frac{3}{2}y_{i}\right)^{-1}\left(x_{j}\sin^{2}x_{j}-\frac{3}{2}y_{j}\right)^{-1}\\ &\times\Bigg\{-\frac{3}{2}y_{i}y_{j}-2x_{i}x_{j}\sin^{2}x_{i}\sin^{2}x_{j}+\frac{1}{2}x_{i}x_{j}\Big[2x_{i}\,Si\left(2x_{i}\right)\\ &+2x_{j}\,Si\left(2x_{j}\right)-\left(x_{i}+x_{j}\right)Si\left(2\left(x_{i}+x_{j}\right)\right)\\ &-\left(x_{i}-x_{j}\right)Si\left(2\left(x_{i}-x_{j}\right)\right)\Big]\Bigg\}\end{split} \tag{13}\]
where \(y_{i}=x_{i}-\cos\left(x_{i}\right)\sin\left(x_{i}\right)\), \(x_{i}\) is the solution of Eq. (6), and
\[Si(x)=\int\limits_{0}^{x}\frac{\sin(t)}{t}dt. \tag{14}\]
Note that the functional form of the running coupling \(\alpha_{s}(R)\) in Eq. (10) and the other parameters (the quark masses \(m_{i}\), the zero-point-energy coefficient \(Z_{0}\), and the bag constant \(B\)) were evaluated in Ref. [15] by fitting the model's mass predictions to the ground-state masses of the observed mesons and baryons. The obtained values of these model parameters are[15]
\[\left\{\begin{array}{cc}m_{n}=0\,\text{GeV},&m_{s}=0.279\,\text{GeV},\\ m_{c}=1.641\,\text{GeV},&m_{b}=5.093\,\text{GeV},\\ Z_{0}=1.84,&B^{1/4}=0.145\,\text{GeV}.\end{array}\right\} \tag{15}\]
\begin{table}
\begin{tabular}{c c c} State & \(J^{PC}\) & Allowed states for mixing \\ \hline _QQQQ_ & \(0^{++}\) & \(\left(\phi_{2}^{T}\chi_{3}^{T},\phi_{1}^{T}\chi_{6}^{T}\right)\) \\ & \(1^{+-}\) & \(\left(\phi_{2}^{T}\chi_{2}^{T}\right)\) \\ & \(2^{++}\) & \(\left(\phi_{2}^{T}\chi_{1}^{T}\right)\) \\ & \(0^{++}\) & \(\left(\phi_{2}^{T}\chi_{3}^{T},\phi_{2}^{T}\chi_{6}^{T},\phi_{1}^{T}\chi_{3}^{T },\phi_{1}^{T}\chi_{6}^{T}\right)\) \\ & \(1^{++}\) & \(\left(\frac{1}{\sqrt{2}}\left(\phi_{2}^{T}\chi_{4}^{T}+\phi_{2}^{T}\chi_{5}^{T} \right),\frac{1}{\sqrt{2}}\left(\phi_{1}^{T}\chi_{4}^{T}+\phi_{1}^{T}\chi_{3}^{T }\right)\right)\) \\ & \(1^{+-}\) & \(\left(\phi_{2}^{T}\chi_{2}^{T},\frac{1}{\sqrt{2}}\left(\phi_{2}^{T}\chi_{4}^{T} -\phi_{2}^{T}\chi_{5}^{T}\right),\phi_{1}^{T}\chi_{2}^{T},\frac{1}{\sqrt{2}} \left(\phi_{1}^{T}\chi_{4}^{T}-\phi_{1}^{T}\chi_{5}^{T}\right)\right)\) \\ & \(2^{++}\) & \(\left(\phi_{2}^{T}\chi_{1}^{T},\phi_{1}^{T}\chi_{1}^{T}\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Allowed state mixing of the hidden heavy-flavor tetraquarks due to chromomagnetic interaction.
We will use these parameters to analyze the heavy tetraquarks in this work, with the bag radius \(R\) determined variationally via the MIT bag model.
The binding energy \(M_{B}\) in Eq. (7) measures the short-range chromoelectric interaction between quarks and/or antiquarks. For two quarks \(i\) and \(j\), this energy, which scales like \(-\alpha_{s}(r_{ij})/r_{ij}\), becomes sizable when both quarks are massive and move nonrelativistically. We treat this energy as the sum of the pair binding energies \(B_{QQ^{\prime}}\) (\(B_{Qs}\)) between heavy quarks \(Q\) and \(Q^{\prime}\) and between a heavy quark \(Q\) and a strange quark \(s\)[16; 17]. This leads to five binding energies, \(B_{cs}\), \(B_{cc}\), \(B_{bs}\), \(B_{bb}\), and \(B_{bc}\), for a quark pair in the color configuration \(\bar{3}_{c}\), which are extractable from heavy mesons and can be scaled to other color configurations.
Assuming the two quarks \(QQ^{\prime}\) to be in the color anti-triplet \(\bar{3}_{c}\) inside a baryon, the binding energies \(B_{QQ^{\prime}}\equiv B_{QQ^{\prime}}[\bar{3}_{c}]\) were extracted in the MIT bag model[15] (appendix A) for the combinations \(QQ^{\prime}=cc\), \(bb\), \(bc\), \(bs\) and \(cs\), so that a unified parameter setup was established for the ground states of mesons, baryons and heavy hadrons (including doubly heavy baryons and tetraquarks). The results are[15]
\[\begin{cases}B_{cs}=-0.025\,\text{GeV},&B_{cc}=-0.077\,\text{GeV},\\ \\ B_{bs}=-0.032\,\text{GeV},&B_{bb}=-0.128\,\text{GeV},\\ \\ B_{bc}=-0.101\,\text{GeV}.\end{cases} \tag{16}\]
## IV Color and spin factors for tetraquarks
To determine the mass splitting \(M_{CMI}=\langle H_{CMI}\rangle\) of the CMI Hamiltonian \(H_{CMI}\) in Eq. (8), one has to evaluate the chromomagnetic matrix of \(H_{CMI}\) for the tetraquarks \(T\) with a given quantum number \(J^{PC}\). For this, one first works out the color factors \(\left\langle\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\right\rangle\) and spin factors \(\left\langle\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\right\rangle\) as matrices over the color and spin bases, respectively, for the allowed states of the tetraquarks with given \(J^{PC}\) in Table 1. In this section, we present the color and spin factors as matrix elements in color and spin space, and give unified expressions for the binding energy \(M_{B}=\sum_{i<j}F_{c}B_{ij}(\bar{3}_{c})\) for both color configurations:
Color factor in the color states \(|n\rangle\) and \(|m\rangle\):
\[\left\langle\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\right\rangle_{mn}=\sum_{a=1}^{8}Tr\left(c_{in}^{\dagger}\lambda^{a}c_{im}\right)Tr\left(c_{jn}^{\dagger}\lambda^{a}c_{jm}\right), \tag{17}\]
and spin factor in the spin states \(|x\rangle\) and \(|y\rangle\):
\[\left\langle\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}\right\rangle_{xy}=\sum_{a=1}^{3}Tr\left(\chi_{ix}^{\dagger}\sigma^{a}\chi_{iy}\right)Tr\left(\chi_{jx}^{\dagger}\sigma^{a}\chi_{jy}\right), \tag{18}\]
where \(c_{in}\) stands for color basis (three colors \(r\), \(g\), and \(b\)) of a given quark \(i\), and \(\chi_{ix}\) represents its spin basis (with two spin components of \(\uparrow\) and \(\downarrow\)).
In the color-spin wavefunction of the tetraquark \(T\), one can explicitly compute the expectation value of \(H_{CMI}\),
\[\langle T|H_{CMI}|T\rangle=-\sum_{i<j}\left\langle\mathbf{\lambda}_{i}\cdot\mathbf{ \lambda}_{j}\right\rangle_{TT}\!\left\langle\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{ j}\right\rangle_{TT}\!C_{ij}, \tag{19}\]
to obtain the color and spin factors and write the mass formula for \(M_{CMI}\) in terms of the CMI couplings \(C_{ij}\), which are in turn given by Eq. (9) in the MIT bag model. Here the states \(T\) are the mixed states listed in Table 1, with the mixing weights \(w=(w_{1},w_{2},\cdots,w_{f})\) obtained numerically as eigenvectors in the CMI diagonalization and listed in Tables 2 and 5-7 in Section 5.
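The diagonalization step itself is elementary. The sketch below shows it for a \(2\times 2\) block such as the \(0^{++}\) sector of \(QQ\bar{Q}\bar{Q}\) in Table 1; the numerical entries are placeholders chosen only to illustrate the procedure, not the couplings used in this work.

```python
import numpy as np

# Illustrative CMI matrix (GeV) in the basis (phi_2^T chi_3^T, phi_1^T chi_6^T);
# the entries are placeholders, not the values computed from Eq. (9).
H_cmi = np.array([[ 0.020, -0.035],
                  [-0.035, -0.010]])

eigvals, eigvecs = np.linalg.eigh(H_cmi)     # symmetric-matrix diagonalization
for m_cmi, w in zip(eigvals, eigvecs.T):     # columns of eigvecs are eigenvectors
    print(f"M_CMI = {m_cmi * 1e3:+.1f} MeV,  mixing weights w = {w}")
```

Each eigenvalue is the \(M_{CMI}\) of one physical state, and the corresponding eigenvector gives the mixing weights \(w\) quoted in Tables 2 and 5-7.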
Given the two formulas (17) and (18), one can compute the color factors \(\left\langle\phi_{1}^{T},\phi_{2}^{T}|\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}|\phi_{1}^{T},\phi_{2}^{T}\right\rangle\) as \(2\times 2\) matrices in the color subspace \(\left(\phi_{1}^{T},\phi_{2}^{T}\right)\) by applying Eqs. (30) and (31) in Appendix A. The results are
\[\left\langle\mathbf{\lambda}_{1}\cdot\mathbf{\lambda}_{2}\right\rangle=\left\langle\bm {\lambda}_{3}\cdot\mathbf{\lambda}_{4}\right\rangle=\begin{bmatrix}\frac{4}{3}&0\\ \\ 0&-\frac{8}{3}\end{bmatrix},\]
\[\left\langle\mathbf{\lambda}_{1}\cdot\mathbf{\lambda}_{3}\right\rangle=\left\langle\bm {\lambda}_{2}\cdot\mathbf{\lambda}_{4}\right\rangle=\begin{bmatrix}-\frac{10}{3}&2 \sqrt{2}\\ \\ 2\sqrt{2}&-\frac{4}{3}\end{bmatrix}, \tag{20}\]
\[\left\langle\mathbf{\lambda}_{1}\cdot\mathbf{\lambda}_{4}\right\rangle=\left\langle\bm {\lambda}_{2}\cdot\mathbf{\lambda}_{3}\right\rangle=\begin{bmatrix}-\frac{10}{3}&-2 \sqrt{2}\\ \\ -2\sqrt{2}&-\frac{4}{3}\end{bmatrix}.\]
From the above matrices, we see that the color configurations \(\phi_{1}^{T}\) and \(\phi_{2}^{T}\) may mix for a tetraquark state \(T\) due to the chromomagnetic interaction.
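The diagonal entries of the first matrix in Eq. (20) can be cross-checked with the quadratic Casimir operators of \(SU(3)_{c}\); this short verification is standard group theory rather than a result of the paper:

\[
\boldsymbol{\lambda}_{1}\cdot\boldsymbol{\lambda}_{2}=2\left[C_{2}(R_{12})-2\,C_{2}(3)\right],\qquad C_{2}(3)=C_{2}(\bar{3})=\frac{4}{3},\quad C_{2}(6)=C_{2}(\bar{6})=\frac{10}{3},
\]
\[
\left\langle\boldsymbol{\lambda}_{1}\cdot\boldsymbol{\lambda}_{2}\right\rangle_{6_{c}}=2\left(\frac{10}{3}-\frac{8}{3}\right)=\frac{4}{3},\qquad\left\langle\boldsymbol{\lambda}_{1}\cdot\boldsymbol{\lambda}_{2}\right\rangle_{\bar{3}_{c}}=2\left(\frac{4}{3}-\frac{8}{3}\right)=-\frac{8}{3},
\]

in agreement with Eq. (20); the same \(\bar{3}_{c}\) value gives the baryon color factor \(\left\langle\boldsymbol{\lambda}_{i}\cdot\boldsymbol{\lambda}_{j}\right\rangle_{B}=-8/3\) used below.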
We further consider the binding energy \(M_{B}\) based on Eq. (16), which corresponds to the binding energy \(B_{ij}\equiv B_{ij}[\bar{3}_{c}]\) in baryons with the quark pair \((i,j)\) in \(\bar{3}_{c}\). Consider the binding energy \(M_{B}\) for a given color configuration of the tetraquark \(T=(q_{1}q_{2})^{R}(\bar{q}_{3}\bar{q}_{4})^{\bar{R}}\) (with representation \(R=6_{c}\) or \(\bar{3}_{c}\)). First, one can scale the pair binding energy \(B_{ij}\equiv B_{ij}[\bar{3}_{c}]\) of a pair in a baryon to \(F_{c}[R]B_{ij}[\bar{3}_{c}]\) for the pair in the tetraquark \(T\), where \(F_{c}[R]\) is the ratio of the corresponding color factor in Eq. (20) to the color factor \(\left\langle\mathbf{\lambda}_{i}\cdot\mathbf{\lambda}_{j}\right\rangle_{B}=-8/3\) of a baryon, in which each quark pair \((i,j)\) is in \(\bar{3}_{c}\). Finally, applying this to all quark pairs \((i,j)\) of the tetraquark \(T\) in the configurations \(\phi_{1}^{T}\) and \(\phi_{2}^{T}\), one obtains the pair binding energies \(F_{c}[R]B_{ij}\), whose sums are
\[M_{B}(\phi_{1}^{T})=-\frac{1}{2}B_{12}-\frac{1}{2}B_{34}+\frac{5}{4}B_{13}+\frac{5 }{4}B_{14}+\frac{5}{4}B_{23}+\frac{5}{4}B_{24}, \tag{21}\]
\[M_{B}(\phi_{2}^{T})=B_{12}+B_{34}+\frac{1}{2}B_{13}+\frac{1}{2}B_{14}+\frac{1}{2}B_ {23}+\frac{1}{2}B_{24}, \tag{22}\]
for the tetraquark \(T\), respectively, where \(B_{ij}\) is the binding energy with \((i,j)\) in \(\bar{3}_{c}\).
For the color sextets of the pairs \((1,2)\) and \((3,4)\), for instance, the binding energies are \(-B_{12}/2\) and \(-B_{34}/2\), respectively, with \(F_{c}=(4/3)/(-8/3)=-1/2\). For the quark-antiquark pairs \((i,j)\), the binding energies in \(T\) are either \(5B_{ij}/4\) or \(B_{ij}/2\), depending on the color configuration. We note that \(B_{ij}\) vanishes if both quarks \(i\) and \(j\) are light or if one of them is a non-strange light quark (\(B_{nQ}=0\), \(B_{nn}=0\), \(B_{n\bar{n}}=0\), \(B_{s\bar{s}}=0\)), since the short-range interactions between such pairs are small and thus negligible on average, owing to their relativistic motion.
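For completeness, the scaling factors behind Eqs. (21) and (22) follow directly from the diagonal color factors in Eq. (20) divided by the baryon value \(-8/3\) (a short check of the arithmetic):

\[
F_{c}[6_{c}]=\frac{4/3}{-8/3}=-\frac{1}{2},\qquad F_{c}\big[(q\bar{q})\ \text{in}\ 6_{c}\otimes\bar{6}_{c}\big]=\frac{-10/3}{-8/3}=\frac{5}{4},
\]
\[
F_{c}[\bar{3}_{c}]=\frac{-8/3}{-8/3}=1,\qquad F_{c}\big[(q\bar{q})\ \text{in}\ \bar{3}_{c}\otimes 3_{c}\big]=\frac{-4/3}{-8/3}=\frac{1}{2},
\]

which are precisely the coefficients \(-1/2\), \(5/4\), \(1\) and \(1/2\) appearing in Eqs. (21) and (22).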
We now consider the spin factors, given by \(\left\langle\chi^{T}|\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}|\chi^{T}\right\rangle\). In the subspace spanned by \(\{\chi_{1-6}^{T}\}\) in Eq. (32), direct computation yields the following matrices,
\[\left\langle\mathbf{\sigma_{1}}\cdot\mathbf{\sigma_{2}}\right\rangle_{\chi_{1}^{T}}= \begin{bmatrix}1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&-3&0\\ 0&0&0&0&0&-3\end{bmatrix}, \tag{23}\]
\[\left\langle\mathbf{\sigma_{1}}\cdot\mathbf{\sigma_{3}}\right\rangle_{\chi_{2}^{T}}= \begin{bmatrix}1&0&0&0&0&0\\ 0&-1&0&\sqrt{2}&-\sqrt{2}&0\\ 0&0&-2&0&0&-\sqrt{3}\\ 0&\sqrt{2}&0&0&1&0\\ 0&-\sqrt{2}&0&1&0&0\\ 0&0&-\sqrt{3}&0&0&0\end{bmatrix}, \tag{24}\]
\[\left\langle\mathbf{\sigma_{1}}\cdot\mathbf{\sigma_{3}}\right\rangle_{\chi_{2}^{T}}= \begin{bmatrix}1&0&0&0&0&0\\ 0&-1&0&-\sqrt{2}&\sqrt{2}&0\\ 0&0&-2&0&0&-\sqrt{3}\\ 0&-\sqrt{2}&0&0&1&0\\ 0&\sqrt{2}&0&1&0&0\\ 0&0&-\sqrt{3}&0&0&0\end{bmatrix}, \tag{25}\]
\[\left\langle\mathbf{\sigma_{2}}\cdot\mathbf{\sigma_{3}}\right\rangle_{\chi_{4}^{T}}= \begin{bmatrix}1&0&0&0&0&0\\ 0&-1&0&-\sqrt{2}&\sqrt{2}&0\\ 0&0&-2&0&0&-\sqrt{3}\\ 0&-\sqrt{2}&0&0&1&0\\ 0&\sqrt{2}&0&1&0&0\\ 0&0&-\sqrt{3}&0&0&0\end{bmatrix}, \tag{26}\]
\[\langle\mathbf{\sigma_{3}}\cdot\mathbf{\sigma_{4}}\rangle_{\lambda_{6}^{T}}=\begin{bmatrix} 1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&-3&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&-3\end{bmatrix}. \tag{28}\]
Combining the spin factors in Eqs. (23)-(28) with Eq. (20), we are in a position to use Eqs. (19), (17) and (18) to compute the mass splitting \(M_{CMI}\) due to the chromomagnetic interaction. Using Eqs. (21) and (22), one can compute the sum \(\Delta M=M_{B}+M_{CMI}\) in Eq. (7) and, by adding the bag mass \(M_{bag}=\sum_{i}\omega_{i}+(4/3)\pi R^{3}B-Z_{0}/R\), obtain a complete mass formula for the hidden heavy-flavor tetraquark systems \(T\) addressed in this work,
\[M(T)=M_{bag}+M_{B}+M_{CMI}\,(C_{ij}), \tag{29}\]
in which \(M_{CMI}\,(C_{ij})\) are linear functions of the CMI couplings \(C_{ij}\), with the linear coefficients given by the color and spin factors shown in this section.
## V Masses of hidden heavy-flavor tetraquarks
Given the input parameters in Eq. (15), one can solve Eq. (4) variationally, with the mass splitting \(M_{CMI}\) and the CMI couplings \(C_{ij}\) given by Eqs. (9), (10), (11) and (12), to obtain the bag radius \(R\) and the masses \(M(T)\) of the hidden heavy-flavor tetraquarks \(T\) numerically. The corresponding numerical results for the bag radius \(R_{0}\), the mixing weights (eigenvectors of the CMI matrix \(H_{CMI}\)), the tetraquark masses \(M(T)\) and the thresholds of the two-meson final states are shown in Tables 2 and 5-7. In the following, we present the results and discussion for the tetraquark systems addressed below in order.
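As a compact illustration of the variational step, the sketch below minimizes the bag part of Eq. (4) alone, i.e. \(M_{bag}(R)\) with the parameters of Eq. (15), for four charm quarks; the interaction correction \(\Delta M\) is omitted for brevity, so the numbers differ from the full results in Table 2, and the search bounds are choices made for this sketch.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

Z0, B_const, m_c = 1.84, 0.145**4, 1.641      # Eq. (15), GeV-based units

def lowest_mode(m, R):
    # Lowest root of Eq. (6), bracketed in (pi/2, pi).
    f = lambda x: np.tan(x) - x / (1.0 - m * R - np.sqrt((m * R) ** 2 + x ** 2))
    return brentq(f, np.pi / 2 + 1e-9, np.pi - 1e-9)

def bag_mass(R, masses):
    """M_bag(R) = sum_i omega_i + (4/3) pi R^3 B - Z0 / R   (Delta M omitted)."""
    omega = sum(np.sqrt(m ** 2 + (lowest_mode(m, R) / R) ** 2) for m in masses)
    return omega + 4.0 / 3.0 * np.pi * R ** 3 * B_const - Z0 / R

res = minimize_scalar(bag_mass, bounds=(2.0, 8.0), args=([m_c] * 4,), method="bounded")
print(f"cc cbar cbar: R0 ~ {res.x:.2f} GeV^-1, M_bag ~ {res.fun:.3f} GeV before Delta M")
```

In the full computation the quantity being minimized also contains \(M_{B}\) and \(M_{CMI}(C_{ij})\), both of which depend on \(R\) through Eqs. (9)-(16), which is how the radii \(R_{0}\) in Tables 2 and 5-7 are obtained.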
### Fully heavy tetraquark systems
In the case of the fully charmed tetraquarks \(cc\bar{c}\bar{c}\), we show the numerical results for \(R_{0}\), the state-mixing weights (eigenvectors of \(H_{CMI}\)), the tetraquark masses \(M(T)\) and the two-meson thresholds in Table 2, with the latter two plotted in Fig. 1. For \(J^{PC}=0^{++}\) there are two \(cc\bar{c}\bar{c}\) states with masses of \(6572\,\mathrm{MeV}\) and \(6469\,\mathrm{MeV}\), split by \(103\,\mathrm{MeV}\). The \(cc\bar{c}\bar{c}\) states with \(J^{PC}=1^{+-}\) and \(J^{PC}=2^{++}\) have masses within a similar mass region, as shown in Fig. 1. We find that all these \(cc\bar{c}\bar{c}\) states lie well above the two-meson thresholds shown. For instance, the \(0^{++}\) states are all above the \(J/\psi J/\psi\) and \(\eta_{c}\eta_{c}\) thresholds by about \(275-605\,\mathrm{MeV}\), indicating that they are not stable against strong decays, through quark rearrangement, to the \(J/\psi J/\psi\) and \(\eta_{c}\eta_{c}\) final states. For \(1^{+-}\) there is one state, whose mass is above the \(\eta_{c}J/\psi\) and \(J/\psi J/\psi\) thresholds by about \(325-440\,\mathrm{MeV}\), so it is unstable against strong decays to these final states. For \(2^{++}\) there is one state, with a mass about \(350\,\mathrm{MeV}\) above the \(J/\psi J/\psi\) threshold, which is also strongly unstable. We also compare our calculations with other cited works and list the results in Table 3.
For the fully bottom tetraquarks \(bb\bar{b}\bar{b}\), the solved results of the model are shown in Table 2. We find that all these \(bb\bar{b}\bar{b}\) states (with \(J^{PC}=0^{++}\), \(1^{+-}\) and \(2^{++}\)) are close to each other and strongly unstable, as they lie far above the two-meson thresholds shown. For instance, the two \(0^{++}\) states have masses of \(19717\,\mathrm{MeV}\) and \(19685\,\mathrm{MeV}\) (with a mass splitting of \(32\,\mathrm{MeV}\)). As seen in Fig. 2, the two \(0^{++}\) states are above the \(\Upsilon\Upsilon\) and \(\eta_{b}\eta_{b}\) thresholds by about \(764-919\,\mathrm{MeV}\). The mass of the \(1^{+-}\) state of \(bb\bar{b}\bar{b}\) is higher than the \(\eta_{b}\Upsilon\) and \(\Upsilon\Upsilon\) thresholds by about \(780-840\,\mathrm{MeV}\). The \(2^{++}\) state is about \(787\,\mathrm{MeV}\) above the \(\Upsilon\Upsilon\) threshold. Our results for the \(bb\bar{b}\bar{b}\) systems are also compared with other cited works in Table 4.
Figure 1: The computed masses (MeV, solid lines) of the \(cc\bar{c}\bar{c}\) tetraquark system in its ground states, as well as the two-meson thresholds (MeV, dotted lines).
### The bottom-charmed system (\(cb\bar{c}\bar{b}\))
For the bottom-charmed tetraquarks \(cb\bar{c}\bar{b}\), we show in Table 5 the computed results for \(R_{0}\), the mixing weights (the CMI eigenvectors), the tetraquark masses \(M(T)\) and the two-meson thresholds, with the latter two plotted in Fig. 3. We find that there are four \(J^{PC}=0^{++}\) states for the \(cb\bar{c}\bar{b}\) system, all above the \(B_{c}^{*}B_{c}^{*}\), \(\Upsilon J/\psi\), \(B_{c}B_{c}\) and \(\eta_{b}\eta_{c}\) thresholds by about \(424-794\) MeV. There are four \(cb\bar{c}\bar{b}\) states with \(J^{PC}=1^{+-}\), all far above the \(B_{c}^{*}B_{c}^{*}\), \(B_{c}^{*}B_{c}\), \(\eta_{b}J/\psi\) and \(\Upsilon\eta_{c}\) thresholds by about \(1289-1567\) MeV, and two states with \(J^{PC}=1^{++}\), far above the \(B_{c}^{*}B_{c}\) and \(\Upsilon J/\psi\) thresholds by about \(1347-1417\) MeV. There are also two states with \(J^{PC}=2^{++}\), both above the \(B_{c}^{*}B_{c}^{*}\) and \(\Upsilon J/\psi\) thresholds by about \(506-608\) MeV. This indicates that the \(cb\bar{c}\bar{b}\) systems are unstable against strong decays to these two-meson final states.
### The strange-heavy systems (\(sc\bar{s}\bar{c}\) and \(sb\bar{s}\bar{b}\))
For the strange-charmed tetraquarks \(sc\bar{s}\bar{c}\), we show in Table 6 the computed results for \(R_{0}\), the mixing weights, the masses \(M(T)\) and the two-meson thresholds, with the latter two plotted in Fig. 4. We find that there are four \(J^{PC}=0^{++}\) states of the \(sc\bar{s}\bar{c}\) system, all below the \(D_{s1}^{*}D_{s1}^{*}\) threshold, of which the three states with masses \((4492,4378,4254)\) MeV are above the \(D_{s}D_{s}\) and \(\phi(1020)\,J/\psi\) thresholds by about \(137-556\) MeV and are unstable against strong decays to them. The lowest state, with a mass of \(4091\) MeV, is about \(155\) MeV above the \(D_{s}D_{s}\) threshold, close to the \(\phi(1020)J/\psi\) threshold, and far below the \(D_{s1}^{*}D_{s1}^{*}\) threshold. It is uncertain whether this lowest state is above or below the \(\phi(1020)\,J/\psi\) threshold, as the model uncertainty is as large as \(\pm 40\) MeV[15]. In the case of the \(J^{PC}=1^{+-}\) states, there are four states, three of which have masses of \((4529,4596,4638)\) MeV and all lie below the thresholds
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline State & \(J^{PC}\) & Eigenvector & \(R_{0}\)(GeV\({}^{-1}\)) & \(M(T)\)(MeV) & Threshold (MeV) \\ \hline \(cc\bar{c}\bar{c}\bar{c}\) & \(0^{++}\) & \((0.58,0.81)\) & 4.74 & 6572 & \(J/\psi J/\psi=6194\);\(\eta_{c}\eta_{c}=5967\) \\ & & \((-0.81,0.58)\) & 4.44 & 6469 & \\ & \(1^{+-}\) & 1.00 & 4.59 & 6519 & \(\eta_{c}J/\psi=6080\);\(J/\psi J/\psi=6194\) \\ & \(2^{++}\) & 1.00 & 4.66 & 6545 & \(J/\psi J/\psi=6194\) \\ \(bb\bar{b}\bar{b}\) & \(0^{++}\) & \((0.58,0.81)\) & 3.15 & 19717 & \(\Upsilon\Upsilon=18921\);\(\eta_{b}\eta_{b}=18798\) \\ & & \((-0.81,0.58)\) & 2.99 & 19685 & \\ & \(1^{+-}\) & 1.00 & 3.07 & 19700 & \(\eta_{b}\Upsilon=18859\);\(\Upsilon\Upsilon=18921\) \\ & \(2^{++}\) & 1.00 & 3.11 & 19708 & \(\Upsilon\Upsilon=18921\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Numerical results for the bag radius \(R_{0}\), the state-mixing weights (eigenvectors of \(H_{CMI}\)), the tetraquark masses \(M(T)\) and the thresholds of the two-meson final states for the hidden heavy-flavor tetraquarks (\(cc\bar{c}\bar{c}\), \(bb\bar{b}\bar{b}\)).
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline State & \(J^{PC}\) & This work & [19] & [20] & [21] & [22] & [23] & [24] & [25] & [26] \\ \hline \((cc\bar{c}\bar{c}\bar{c})\) & \(0^{++}\) & 6469 & 6487 & 6477 & 6797 & \(6440-6820\) & 6437 & 6200 & 6192 & \(6038-6115\) \\ & \(0^{++}\) & 6572 & 6518 & 6695 & 7016 & \(6440-6820\) & 6383 &... &... &... \\ & \(1^{+-}\) & 6519 & 6500 & 6528 & 6899 & \(6370-6510\) & 6437 &... &... & \(6101-6176\) \\ & \(2^{++}\) & 6545 & 6524 & 6573 & 6956 & \(6370-6510\) & 6437 &... &... & \(6172-6216\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of our results for the \(cc\bar{c}\bar{c}\) systems with other cited calculations. All masses are in units of MeV.
of \(D_{s}D_{s1}^{*}\) and \(D_{s1}^{*}D_{s1}^{*}\) by about 110-1031 MeV, while one state, with a mass of 4843 MeV, is about 95 MeV above the \(D_{s}D_{s1}^{*}\) threshold but about 717 MeV below the \(D_{s1}^{*}D_{s1}^{*}\) threshold. There are two \(sc\bar{s}\bar{c}\) states with \(J^{PC}=1^{++}\), both of which are above the \(\phi\) (1020) \(J/\psi\) threshold by about 455 - 538 MeV and below the \(D_{s}D_{s1}^{*}\) threshold by about \(93-176\) MeV. There are two \(sc\bar{s}\bar{c}\) states with \(J^{PC}=2^{++}\), both above the \(\phi\) (1020) \(J/\psi\) threshold by about \(304-333\) MeV and below the \(D_{s1}^{*}D_{s1}^{*}\) threshold by about \(1110-1139\) MeV, and thus unstable against strong decay to \(\phi\) (1020) \(J/\psi\).
For the strange-bottom tetraquarks \(sb\bar{s}\bar{b}\), we show in Table 6 the computed results for \(R_{0}\), the mixing weights, the masses \(M(T)\) and thresholds, with the latter two plotted in Fig. 5. Similarly, there are four states for each of \(J^{PC}=0^{++}\) and \(J^{PC}=1^{+-}\), and two states for each of \(J^{PC}=1^{++}\) and \(J^{PC}=2^{++}\). All of the \(sb\bar{s}\bar{b}\) states are above the thresholds, except for the lowest one, with a mass of 10843 MeV, which lies within 13 MeV of the \(B_{s}^{*}B_{s}^{*}\) threshold. The \(0^{++}\) states of the \(sb\bar{s}\bar{b}\) system are above the \(B_{s}^{0}B_{s}^{0}\), \(B_{s}^{*}B_{s}^{*}\) and \(\phi\) (1020) \(\Upsilon\) thresholds. Among them, the lowest state at 10843 MeV can decay strongly into \(B_{s}^{0}B_{s}^{0}\) and \(\phi\) (1020) \(\Upsilon\). Because of the model uncertainty, it is unclear whether it lies above or below the threshold
\begin{table}
\begin{tabular}{c c c c c c} State & \(J^{PC}\) & Eigenvector & \(R_{0}\)(GeV\({}^{-1}\)) & \(M(T)\)(MeV) & Threshold (MeV) \\ \hline \(cb\bar{c}\bar{b}\) & \(0^{++}\) & (-0.21,-0.52,0.82,0.16) & 3.76 & 13076 & \(B_{c}^{*}B_{c}^{*}=12652;\Upsilon J/\psi=12557\) \\ & & (-0.77, 0.24,-0.16,0.58) & 3.90 & 13117 & \(B_{c}B_{c}=12550;\eta_{0}\eta_{c}=12382\) \\ & & (-0.14,-0.82,-0.55,0.01) & 4.02 & 13147 & \\ & & (0.59,-0.06,-0.05,0.80) & 4.12 & 13176 & \\ & 1\({}^{+-}\) & (-0.43,0.39,0.14,0.80) & 3.95 & 13941 & \(B_{c}^{*}B_{c}^{*}=12652;B_{c}^{*}B_{c}=12597\) \\ & & (0.63,0.74,-0.25,0.02) & 4.0 & 13959 & \(\eta_{a}J/\psi=12496;\Upsilon\eta_{c}=12444\) \\ & & (0.57,-0.26,0.71,0.32) & 4.04 & 13966 & \\ & & (0.30,-0.48,-0.65,0.51) & 4.18 & 14011 & \\ \(1^{++}\) & (-0.58,0.82) & 3.94 & 13944 & \(B_{c}^{*}B_{c}=12597;\Upsilon J/\psi=12557\) \\ & & (0.82,0.58) & 4.07 & 13974 & \\ \(2^{++}\) & (0.74,0.67) & 4.06 & 13158 & \(B_{c}^{*}B_{c}^{*}=12652;\Upsilon J/\psi=12557\) \\ & & (-0.68,0.74) & 4.08 & 13165 & \\ \end{tabular}
\end{table}
Table 5: Computed results for the bottom-charmed tetraquark states \(cb\bar{c}\bar{b}\). The thresholds of two mesons are also listed.
\begin{table}
\begin{tabular}{c c c c c c c} State & \(J^{PC}\) & This work & [19] & [21] & [27; 28] & [25] & [29] \\ \hline \((bb\bar{b}\bar{b})\) & \(0^{++}\) & 19685 & 19322 & 20155 & 18840 & 18826 & 18754 \\ & \(0^{++}\) & 19717 & 19338 & 20275 &... &... &... \\ & \(1^{+-}\) & 19700 & 19329 & 20212 & 18840 &... & 18808 \\ & \(2^{++}\) & 19708 & 19341 & 20243 & 18850 &... & 18916 \\ \end{tabular}
\end{table}
Table 4: Comparison of our results for the \(bb\bar{b}\bar{b}\) systems with other cited calculations. All masses are in units of MeV.
of \(B_{s}^{*}B_{s}^{*}\). The four \(J^{PC}=1^{+-}\) states are all far above the \(B_{s}^{0}B_{s}^{*}\) and \(B_{s}^{*}B_{s}^{*}\) thresholds (by about \(562-945\,\)MeV and \(514-897\,\)MeV, respectively). There are two \(J^{PC}=1^{++}\) states, which are higher than the \(B_{s}^{0}B_{s}^{*}\) and \(\phi(1020)\,\Upsilon\) thresholds (by about \(626-666\,\)MeV and \(928-968\,\)MeV, respectively). The \(J^{PC}=2^{++}\) sector has two states, which are higher than the \(B_{s}^{*}B_{s}^{*}\) and \(\phi(1020)\,\Upsilon\) thresholds (by about \(263-324\,\)MeV and \(613-674\,\)MeV), indicating that they are unstable.
### The heavy-light (non-strange) systems (\(nc\bar{n}\bar{c}\) and \(nb\bar{n}\bar{b}\))
For the hidden-charm tetraquarks \(nc\bar{n}\bar{c}\), we show the computed results for \(R_{0}\), the mixing weights, the masses \(M(T)\) and thresholds in Table 7, with the latter two plotted in Fig. 6. There are four states for each of \(J^{PC}=0^{++}\) and \(J^{PC}=1^{+-}\), and two states for each of \(J^{PC}=1^{++}\) and \(J^{PC}=2^{++}\). For the \(0^{++}\) states, the two higher states (4259 MeV, 4127 MeV) are above the \(D^{0}D^{0}\), \(D^{*}D^{*}\), \(\omega\,(782)\,J/\psi\) and \(\pi^{0}\eta_{c}\) thresholds (by about \(397-529\,\)MeV, \(110-242\,\)MeV, \(248-380\,\)MeV and \(1008-1140\,\)MeV, respectively). The lower state with mass 3954 MeV, which is above the thresholds
Figure 4: Computed masses (MeV, solid lines) of the \(sc\bar{s}\bar{c}\) tetraquarks and the corresponding two-meson thresholds (MeV, dotted lines).
Figure 5: Computed masses (MeV, solid lines) of the \(sb\bar{s}\bar{b}\) tetraquarks and the corresponding two-meson thresholds (MeV, dotted lines).
Figure 3: Computed masses (MeV, solid lines) of the \(cb\bar{c}\bar{b}\) systems of tetraquarks in their ground states, and the thresholds (MeV, dotted lines) of the two-meson final states.
Figure 2: The computed masses (MeV, solid lines) of the \(bb\bar{b}\bar{b}\) tetraquark system in its ground states, as well as the two-meson thresholds (MeV, dotted lines).
\begin{table}
\begin{tabular}{c c c c c c} State & \(J^{PC}\) & Eigenvector & \(R_{0}(\mathrm{GeV^{-1}})\) & \(M(T)\)(MeV) & Threshold (MeV) \\ \hline \(sc\bar{s}\bar{c}\) & \(0^{++}\) & \((-0.18,-0.51,0.83,0.14)\) & 4.70 & 4091 & \(D_{s}D_{s}=3936;D_{s1}^{*}D_{s1}^{*}=5560\) \\ & & \((-0.76,0.21,-0.15,0.60)\) & 4.93 & 4254 & \(\phi\left(1020\right)J/\psi=4117\) \\ & & \((-0.12,-0.84,-0.54,0.01)\) & 5.11 & 4378 & \\ & & \((0.62,-0.06,-0.04,0.78)\) & 5.33 & 4492 & \\ \(1^{+-}\) & \((-0.33,0.57,0.03,0.75)\) & 5.21 & 4529 & \(D_{s}D_{s1}^{*}=4748;D_{s1}^{*}D_{s1}^{*}=5560\) \\ & & \((0.87,0.49,0.08,0.02)\) & 5.30 & 4596 & \\ & & \((0.18,-0.46,0.77,0.41)\) & 5.38 & 4638 & \\ & & \((0.32,-0.47,-0.64,0.52)\) & 5.46 & 4843 & \\ \(1^{++}\) & \((-0.58,0.82)\) & 5.22 & 4572 & \(\phi\left(1020\right)J/\psi=4117;D_{s}D_{s1}^{*}=4748\) \\ & & \((0.82,0.58)\) & 5.41 & 4655 & \\ \(2^{++}\) & \((0.55,0.83)\) & 5.39 & 4421 & \(D_{s1}^{*}D_{s1}^{*}=5560;\phi\left(1020\right)J/\psi=4117\) \\ & & \((-0.83,0.55)\) & 5.39 & 4450 & \\ \(sb\bar{s}\bar{b}\) & \(0^{++}\) & \((-0.36,-0.35,0.80,0.33)\) & 4.43 & 10843 & \(B_{s}^{0}B_{s}^{0}=10734;B_{s}^{*}B_{s}^{*}=10830\) \\ & & \((-0.58,0.39,-0.36,0.62)\) & 4.60 & 11023 & \(\phi\left(1020\right)\Upsilon=10480\) \\ & & \((0.35,0.81,0.47,0.10)\) & 4.74 & 11111 & \\ & & \((0.64,-0.30,-0.13,0.70)\) & 4.86 & 11158 & \\ \(1^{+-}\) & \((-0.41,0.67,-0.20,0.58)\) & 4.88 & 11344 & \(B_{s}^{0}B_{s}^{*}=10782;B_{s}^{*}B_{s}^{*}=10830\) \\ & & \((0.76,0.39,0.45,0.26)\) & 4.94 & 11388 & \\ & & \((-0.31,-0.47,0.63,0.53)\) & 4.88 & 11457 & \\ & & \((0.38,-0.43,-0.59,0.56)\) & 5.11 & 11727 & \\ \(1^{++}\) & \((0.82,0.58)\) & 4.98 & 11408 & \(B_{s}^{0}B_{s}^{*}=10782;\phi\left(1020\right)\Upsilon=10480\) \\ & & \((-0.58,0.82)\) & 4.85 & 11448 & \\ \(2^{++}\) & \((0.51,0.86)\) & 4.97 & 11093 & \(B_{s}^{*}B_{s}^{*}=10830;\phi\left(1020\right)\Upsilon=10480\) \\ & & \((-0.86,0.51)\) & 4.99 & 11154 & \\ \end{tabular}
\end{table}
Table 6: Computed results for the strange-heavy tetraquark states \(sc\bar{s}\bar{c}\) and \(sb\bar{s}\bar{b}\). The thresholds of two mesons are also listed.
of \(D^{0}D^{0}\), \(\omega\,(782)\,J/\psi\) and \(\pi^{0}\eta_{c}\) but below the \(D^{*}D^{*}\) threshold, can decay strongly to the three former final states. The lowest state, which is below the \(D^{*}D^{*}\), \(\omega\,(782)\,J/\psi\) and \(D^{0}D^{0}\) thresholds but above the \(\pi^{0}\eta_{c}\) threshold, can decay to the \(\pi^{0}\eta_{c}\) final state. Further, all states with \(J^{PC}=1^{+-}\), \(1^{++}\) and \(2^{++}\) are above the \(D^{0}D^{*}\), \(D^{*}D^{*}\), \(\pi^{0}J/\psi\), \(\omega\,(782)\,J/\psi\) and \(\omega\,(782)\,\eta_{c}\) thresholds and can decay to those final states with the same quantum numbers. For instance, the \(1^{++}\) states can decay to \(\omega\,(782)\,J/\psi\) and \(D^{0}D^{*}\), and the \(2^{++}\) states can decay to \(D^{*}D^{*}\) and \(\omega\,(782)J/\psi\).
For the hidden-bottom tetraquarks \(nb\bar{n}\bar{b}\), we show the computed results for \(R_{0}\), the mixing weights, the masses \(M(T)\) and thresholds in Table 7, with the latter two plotted in Fig. 7. We find from Fig. 7 that all states of the \(nb\bar{n}\bar{b}\) system lie above the thresholds of their two-meson final states, except for the lowest state (\(10484\) MeV), which is below only the \(B^{0}B^{0}\) and \(B^{*}B^{*}\) thresholds and can decay into \(\omega\,(782)\,\Upsilon\) and \(\pi^{0}\eta_{b}\). The possible decays are, for instance, \(nb\bar{n}\bar{b}(0^{++})\) to \(B^{0}B^{0}\), \(B^{*}B^{*}\), \(\omega\,(782)\,\Upsilon\) and \(\pi^{0}\eta_{b}\); \(nb\bar{n}\bar{b}(1^{+-})\) to \(B^{0}B^{*}\), \(B^{*}B^{*}\), \(\pi^{0}\Upsilon\) and \(\omega\,(782)\,\eta_{b}\); \(nb\bar{n}\bar{b}(1^{++})\) to \(B^{0}B^{*}\) and \(\omega\,(782)\,\Upsilon\); and \(nb\bar{n}\bar{b}(2^{++})\) to \(B^{*}B^{*}\) and \(\omega\,(782)\,\Upsilon\).
## VI Summary
Stimulated by the observations of the \(X(6900)\) by LHCb and the recent observations of the \(X(6600)\) by the CMS and ATLAS experiments at the LHC, we have systematically investigated the ground-state masses of hidden heavy-flavor tetraquarks containing two or four heavy quarks within a unified framework of the MIT bag model which incorporates chromomagnetic interactions and enhanced binding energy. Based on the color-spin wavefunctions constructed for the hidden heavy-flavor tetraquarks, we solve the MIT bag model and diagonalize the chromomagnetic interaction (CMI) to predict the masses of the color-spin multiplets of hidden heavy-flavor tetraquarks in their ground states with spin-parity quantum numbers \(J^{PC}=0^{++}\), \(1^{++}\), \(2^{++}\), and \(1^{+-}\). We find that the fully charmed tetraquark \(cc\bar{c}\bar{c}\) with \(J^{PC}=0^{++}\) has a mass of about 6572 MeV and is very likely to be the \(X(6600)\) reported by the CMS and ATLAS experiments at the LHC, with the measured mass \(6552\pm 10\pm 12\) MeV. We further computed the masses of the tetraquark systems \(bb\bar{b}\bar{b}\), \(cb\bar{c}\bar{b}\), \(sc\bar{s}\bar{c}\), \(sb\bar{s}\bar{b}\), \(nc\bar{n}\bar{c}\) and \(nb\bar{n}\bar{b}\) in their color-spin multiplets and suggested that the particle \(Z_{c}(4200)\) reported in [7] is likely to be the hidden-charm tetraquark made of \(nc\bar{n}\bar{c}\) with \(J^{PC}=1^{+-}\).
By comparing with the two-meson thresholds determined from the respective final states, the most likely strong decay channels are noted. Our mass computation shows that all of these hidden heavy-flavor tetraquarks are above the thresholds of their lowest two-meson final states and are therefore unstable against strong decay into those final states. For the doubly heavy tetraquark systems \(sb\bar{s}\bar{b}\), \(sc\bar{s}\bar{c}\), \(nb\bar{n}\bar{b}\) and \(nc\bar{n}\bar{c}\), there are a few states below all two-meson thresholds except those of their lowest final states, indicating that they may have longer lifetimes compared to the fully heavy tetraquarks. We also find some near-threshold states for which coupled-channel effects are possible. We hope that upcoming LHCb experiments with increased data can test the predictions of this work.
**Acknowledgments**
D. J. is supported by the National Natural Science Foundation of China under the no. 12165017.
Figure 6: Computed masses (MeV, solid lines) of the hidden-charm tetraquarks \(nc\bar{n}\bar{c}\) and the two-meson thresholds (MeV, dotted lines).
Figure 7: Computed masses (MeV, solid lines) of the hidden-bottom tetraquark \(nb\bar{n}\bar{b}\) and the two-meson thresholds (MeV, dotted lines).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline State & \(J^{PC}\) & Eigenvector & \(R_{0}(\mathrm{GeV}^{-1})\) & \(M(T)(\mathrm{MeV})\) & Threshold (MeV) \\ \hline \(nc\bar{n}\bar{c}\) & \(0^{++}\) & \((-0.24,-0.45,0.83,0.20)\) & 4.90 & 3715 & \(D^{0}D^{0}=3730;D^{*}D^{*}=4017\) \\ & & \((-0.71,0.26,-0.22,0.62)\) & 5.01 & 3954 & \(\omega\left(782\right)J/\psi=3879\) \\ & & \((0.17,0.85,0.50,0.00)\) & 5.17 & 4127 & \(\pi^{0}\eta_{c}=3119\) \\ & & \((0.65,-0.11,-0.06,0.75)\) & 5.38 & 4259 & \\ \(1^{+-}\) & \((-0.31,0.65,-0.06,0.69)\) & 5.15 & 4079 & \(D^{0}D^{*}=3874;D^{*}D^{*}=4017\) \\ & & \((0.87,0.33,0.34,0.11)\) & 5.26 & 4172 & \(\pi^{0}J/\psi=3264;\omega\left(782\right)\eta_{c}=3767\) \\ & & \((-0.15,-0.50,0.70,0.48)\) & 5.33 & 4263 & \\ & & \((0.35,-0.46,-0.61,0.54)\) & 5.32 & 4626 & \\ \(1^{++}\) & \((-0.58,0.82)\) & 5.18 & 4210 & \(D^{0}D^{*}=3874;\omega\left(782\right)J/\psi=3879\) \\ & & \((0.82,0.58)\) & 5.36 & 4233 & \\ \(2^{++}\) & \((0.46,0.89)\) & 5.34 & 4152 & \(D^{*}D^{*}=4017;\omega\left(782\right)J/\psi=3879\) \\ & & \((-0.89,0.46)\) & 5.31 & 4219 & \\ \(nb\bar{n}\bar{b}\) & \(0^{++}\) & \((-0.37,-0.31,0.80,0.36)\) & 4.65 & 10484 & \(B^{0}B^{0}=10560;B^{*}B^{*}=10650\) \\ & & \((-0.51,0.37,-0.40,0.67)\) & 4.70 & 10760 & \(\omega\left(782\right)\Upsilon=10242\) \\ & & \((0.42,0.78,0.43,0.14)\) & 4.82 & 10887 & \(\pi^{0}\eta_{b}=9533\) \\ & & \((0.65,-0.40,-0.14,0.63)\) & 4.90 & 10943 & \\ \(1^{+-}\) & \((-0.43,0.67,-0.25,0.54)\) & 4.79 & 10865 & \(B^{0}B^{*}=10605;B^{*}B^{*}=10650\) \\ & & \((0.73,0.41,0.47,0.29)\) & 4.84 & 10925 & \(\pi^{0}\Upsilon=9594;\omega\left(782\right)\eta_{b}=10182\) \\ & & \((-0.36,-0.45,0.61,0.55)\) & 4.81 & 11097 & \\ & & \((0.39,-0.42,-0.59,0.57)\) & 4.95 & 11509 & \\ \(1^{++}\) & \((0.82,0.58)\) & 4.89 & 10949 & \(B^{0}B^{*}=10605;\omega\left(782\right)\Upsilon=10242\) \\ & & \((-0.58,0.82)\) & 4.78 & 11090 & \\ \(2^{++}\) & \((0.45,0.89)\) & 4.88 & 10835 & \(B^{*}B^{*}=10650;\omega\left(782\right)\Upsilon=10242\) \\ & & \((-0.89,0.45)\) & 4.88 & 10944 & \\ \hline \hline \end{tabular}
\end{table}
Table 7: Computed results for the hidden heavy-flavor tetraquarks \(nc\bar{n}\bar{c}\) and \(nb\bar{n}\bar{b}\), with the respective two-meson thresholds also shown.
## Appendix A
Based on the color \(SU(3)_{c}\) symmetry, one can obtain two components of color singlets \(6_{c}\otimes\bar{6}_{c}\) and \(\bar{3}_{c}\otimes 3_{c}\) for the hidden-flavor tetraquarks,
\[\phi_{1}^{T} = \frac{1}{\sqrt{6}}\left(rr\bar{r}\bar{r}+gg\bar{g}\bar{g}+bb\bar{b}\bar{b}\right) \tag{30}\] \[+\frac{1}{2\sqrt{6}}\left(rb\bar{b}\bar{r}+br\bar{b}\bar{r}+gr\bar{g}\bar{r}+rg\bar{g}\bar{r}+gb\bar{b}\bar{g}+bg\bar{b}\bar{g}\right.\] \[\left.+gr\bar{r}\bar{g}+rg\bar{r}\bar{g}+gb\bar{g}\bar{b}+bg\bar{g}\bar{b}+rb\bar{r}\bar{b}+br\bar{r}\bar{b}\right),\]
\[\phi_{2}^{T} = \frac{1}{2\sqrt{3}}\left(rb\bar{b}\bar{r}-br\bar{b}\bar{r}-gr\bar{g}\bar{r}+rg\bar{g}\bar{r}+gb\bar{b}\bar{g}-bg\bar{b}\bar{g}\right. \tag{31}\] \[\left.+gr\bar{r}\bar{g}-rg\bar{r}\bar{g}-gb\bar{g}\bar{b}+bg\bar{g}\bar{b}-rb\bar{r}\bar{b}+br\bar{r}\bar{b}\right),\]
which correspond to the two color configurations in Eq. (1).
For the six states \(\chi_{1-6}^{T}\) in Eq. (2) of the heavy tetraquarks, one can construct their spin wave functions by writing the Clebsch-Gordan (\(CG\)) coefficients explicitly:
\[\chi_{1}^{T}= \uparrow\uparrow\uparrow\uparrow,\] \[\chi_{2}^{T}= \frac{1}{2}\left(\uparrow\uparrow\uparrow\downarrow+\uparrow\uparrow\downarrow\uparrow-\uparrow\downarrow\uparrow\uparrow-\downarrow\uparrow\uparrow\uparrow\right), \tag{32}\]
in which the notations \(\uparrow\) and \(\downarrow\) represent the third component of the quark's spin. Alternatively, one can also use the \(CG\) coefficients given in Ref. [15] to verify Eq. (32) for the different spin states. Note that the spin factors in Ref. [15] are shown in matrix form in the space spanned by the states \(\chi_{1-6}^{T}\), as indicated by the spin multiplets (2). Combining the two color configurations \(\phi_{1-2}^{T}\) in Eq. (1) with the six spin configurations \(\chi_{1-6}^{T}\) in Eq. (2), one can then construct the color-spin wavefunctions (3). The states of the hidden-flavor tetraquarks that are allowed to mix due to the chromomagnetic interaction are listed in Table 1.
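As a quick numerical cross-check of Eq. (32), the following minimal sketch (an illustration under the assumption of the standard \(\uparrow/\downarrow\) product basis for the four quark spins, not part of the original derivation) verifies that the two spin configurations written above are normalised and mutually orthogonal:

```python
import numpy as np

# Product spin basis for four quarks: index = 8*s1 + 4*s2 + 2*s3 + s4, with up = 0, down = 1.
def basis_state(spins):
    v = np.zeros(16)
    v[sum(s << (3 - i) for i, s in enumerate(spins))] = 1.0
    return v

up, dn = 0, 1
chi1 = basis_state([up, up, up, up])                                           # chi_1^T = |uuuu>
chi2 = 0.5 * (basis_state([up, up, up, dn]) + basis_state([up, up, dn, up])
              - basis_state([up, dn, up, up]) - basis_state([dn, up, up, up]))  # chi_2^T

print(chi1 @ chi1, chi2 @ chi2)   # 1.0 and 1.0: both states are normalised
print(chi1 @ chi2)                # 0.0: the two configurations are orthogonal
```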
|
2310.12001
|
Bayesian Flow Networks in Continual Learning
|
Bayesian Flow Networks (BFNs) have recently been proposed as one of the most
promising directions towards universal generative modelling, with the ability to
learn any data type. Their power comes from the expressiveness of neural
networks and Bayesian inference, which makes them suitable in the context of
continual learning. We delve into the mechanics behind BFNs and conduct
experiments to empirically verify their generative capabilities on non-stationary
data.
|
Mateusz Pyla, Kamil Deja, Bartłomiej Twardowski, Tomasz Trzciński
|
2023-10-18T14:32:20Z
|
http://arxiv.org/abs/2310.12001v1
|
# Bayesian Flow Networks in Continual Learning
###### Abstract
Bayesian Flow Networks (BFNs) have recently been proposed as one of the most promising directions towards universal generative modelling, with the ability to learn any data type. Their power comes from the expressiveness of neural networks and Bayesian inference, which makes them suitable in the context of continual learning. We delve into the mechanics behind BFNs and conduct experiments to empirically verify their generative capabilities on non-stationary data.
## 1 Introduction
Diffusion models [15] have been progressively advancing the state of the art in generative modelling, especially in the field of image processing [2; 8; 11]. This is thanks to the use of diffusion processes, which allow learning complex data distributions [7; 9; 11; 16; 17].
However, diffusion models tend to struggle when it comes to non-continuous data. This is mostly because the denoising process for discrete, discretised or tabular data is not easy to define. Therefore, Alex Graves et al. recently introduced **Bayesian Flow Networks (BFNs)**[6] to efficiently train the model and iteratively update the data parameters without a forward (noising) pass. The general idea behind this technique is to change the way in which we model the training data: instead of modelling a single instance, the authors propose to output the parameters of the distribution that best fits the training data. The main motivation behind this concept is that, by doing so, the authors introduce an elegant way to directly model discrete data distributions.
In this work, we argue that by directly modelling the parameters describing the original training data, BFNs can also be used to efficiently consolidate portions of separate data chunks. We therefore relate to the problem of continual learning, which tackles the ability of ML models to learn progressively as new data arrive. The Bayesian update is an elegant way to combine prior beliefs with information from new observations. However, in Bayesian learning we often face the issue of turning theory into practical implementations, limiting the use of the Bayesian learning paradigm [3; 20; 21].
In this preliminary study, we present the first benchmark of Bayesian Flow Networks in a continual-learning setup. We show how several known techniques that prevent catastrophic forgetting in neural networks can be adapted to continually train BFNs. We highlight their strengths and drawbacks and discuss future directions on how to employ BFNs to continually consolidate knowledge.
## 2 Related Work
Continual Learning (CL) gathers various approaches in machine learning that aim at reducing catastrophic forgetting, a phenomenon where models suffer an abrupt loss in performance when retrained with additional data. There are usually three standard groups of approaches that try to mitigate this issue: (i) architectural approaches - methods that focus on the structure of the model itself and add task-specific submodules to the architecture; (ii) memory approaches - methods that involve storing some extra information in memory, which is then used to rehearse knowledge during training on subsequent tasks; (iii) regularization approaches - methods that identify the weights important for the learned tasks and penalise large updates to those weights when learning a new task [18; 12].
Several of those approaches have been applied to generative continual learning. In particular, [13] adapt several regularisation-based methods and introduce Variational Continual Learning, where an additional architectural change is added with each task. In BooVAE [5], an additive aggregated-posterior expansion technique is used to continually train Variational Autoencoders, while [1] propose to continually disentangle data representations with a VAE. Several methods train GANs in CL scenarios, e.g. using a memory buffer [22]. Most similarly to this work, in [23] the authors benchmark continual learning of diffusion models with recent CL strategies.
## 3 Method
Although BFNs work on different types of data and with both discrete and continuous time steps, the most approachable way to understand their dynamics is through continuous data in the discrete-time setting, extending it as in 6.1.
### Inference
We use a neural network \(\Psi\) parameterised by \(\theta\) to learn the parameters \(\xi\) controlling the data distribution. The underlying distribution is complex; however, we can sample from it. The general idea is to start from an uneducated guess, a normal distribution with high variance, and iteratively improve the estimate of the data distribution with the help of the network as we sample more and more data. We assume that we model the data only with Gaussians.
As in diffusion models, we set the number of steps and establish an accuracy scheduler managing how informative the noised samples are at the various time steps. We start from a prior belief, for instance centered at 0 with a huge standard deviation. We want our network to predict better data parameters from the current estimate. To update the network weights, we calculate the gradient of a KL divergence between the predicted and original data in their noised forms. While explaining the mathematical formulation behind the process, we point to a concrete example in Figure 1.
More rigorously, we treat each data variable separately. Due to the independence, the input distribution can be expressed as the product of one-dimensional distributions.
\[p_{{}_{I}}(\mathbf{x}\mid\xi)=\Pi_{d=1}^{D}p_{{}_{I}}(x^{(d)}\mid\xi^{(d)}) \tag{1}\]
Rather than data points, we receive noisy samples drawn from a normal distribution centered at the true values, with variance determined purely by the accuracy scheduler.
\[p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right)=\Pi_{d=1}^{D}p_{{}_{S}} \left(y^{(d)}\mid x^{(d)};\alpha\right) \tag{2}\]
Figure 1: BFNs: Our goal is to model a one-dimensional data distribution controlled by an unknown \(\xi\), which we can only sample from. We start off with some initial prior belief (0 in this case) that we are very uncertain of (blue prior). We sample an observation \(y\) and add noise to obtain what we call the sender distribution, a Gaussian centered at \(y\). We pass the parameters of the input to the neural network, obtaining the output distribution, an improved version enriched by jointly processing all the variables. We comply with the noise scheduler to obtain the orange receiver. We minimise the KL divergence between the sender and the receiver so that our output gets closer to samples from the true data distribution.
Since we process each dimension independently, we need global feedback coming from the interactions between variables. The role of the neural network is to update the guess given the previous belief, so that we can better decode the sent sample.
\[p_{{}_{O}}(\mathbf{x}\mid\xi,t)=\Pi_{d=1}^{D}p_{{}_{O}}(x^{(d)}\mid\Psi^{(d)}( \xi,t)) \tag{3}\]
Since we do not know the true \(x^{(d)}\) but only \(p_{{}_{S}}\left(\cdot\mid x^{(d)};\alpha\right)\), we can only marginalise over all possible values \(x^{\prime(d)}\), weighted by the output probability, obtaining the receiver:
\[p_{{}_{R}}(\mathbf{y}\mid\xi;t,\alpha)=\mathop{\mathbb{E}}_{p_{{}_{O}}( \mathbf{x}^{\prime}|\xi;t)}p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x}^{\prime};\alpha\right) \tag{4}\]
Iteratively, for the next time steps, we apply Bayesian updates to improve the input distribution (which accumulates only local knowledge about a single dimension) with the knowledge acquired from the receiver (which encodes global knowledge about the interactions between dimensions). Both the input and the sender are Gaussian distributions factorised independently, hence the update is straightforward: \(\rho_{i+1}=\rho_{i}+\alpha\) and \(\mu_{i+1}=\frac{\rho_{i}\mu_{i}+\alpha\mathbf{y}}{\rho_{i+1}}\). Once we let the number of steps go to infinity, under mild conditions on the scheduler, we are able to efficiently compute the dynamics:
\[p_{{}_{P}}(\xi\mid\mathbf{x};t)=p_{{}_{U}}(\xi\mid\xi_{0},\mathbf{x};\beta(t)). \tag{5}\]
There is freedom in choosing the underlying network, as long as it returns the new parameters of the data (U-Net [14], Transformers [19], TabTransformer [10]) and inference is conditioned on the time step.
The proposed scheduler is of the form \(\beta(t)\doteq\sigma_{1}^{-2t}-1\) for \(t\in[0,1]\), yielding the accuracy rate \(\alpha(t)=\frac{2\log\sigma_{1}}{\sigma_{1}^{2t}}\). When \(\alpha\) is 0, the samples are uninformative to the model, and confidence increases with higher values. \(\sigma_{1}\) is a hyperparameter standing for the standard deviation at the final time.
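To make the discrete-time inference loop concrete, here is a minimal one-dimensional sketch (an illustration only, with an assumed data value to sample around; the network prediction enters through the loss in the next subsection). It uses the scheduler above with per-step accuracy \(\alpha_i=\beta(t_i)-\beta(t_{i-1})\) and the Bayesian update of the input parameters \((\mu,\rho)\):

```python
import numpy as np

sigma1, n_steps = 0.02, 20                    # sigma1 and the number of discrete steps (hyperparameters)
beta = lambda t: sigma1 ** (-2 * t) - 1.0     # accuracy schedule beta(t) = sigma1^{-2t} - 1

x_true = 0.7        # underlying data value; in practice we only receive noisy sender samples around it
mu, rho = 0.0, 1.0  # prior input distribution N(mu, 1/rho): an uninformative guess centred at 0

for i in range(1, n_steps + 1):
    t_prev, t_i = (i - 1) / n_steps, i / n_steps
    alpha_i = beta(t_i) - beta(t_prev)                     # accuracy added at step i
    y = np.random.normal(x_true, 1.0 / np.sqrt(alpha_i))   # sender sample: N(x, 1/alpha_i)
    # Bayesian update of the Gaussian input distribution:
    # rho_{i+1} = rho_i + alpha_i,  mu_{i+1} = (rho_i * mu_i + alpha_i * y) / rho_{i+1}
    rho, mu = rho + alpha_i, (rho * mu + alpha_i * y) / (rho + alpha_i)

print(mu, 1.0 / rho)   # the input mean converges towards x_true while its variance shrinks
```

Running the loop shows the input mean drifting towards the sampled data value while the input variance \(1/\rho\) shrinks, which is exactly the accumulation of sender information described above.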
### Training
The loss function can be intuitively understood as the cost of revealing the underlying data distribution with the least possible effort or information. Our objective is to match the output distribution to the data distribution indirectly, by optimizing the KL divergence between their noisy versions. Specifically, we minimise the KL divergence between the sender and receiver distributions, \(D_{KL}\left(p_{{}_{S}}\parallel p_{{}_{R}}\right)\), in order to bring the output predictions closer and closer to the true data values.
\[L^{n}(\mathbf{x})\doteq\mathop{\mathbb{E}}_{p(\xi_{1},\ldots,\xi_{n-1})}\sum_ {i=1}^{n}D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha_{i}\right) \parallel p_{{}_{R}}(\cdot\mid\xi_{i-1};t_{i-1},\alpha_{i})\right), \tag{6}\]
This loss indirectly optimises our true goal:
\[L^{r}(\mathbf{x})=-\mathop{\mathbb{E}}_{p_{{}_{F}}(\xi|\mathbf{x},1)}\ln p_{ {}_{O}}(\mathbf{x}\mid\xi;1). \tag{7}\]
Let us note that this form admits an information-theoretic interpretation: we minimise the number of nats required to transmit a sample between the two distributions.
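A rough sketch of the discrete-time loss for one-dimensional continuous data follows; it simplifies the output distribution to a point estimate \(\hat{x}\), so that the receiver becomes a Gaussian with the same accuracy \(\alpha_i\) as the sender and each KL term in Eq. (6) has the closed form \(\tfrac{\alpha_i}{2}(x-\hat{x})^2\). The `predict_x` stub stands in for the neural network and is purely illustrative:

```python
import numpy as np

def n_step_loss(x, predict_x, beta, n_steps, rng=np.random.default_rng(0)):
    """Monte-Carlo estimate of the n-step loss L^n(x) for 1-D continuous data,
    simplifying the output distribution to a point estimate x_hat so each KL term
    becomes alpha_i/2 * (x - x_hat)^2 (KL between two equal-variance Gaussians)."""
    mu, rho, loss = 0.0, 1.0, 0.0
    for i in range(1, n_steps + 1):
        t_prev, t_i = (i - 1) / n_steps, i / n_steps
        alpha_i = beta(t_i) - beta(t_prev)
        x_hat = predict_x(mu, t_prev)                 # network's guess from the current input mean
        loss += 0.5 * alpha_i * (x - x_hat) ** 2      # D_KL(sender || receiver) for this step
        y = rng.normal(x, 1.0 / np.sqrt(alpha_i))     # sender sample used to update the input
        rho, mu = rho + alpha_i, (rho * mu + alpha_i * y) / (rho + alpha_i)
    return loss

beta = lambda t: 0.02 ** (-2 * t) - 1.0
print(n_step_loss(0.7, lambda mu, t: mu, beta, 20))   # identity "network": finite but non-zero loss
```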
### BFN in Continual Learning
We propose to extend the basic idea of BFNs in order to benchmark it with several known continual-learning strategies. In particular, we start with a simple regularisation strategy, where we prevent the model in subsequent tasks from diverging from the previous one by penalising the \(\mathcal{L}_{1}\) or \(\mathcal{L}_{2}\) norm of the deviation from the previous weights.
We compare the regularisation approach with two rehearsal-based methods. In the first one, we employ simple buffer-based rehearsal, where we store a subset of previous data examples in a buffer and use them together with new data samples when retraining the model on new tasks. In the second one, taking advantage of the generative model we continually train, we propose to generate examples from previous tasks and use them as rehearsal samples in a generative-replay approach.
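The following sketch (with a toy linear model and hypothetical tensor shapes, purely for illustration) shows the two ingredients used in these strategies: an \(\mathcal{L}_{2}\) penalty that keeps the current weights close to those saved after the previous task, and the construction of mixed batches for rehearsal, where the buffer can hold either stored examples or samples generated by the previous model:

```python
import torch

def l2_regularised_loss(task_loss, model, prev_params, lam=100.0):
    """Regularisation strategy: penalise drift of the current weights away from the
    parameters saved after the previous task (L1 works analogously)."""
    penalty = sum(((p - p_old) ** 2).sum() for p, p_old in zip(model.parameters(), prev_params))
    return task_loss + lam * penalty

def make_rehearsal_batch(new_batch, buffer, n_replay=32):
    """Rehearsal: mix examples from previous tasks into the current batch. For generative
    replay, `buffer` would instead hold samples drawn from the frozen previous model."""
    idx = torch.randint(len(buffer), (n_replay,))
    return torch.cat([new_batch, buffer[idx]], dim=0)

# Minimal usage example with a toy model and random stand-in data (hypothetical shapes).
model = torch.nn.Linear(784, 784)
prev_params = [p.detach().clone() for p in model.parameters()]   # snapshot after task i-1
buffer = torch.randn(500, 784)                                   # stored (or generated) old examples
batch = make_rehearsal_batch(torch.randn(64, 784), buffer)
loss = l2_regularised_loss(((model(batch) - batch) ** 2).mean(), model, prev_params)
loss.backward()
```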
## 4 Experiments
We evaluate the performance of BFNs in Continual Learning using the standard MNIST dataset and our new scenario with tabular data on US flight connections in 2013 [4]. One of the most common settings in which we are able to assess the continual-learning capabilities of the proposed model is to split the training dataset into disjoint chunks and perform the training in a sequential way. In the Class-Incremental Learning setup, each task often contains the same number of classes. Each task \(\tau_{i}\) is associated with a dataset \(\mathcal{D}_{i}\), and the objective is to model the distribution of \(\mathcal{D}_{i}\).
In particular, we split MNIST in the \(5\times 2\) CIL setting by dividing it into 5 tasks, each containing two consecutive digit classes. Following [6], we also binarise the images. In the flight dataset, we group flights by the month of the journey, obtaining 12 tasks.
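A minimal sketch of the \(5\times 2\) class-incremental split (using random stand-in labels rather than the actual MNIST loader, which is assumed elsewhere):

```python
import numpy as np

def split_class_incremental(labels, classes_per_task=2):
    """Split a dataset into disjoint tasks by label: task 0 holds digits {0,1}, task 1 holds {2,3}, ..."""
    tasks = []
    all_classes = np.sort(np.unique(labels))
    for start in range(0, len(all_classes), classes_per_task):
        task_classes = all_classes[start:start + classes_per_task]
        tasks.append(np.where(np.isin(labels, task_classes))[0])   # example indices for this task
    return tasks

labels = np.random.randint(0, 10, size=60000)   # stand-in for the MNIST label vector
tasks = split_class_incremental(labels)         # 5 tasks of two consecutive digits each
print([len(t) for t in tasks])
```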
### Image dataset
In Figure 2, we present the results of our experiments with the MNIST dataset. To measure catastrophic forgetting, after each task we generate 1000 examples and report the share of each class as measured by an externally trained classifier. As visible, with finetuning (without any CL strategy) we observe catastrophic forgetting, as with each new task the model abruptly forgets how to generate examples from the previous classes. On the other hand, both buffer-based and generative replay prevent catastrophic forgetting, as even after the last task we can observe some generations of classes from the first task. For qualitative analysis, we provide some samples in Figure 4.
## 5 Tabular data
To evaluate the performance of BFNs in modelling categorical data in the continual-learning scenario, we refer to the problem of tabular data modelling. The results are presented in Figure 3. We inspect the test-loss metric as a proxy for the model's surprise at the provided data.
## 6 Conclusion, Limitations and Future work
Bayesian Flow Networks (BFNs) are an exciting family of generative models that are able to deal with various types of data. In this work we highlight that modelling the parameters of the data distribution does not by itself prevent those models from catastrophic forgetting. However, BFNs can benefit from known CL strategies such as rehearsal and generative replay. In future work, we plan to explore further how we can benefit from the combination of Bayesian updates with neural modelling in order to continually adjust the parameters of the data distribution.
Figure 3: Results of the loss applied to test images, (a) left: using finetuning, (b) right: the generative-replay method. The loss induced by BFNs measures bits per dimension and offers a (negative log-)likelihood interpretation. On the \(Y\) axis we proceed incrementally through the tasks, whereas the values on the \(X\) axis correspond to data batches from the indicated tasks.
Figure 2: Results of classification of the generated images, (a) left: using finetuning, (b) middle: the memory-based method, (c) right: the generative-based method. (d) The colours correspond to the percentage share of consecutive digits across the sequential training.
|
2302.12210
|
Using Colors and Sketches to Count Subgraphs in a Streaming Graph
|
Suppose we wish to estimate $\#H$, the number of copies of some small graph
$H$ in a large streaming graph $G$. There are many algorithms for this task
when $H$ is a triangle, but just a few that apply to arbitrary $H$. Here we
focus on one such algorithm, which was introduced by Kane, Mehlhorn, Sauerwald,
and Sun. The storage and update time per edge for their algorithm are both
$O(m^k/(\#H)^2)$, where $m$ is the number of edges in $G$, and $k$ is the
number of edges in $H$. Here, we propose three modifications to their algorithm
that can dramatically reduce both the storage and update time. Suppose that $H$
has no leaves and that $G$ has maximum degree $\leq m^{1/2 - \alpha}$, where
$\alpha > 0$. Define $C = \min(m^{2\alpha},m^{1/3})$. Then in our version of
the algorithm, the update time per edge is $O(1)$, and the storage is
approximately reduced by a factor of $C^{2k-t-2}$, where $t$ is the number of
vertices in $H$; in particular, the storage is $O(C^2 + m^k/(C^{2k-t-2}
(\#H)^2))$.
|
Shirin Handjani, Douglas Jungreis, Mark Tiefenbruck
|
2023-02-23T18:02:48Z
|
http://arxiv.org/abs/2302.12210v1
|
# Using Colors and Sketches to Count Subgraphs in a Streaming Graph
###### Abstract
Suppose we wish to estimate \(\#H\), the number of copies of some small graph \(H\) in a large streaming graph \(G\). There are many algorithms for this task when \(H\) is a triangle, but just a few that apply to arbitrary \(H\). Here we focus on one such algorithm, which was introduced by Kane, Mehlhorn, Sauerwald, and Sun. The storage and update time per edge for their algorithm are both \(O(m^{k}/(\#H)^{2})\), where \(m\) is the number of edges in \(G\), and \(k\) is the number of edges in \(H\). Here, we propose three modifications to their algorithm that can dramatically reduce both the storage and update time. Suppose that \(H\) has no leaves and that \(G\) has maximum degree \(\leq m^{1/2-\alpha}\), where \(\alpha>0\). Define \(C=\min(m^{2\alpha},m^{1/3})\). Then in our version of the algorithm, the update time per edge is \(O(1)\), and the storage is approximately reduced by a factor of \(C^{2k-t-2}\), where \(t\) is the number of vertices in \(H\); in particular, the storage is \(O(C^{2}+m^{k}/(C^{2k-t-2}(\#H)^{2}))\).
## 1 Introduction
Suppose that a large simple graph \(G\) is presented as a stream of edge insertions and deletions, and suppose that \(H\) is a very small graph (e.g., a small clique or cycle). Our goal is to estimate \(\#H\), the number of copies of \(H\) that appear in \(G\), where we are permitted a single pass through the stream. This problem has received a great deal of attention, particularly in the case where \(H\) is a triangle; however, there are only a few known techniques that apply to arbitrary \(H\). Here we focus on the technique that was developed in [22, 27], which we refer to as the [KMSS]-algorithm.
The [KMSS]-algorithm, which uses complex-valued linear sketches, has many strengths: it applies to arbitrary \(H\); it can be used in distributed settings; it allows edge deletions; and it is extremely efficient in a variety
of situations, such as when \(H\) is a star graph. However, there are many situations where the algorithm is not practical. Suppose \(G\) has \(m\) edges, and suppose \(H\) has \(k\) edges and \(t\) vertices. When the [KMSS]-algorithm produces a single estimate of \(\#H\), that estimate has variance \(\Theta(m^{k})\), so it is necessary to produce \(O(m^{k}/(\#H)^{2})\) estimates and average them. The storage and update time per edge are proportional to the number of estimates produced, and are therefore both \(O(m^{k}/(\#H)^{2})\).
In this paper, we describe three modifications to the [KMSS]-algorithm that greatly reduce both the storage and update time per edge. Suppose that \(H\) is a connected graph with no leaves. Suppose also that the maximum degree of any vertex in \(G\) is \(\Delta\leq m^{1/2-\alpha}\), where \(\alpha>0\), and define \(C=\min(m^{1/3},m^{2\alpha})\). Then the storage required by our algorithm is \(O(C^{2}+m^{k}/(C^{2k-t-2}(\#H)^{2}))\), i.e., it has been reduced approximately by a factor of \(C^{2k-t-2}\). The update time per edge is \(O(1)\).
The problem of counting copies of a small graph \(H\) in a large graph \(G\) has been studied extensively. It has many applications, as diverse as community detection, information retrieval, and motifs in bioinformatics; see for instance [5, 13, 15, 26, 32]. Here we restrict to the case where \(G\) is given as a data stream, and our goal is merely to estimate \(\#H\), as opposed to computing \(\#H\) exactly. Most work on this problem has addressed the case where \(H\) is a triangle [4, 7, 9, 10, 11, 14, 16, 17, 18, 19, 20, 23, 24, 25, 28, 29, 30]. A few authors have addressed other specific subgraphs, such as butterflies [31] and cycles [27]. We are only aware of a few algorithms that apply to arbitrary subgraphs [6, 8, 22, 21]. Two of these, [8] and [6], require multiple passes through the stream, which we do not allow here. The third, [22], presents the [KMSS]-algorithm, which is the focus of this paper. The last, [21], presents a vertex-sampling algorithm which, in some situations, is extremely efficient, requiring storage \(O(m/(\#H)^{1/\tau})\), where \(\tau\) is the fractional vertex cover number of \(H\). However, this bound requires a strong assumption on \(G\): it either requires that \(G\) have bounded degree, or it requires that the maximum degree in \(G\) is \((\#H)^{1/(2\tau)}\) and that some optimal fractional vertex cover of \(H\) can place non-zero degree on every vertex.
In order to explain our contribution to this problem, we first need to briefly review the [KMSS]-algorithm. Consider a fixed \(H\). Many independent estimates are made for \(\#H\), and they are then averaged. To get a single estimate, the first step is to arbitrarily assign directions to the edges of \(H\). We refer to the resulting digraph as \(\vec{H}\) and its edges as \(\overrightarrow{a_{1}a_{2}},\overrightarrow{a_{3}a_{4}},\ldots,\overrightarrow{a_{2k-1}a_{2k}}\). Also each edge \(vw\) of \(G\) is replaced by two directed edges \(\overrightarrow{vw}\) and \(\overrightarrow{wv}\). We refer to the resulting directed version of \(G\) as \(\vec{G}\). For any graph or digraph \(X\), we refer to its vertices and edges as \(\mathcal{V}(X)\) and \(\mathcal{E}(X)\). Now we define \(k\) functions \(\mathcal{M}_{i}\colon\mathcal{E}(\vec{G})\to\mathbf{C}\), one for each edge \(\overrightarrow{a_{2i-1}a_{2i}}\in\mathcal{E}(\vec{H})\); each \(\mathcal{M}_{i}\) maps edges of \(\mathcal{E}(\vec{G})\) to complex roots of unity. These functions are defined in such a way that they can "recognize" whether a \(k\)-tuple of edges \(\vec{T}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\) in \(\vec{G}\) forms a copy of \(\vec{H}\) with each \(\overrightarrow{a_{2i-1}a_{2i}}\) mapping to \(\overrightarrow{v_{2i-1}v_{2i}}\). In particular, if \(\vec{T}\) does form such a copy, then the expected value (over all permissible choices of the maps \(\mathcal{M}_{i}\)) of \(\prod_{i=1}^{k}\mathcal{M}_{i}(\overrightarrow{v_{2i-1}v_{2i}})\) is a non-zero constant; otherwise, the expected value is zero. Then, as the edges stream by, the \(k\) values \(\mathcal{Z}_{i}=\sum_{\overrightarrow{vw}\in\mathcal{E}(\vec{G})}\mathcal{M}_{i}(\overrightarrow{vw})\) are computed. Finally, when the stream ends,
the estimate of \(\#H\) is given by \(\prod_{i=1}^{k}\mathcal{Z}_{i}\) multiplied by an appropriate constant.
The key to the algorithm is how to define the functions \(\mathcal{M}_{i}\) so that they can recognize when \(\vec{T}\) forms a copy of \(\vec{H}\). Each of these functions \(\mathcal{M}_{i}\) has two parts: one part is meant to recognize when \(\vec{T}\) forms a homomorphic image of \(\vec{H}\), and the other part is meant to recognize when the \(t\) vertices of this homomorphic image are distinct. In this paper, we do not use the second part; we use a different method to ensure that the \(t\) vertices are distinct. We therefore omit the second part from our description, keeping in mind that this description differs somewhat from the one in [22]. For each vertex \(b\in H\), we define a hash function \(\mathcal{X}_{b}\colon\mathcal{V}(G)\to\mathbf{C}\), which maps vertices of \(G\) to complex \(\deg(b)^{\text{th}}\) roots of unity, where \(\deg(b)\) is the degree of \(b\) in \(H\). Then \(\mathcal{M}_{i}(\overrightarrow{vw})\) is defined to be \(\mathcal{X}_{a_{2i-1}}(v)\mathcal{X}_{a_{2i}}(w)\). It is not difficult to see that \(\prod_{i=1}^{k}\mathcal{M}_{i}(\overrightarrow{v_{2i-1}v_{2i}})\) has expected value \(1\) if \(\vec{T}\) forms a homomorphic image of \(\vec{H}\) with each \(\overrightarrow{a_{2i-1}a_{2i}}\) mapping to \(\overrightarrow{v_{2i-1}v_{2i}}\); otherwise, it has expected value \(0\).
We can now describe our contributions to this problem. We present three modifications to the [KMSS]-algorithm, which can be used separately or together to reduce the storage and update time per edge. First, we introduce a different method for ensuring that we count only those homomorphic images of \(\vec{H}\) that have \(t\) distinct vertices. We do this by assigning colors to the vertices of \(G\). Assuming there are \(C\) colors, we subdivide each sum \(\mathcal{Z}_{i}\) into \(C^{2}\) different sums, one for each pair of colors. For instance, there might be a red-blue sum
\[\mathcal{Z}_{i}^{\text{red,blue}}=\sum_{\begin{subarray}{c}v\text{ red}\\ w\text{ blue}\end{subarray}}\mathcal{M}_{i}(\overrightarrow{vw})\,.\]
There might also be analogous blue-green sums and green-red sums, and if we were counting triangles, then
\[\mathcal{Z}_{1}^{\text{red,blue}}\mathcal{Z}_{2}^{\text{blue,green}}\mathcal{ Z}_{3}^{\text{green,red}}\]
would give an estimate for the number of triangles whose three vertices were respectively red, blue, and green. This allows us to count only homomorphic images whose vertices all have different colors, which in turn ensures that the vertices are all distinct. However, making sure the vertices are distinct is not the primary reason we use colors. The primary reason is that it dramatically reduces the variance.
For our second modification, rather than defining one hash function \(\mathcal{X}\) for each vertex of \(H\), we define one for each half-edge of \(H\), with the condition that for any vertex \(v\) of \(G\) and \(b\) of \(H\), the product \(\prod_{h}\mathcal{X}_{h}(v)=1\), where the product is taken over all half-edges \(h\) in \(H\) that are incident to \(b\). This too reduces the variance of each estimate.
For the third modification, rather than using hash functions \(\mathcal{X}\) that map vertices to roots of unity, we use hash functions that map vertices to diagonal \(d\)-by-\(d\) matrices. Each position along the diagonal of the matrix more-or-less gives a separate estimate of \(\#H\), so in some sense, this is almost equivalent to making \(d\) independent estimates. The difference is that, when an edge streams by, instead of updating each \(\mathcal{Z}_{i}\) for \(d\) different
estimates, we only have to update each \(\mathcal{Z}_{i}\) for one matrix of estimates. This lets us reduce the update time per edge approximately by a factor of \(d\).
This paper is organized as follows. In Section 2, we describe our modified version of the [KMSS]-algorithm and prove that it gives an unbiased estimate of \(\#H\). In Section 3, we bound the variance of our estimate. In Section 4, we compare the storage and update time of our algorithm to that of the original algorithm.
The authors would like to thank Kyle Hofmann, Anthony Gamst, and Eric Price for many helpful conversations.
## 2 Description of Algorithm
In this section, we describe our algorithm and show that it gives an unbiased estimate of \(\#H\). We only explain how to use the algorithm to produce a single estimate of \(\#H\), but in order to get a more accurate estimate of \(\#H\), we would compute many such estimates and take their average.
Fix some small graph \(H\). We assume throughout the paper that \(H\) is connected and has no leaves. Let \(t\) and \(k\) respectively denote the number of vertices and edges in \(H\). Arbitrarily assign directions to the edges of \(H\), and call the resulting directed graph \(\vec{H}\). We assume that the \(t\) vertices of \(H\) are labeled \(1,\ldots,t\), and the \(k\) edges are \(\overrightarrow{a_{1}a_{2}},\ldots,\overrightarrow{a_{2k-1}a_{2k}}\), where each \(a_{i}\in\{1,\ldots,t\}\). \(\vec{H}\) has \(2k\) half-edges, which we call \(h_{1},\ldots,h_{2k}\), where \(h_{2i-1}\) and \(h_{2i}\) are respectively the two halves of \(\overrightarrow{a_{2i-1}a_{2i}}\). In particular, each \(h_{j}\) is incident to \(a_{j}\). For \(b\in\mathcal{V}(\vec{H})\) define \(\Gamma(b)=\{i:a_{i}=b\}\). In other words, \(\Gamma(b)\) tells which half-edges are incident to \(b\). Figure 1 illustrates an example where \(t=4\) and \(k=5\).
For each vertex \(b\in\mathcal{V}(\vec{H})\), select an arbitrary element \(i\in\Gamma(b)\), and call \(h_{i}\) the _distinguished_ half-edge at \(b\). Observe that there are \(2k\) half-edges
in \(\vec{H}\), of which \(t\) are distinguished and \(2k-t\) are not.
### The Functions \(\mathcal{X}_{i}\)
The [KMSS]-algorithm uses hash functions \(\mathcal{X}\) that map vertices of \(G\) to complex roots of unity. Here we define similar functions, but there are two differences. First, instead of having one function \(\mathcal{X}\) for each vertex of \(H\), we have one for each half-edge of \(H\). Second, we allow the more general setting where the co-domain of each \(\mathcal{X}\) is a group of diagonal matrices.
Let \(\mathcal{G}\) be any finite group of diagonal matrices with the property that the average of the elements of \(\mathcal{G}\) (i.e., \(\sum_{g\in\mathcal{G}}g/|\mathcal{G}|\)) is the zero matrix. Note that since \(\mathcal{G}\) consists of diagonal matrices, it is abelian. We use \(d\) to denote the dimension of the matrices in \(\mathcal{G}\). We are primarily interested in two types of groups \(\mathcal{G}\). In the first type, \(d=1\), and the elements of \(\mathcal{G}\) are the complex \(r^{\text{th}}\) roots of unity, for some \(r\geq 2\). In that case, the matrices can be viewed as complex numbers and are therefore equivalent to what's used in the [KMSS]-algorithm. For the second type of \(\mathcal{G}\), \(d\geq 2\). Let \(\omega=e^{2\pi i/d}\), and let \(M\) be the square diagonal matrix that has \(1,\omega,\omega^{2},\ldots,\omega^{d-1}\) along the diagonal. Then \(\mathcal{G}\) is the group generated by \(M\) and \(-I\) (where \(I\) is the \(d\)-dimensional identity matrix); thus \(\mathcal{G}\) has \(2d\) elements: \(\pm I,\pm M,\pm M^{2},\ldots,\pm M^{d-1}\). In this paper, we focus on those two types of \(\mathcal{G}\), but we remark that there are other \(\mathcal{G}\) that satisfy the given conditions; e.g., diagonal matrices whose diagonal entries are all \(\pm 1\). The entire discussion in this section applies to any such \(\mathcal{G}\); in particular, our algorithm gives an unbiased estimate of \(\#H\) for any such \(\mathcal{G}\). However, the discussion of the variance in the next section applies only to these two specific choices of \(\mathcal{G}\).
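A quick numerical sketch of the second type of group (with a hypothetical \(d=4\)), verifying the only property required below, namely that the elements of \(\mathcal{G}\) average to the zero matrix:

```python
import numpy as np

def group_elements(d):
    """The group {+-I, +-M, ..., +-M^(d-1)} with M = diag(1, w, ..., w^(d-1)), w = exp(2*pi*i/d)."""
    M = np.diag(np.exp(2j * np.pi * np.arange(d) / d))
    powers = [np.linalg.matrix_power(M, j) for j in range(d)]
    return [sgn * P for sgn in (+1, -1) for P in powers]

G = group_elements(4)
print(len(G))                              # 2d = 8 elements
print(np.round(sum(G) / len(G), 12))       # the average of the elements is the zero matrix
```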
Fix any such group \(\mathcal{G}\), and for each \(1\leq i\leq 2k\), define a hash function \(\mathcal{X}_{i}\colon\mathcal{V}(G)\to\mathcal{G}\). If \(h_{i}\) is a non-distinguished half-edge of \(\vec{H}\), then for each \(v\in\mathcal{V}(G)\), the value \(\mathcal{X}_{i}(v)\) is a random element of \(\mathcal{G}\), and the functions \(\mathcal{X}_{i}\) for non-distinguished \(h_{i}\) are chosen independently and uniformly from a family of \(4k\)-wise independent hash functions. If \(h_{i}\) is the distinguished half-edge at \(b\), then \(\mathcal{X}_{i}(v)\) is defined by
\[\mathcal{X}_{i}(v)=\prod_{j\in\Gamma(b),j\neq i}\mathcal{X}_{j}(v)^{-1}\,.\]
If \(i\) is the only element of \(\Gamma(b)\), then \(\mathcal{X}_{i}(v)=I\). Observe that this definition of \(\mathcal{X}_{i}\) ensures that for any vertex \(b\) of \(H\) and any \(v\in\mathcal{V}(G)\),
\[\prod_{j\in\Gamma(b)}\mathcal{X}_{j}(v)=I\,.\]
**Lemma 1**.: _Let \(b\in\{1,\ldots,t\}\) be any vertex of \(\vec{H}\), and suppose its degree is \(\delta\). Suppose \(\Gamma(b)=\{i_{1},\ldots,i_{\delta}\}\); i.e., \(h_{i_{1}},\ldots,h_{i_{\delta}}\) are the half-edges of \(\vec{H}\) incident to \(b\). Let \(v_{1},\ldots,v_{\delta}\) be any \(\delta\) not-necessarily-distinct vertices of \(G\). Then \(\mathcal{X}_{i_{1}}(v_{1})\cdots\mathcal{X}_{i_{\delta}}(v_{\delta})\) is equal to \(I\) if \(v_{1}=\cdots=v_{\delta}\), and otherwise it is a uniformly random element of \(\mathcal{G}\)._
**Proof:** If \(\delta=1\), then the result is clearly true, so assume \(\delta>1\). Assume without loss of generality that the distinguished half-edge at \(b\) is \(h_{i_{\delta}}\). Then
by definition,
\[\mathcal{X}_{i_{\delta}}(v_{\delta})=\mathcal{X}_{i_{1}}(v_{\delta})^{-1}\cdots \mathcal{X}_{i_{\delta-1}}(v_{\delta})^{-1}\,,\]
so
\[\prod_{j=1}^{\delta}\mathcal{X}_{i_{j}}(v_{j})=\prod_{j=1}^{\delta-1}\mathcal{ X}_{i_{j}}(v_{j})\mathcal{X}_{i_{j}}(v_{\delta})^{-1}\,. \tag{1}\]
If \(v_{1}=\cdots=v_{\delta}\), then (1) is equal to \(I\). Now assume that some \(v_{j}\neq v_{\delta}\). Then for that \(j\), \(\mathcal{X}_{i_{j}}(v_{j})\mathcal{X}_{i_{j}}(v_{\delta})^{-1}\) is the quotient of two independent uniformly random elements of \(\mathcal{G}\), and is thus a uniformly random element of \(\mathcal{G}\). Also, none of \(h_{i_{1}},\ldots,h_{i_{\delta-1}}\) are distinguished, so
\[\left(\mathcal{X}_{i_{1}}(v_{1})\mathcal{X}_{i_{1}}(v_{\delta})^{-1}\right), \ldots,\left(\mathcal{X}_{i_{\delta-1}}(v_{\delta-1})\mathcal{X}_{i_{\delta-1 }}(v_{\delta})^{-1}\right)\]
are independent for all \(v_{j}\neq v_{\delta}\), and the rest are \(I\). Since at least one is uniformly random, their product is as well.
### The Functions \(\mathcal{M}_{i}\)
Let \(\vec{G}\) be the directed graph obtained by replacing each edge \(vw\) of \(G\) by two directed edges, \(\overrightarrow{vw}\) and \(\overrightarrow{wv}\). Each time an edge of \(G\) streams by, treat it as two directed edges of \(\vec{G}\). From now on, we use \(m\) to refer to the number of edges in \(\vec{G}\). Arguably, we should use \(2m\); however, \(m\) will be more convenient, and the factor of \(2\) will be irrelevant to all of our conclusions, which use \(O()\) notation.
For each edge \(\overrightarrow{a_{2i-1}a_{2i}}\) of \(\vec{H}\), define a function \(\mathcal{M}_{i}\colon\mathcal{E}(\vec{G})\to\mathcal{G}\) by
\[\mathcal{M}_{i}(\overrightarrow{vw})=\mathcal{X}_{2i-1}(v)\mathcal{X}_{2i}(w )\,.\]
For any \(k\)-tuple \(\vec{T}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\) of (not necessarily distinct) edges in \(\mathcal{E}(\vec{G})\), define
\[\mathcal{Q}(\vec{T})=\prod_{i=1}^{k}\mathcal{M}_{i}(\overrightarrow{v_{2i-1}v_{2i}})=\prod_{j=1}^{2k}\mathcal{X}_{j}(v_{j})\,,\]
and for each vertex \(b\in\mathcal{V}(\vec{H})\), define
\[\mathcal{P}_{b}(\vec{T})=\prod_{j\in\Gamma(b)}\mathcal{X}_{j}(v_{j})\,.\]
Since every half-edge of \(\vec{H}\) is in exactly one of the sets \(\Gamma(b)\), we have
\[\mathcal{Q}(\vec{T})=\prod_{b\in\mathcal{V}(\vec{H})}\mathcal{P}_{b}(\vec{T})\,.\]
The function \(\mathcal{Q}\) will in a sense "test" whether \(\vec{T}\) forms a copy of \(\vec{H}\).
**Lemma 2**.: _Let \(\vec{T}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\) be any \(k\)-tuple of edges of \(\vec{G}\). Suppose \(f\colon\mathcal{E}(\vec{H})\to\mathcal{E}(\vec{G})\) sends \(\overrightarrow{a_{2i-1}a_{2i}}\) to \(\overrightarrow{v_{2i-1}v_{2i}}\) for each \(i\). If \(f\) induces a homomorphism from \(\vec{H}\) to \(\vec{G}\), then \(\mathcal{Q}(\vec{T})=I\). If \(f\) does not induce such a homomorphism, then \(\mathcal{Q}(\vec{T})\) is a uniformly random element of \(\mathcal{G}\)._
**Proof:** Suppose \(f\) induces a homomorphism from \(\vec{H}\) to \(\vec{G}\). Let \(b\in\{1,\ldots,t\}\) be any vertex of \(H\), and suppose the homomorphism sends \(b\) to \(w\). Suppose \(\Gamma(b)=\{j_{1},\ldots,j_{d}\}\); i.e., \(h_{j_{1}},\ldots,h_{j_{d}}\) are the half-edges of \(\vec{H}\) that are incident to \(b\). Then \(v_{j_{1}},\ldots,v_{j_{d}}\) must all be equal to \(w\). By Lemma 1, \(\mathcal{X}_{j_{1}}(v_{j_{1}})\cdots\mathcal{X}_{j_{d}}(v_{j_{d}})=\ I\). Equivalently, \(\mathcal{P}_{b}(\vec{T})=I\). This is true for every \(b\in\mathcal{V}(\vec{H})\), so
\[\mathcal{Q}(\vec{T})=\prod_{b\in\mathcal{V}(\vec{H})}\mathcal{P}_{b}(\vec{T})=I.\]
Now suppose \(f\) does not induce such a homomorphism. Then there must be some vertex \(b\) of \(H\) such that, if \(\Gamma(b)=\{j_{1},\ldots,j_{d}\}\), then the vertices \(v_{j_{1}},\ldots,v_{j_{d}}\) are not all equal. Thus by Lemma 1, \(\mathcal{X}_{j_{1}}(v_{j_{1}})\cdots\mathcal{X}_{j_{d}}(v_{j_{d}})\) is a uniformly random element of \(\mathcal{G}\), i.e., \(\mathcal{P}_{b}(\vec{T})\) is a uniformly random element. \(\mathcal{P}_{b}(\vec{T})\) is independent of \(\mathcal{P}_{c}(\vec{T})\) for any other \(c\in\mathcal{V}(\vec{H})\), so \(\prod_{c\in\mathcal{V}(\vec{H})}\mathcal{P}_{c}(\vec{T})\) is also a uniformly random element; i.e., \(\mathcal{Q}(\vec{T})\) is a uniformly random element.
### Coloring Vertices
Fix some number of colors \(C\geq t\). For the purposes of bounding the variance, we will later assume that the maximum degree of any vertex of \(G\) is \(\leq m^{1/2-\alpha}\) and then set \(C=\min(m^{1/3},m^{2\alpha})\); however, here \(C\) may take any value \(\geq t\). Define a hash function \(\mathcal{C}\colon\mathcal{V}(G)\to\{1,\ldots,C\}\) that assigns a color to each vertex of \(G\). For each vertex \(v\), \(\mathcal{C}(v)\) is a uniformly random color, and \(\mathcal{C}\) is chosen uniformly at random from a family of \(4k\)-wise independent hash functions.
Consider functions \(f\colon\mathcal{E}(\vec{H})\to\mathcal{E}(\vec{G})\). There are \(m^{k}\) such functions, but we want to find only the ones that map \(\vec{H}\) isomorphically onto its image. Suppose that \(f\) maps the edges \(\overrightarrow{a_{1}a_{2}},\ldots,\overrightarrow{a_{2k-1}a_{2k}}\) to the edges \(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}}\) respectively. Then for any vertex \(b\in\mathcal{V}(\vec{H})\), all of the vertices \(\{a_{i}:i\in\Gamma(b)\}\) are equal to \(b\); i.e., they're all the same vertex. Therefore, a necessary condition for \(f\) to induce an isomorphism is that all the vertices \(\{v_{i}:i\in\Gamma(b)\}\) are the same vertex. In particular, a necessary condition is that all the vertices \(\{v_{i}:i\in\Gamma(b)\}\) have the same color. Thus we say that either the map \(f\) or the \(k\)-tuple of edges \(\vec{T}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\) is _color-compatible_ if for every \(b\in\mathcal{V}(\vec{H})\), all the vertices \(\{v_{i}:i\in\Gamma(b)\}\) have the same color. More specifically, for any ordered \(t\)-tuple of colors \((c_{1},\ldots,c_{t})\), we say that \(\vec{T}\) is \((c_{1},\ldots,c_{t})\)_-compatible_ if for every \(b\in\mathcal{V}(\vec{H})\), all the vertices \(\{v_{i}:i\in\Gamma(b)\}\) have color \(c_{b}\), or equivalently, if \(\mathcal{C}(v_{i})=c_{a_{i}}\) for every \(1\leq i\leq 2k\). Thus \(\vec{T}\) is color-compatible if there exists a \(t\)-tuple \((c_{1},\ldots,c_{t})\) such that \(\vec{T}\) is \((c_{1},\ldots,c_{t})\)-compatible. Furthermore, if \(\vec{T}\) is \((c_{1},\ldots,c_{t})\)-compatible and the \(t\) colors \(c_{1},\ldots,c_{t}\) are distinct, then we will say that \(\vec{T}\) is _distinctly color-compatible_.
As we saw in Lemma 2, \(\mathcal{Q}(\vec{T})\) is equal to \(I\) if \(\vec{T}\) forms a homomorphic image of \(\vec{H}\), and otherwise is a uniformly random element of \(\mathcal{G}\). The strategy in [22] is basically to compute the sum of \(\mathcal{Q}(\vec{T})\) over all \(\vec{T}\). The sum then has \(m^{k}\) terms and therefore tends to have high variance. Here,
rather than summing over all \(\vec{T}\), we will only sum over distinctly color-compatible \(\vec{T}\). The resulting sum will then have far fewer terms and therefore tend to have far lower variance.
For colors \(c_{1},c_{2}\in\{1,\ldots,C\}\) and \(1\leq i\leq k\), define
\[\mathcal{Z}_{i}^{c_{1},c_{2}}=\sum_{\begin{subarray}{c}\overrightarrow{vw}\in\mathcal{E}(\vec{G})\ :\\ \mathcal{C}(v)=c_{1},\ \mathcal{C}(w)=c_{2}\end{subarray}}\mathcal{M}_{i}(\overrightarrow{vw})\,. \tag{2}\]
Thus there are \(C^{2}k\) such sums, and \(\mathcal{Z}_{i}^{c_{1},c_{2}}\) is the sum of \(\mathcal{M}_{i}(\overrightarrow{vw})\) over all edges \(\overrightarrow{vw}\) for which the color of \(v\) is \(c_{1}\) and the color of \(w\) is \(c_{2}\). Also, define
\[\mathcal{S}_{(c_{1},\ldots,c_{t})}=\prod_{i=1}^{k}\mathcal{Z}_{i}^{c_{a_{2i-1}},c_{a_{2i}}}\;. \tag{3}\]
We use \(E(\,)\) to denote expected value (not to be confused with \(\mathcal{E}(\,)\), which refers to the edge-set). We use \(\operatorname{tr}(\,)\) to denote the trace of a matrix.
**Lemma 3**.: _For \(c_{1},\ldots,c_{t}\) distinct, \(E(\operatorname{tr}(\mathcal{S}_{(c_{1},\ldots,c_{t})})/d)\) is equal to the number of \((c_{1},\ldots,c_{t})\)-compatible maps \(f\colon\mathcal{E}(\vec{H})\to\mathcal{E}(\vec{G})\) that induce injective homomorphisms from \(\vec{H}\) to \(\vec{G}\)._
**Proof:** From the definitions of \(\mathcal{S}_{(c_{1},\ldots,c_{t})}\) and \(\mathcal{Z}_{i}^{c_{1},c_{2}}\), we have
\[\mathcal{S}_{(c_{1},\ldots,c_{t})} = \prod_{i=1}^{k}\mathcal{Z}_{i}^{c_{a_{2i-1}},c_{a_{2i}}} \tag{4}\] \[= \prod_{i=1}^{k}\sum_{\begin{subarray}{c}\overrightarrow{vw}\in\mathcal{E}(\vec{G}):\\ \mathcal{C}(v)=c_{a_{2i-1}},\ \mathcal{C}(w)=c_{a_{2i}}\end{subarray}}\mathcal{M}_{i}(\overrightarrow{vw})\] \[= \sum_{\begin{subarray}{c}\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}}:\\ \mathcal{C}(v_{j})=c_{a_{j}}\end{subarray}}\prod_{i=1}^{k}\mathcal{M}_{i}(\overrightarrow{v_{2i-1}v_{2i}})\] \[= \sum_{\begin{subarray}{c}\vec{T}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\\ \text{is }(c_{1},\ldots,c_{t})\text{-compatible}\end{subarray}}\prod_{i=1}^{k}\mathcal{M}_{i}(\overrightarrow{v_{2i-1}v_{2i}})\] \[= \sum_{\begin{subarray}{c}\vec{T}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\\ \text{is }(c_{1},\ldots,c_{t})\text{-compatible}\end{subarray}}\mathcal{Q}(\vec{T})\,.\]
In that last sum, there is one term for every \((c_{1},\ldots,c_{t})\)-compatible map \(f\colon\mathcal{E}(\vec{H})\to\mathcal{E}(\vec{G})\). Consider any one such term. By Lemma 2, if \(f\) does not induce a homomorphism from \(\vec{H}\) to \(\vec{G}\), then that term is a uniformly random element of \(\mathcal{G}\), and, by our assumption on \(\mathcal{G}\), its trace therefore has expected value \(0\). Thus those terms do not contribute to \(E(\operatorname{tr}(\mathcal{S}_{(c_{1},\ldots,c_{t})}))\). If \(f\) does induce such a homomorphism, then by Lemma 2, that term is equal to \(I\), so it contributes \(d\) to the trace of \(\mathcal{S}_{(c_{1},\ldots,c_{t})}\). Thus \(E(\operatorname{tr}(\mathcal{S}_{(c_{1},\ldots,c_{t})})/d)\) is equal to the number of \((c_{1},\ldots,c_{t})\)-compatible maps \(f\) that induce homomorphisms from \(\vec{H}\) to \(\vec{G}\). Since the colors \(c_{1},\ldots,c_{t}\) were assumed to
be distinct, any such homomorphism sends the vertices of \(\vec{H}\) to vertices of \(\vec{G}\) with different colors and is therefore injective.
Define
\[\mathcal{S}=\sum_{\begin{subarray}{c}(c_{1},\ldots,c_{t})\\ \text{distinct}\end{subarray}}\mathcal{S}_{(c_{1},\ldots,c_{t})}\,.\]
**Theorem 1**.: \[E\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)}\cdot\frac{\operatorname{tr}(\mathcal{ S})}{d\cdot\operatorname{auto}(H)}\right)=\#H\,,\]
_where \(\operatorname{auto}(H)\) is the number of automorphisms of \(H\)._
**Proof:** By Lemma 3, if \(c_{1},\ldots,c_{t}\) are distinct colors, then \(\operatorname{tr}(\mathcal{S}_{(c_{1},\ldots,c_{t})})/d\) gives an unbiased estimate of the number of \((c_{1},\ldots,c_{t})\)-compatible maps \(f\colon\mathcal{E}(\vec{H})\to\mathcal{E}(\vec{G})\) that induce injective homomorphisms from \(\vec{H}\) to \(\vec{G}\), i.e., the number of injective homomorphic images of \(\vec{H}\) in \(\vec{G}\) whose vertices have colors \(c_{1},\ldots,c_{t}\) respectively. Summing over distinct \(c_{1},\ldots,c_{t}\), we see that \(\operatorname{tr}(\mathcal{S})/d\) gives an unbiased estimate of the number of injective homomorphic images whose vertices have distinct colors. The probability that a randomly colored injective homomorphic image of \(\vec{H}\) has distinct colors is
\[\frac{C(C-1)\cdots(C-t+1)}{C^{t}},\]
so we divide by this expression. Finally, each copy of \(H\) gets counted as \(\operatorname{auto}(H)\) different injective homomorphic images, so we divide by \(\operatorname{auto}(H)\).
Theorem 1 provides the method for counting copies of \(H\). As the edges stream by, we compute the sums \(\mathcal{Z}_{i}^{c_{1},c_{2}}\). In particular, if the edge \(\overrightarrow{vw}\) streams by, then for each \(1\leq i\leq k\), we compute \(\mathcal{M}_{i}(\overrightarrow{vw})\) and add it to the sum \(\mathcal{Z}_{i}^{\mathcal{C}(v),\mathcal{C}(w)}\). (For an edge-deletion, we subtract \(\mathcal{M}_{i}(\overrightarrow{vw})\) from \(\mathcal{Z}_{i}^{\mathcal{C}(v),\mathcal{C}(w)}\).) Once the data-stream has ended, for every \(t\)-tuple of distinct colors \((c_{1},\ldots,c_{t})\), we compute the product \(\mathcal{S}_{(c_{1},\ldots,c_{t})}\) using Equation (3). Finally, we sum these values to get \(S\), take the trace, and multiply by
\[\frac{C^{t}}{C(C-1)\cdots(C-t+1)\cdot d\cdot\operatorname{auto}(H)}\]
to get the final estimate. We refer to this as _Algorithm 1_ and summarize the steps in Table 1. Observe that after the data-stream ends, we do a potentially large computation, which could involve computing roughly \(C^{t}\) values \(\mathcal{S}_{(c_{1},\ldots,c_{t})}\). There are often, but not always, ways to do this computation with less than \(C^{t}\) work. This is discussed further in Section 4.
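To make Algorithm 1 concrete, here is a rough end-to-end sketch for \(H\) = triangle with \(d=1\): random signs replace the hash families, and per-vertex hashes are used as a simplification of the half-edge construction above. The colored sums \(\mathcal{Z}_{i}^{c_{1},c_{2}}\) are maintained during the stream and then combined over distinct color triples as in Theorem 1:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
n, p, C = 30, 0.3, 8                                     # graph size, edge probability, number of colors
A = np.triu(rng.random((n, n)) < p, k=1); A = (A | A.T).astype(int)
true_triangles = np.trace(np.linalg.matrix_power(A, 3)) // 6
stream = np.argwhere(A)                                  # directed edges, processed one by one

def one_estimate():
    color = rng.integers(0, C, size=n)                   # random coloring of V(G)
    s = rng.choice([-1, 1], size=(3, n))                 # +-1 hash per vertex of H (triangle)
    Z = np.zeros((3, C, C))                              # Z[i, c1, c2] for the k = 3 edges of H
    for u, v in stream:                                  # streaming update: O(k) work per edge
        for i in range(3):
            Z[i, color[u], color[v]] += s[i, u] * s[(i + 1) % 3, v]
    # Final computation: sum over ordered triples of *distinct* colors.
    S = sum(Z[0, c1, c2] * Z[1, c2, c3] * Z[2, c3, c1]
            for c1, c2, c3 in permutations(range(C), 3))
    return (C**3 / (C * (C - 1) * (C - 2))) * S / 6      # auto(triangle) = 6, d = 1

print(true_triangles, np.mean([one_estimate() for _ in range(200)]))
```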
In the case where \(\mathcal{G}=\{\pm I,\pm M,\pm M^{2},\ldots,\pm M^{d-1}\}\) with \(d>1\), a very slight modification to Algorithm 1 reduces the update time per edge by roughly a factor of \(d\). In this modified algorithm, which we call _Algorithm 2_, we do not compute the sums \(\mathcal{Z}_{i}^{c_{1},c_{2}}\) until after the data stream has ended. Instead, we keep counts of how many times each \(M^{j}\) would have contributed to \(\mathcal{Z}_{i}^{c_{1},c_{2}}\). Thus we have a count for each \(i,j,c_{1},c_{2}\), which we call \(\operatorname{Count}_{c_{1},c_{2}}(i,j)\). Suppose that when some edge \(\overrightarrow{vw}\) streams by, we compute \(\mathcal{M}_{i}(\overrightarrow{vw})\) and find that it is equal to \(M^{j}\). Rather than immediately adding \(M^{j}\) to \(\mathcal{Z}_{i}^{\mathcal{C}(v),\mathcal{C}(w)}\), we add \(1\) to \(\operatorname{Count}_{\mathcal{C}(v),\mathcal{C}(w)}(i,j)\)
**Initialize:**
For \(c_{1},c_{2}\in\{1,\ldots,C\}\) and each \(1\leq i\leq k\), set \(\mathcal{Z}_{i}^{c_{1},c_{2}}=0\).
**Update:**
When an edge \(\overrightarrow{vw}\) streams by, for each \(1\leq i\leq k\), update
\[\begin{array}{lll}\mathcal{Z}_{i}^{\mathcal{C}(v),\mathcal{C}(w)}&\leftarrow& \mathcal{Z}_{i}^{\mathcal{C}(v),\mathcal{C}(w)}+\mathcal{M}_{i}( \overrightarrow{vw})\,,&\mbox{ for an insertion,}\\ \mathcal{Z}_{i}^{\mathcal{C}(v),\mathcal{C}(w)}&\leftarrow&\mathcal{Z}_{i}^{ \mathcal{C}(v),\mathcal{C}(w)}-\mathcal{M}_{i}(\overrightarrow{vw})\,,&\mbox { for a deletion.}\end{array}\]
**Final Computation:**
For \((c_{1},\ldots,c_{t})\) distinct, compute
\[\mathcal{S}_{(c_{1},\ldots,c_{t})}=\prod_{i=1}^{k}\mathcal{Z}_{i}^{c_{a_{2i-1}},c_{a_{2i}}}\,.\]
Then compute
\[\mathcal{S}=\sum_{\begin{subarray}{c}(c_{1},\ldots,c_{t})\\ \mbox{distinct}\end{subarray}}\mathcal{S}_{(c_{1},\ldots,c_{t})}\,.\]
Output
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)}\right)\left(\frac{\mbox{tr}(\mathcal{ S})}{d\cdot\mbox{auto}(H)}\right)\,.\]
Table 1: Algorithm 1
(If \(\overrightarrow{vw}\) is an edge-deletion or if \(\mathcal{M}_{i}(\overrightarrow{vw})\) is equal to \(-M^{j}\), then we instead subtract \(1\) from the count.) Thus, rather than updating \(d\) diagonal entries, we update one count, saving a factor of \(d\) in update time. The storage does not change much: for each \(\mathcal{Z}_{i}^{c_{1},c_{2}}\), rather than storing the values of \(d\) diagonal entries, we store \(d\) counts. After the data stream ends, we compute each
\[\mathcal{Z}_{i}^{c_{1},c_{2}}=\sum_{j=0}^{d-1}\text{Count}_{c_{1},c_{2}}(i,j)M^{j}\,. \tag{5}\]
Note that Equation (5) can be evaluated using a fast Fourier transform, though this is unlikely to have much effect on the overall run time. The steps of Algorithm 2 are summarized in Table 2.
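A minimal sketch of the count-based bookkeeping of Algorithm 2 and of the reconstruction in Equation (5); the decomposition of an \(\mathcal{M}_{i}(\overrightarrow{vw})\) value into a sign and a power \(j\) is assumed to be supplied by the caller:

```python
import numpy as np

d, C, k = 4, 8, 3
M = np.diag(np.exp(2j * np.pi * np.arange(d) / d))       # M = diag(1, w, ..., w^(d-1))
count = np.zeros((C, C, k, d))                            # Count_{c1,c2}(i, j)

def update(c1, c2, i, sign, j, deletion=False):
    """Streaming update: an edge whose M_i value is sign * M^j changes a single counter by +-1."""
    count[c1, c2, i, j] += -sign if deletion else sign

def reconstruct_Z(c1, c2, i):
    """Eq. (5): Z_i^{c1,c2} = sum_j Count_{c1,c2}(i,j) * M^j, evaluated after the stream ends."""
    return sum(count[c1, c2, i, j] * np.linalg.matrix_power(M, j) for j in range(d))

update(0, 1, 0, +1, 2)                # an insertion edge whose M_0 value is +M^2
update(0, 1, 0, -1, 3)                # another edge contributing -M^3
print(np.round(np.diag(reconstruct_Z(0, 1, 0)), 3))
```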
## 3 The Variance
In this section, we bound the variance of the estimate given by our algorithm. Note that the variance is the same whether we use Algorithm 1 or Algorithm 2, since they produce the same estimate, so we do not distinguish between the two. The variance does however depend on the choice of \(\mathcal{G}\), and our proof only applies when \(\mathcal{G}\) is either the group of \(r^{\text{th}}\) roots of unity or the group \(\{\pm I,\pm M,\pm M^{2},\ldots,\pm M^{d-1}\}\). In either case, the variance is a large sum, but most terms in the sum are zero. In Section 3.1, we give conditions that classify which terms contribute non-trivially to the sum when \(\mathcal{G}\) is the group of \(r^{\text{th}}\) roots of unity. In Section 3.2, we do the same when \(\mathcal{G}\) is the group \(\{\pm I,\ldots,\pm M^{d-1}\}\). In Section 3.3, we bound the number of terms that satisfy those conditions, obtaining our bound.
Our estimate of \(\#H\) (which is given in Theorem 1) has variance
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)\cdot d\cdot\text{auto}(H)}\right)^{2}E \left(\text{tr}(\mathcal{S})\text{tr}(\overline{\mathcal{S}})\right)-\left( \#H\right)^{2}, \tag{6}\]
where \(\overline{\mathcal{S}}\) denotes the complex conjugate of \(\mathcal{S}\). We thus wish to understand the term \(E\left(\text{tr}(\mathcal{S})\text{tr}(\overline{\mathcal{S}})\right)\).
From Equation (4),
\[\mathcal{S}_{(c_{1},\ldots,c_{t})}=\sum_{\begin{subarray}{c}\vec{T}=( \overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\\ \text{is }(c_{1},\ldots,c_{t})\text{-compatible}\end{subarray}}\mathcal{Q}( \vec{T})\,,\]
so
\[\mathcal{S}=\sum_{\begin{subarray}{c}(c_{1},\ldots,c_{t})\\ \text{distinct}\end{subarray}}\sum_{\begin{subarray}{c}\vec{T}=( \overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}})\\ \text{is }(c_{1},\ldots,c_{t})\text{-compatible}\end{subarray}}\mathcal{Q}( \vec{T})=\sum_{\begin{subarray}{c}\vec{T}=(\overrightarrow{v_{1}v_{2}}, \ldots,\overrightarrow{v_{2k-1}v_{2k}})\\ \text{distinctly color-compatible}\end{subarray}}\mathcal{Q}(\vec{T})\,. \tag{7}\]
Thus \(\text{tr}(\mathcal{S})\text{tr}(\overline{\mathcal{S}})\) is a sum of terms of the form
\[\text{tr}(\mathcal{Q}(\vec{T_{1}}))\text{tr}(\overline{\mathcal{Q}(\vec{T_{2 }})})\,. \tag{8}\]
In particular, there is one term for every \(2k\)-tuple of edges \((\vec{T_{1}},\vec{T_{2}})\) for which \(\vec{T_{1}}=\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}}\) is distinctly color-compatible and
\(\vec{T_{2}}=\overrightarrow{w_{1}w_{2}},\ldots,\overrightarrow{w_{2k-1}w_{2k}}\) is distinctly color-compatible. In contrast, for the [KMSS]-algorithm, the analogous expression for the variance has a term for each \(2k\)-tuple of edges regardless of color-compatibility.

Table 2: Algorithm 2; here, we are using the group \(\mathcal{G}=\{\pm I,\pm M,\pm M^{2},\ldots,\pm M^{d-1}\}\).
**Initialize:**
For all \(c_{1},c_{2}\in\{1,\ldots,C\}\), \(1\leq i\leq k\), and \(0\leq j\leq d-1\), set \(\text{Count}_{c_{1},c_{2}}(i,j)=0\).
**Update:**
When an insertion edge \(\overrightarrow{vw}\) streams by, for each \(1\leq i\leq k\),
if \(\mathcal{M}_{i}(\overrightarrow{vw})=M^{j}\), then increment \(\text{Count}_{\mathcal{C}(v),\mathcal{C}(w)}(i,j)\);
if \(\mathcal{M}_{i}(\overrightarrow{vw})=-M^{j}\), then decrement \(\text{Count}_{\mathcal{C}(v),\mathcal{C}(w)}(i,j)\).
For a deletion edge, interchange the increment and decrement.
**Final Computation:**
For \(c_{1},c_{2}\in\{1,\ldots,C\}\) and each \(1\leq i\leq k\), compute
\[\mathcal{Z}_{i}^{c_{1},c_{2}}=\sum_{j=0}^{d-1}\text{Count}_{c_{1},c_{2}}(i,j)M^{j}\,.\]
For \((c_{1},\ldots,c_{t})\) distinct, compute
\[\mathcal{S}_{(c_{1},\ldots,c_{t})}=\prod_{i=1}^{k}\mathcal{Z}_{i}^{c_{a_{2i-1}},c_{a_{2i}}}\,.\]
Then compute
\[\mathcal{S}=\sum_{\begin{subarray}{c}(c_{1},\ldots,c_{t})\\ \text{distinct}\end{subarray}}\mathcal{S}_{(c_{1},\ldots,c_{t})}\,.\]
Output
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)}\right)\left(\frac{\text{tr}(\mathcal{S})}{d\cdot\text{auto}(H)}\right)\,.\]
For most \(2k\)-tuples of edges \((\vec{T_{1}},\vec{T_{2}})\), the product (8) has expected value \(0\) and therefore does not contribute to the variance. Here we classify the \(2k\)-tuples that do contribute to the variance. Consider some \(2k\)-tuple of edges \((\vec{T_{1}},\vec{T_{2}})\), and consider any vertex \(b\in\mathcal{V}(H)\). We consider three conditions that the \(2k\)-tuple may or may not satisfy at \(b\):
**Condition 1**: _The vertices_ \(\{v_{i}:i\in\Gamma(b)\}\) _are all the same, and the vertices_ \(\{w_{i}:i\in\Gamma(b)\}\) _are all the same._
**Condition 2**: \(v_{i}=w_{i}\) _for all_ \(i\in\Gamma(b)\)_._
**Condition 3**: _There are vertices_ \(x,y\in\mathcal{V}(\vec{G})\) _such that for every_ \(i\in\Gamma(b)\)_, either_ \(v_{i}=x\) _and_ \(w_{i}=y\)_, or_ \(v_{i}=y\) _and_ \(w_{i}=x\)_._
Note that Condition 1 is a special case of Condition 3. In general, when Condition 1 is satisfied at every vertex of \(\vec{H}\), each of \(\vec{T_{1}}\) and \(\vec{T_{2}}\) forms a homomorphic image of \(\vec{H}\). In general, when Condition 2 is satisfied at every vertex of \(\vec{H}\), \(\vec{T_{1}}\) is an arbitrary collection of \(k\) edges, and \(\vec{T_{2}}=\vec{T_{1}}\).
The following lemma turns Conditions 1-3 into conditions on \(\mathcal{P}_{b}(\vec{T_{1}})\) and \(\mathcal{P}_{b}(\vec{T_{2}})\). Those conditions will later let us characterize which \(\mathrm{tr}(\mathcal{Q}(\vec{T_{1}}))\mathrm{tr}(\overline{\mathcal{Q}(\vec{ T_{2}})})\) contribute to the variance.
**Lemma 4**.: _Suppose \(\vec{T}=(\vec{T_{1}},\vec{T_{2}})\) is any \(2k\)-tuple of edges of \(\vec{G}\)._
1. _If_ \(\vec{T}\) _satisfies Condition 1 at_ \(b\)_, then_ \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})=I\)_._
2. _If_ \(\vec{T}\) _satisfies Condition 2 at_ \(b\) _but not Condition 1, then_ \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})\)_, and each is a uniformly random element of_ \(\mathcal{G}\)_._
3. _If_ \(\vec{T}\) _satisfies Condition 3 at_ \(b\) _but not Condition 1, then_ \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})^{-1}\)_, and each is a uniformly random element of_ \(\mathcal{G}\)_._
4. _If_ \(\vec{T}\) _does not satisfy Condition 1,2, or 3 at_ \(b\)_, then either_ \(\mathcal{P}_{b}(\vec{T_{1}})\) _or_ \(\mathcal{P}_{b}(\vec{T_{2}})\) _is a uniformly random element of_ \(\mathcal{G}\) _and is independent of the other._
**Proof:** Suppose that \(\vec{T_{1}}=(\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k} })\) and \(\vec{T_{2}}=(\overrightarrow{w_{1}w_{2}},\ldots,\overrightarrow{w_{2k-1}w_{2k} })\). If \(\vec{T}\) satisfies Condition 1 at \(b\), then by Lemma 1, \(\mathcal{P}_{b}(T_{1})=I\) and \(\mathcal{P}_{b}(T_{2})=I\).
Now suppose that Condition 1 is not satisfied at \(b\). Let \(h_{\delta}\) be the distinguished half-edge at \(b\). Then
\[\mathcal{X}_{\delta}(v_{\delta})=\prod_{i\in\Gamma(b)\setminus\delta} \mathcal{X}_{i}(v_{\delta})^{-1}\,,\]
so
\[\mathcal{P}_{b}(\vec{T_{1}})=\prod_{i\in\Gamma(b)\setminus\delta}\mathcal{X}_ {i}(v_{i})\mathcal{X}_{i}(v_{\delta})^{-1}\,.\]
Similarly,
\[\mathcal{P}_{b}(\vec{T_{2}})=\prod_{i\in\Gamma(b)\setminus\delta}\mathcal{X}_ {i}(w_{i})\mathcal{X}_{i}(w_{\delta})^{-1}\,.\]
Since Condition 1 is not satisfied, either some \(v_{i}\neq v_{\delta}\) or some \(w_{i}\neq w_{\delta}\). Assume it is the former. Then \(\mathcal{X}_{i}(v_{i})\mathcal{X}_{i}(v_{\delta})^{-1}\) is a uniformly random
element of \(\mathcal{G}\), and it is independent of \(\mathcal{X}_{j}(v_{j})\mathcal{X}_{j}(v_{\delta})^{-1}\) for all \(j\notin\{i,\delta\}\), since then neither \(h_{i}\) nor \(h_{j}\) is distinguished. Thus \(\mathcal{P}_{b}(\vec{T_{1}})\) is a uniformly random element of \(\mathcal{G}\). Similarly, if \(w_{i}\neq w_{\delta}\), then \(\mathcal{P}_{b}(\vec{T_{2}})\) is a uniformly random element of \(\mathcal{G}\).
If \(\vec{T}\) satisfies Condition 2, then for each \(i\in\Gamma(b)\), \(\mathcal{X}_{i}(v_{i})=\mathcal{X}_{i}(w_{i})\), so \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})\).
If \(\vec{T}\) satisfies Condition 3, then for each \(i\in\Gamma(b)\), either \(v_{i}=v_{\delta}\) and \(w_{i}=w_{\delta}\), or \(v_{i}=w_{\delta}\) and \(w_{i}=v_{\delta}\). Either way, \(\mathcal{X}_{i}(v_{i})\mathcal{X}_{i}(v_{\delta})^{-1}\) is the inverse of \(\mathcal{X}_{i}(w_{i})\mathcal{X}_{i}(w_{\delta})^{-1}\), so \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})^{-1}\).
Suppose then that \(\vec{T}\) does not satisfy any of the three conditions. Suppose also that for some \(i\in\Gamma(b)\), one of \(v_{i}\), \(w_{i}\), \(v_{\delta}\), and \(w_{\delta}\) differs from the other three. Suppose the one that differs is either \(v_{i}\) or \(v_{\delta}\). Then \(\mathcal{X}_{i}(v_{i})\mathcal{X}_{i}(v_{\delta})^{-1}\) is a uniformly random element of \(\mathcal{G}\), and it is independent of \(\mathcal{X}_{i}(w_{i})\mathcal{X}_{i}(w_{\delta})^{-1}\). It is also independent of \(\mathcal{X}_{j}(v_{j})\mathcal{X}_{j}(v_{\delta})^{-1}\) and \(\mathcal{X}_{j}(w_{i})\mathcal{X}_{j}(w_{\delta})^{-1}\) for all \(j\notin\{i,\delta\}\). Thus \(\mathcal{P}_{b}(\vec{T_{1}})\) is a uniformly random element of \(\mathcal{G}\) and is independent of \(\mathcal{P}_{b}(\vec{T_{2}})\). Similarly, if \(w_{i}\) or \(w_{\delta}\) was the one that differed from the other three, then \(\mathcal{P}_{b}(\vec{T_{2}})\) would be uniformly random and independent of \(\mathcal{P}_{b}(\vec{T_{1}})\). Suppose then that for each \(i\), none of \(v_{i}\), \(w_{i}\), \(v_{\delta}\), and \(w_{\delta}\) is different from the other three. If \(v_{\delta}=w_{\delta}\), then Condition 2 must hold; whereas if \(v_{\delta}\neq w_{\delta}\), then Condition 3 must hold.
### Variance When \(\mathcal{G}\) Consists of Roots of Unity
At this point, the discussion splits into two cases depending on whether \(\mathcal{G}\) is a group of roots of unity or a group of matrices. Here we consider the former. Therefore we fix some integer \(r\geq 2\) and let \(\mathcal{G}\) be the group of 1-by-1 matrices whose entries are \(r^{\rm th}\) roots of unity. Since the matrices are 1-by-1, we treat all matrices as complex numbers rather than matrices. Also, since the trace of a 1-by-1 matrix is equal to its entry, we simply remove "tr" from any equations. Thus the expression (6) for variance becomes
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)\cdot\mathrm{auto}(H)}\right)^{2}E( \mathcal{S}\overline{\mathcal{S}})-(\#H)^{2}\,. \tag{9}\]
Since \(\mathcal{S}\overline{\mathcal{S}}\) is a sum of terms of the form \(\mathcal{Q}(\vec{T_{1}})\overline{\mathcal{Q}(\vec{T_{2}})}\), the next theorem classifies which pairs \((\vec{T_{1}},\vec{T_{2}})\) contribute to \(E(\mathcal{S}\overline{\mathcal{S}})\).
**Theorem 2**.: _Let \(\vec{T}=(\vec{T_{1}},\vec{T_{2}})\) be a \(2k\)-tuple of edges of \(\vec{G}\). If either of the following hold:_
* \(\vec{T}\) _satisfies Condition_ 1 _or_ 2 _for every_ \(b\in\mathcal{V}(\vec{H})\)_, or_
* \(r=2\)_, and_ \(\vec{T}\) _satisfies Condition_ 1_,_ 2_, or 3 _for every_ \(b\in\mathcal{V}(\vec{H})\)_,_
_then \(\mathcal{Q}(\vec{T_{1}})\overline{\mathcal{Q}(\vec{T_{2}})}=1\). Otherwise,_
\[E\left(\mathcal{Q}(\vec{T_{1}})\overline{\mathcal{Q}(\vec{T_{2}})}\right)=0.\]
**Proof:** We can write
\[\mathcal{Q}(\vec{T_{1}})\overline{\mathcal{Q}(\vec{T_{2}})}=\prod_{b\in\mathcal{V}(\vec{H})}\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})}\,.\]
If \(\vec{T}\) satisfies Condition 1 or 2 at some \(b\), then by Lemma 4, \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})\), so
\[\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})}=\mathcal{ P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{1}})}=\mathcal{P}_{b}( \vec{T_{1}})/\mathcal{P}_{b}(\vec{T_{1}})=1\,.\]
If \(r=2\) and \(\vec{T}\) satisfies Condition 3 at some \(b\), then \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})^{-1}\), so
\[\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})}=\mathcal{ P}_{b}(\vec{T_{1}})^{2}=1\,.\]
Thus if either of these two conditions holds at every \(b\), then
\[\prod_{b\in\mathcal{V}(\vec{H})}\mathcal{P}_{b}(\vec{T_{1}})\overline{ \mathcal{P}_{b}(\vec{T_{2}})}=1\,.\]
Suppose now that at some \(b\), Conditions 1 and 2 don't hold. If Condition 3 holds and \(r>2\), then by Lemma 4, \(\mathcal{P}_{b}(\vec{T_{1}})=\mathcal{P}_{b}(\vec{T_{2}})^{-1}\), and each is a uniformly random element of \(\mathcal{G}\). Then \(\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})}=\mathcal{ P}_{b}(\vec{T_{1}})^{2}\), and since \(r>2\), we have
\[E\left(\mathcal{P}_{b}(\vec{T_{1}})^{2}\right)=0.\]
Thus
\[E\left(\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})} \right)=0.\]
If instead Condition 3 does _not_ hold, then by Lemma 4, one of \(\mathcal{P}_{b}(\vec{T_{1}})\) and \(\mathcal{P}_{b}(\vec{T_{2}})\) is a uniformly random \(r^{\mathrm{th}}\) root of unity and is independent of the other, so again
\[E\left(\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})} \right)=0.\]
In either case, \(\mathcal{P}_{b}(\vec{T_{1}})\overline{\mathcal{P}_{b}(\vec{T_{2}})}\) is independent of \(\mathcal{P}_{c}(\vec{T_{1}})\overline{\mathcal{P}_{c}(\vec{T_{2}})}\) for \(c\in\mathcal{V}(\vec{H})\setminus b\), so
\[E\left(\mathcal{Q}(\vec{T_{1}})\overline{\mathcal{Q}(\vec{T_{2}})}\right)=0.\]
### Variance When \(\mathcal{G}=\{\pm I,\pm M,\ldots,\pm M^{d-1}\}\)
Now we consider the variance of our estimate in the case where \(\mathcal{G}\) is a group of matrices. In particular, fix a dimension \(d\geq 2\) and let \(\mathcal{G}\) consist of the matrices \(\{\pm I,\pm M,\ldots,\pm M^{d-1}\}\), where \(M\) is the diagonal matrix with entries \(1,\omega,\omega^{2},\ldots,\omega^{d-1}\), and \(\omega=e^{2\pi i/d}\).
The variance of our estimate for \(\#H\) is given by Expression (6). Note, however, that the trace of every element of \(\mathcal{G}\) is real, so we can dispense with complex conjugation. Thus the variance becomes
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)\cdot d\cdot\mathrm{auto}(H)}\right)^{2} E(\mathrm{tr}(\mathcal{S})^{2})-(\#H)^{2}\,. \tag{10}\]
We thus wish to understand the term \(E(\mathrm{tr}(\mathcal{S})^{2})\). Since \(\mathrm{tr}(\mathcal{S})^{2}\) is a sum of terms of the form \(\mathrm{tr}(\mathcal{Q}(\vec{T_{1}}))\mathrm{tr}(\mathcal{Q}(\vec{T_{2}}))\), the next theorem classifies how much each pair \((\vec{T_{1}},\vec{T_{2}})\) contributes to \(E(\mathrm{tr}(\mathcal{S})^{2})\).
**Theorem 3**.: _Suppose \(\vec{T}=(\vec{T}_{1},\vec{T}_{2})\) is a \(2k\)-tuple of edges of \(\vec{G}\)._
* _If_ \(\vec{T}\) _satisfies Condition_ 1 _for every_ \(b\in\mathcal{V}(\vec{H})\)_, then_ \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=d^{2}\)_._
* _If_ \(\vec{T}\) _satisfies either Condition_ 1_,_ 2_, or 3 _at every_ \(b\in\mathcal{V}(\vec{H})\) _but not always Condition_ 1_, then_ \[0<E\left(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_ {2}))\right)\leq d.\]
* _Otherwise,_ \[E\left(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2 }))\right)=0.\]
**Proof:** Suppose that Condition 1, 2, or 3 holds at every \(b\in\mathcal{V}(\vec{H})\). Recall that \(\mathcal{Q}(\vec{T}_{1})=\prod_{b\in\mathcal{V}(\vec{H})}\mathcal{P}_{b}( \vec{T}_{1})\), and \(\mathcal{Q}(\vec{T}_{2})=\prod_{b\in\mathcal{V}(\vec{H})}\mathcal{P}_{b}( \vec{T}_{2})\). Let \(R_{1}\) denote the product of \(\mathcal{P}_{b}(\vec{T}_{1})\) over all \(b\in\mathcal{V}(\vec{H})\) where Condition 1 holds. Let \(R_{2}\) denote the same product at all \(b\in\mathcal{V}(\vec{H})\) where Condition 2 holds, but not Condition 1. Let \(R_{3}\) denote the same product over all \(b\in\mathcal{V}(\vec{H})\) where Condition 3 holds, but not Condition 1. (In each case, if the given conditions are not satisfied at any \(b\), then define \(R_{i}\) to be \(I\).) Thus \(\mathcal{Q}(\vec{T}_{1})=R_{1}R_{2}R_{3}\). By Lemma 4-A, \(R_{1}=I\), so \(\mathcal{Q}(\vec{T}_{1})=IR_{2}R_{3}\). By Lemmas 4-B and 4-C, \(\mathcal{Q}(\vec{T}_{2})=IR_{2}R_{3}^{-1}\). Furthermore, if there is at least one \(b\) where Condition 2 (resp. 3) holds but not Condition 1, then \(R_{2}\) (resp. \(R_{3}\)) is uniformly random. Finally, since \(R_{2}\) and \(R_{3}\) involve different vertices of \(H\), they are independent.
If \(T\) satisfies Condition 1 at every \(b\in\mathcal{V}(\vec{H})\), then \(R_{2}=R_{3}=I\), so \(\mathcal{Q}(\vec{T}_{1})=\mathcal{Q}(\vec{T}_{2})=I\), and \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=d^ {2}\).
If \(T\) satisfies Condition 1 or 2 at every \(b\in\mathcal{V}(\vec{H})\) but not always Condition 1, then \(R_{3}=I\), and \(\mathcal{Q}(\vec{T}_{1})=\mathcal{Q}(\vec{T}_{2})=R_{2}\). With probability \(1/d\), \(R_{2}=\pm I\), in which case \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=d^ {2}\). If \(R_{2}\) is not \(\pm I\), then \(\mathrm{tr}(R_{2})=0\), so \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=0\). Thus
\[E\left(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{ 2}))\right)=d.\]
If \(T\) satisfies Condition 1 or 3 at every \(b\in\mathcal{V}(\vec{H})\) but not always Condition 1, then \(\mathcal{Q}(\vec{T}_{1})=R_{3}\), and \(\mathcal{Q}(\vec{T}_{2})=R_{3}^{-1}\). With probability \(1/d\), \(R_{3}=\pm I\), in which case \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=d^ {2}\). If \(R_{3}\) is not \(\pm I\), then \(\mathrm{tr}(R_{3})=0\), so \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=0\). Thus
\[E\left(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))\right)=d.\]
Next, suppose \(\vec{T}\) satisfies Condition 1, 2, or 3 at every \(b\in\mathcal{V}(\vec{H})\) but not always Condition 1 or 2, and not always Condition 1 or 3. Then \(\mathcal{Q}(\vec{T}_{1})=R_{2}R_{3}\) and \(\mathcal{Q}(\vec{T}_{2})=R_{2}R_{3}^{-1}\). If either of \(R_{2}R_{3}\) or \(R_{2}R_{3}^{-1}\) is not \(\pm I\), then it has trace \(0\), in which case \(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))=0\). Thus we only need to consider the cases where \(R_{2}R_{3}\) and \(R_{2}R_{3}^{-1}\) are both \(\pm I\); or equivalently, the case where \(R_{2}=\pm R_{3}\) and \(R_{2}^{2}=I\). This happens with probability \(1/d^{2}\) if \(d\) is odd and \(2/d^{2}\) if \(d\) is even. Thus
\[E\left(\mathrm{tr}(\mathcal{Q}(\vec{T}_{1}))\mathrm{tr}(\mathcal{Q}(\vec{T}_{2}))\right)\]
is equal to 1 if \(d\) is odd, and 2 if \(d\) is even.
Finally, suppose that \(\vec{T}\) does not satisfy any of Conditions 1, 2, or 3 at some vertex \(b\in\mathcal{V}(\vec{H})\). Then by Lemma 4-D, one of \(\mathcal{Q}(\vec{T_{1}})\) and \(\mathcal{Q}(\vec{T_{2}})\) is a uniformly random element of \(\mathcal{G}\), and is independent of the other. Thus
\[E\left(\operatorname{tr}(\mathcal{Q}(\vec{T_{1}}))\operatorname{tr}(\mathcal{Q}(\vec{T_{2}}))\right)=E\left(\operatorname{tr}(\mathcal{Q}(\vec{T_{1}}))\right)E\left(\operatorname{tr}(\mathcal{Q}(\vec{T_{2}}))\right)=0.\]
### Bounding the Variance
As we saw in Sections 3.1 and 3.2, a \(2k\)-tuple of edges only contributes to the variance if it is distinctly color-compatible and satisfies Condition 1, 2, or 3 at every vertex of \(H\). Now we bound the number of \(2k\)-tuples with these properties to get a bound on the variance.
Throughout this section, \(\vec{T}=(\vec{T_{1}},\vec{T_{2}})\) will denote a \(2k\)-tuple of edges of \(\vec{G}\), where \(\vec{T_{1}}=\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}}\) and \(\vec{T_{2}}=\overrightarrow{w_{1}w_{2}},\ldots,\overrightarrow{w_{2k-1}w_{2k}}\). We continue to refer to the edges of \(\vec{H}\) as \(\overrightarrow{a_{1}a_{2}},\ldots,\overrightarrow{a_{2k-1}a_{2k}}\). In much of this section, edge-directions will be irrelevant and will often be ignored. We refer to the two halves of the edge \(\overrightarrow{a_{2i-1}a_{2i}}\) as the "half-edge at \(a_{2i-1}\)" and the "half-edge at \(a_{2i}\)," and similarly for the two halves of \(\overrightarrow{v_{2i-1}v_{2i}}\) and \(\overrightarrow{w_{2i-1}w_{2i}}\). Let \(K\) denote the undirected subgraph of \(G\) consisting of the \(2k\) edges of \(\vec{T}\), ignoring edge-directions. If \(i\in\Gamma(b)\) (so \(a_{i}=b\)), then we say that \(v_{i}\) and \(w_{i}\) _lie over \(b\)_. Thus, for instance, Condition 1 is satisfied at some \(b\in H\) if and only if all \(v_{i}\) that lie over \(b\) are equal and all \(w_{i}\) that lie over \(b\) are equal. If \(i\in\Gamma(b)\) and \(\vec{T}\) satisfies Condition 1 (resp. 2 or 3) at \(b\), then we'll say also that \(\vec{T}\) satisfies Condition 1 (resp. 2 or 3) at \(v_{i}\) and at \(w_{i}\).
Suppose \(b\) and \(c\) are distinct vertices of \(H\), and suppose \(i\in\Gamma(b)\) and \(j\in\Gamma(c)\). The vertices \(v_{i}\) and \(v_{j}\) need not be distinct; however, if they are not distinct, then \(\vec{T_{1}}\) cannot be distinctly color-compatible (because that would require that \(v_{i}\) and \(v_{j}\) get different colors). And similarly for \(w_{i}\), \(w_{j}\), and \(\vec{T_{2}}\). Thus if we are given \(\vec{T}\) but we are not yet given the colors of the vertices, then we will say that \(\vec{T}\) is _distinctly colorable_ if for all distinct vertices \(b,c\in H\), and for all \(i\in\Gamma(b)\) and \(j\in\Gamma(c)\), the vertices \(v_{i}\) and \(v_{j}\) are distinct, as are the vertices \(w_{i}\) and \(w_{j}\) (though \(v_{i}\) and \(w_{j}\) are not required to be distinct). If \(\vec{T}\) is not distinctly colorable, then no matter how colors are assigned to vertices, \(\vec{T}\) will not be distinctly color-compatible and therefore will not contribute to the variance.
We begin with some lemmas.
**Lemma 5**.: _Suppose \(i\in\Gamma(b)\) and \(j\notin\Gamma(b)\), where \(b\) is some vertex of \(H\). If \(\vec{T}\) is distinctly colorable and Condition 2 or 3 is satisfied at \(b\), but not Condition 1, then neither \(v_{i}\) nor \(w_{i}\) can be equal to either \(v_{j}\) or \(w_{j}\)._
**Proof:** If Condition 2 holds at \(b\), then \(v_{i}=w_{i}\). By the definition of "distinctly colorable," \(v_{i}\neq v_{j}\) and \(w_{i}\neq w_{j}\), so neither \(v_{j}\) nor \(w_{j}\) can equal \(v_{i}=w_{i}\).
If instead Condition 3 holds at \(b\), then there are two vertices \(x\) and \(y\) that lie over \(b\) in \(K\) such that for all \(i^{\prime}\in\Gamma(b)\), either \(v_{i^{\prime}}=x\) and \(w_{i^{\prime}}=y\) or vice versa. We may assume that \(v_{i}=x\) and \(w_{i}=y\). But since Condition 1 does not hold at \(b\), there must also be some \(i^{\prime}\in\Gamma(b)\) such that \(v_{i^{\prime}}=y\) and \(w_{i^{\prime}}=x\). Since \(v_{i}=x=w_{i^{\prime}}\), it follows from the
definition of "distinctly colorable" that \(v_{i}\) cannot be equal to either \(v_{j}\) or \(w_{j}\), and similarly for \(w_{i}\).
Normally, we refer to the edges of \(\vec{H}\) as \(\overrightarrow{a_{1}a_{2}},\ldots,\overrightarrow{a_{2k-1}a_{2k}}\); however, in the next lemma, we will not be concerned with the directions of the edges, so we will refer to the edges as \(\overline{a_{\alpha}a_{\beta}}\), with the understanding that for some \(1\leq r\leq k\), either \(\alpha=2r-1\) and \(\beta=2r\), or vice versa.
**Lemma 6**.: _Suppose \(W=\overline{a_{\alpha_{1}}a_{\beta_{1}}},\ldots,\overline{a_{\alpha_{s}}a_{\beta_{s}}}\) is a walk in the undirected graph \(H\), and suppose that Condition 1 or Condition 3 holds at every internal vertex of the walk (i.e., Condition 1 or 3 holds at each vertex \(a_{\beta_{i}}=a_{\alpha_{i+1}}\) for \(1\leq i<s\)). Then there is a walk in \(K\) from \(v_{\alpha_{1}}\) to either \(v_{\beta_{s}}\) or \(w_{\beta_{s}}\), and similarly for \(w_{\alpha_{1}}\)._
**Proof:** Let \(e_{i}\) denote the \(i^{\rm th}\) edge of \(W\); in other words, \(e_{i}=\overline{a_{\alpha_{i}}a_{\beta_{i}}}\). Then \(e_{i}\) has two "lifts" in \(K\), namely, \(\overline{v_{\alpha_{i}}v_{\beta_{i}}}\) and \(\overline{w_{\alpha_{i}}w_{\beta_{i}}}\). We will show that each lift of \(e_{i}\) is adjacent to a lift of \(e_{i+1}\), so we will be able to piece together lifts of the \(e_{i}\)'s to get a lift of the entire walk.
We use induction on the length \(s\) of \(W\). The proof is the same for \(v_{\alpha_{1}}\) and \(w_{\alpha_{1}}\), so we present the proof just for \(v_{\alpha_{1}}\). If \(s=1\), then \(\overline{v_{\alpha_{1}}v_{\beta_{1}}}\) is the required walk. If \(s>1\), then by induction, there is a walk \(U\) in \(K\) from \(v_{\alpha_{1}}\) to either \(v_{\beta_{s-1}}\) or \(w_{\beta_{s-1}}\). Since \(W\) is a walk, the edges \(e_{s-1}\) and \(e_{s}\) are adjacent; in particular, the vertices \(a_{\beta_{s-1}}\) and \(a_{\alpha_{s}}\) are equal. Equivalently, there is some vertex \(b\) of \(H\) such that \(\beta_{s-1},\alpha_{s}\in\Gamma(b)\). By assumption, Condition 1 or 3 holds at \(b\), so either \(v_{\beta_{s-1}}=v_{\alpha_{s}}\) and \(w_{\beta_{s-1}}=w_{\alpha_{s}}\), or \(v_{\beta_{s-1}}=w_{\alpha_{s}}\) and \(w_{\beta_{s-1}}=v_{\alpha_{s}}\). Either way, we can append either the edge \(\overline{v_{\alpha_{s}}v_{\beta_{s}}}\) or the edge \(\overline{w_{\alpha_{s}}w_{\beta_{s}}}\) to \(U\), obtaining a walk from \(v_{\alpha_{1}}\) to either \(v_{\beta_{s}}\) or \(w_{\beta_{s}}\).
Although \(H\) is connected, \(K\) need not be. For instance, \(K\) might consist of two isomorphic copies of \(H\). In that case, each connected component of \(K\) contains a lift of every edge of \(H\). However, it can also happen that a connected component of \(K\) contains lifts of only some edges of \(H\). The next lemmas involve the connected components of \(K\). We generally use \(J\) to denote a connected component of \(K\) and use \(J^{\prime}\) to denote the subgraph of \(H\) that lies "below" \(J\). Note that \(H\) is connected, so \(J^{\prime}\) is not generally a connected component of \(H\).
**Lemma 7**.: _Suppose that \(\vec{T}\) satisfies Condition 1, 2, or 3 at each vertex of \(H\), and suppose it satisfies Condition 2 at some vertex. Then for every \(i\in\{1,\ldots,2k\}\), the two vertices \(v_{i}\) and \(w_{i}\) are in the same connected component of \(K\). Furthermore, that component also contains some vertex at which Condition 2 is satisfied._
**Proof:**\(H\) is connected, so there is a walk that starts with the half-edge at \(a_{i}\) and ends at a vertex where Condition 2 holds. We can choose a minimal such walk, in which case it has no internal vertices where Condition 2 holds. Suppose the walk ends with the half-edge at \(a_{j}\). By Lemma 6, there is a walk in \(K\) from \(v_{i}\) to either \(v_{j}\) or \(w_{j}\), and also a walk from \(w_{i}\) to either \(v_{j}\) or \(w_{j}\). But Condition 2 holds at \(a_{j}\), so \(v_{j}=w_{j}\). Thus there is a walk from \(v_{i}\) to \(v_{j}\), and one from \(w_{i}\) to \(v_{j}\). Concatenating them gives a walk from \(v_{i}\) to \(w_{i}\). Thus \(v_{i}\) and \(w_{i}\) are in the same component, and are also in the same component as the vertex \(v_{j}\), at which Condition 2 holds.
**Lemma 8**.: _Suppose \(\vec{T}\) satisfies Condition 1, 2, or 3 at every vertex of \(H\) and satisfies Condition 2 at some vertex of \(H\). Let \(J\) be any connected component of \(K\), and define \(J^{\prime}\) to be the subgraph of \(H\) consisting of all edges \(\overline{a_{2i-1}a_{2i}}\) for which either \(\overline{v_{2i-1}v_{2i}}\) or \(\overline{w_{2i-1}w_{2i}}\) is in \(J\). Then \(J^{\prime}\) must contain either_
* _at least two vertices where_ \(\vec{T}\) _satisfies Condition 2;_
* _a vertex with degree at least 2 in_ \(J^{\prime}\) _and where_ \(\vec{T}\) _satisfies Condition 2;_
* _a vertex with degree at least 3 in_ \(H\) _and where_ \(\vec{T}\) _satisfies Condition 1 or 3._
**Proof:** We assumed there is some vertex of \(H\) where Condition 2 is satisfied, so by Lemma 7, \(J\) contains such a vertex, and so then does \(J^{\prime}\). If \(J^{\prime}\) contains two such vertices, then we are done, so assume there is just one. If that one vertex has degree at least 2 in \(J^{\prime}\), then again we are done, so assume it has degree 1. By the Handshaking Lemma, there must be another vertex with odd degree in \(J^{\prime}\); and \(\vec{T}\) must satisfy either Condition 1 or 3 at that vertex.
If \(b\) is any vertex in \(J^{\prime}\), then for some \(i\in\Gamma(b)\), the half-edge at \(a_{i}\) is in \(J^{\prime}\), so either the half-edge at \(v_{i}\) or the half-edge at \(w_{i}\) is in \(J\). If in addition \(\vec{T}\) satisfies either Condition 1 or 3 at \(b\), then (by the definition of Conditions 1 and 3), for every \(i\in\Gamma(b)\), either the half-edge at \(v_{i}\) or the half-edge at \(w_{i}\) is in \(J\). Therefore, for every \(i\in\Gamma(b)\), the half-edge at \(a_{i}\) is in \(J^{\prime}\). In other words, \(b\) has the same degree in \(J^{\prime}\) as in \(H\). We saw in the previous paragraph that some vertex satisfies either Condition 1 or 3 and has odd degree in \(J^{\prime}\). It has the same degree in \(H\). But we assumed that \(H\) has no leaves, so it must have degree at least 3 in \(H\).
**Lemma 9**.: _Suppose \(\vec{T}\) satisfies Condition 1, 2, or 3 at every vertex of \(H\) and satisfies Condition 2 at some vertex \(b\) of \(H\). Let \(\delta\) denote the degree of \(b\) in \(H\). For each connected component \(J\) of \(K\), define \(J^{\prime}\) as in the previous lemma. Suppose there are \(\kappa\) components \(J_{1},\ldots,J_{\kappa}\) of \(K\) that satisfy:_
* \(J_{i}\) _has at most one vertex where Condition 2 holds, and_
* \(b\) _has degree at least two in_ \(J^{\prime}_{i}\)_._
_Then at most \(\delta-\kappa\) distinct vertices of \(K\) lie over \(b\)._
**Proof:** Suppose \(\Gamma(b)=\{s_{1},\ldots,s_{\delta}\}\), so the vertices \(a_{s_{1}},\ldots,a_{s_{\delta}}\) are all equal to \(b\). The vertices that lie above \(b\) in \(K\) are \(v_{s_{1}},\ldots,v_{s_{\delta}}\) and \(w_{s_{1}},\ldots,w_{s_{\delta}}\). Since Condition 2 holds at \(b\), \(v_{s_{j}}=w_{s_{j}}\) for each \(j\), so in fact the vertices that lie above \(b\) in \(K\) are just \(v_{s_{1}},\ldots,v_{s_{\delta}}\). Consider any \(J^{\prime}_{i}\) as defined in the lemma. Since \(b\) has degree at least two in \(J^{\prime}_{i}\), at least two of the half-edges at \(a_{s_{1}},\ldots,a_{s_{\delta}}\) are in \(J^{\prime}_{i}\). Assume without loss of generality that the half-edges at \(a_{s_{1}}\) and \(a_{s_{2}}\) are in \(J^{\prime}_{i}\). Then \(J_{i}\) contains the half-edge at either \(v_{s_{1}}\) or \(w_{s_{1}}\) and the half-edge at either \(v_{s_{2}}\) or \(w_{s_{2}}\). Since \(v_{s_{1}}=w_{s_{1}}\) and \(v_{s_{2}}=w_{s_{2}}\), \(J_{i}\) contains both \(v_{s_{1}}\) and \(v_{s_{2}}\). But \(J_{i}\) has at most one vertex where Condition 2 holds, so \(v_{s_{1}}\) and \(v_{s_{2}}\) must be the same vertex. Thus each of \(J_{1},\ldots,J_{\kappa}\) contains two of \(v_{s_{1}},\ldots,v_{s_{\delta}}\) that are equal, so there can be at most \(\delta-\kappa\) that are distinct.
**Lemma 10**.: _Suppose \(0<\Delta\leq m^{1/2-\alpha}\), where \(\alpha>0\), and suppose \(0<C\leq\min(m^{2\alpha},m^{1/3})\). Then \(\Delta^{2}\leq m/C\) and \(\Delta\leq m/C^{2}\)._
**Proof:** The first inequality follows from \(\Delta^{2}C\leq m^{1-2\alpha}m^{2\alpha}\leq m\). For the second inequality, if \(\alpha\geq 1/6\), then both \(C\) and \(\Delta\) are at most \(m^{1/3}\), so \(\Delta C^{2}\leq m\); if instead \(\alpha\leq 1/6\), then \(\Delta C^{2}\leq m^{1/2-\alpha}(m^{2\alpha})^{2}=m^{1/2+3\alpha}\leq m\).
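As a concrete sanity check with illustrative (not canonical) numbers: for \(m=10^{6}\) and \(\alpha=1/4\), the hypotheses permit \(\Delta\leq m^{1/4}\approx 32\) and \(C\leq\min(m^{1/2},m^{1/3})=100\); then indeed \(\Delta^{2}\approx 10^{3}\leq m/C=10^{4}\) and \(\Delta\approx 32\leq m/C^{2}=100\).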
**Theorem 4**.: _Suppose that the maximum degree \(\Delta\) of any vertex in \(G\) is at most \(m^{1/2-\alpha}\), where \(\alpha>0\), and assume \(C\leq\min(m^{2\alpha},m^{1/3})\). Then the expected number of distinctly color-compatible \(2k\)-tuples of edges of \(G\) that satisfy either Condition 1, 2, or 3 at every vertex of \(H\), and satisfy Condition 2 at some vertex of \(H\) is \(O(m^{k}/C^{2k-t})\)._
**Proof:** Let \(\vec{T}\) and \(K\) be as defined above. There are \(O(1)\) possibilities for the isomorphism class of \(K\) (i.e., which of the vertices \(v_{1},\ldots,v_{2k},w_{1},\ldots,w_{2k}\) are the same), so it suffices to prove the theorem for an arbitrary isomorphism class. Consider then any one such class. We may assume that it is distinctly colorable.
The expected number of possibilities for \(\vec{T}\) can be computed in two steps: first count the number of ways to select the vertices of \(\vec{T}\) where colors are ignored, and then find the probability that when colors are assigned, \(\vec{T}\) becomes distinctly color-compatible. (When we say "select the vertices of \(\vec{T}\)," we mean, choose a vertex of \(G\) for each \(v_{i}\) and \(w_{i}\) so that the resulting \(\vec{T}\) has the assumed isomorphism class.) To count the number of ways to select the vertices for \(\vec{T}\), we consider one connected component of \(K\) at a time. Let \(J\) be some connected component of \(K\). We can arbitrarily designate any one edge of \(J\) to be the "first edge." Once we designate the first edge, there are at most \(m\) ways to select its two endpoints (since \(\vec{G}\) has \(m\) edges). There are then at most \(\Delta\) ways to select each subsequent vertex of \(J\), for a total of \(m\Delta^{|\mathcal{V}(J)|-2}\). Equivalently, we could have arbitrarily designated any two (not necessarily adjacent) vertices of \(J\) to be the "first two vertices;" we could have then pretended that there were at most \(\sqrt{m}\) ways to select each of those two vertices and at most \(\Delta\) ways to select each other vertex of \(J\).
For a component \(J\), we use the following method to decide which will be its first two vertices. Let \(J^{\prime}\) be the subgraph of \(H\) consisting of all edges \(\overline{a_{2i-1}a_{2i}}\) for which either \(\overline{v_{2i-1}v_{2i}}\) or \(\overline{w_{2i-1}w_{2i}}\) is in \(J\) (as in Lemma 8). By Lemma 7, \(J\) has at least one vertex where Condition 2 is satisfied. We'll designate that as one of the first two vertices of \(J\). If there is a second such vertex, then we'll designate it as the other. If not, if some vertex of \(J\) lies above a vertex of \(H\) that has degree at least 3, and where Condition 1 or 3 is satisfied, then we'll designate that as the other. Otherwise, we'll designate any vertex as the other.
We now show that the result is at most \(m^{k}/C^{2k-t}\). We consider one vertex \(b\) of \(H\) at a time, and compute the factor that the vertices that lie above \(b\) contribute to the result. Recall that if some vertex above \(b\) was designated as one of the first two vertices in its component, then it contributes a factor of \(\sqrt{m}\), and otherwise contributes a factor of \(\Delta\). Furthermore, if Condition 2 holds at \(b\), and if there are \(d\) vertices that lie above \(b\), then they must all receive the same color, which introduces a
factor of \(C^{1-d}\). Note also that by Lemma 5, if \(b_{i}\) and \(b_{j}\) are two vertices of \(H\) where Condition 2 holds, then all the vertices that lie above \(b_{i}\) are distinct from all the vertices that lie above \(b_{j}\), and so these factors of \(C^{1-d}\) are all independent. We consider four cases for \(b\). In the first case, Condition 2 holds at \(b\). In the other three cases, Condition 1 or 3 holds at \(b\), but we subdivide these cases based on whether some vertex that lies above \(b\) was designated as a first vertex of its component, and whether \(b\) has degree \(>2\).
First consider the case where Condition 2 holds at \(b\). Let \(d\) denote the number of vertices of \(K\) that lie above \(b\). There are at most \(\sqrt{m}\) ways to select each of these \(d\) vertices, and they must all receive the same color, so these vertices contribute at most a factor of \(m^{d/2}/C^{d-1}\) to the count. By Lemma 9, \(d\) is at most \(\delta-\kappa\), where \(\delta\) is the degree of \(b\) in \(H\), and \(\kappa\) is the number of connected components \(J\) of \(K\) that satisfy: \(J\) has at most one vertex where Condition 2 holds, and \(b\) has degree at least two in \(J^{\prime}\). Thus the contribution of \(b\) to the overall expected value is at most a factor of
\[\frac{m^{d/2}}{C^{d-1}}\leq\frac{m^{(\delta-\kappa)/2}}{C^{\delta-\kappa-1}}= \left(\frac{m^{\delta/2}}{C^{\delta-1}}\right)\left(\frac{C}{m^{1/2}}\right) ^{\kappa}\,. \tag{11}\]
For the next three cases, suppose that either Condition 1 or 3 holds at \(b\). Then at most two vertices of \(K\) lie above \(b\), and by Lemma 7, they lie in the same component of \(K\). In the case where neither was designated as one of the first two vertices of that component, their contribution to the count is at most a factor of
\[\Delta^{2}\leq\frac{m}{C}\leq\frac{m}{C}\left(\frac{m^{1/2}}{C}\right)^{\delta -2}=\frac{m^{\delta/2}}{C^{\delta-1}}\,. \tag{12}\]
(We used Lemma 10 in the first inequality.)
For the last two cases, suppose that one of the vertices that lie above \(b\)_was_ designated as one of the first two vertices of the component. First assume \(\delta\geq 3\). Then the contribution of the vertices that lie above \(b\) to the overall expected value is at most a factor of
\[m^{1/2}\Delta\leq\frac{m^{3/2}}{C^{2}}=C\left(\frac{m^{1/2}}{C}\right)^{3}\leq C \left(\frac{m^{1/2}}{C}\right)^{\delta}=\frac{m^{\delta/2}}{C^{\delta-1}}\,. \tag{13}\]
(We used Lemma 10 in the first inequality.)
Finally, suppose that one of the vertices that lie above \(b\)_was_ designated as one of the first two vertices of the component, and \(\delta<3\) (which means that \(\delta=2\)). Then the contribution of the vertices that lie above \(b\) to the overall expected value is at most a factor of
\[m^{1/2}\Delta=\left(\frac{m}{C}\right)\left(\frac{\Delta C}{m^{1/2}}\right)= \left(\frac{m^{\delta/2}}{C^{\delta-1}}\right)\left(\frac{\Delta C}{m^{1/2}} \right)\leq\left(\frac{m^{\delta/2}}{C^{\delta-1}}\right)\left(\frac{m^{1/2}} {C}\right)\,. \tag{14}\]
(We used Lemma 10 in the last inequality.)
Observe that in all four cases (Equations (11), (12), (13), and (14)), the vertex \(b\) contributed a factor of \(m^{\delta/2}/C^{\delta-1}\), except that in (11) and (14), there are additional factors of \(\sqrt{m}/C\) or \(C/\sqrt{m}\). We first show that there are at least as many factors of \(C/\sqrt{m}\) as \(\sqrt{m}/C\). For the remainder
of the proof, we use \(\delta(b)\) rather than \(\delta\) to denote the degree of \(b\), since \(b\) will no longer be clear from context. There is one factor of \(\sqrt{m}/C\) in Equation (14) for each vertex \(b\) and component \(J\) such that:
* \(\delta(b)<3\),
* \(b\) satisfies Condition 1 or 3, and
* a vertex \(x\) that lies above \(b\) was designated as one of the first two vertices of \(J\).
Observe that \(J\) can have at most one vertex that satisfies Condition 1 or 3 and was designated as one of the first two vertices, so \(J\) cannot contribute a factor of \(\sqrt{m}/C\) for any vertex besides \(b\). In other words, \(J\) contributes at most one factor of \(\sqrt{m}/C\) overall. Now we'll show that \(J\) also contributes a factor of \(C/\sqrt{m}\) to (11). Since \(x\) was designated as one of the first two vertices of \(J\), we know that \(J\) has only one vertex where Condition 2 holds (which, by Lemma 5, implies that \(J^{\prime}\) has only one vertex where Condition 2 holds), and \(J\) cannot have a vertex that lies above a vertex with degree at least 3 in \(H\) and where Condition 1 or 3 holds. Thus by Lemma 8, \(J^{\prime}\) must have a vertex with degree at least 2 (in \(J^{\prime}\)) where Condition 2 holds. Then for that vertex, \(J\) contributes a factor of \(C/\sqrt{m}\) to (11). Thus there must be at least as many factors of \(C/\sqrt{m}\) in (11) as there are factors of \(\sqrt{m}/C\) in (14). We can therefore ignore all such factors; this can only increase the product. When we ignore these factors, each vertex \(b\) of \(H\) contributes a factor of at most \(m^{\delta(b)/2}/C^{\delta(b)-1}\). Taking the product over \(b\) gives
\[\frac{m^{\sum_{b}\delta(b)/2}}{C^{\sum_{b}(\delta(b)-1)}}=\frac{m^{k}}{C^{2k-t }}\,.\]
Next, we consider what happens when \(\vec{T}\) does not satisfy Condition 2 at any vertex of \(H\).
**Lemma 11**.: _Suppose \(\vec{T}\) satisfies Condition 1 or 3 at every vertex of \(H\). Then \(K\) has at most two connected components._
**Proof:**\(H\) is connected, so for any \(i\), there is a walk from \(a_{i}\) to \(a_{1}\). Since Condition 1 or 3 holds at every vertex along the walk, we can apply Lemma 6 to deduce that there is a walk in \(K\) from \(v_{i}\) to either \(v_{1}\) or \(w_{1}\) and also a walk from \(w_{i}\) to either \(v_{1}\) or \(w_{1}\). Thus every vertex of \(K\) is in the same connected component as either \(v_{1}\) or \(w_{1}\).
**Lemma 12**.: _Suppose \(\vec{T}\) satisfies Condition 1 or 3 at every vertex of \(H\), but not always Condition 1. Then at least one of the following must hold:_
* \(H\) _has more edges than vertices;_
* _there are at least two vertices of_ \(H\) _where_ \(\vec{T}\) _does not satisfy Condition 1;_
* \(K\) _is connected (as an undirected graph)._
**Proof:** We assumed that \(H\) is connected and has no leaves. Suppose the first condition above does not hold (i.e., \(H\) has as many vertices as edges). Then \(H\) must be a cycle. Now suppose that the second condition above also does not hold, i.e., there is exactly one vertex of \(H\) where
\(\vec{T}\) does not satisfy Condition 1 (and therefore satisfies Condition 3). We will assume that the edges of \(H\) going around the cycle in order are \(\overline{a_{1}a_{2}},\overline{a_{3}a_{4}},\ldots,\overline{a_{2t-1}a_{2t}}\). Note that there is no loss of generality in this assumption, because edge-directions are irrelevant to this lemma. We can also assume that the vertex \(a_{1}=a_{2t}\) is the vertex where Condition 3 is satisfied, but not Condition 1. Then \(v_{1}=w_{2t}\), and \(w_{1}=v_{2t}\). Since Condition 1 holds everywhere else, we have \(v_{2i}=v_{2i+1}\) and \(w_{2i}=w_{2i+1}\) for all \(1\leq i<t\). Thus \(\overline{v_{1}v_{2}},\overline{v_{3}v_{4}},\ldots,\overline{v_{2t-1}v_{2t}},\overline{w_{1}w_{2}},\overline{w_{3}w_{4}},\ldots,\overline{w_{2t-1}w_{2t}}\) is a path that visits every vertex of \(K\), so \(K\) is connected.
**Theorem 5**.: _Suppose that the maximum degree \(\Delta\) of any vertex in \(G\) is at most \(m^{1/2-\alpha}\), where \(\alpha>0\), and assume \(C\leq\min(m^{2\alpha},m^{1/3})\). Then the expected number of distinctly color-compatible \(2k\)-tuples of edges of \(G\) that satisfy either Condition 1 or 3 at every vertex of \(H\), but do not satisfy Condition 1 at every vertex of \(H\), is \(O(m^{k}/C^{2k-t})\)._
**Proof:** There are again \(O(1)\) possibilities for the isomorphism class of \(K\) (i.e., which of the vertices \(v_{1},\ldots,v_{2k},w_{1},\ldots,w_{2k}\) are the same), so it suffices to prove the theorem for an arbitrary isomorphism class. Assume then that we are given the isomorphism class of \(K\). We can assume that it is distinctly colorable.
We first count the number of ways to select the vertices of \(K\). By Lemma 11, \(K\) has at most two components. As in the proof of Theorem 4, in each component, we can arbitrarily designate any one edge to be the "first edge." There are at most \(m\) ways to select its two endpoints (since \(\vec{G}\) has \(m\) edges), and there are at most \(\Delta\) ways to select each subsequent vertex in the component. Equivalently, we can arbitrarily designate any two (not necessarily adjacent) vertices of the component to be the "first two vertices;" we can then pretend that there are at most \(\sqrt{m}\) ways to select each of these two vertices and at most \(\Delta\) ways to select each subsequent vertex. Since \(K\) has at most \(2t\) vertices (because at most two vertices lie above each vertex of \(H\)), there is a total of at most \(m\Delta^{2t-2}\) possibilities for the vertices of \(K\) if \(K\) has one component, and \(m^{2}\Delta^{2t-4}\) possibilities if \(K\) has two. Note that the number is greater in the two-component case.
Once the vertices of \(K\) are chosen, colors must be assigned in such a way that the \(2k\)-tuple of edges is distinctly color-compatible. Consider any vertex \(b\) of \(H\) where Condition 3 (but not Condition 1) holds. That means that exactly two vertices \(x\) and \(y\) lie above \(b\) in \(K\), and they must be distinct (or else Condition 1 would hold). Color-compatibility requires that \(x\) and \(y\) be assigned the same color, which happens with probability \(1/C\). Furthermore, if there are two vertices \(b_{i}\) and \(b_{j}\) where Condition 3 (but not Condition 1) holds, and if \(x_{i}\), \(y_{i}\), \(x_{j}\), and \(y_{j}\) are the corresponding vertices that lie above \(b_{i}\) and \(b_{j}\), then by Lemma 5, \(x_{i}\), \(y_{i}\), \(x_{j}\), and \(y_{j}\) are all distinct. Thus every vertex where Condition 3 (but not Condition 1) holds contributes an independent factor of \(1/C\) to the count.
Suppose now that \(H\) has more edges than vertices (i.e., \(k-t\geq 1\)). There are at most \(m^{2}\Delta^{2t-4}\) ways to select the vertices of \(K\) and a probability of at most \(1/C\) that the result is distinctly color-compatible, so
(using Lemma 10) the expected number of \(2k\)-tuples of edges is at most
\[\frac{m^{2}\Delta^{2t-4}}{C}=\frac{m^{2}\Delta^{2}\Delta^{2t-6}}{C}\leq\frac{m^{2 }}{C}\left(\frac{m}{C^{2}}\right)^{2}\left(\frac{m}{C}\right)^{t-3}=\frac{m^{t+ 1}}{C^{t+2}}\leq\frac{m^{t+(k-t)}}{C^{t+2(k-t)}}=\frac{m^{k}}{C^{2k-t}}\,.\]
Next suppose instead that \(H\) has at least two vertices where Condition 3 (but not Condition 1) holds. Then there are at most \(m^{2}\Delta^{2t-4}\) ways to select the vertices of \(K\) and a probability of at most \(1/C^{2}\) that the result is distinctly color-compatible, so the expected number of \(2k\)-tuples of edges is at most
\[\frac{m^{2}\Delta^{2t-4}}{C^{2}}=\frac{m^{2}}{C^{2}}(\Delta^{2})^{t-2}\leq \left(\frac{m^{2}}{C^{2}}\right)\left(\frac{m}{C}\right)^{t-2}=\frac{m^{t}}{C^ {t}}\leq\frac{m^{t+(k-t)}}{C^{t+2(k-t)}}=\frac{m^{k}}{C^{2k-t}}\,.\]
The only remaining case is where \(H\) does not have more edges than vertices and \(H\) has only one vertex where Condition 3 (but not Condition 1) holds. By Lemma 12, \(K\) is connected, i.e., has only one component. Then there are at most \(m\Delta^{2t-2}\) ways to select the vertices of \(K\) and a probability of at most \(1/C\) that the result is distinctly color-compatible, so the expected number of \(2k\)-tuples of edges is at most
\[\frac{m\Delta^{2t-2}}{C}=\frac{m}{C}(\Delta^{2})^{t-1}\leq\left(\frac{m}{C} \right)\left(\frac{m}{C}\right)^{t-1}=\frac{m^{t}}{C^{t}}\leq\frac{m^{t+(k-t)} }{C^{t+2(k-t)}}=\frac{m^{k}}{C^{2k-t}}\,.\]
**Lemma 13**.: _If the \(2k\)-tuple of edges \(\vec{T}=(\vec{T}_{1},\vec{T}_{2})\) is distinctly colorable and satisfies Condition 1 at every vertex of \(H\), then \(\vec{T}_{1}\) and \(\vec{T}_{2}\) are each isomorphic to \(\vec{H}\)._
**Proof:** Suppose \(b\) and \(c\) are (not necessarily distinct) vertices of \(H\), and suppose \(i\in\Gamma(b)\) and \(j\in\Gamma(c)\). Since Condition 1 holds everywhere, if \(b=c\), then \(v_{i}=v_{j}\). Since \(\vec{T}\) is distinctly colorable, if \(b\neq c\), then \(v_{i}\neq v_{j}\). In other words, \(v_{i}=v_{j}\) if and only if \(b=c\). Then the edge map that sends each \(\overrightarrow{v_{2i-1}v_{2i}}\) to \(\overrightarrow{a_{2i-1}a_{2i}}\) induces an isomorphism between \(\vec{T}_{1}\) and \(\vec{H}\). The proof for \(\vec{T}_{2}\) is analogous.
**Theorem 6**.: _Suppose \(\mathcal{G}\) is either the group of \(r^{\rm th}\) roots of unity (in which case \(d=1\)) or the group \(\{\pm I,\pm M,\pm M^{2},\ldots,\pm M^{d-1}\}\). Suppose that the maximum degree \(\Delta\) of any vertex in \(G\) is at most \(m^{1/2-\alpha}\), where \(\alpha>0\), and assume \(C\leq\min(m^{2\alpha},m^{1/3})\). Then the estimate for \(\#H\) given by Theorem 1 has variance that is \(O((\#H)^{2}+m^{k}/(dC^{2k-t}))\)._
**Proof:** The variance is given by
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)\cdot d\cdot{\rm auto}(H)}\right)^{2}E \left({\rm tr}(\mathcal{S}){\rm tr}(\overline{\mathcal{S}})\right)-\left(\#H \right)^{2}.\]
As discussed earlier, \({\rm tr}(\mathcal{S}){\rm tr}(\overline{\mathcal{S}})\) is a sum of terms of the form \({\rm tr}(\mathcal{Q}(\vec{T}_{1})){\rm tr}(\overline{\mathcal{Q}(\vec{T}_{2})})\), where \(\vec{T}_{1}=\overrightarrow{v_{1}v_{2}},\ldots,\overrightarrow{v_{2k-1}v_{2k}}\) and \(\vec{T}_{2}=\overrightarrow{w_{1}w_{2}},\ldots,\overrightarrow{w_{2k-1}w_{2k}}\) are each distinctly color-compatible. By Theorems 2 and 3, such a term contributes to \(E({\rm tr}(\mathcal{S}){\rm tr}(\overline{\mathcal{S}}))\) only if \((\vec{T}_{1},\vec{T}_{2})\) satisfies Condition 1, 2, or 3 at every vertex of \(H\).
First consider the \(\vec{T}=(\vec{T}_{1},\vec{T}_{2})\) that satisfy Condition 1 at every vertex of \(H\). By Lemma 13, if \(\vec{T}\) satisfies Condition 1 at every vertex of \(H\) and
is distinctly colorable, then \(\vec{T}_{1}\) and \(\vec{T}_{2}\) are each isomorphic to \(\vec{H}\). The number of such \(\vec{T}\) is then \(O((\#H)^{2})\). By Theorems 2 and 3, each such \(\vec{T}\) contributes \(d^{2}\) to \(E(\mathrm{tr}(\mathcal{S})\mathrm{tr}(\overline{\mathcal{S}}))\) and therefore contributes at most
\[\left(\frac{C^{t}}{C(C-1)\cdots(C-t+1)\cdot\mathrm{auto}(H)}\right)^{2}\]
to the variance. This term is \(O(1)\), so the contributions of these \(\vec{T}\) to the variance is \(O((\#H)^{2})\).
Next consider the \(\vec{T}\) that satisfy Condition 1, 2, or 3 at every vertex of \(H\), but not always Condition 1. By Theorems 4 and 5, the number of such \(\vec{T}\) that are distinctly color-compatible is \(O(m^{k}/C^{2k-t})\). By Theorems 2 and 3, each such \(\vec{T}\) contributes at most \(d\) to \(E(\mathrm{tr}(\mathcal{S})\mathrm{tr}(\overline{\mathcal{S}}))\). Thus these terms contribute \(O(m^{k}/(dC^{2k-t}))\) to the variance.
## 4 Discussion of Algorithm
In this section, we discuss how our version of the algorithm compares to the original in terms of storage, update time per edge, and a one-time calculation.
First consider the case where \(\mathcal{G}\) is the group of \(r^{\mathrm{th}}\) roots of unity. As we showed in Theorem 6, the variance of a single instance of our algorithm is \(O((\#H)^{2}+m^{k}/C^{2k-t})\), so the number of instances needed to attain a variance of \(O((\#H)^{2})\) is
\[O(1+m^{k}/((\#H)^{2}C^{2k-t}))\,. \tag{15}\]
Each instance of our algorithm requires \(O(C^{2})\) storage, so the storage needed is \(O(C^{2}+m^{k}/((\#H)^{2}C^{2k-t-2}))\). Assuming our goal is to minimize storage, if the first term in this expression is larger than the second, then we want to choose a smaller value of \(C\) to ensure that
\[C^{2}\leq m^{k}/((\#H)^{2}C^{2k-t-2})\,,\]
i.e.,
\[C\leq(m^{k}/(\#H)^{2})^{1/(2k-t)}\,.\]
Thus, although we proved Theorem 6 for any \(C\leq\min(m^{2\alpha},m^{1/3})\), the best choice of \(C\) is \(\min(m^{2\alpha},m^{1/3},(m^{k}/(\#H)^{2})^{1/(2k-t)})\). In that case, the number of instances of our algorithm that we need to perform is \(O(m^{k}/((\#H)^{2}C^{2k-t}))\), so the update time per edge is also \(O(m^{k}/((\#H)^{2}C^{2k-t}))\), and the storage is \(O(m^{k}/((\#H)^{2}C^{2k-t-2}))\). We thus save a factor of roughly \(C^{2k-t-2}\) in storage and \(C^{2k-t}\) in update time over the original algorithm.
Of the two terms in (15), 1 and \(m^{k}/((\#H)^{2}C^{2k-t})\), if the first is larger, then we are doing \(O(1)\) instances of the algorithm, so the update time per edge is \(O(1)\). If the second is larger, then we can reduce the update time per edge by instead letting \(\mathcal{G}\) be the group \(\{\pm I,\pm M,\ldots,\pm M^{d-1}\}\) and setting \(d=m^{k}/((\#H)^{2}C^{2k-t})\), but performing \(1/d\) times as many instances of the algorithm. By Theorem 6, the variance remains \(O((\#H)^{2})\). The storage requirement also does not change, since we do \(1/d\) times as
many instances of the algorithm, but each instance requires \(d\) times the storage. However, now we are performing \(O(1)\) instances of the algorithm, so the update time is \(O(1)\).
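As a rough how-to, the parameter choices discussed above can be packaged as in the following illustrative Python sketch. It is not from the paper; in particular, since \(\#H\) is exactly the quantity being estimated, the argument `count_estimate` stands in for whatever prior guess or lower bound one is willing to assume, and the function name `suggest_parameters` is hypothetical.

```python
import math

def suggest_parameters(m, k, t, alpha, count_estimate):
    """Illustrative sketch (not from the paper): choose C and d along the lines
    of the discussion above, given a rough prior guess `count_estimate` for #H.
    """
    # Largest C permitted by the analysis, further capped so that the C^2
    # per-instance storage does not dominate the m^k / ((#H)^2 C^(2k-t-2)) term.
    C = min(m ** (2 * alpha),
            m ** (1.0 / 3.0),
            (m ** k / count_estimate ** 2) ** (1.0 / (2 * k - t)))
    C = max(int(C), t)  # at least t colors are needed to color H distinctly

    # Instances needed for variance O((#H)^2) with the roots-of-unity group.
    instances = max(1, math.ceil(m ** k / (count_estimate ** 2 * C ** (2 * k - t))))

    # With the matrix group {+-I, +-M, ..., +-M^(d-1)}, taking d equal to this
    # instance count lets O(1) instances reach the same variance, so the
    # per-edge update time becomes O(1) while total storage stays the same.
    d = instances
    return C, d
```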
There is one drawback of our version of the algorithm: when the stream ends, a potentially large calculation is required. In particular, we must compute
\[\sum_{\begin{subarray}{c}(c_{1},\ldots,c_{t})\\ \text{distinct}\end{subarray}}\mathcal{S}_{(c_{1},\ldots,c_{t})}\,.\]
This could potentially involve \(C^{t}\) work, although for most \(H\), we can use inclusion-exclusion to perform the calculation more efficiently. For instance, if \(H\) is a 4-cycle with vertices 1,2,3,4 and edges \(\overrightarrow{12},\overrightarrow{23},\overrightarrow{34},\overrightarrow{41}\), then we can loop through colors \(c_{1}\) and \(c_{3}\) for vertices 1 and 3. For each such pair of colors, we can loop through colors \(c_{2}\notin\{c_{1},c_{3}\}\) for vertex 2, computing
\[\sum_{c_{2}\notin\{c_{1},c_{3}\}}\mathcal{Z}_{1}^{c_{1},c_{2}}\mathcal{Z}_{2}^ {c_{2},c_{3}}\,.\]
Separately, we can loop through colors \(c_{4}\notin\{c_{1},c_{3}\}\) for vertex 4, computing
\[\sum_{c_{4}\notin\{c_{1},c_{3}\}}\mathcal{Z}_{3}^{c_{3},c_{4}}\mathcal{Z}_{4}^ {c_{4},c_{1}}\,.\]
We can multiply those two sums and then subtract the terms where \(c_{2}=c_{4}\):
\[\sum_{c\notin\{c_{1},c_{3}\}}\mathcal{Z}_{1}^{c_{1},c}\mathcal{Z}_{2}^{c,c_{3} }\mathcal{Z}_{3}^{c_{3},c}\mathcal{Z}_{4}^{c,c_{1}}\,.\]
We thus do the computation with \(C^{3}\) work rather than \(C^{4}\) work. In fact, it is possible to do slightly better: each of the three sums above can be computed for all \(c_{1}\) and \(c_{3}\) by performing a \(C\times C\) matrix multiplication, which can be done using less than \(C^{3}\) work. It would be unusual for this computation to be a significant issue, but if it is, then we might want to choose a smaller value of \(C\), in which case we would not realize the full reduction in storage.
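For concreteness, the 4-cycle computation just described could be organized as in the following NumPy sketch. It is illustrative only and assumes the scalar (roots-of-unity, \(d=1\)) case, so that each \(\mathcal{Z}_{i}^{c_{1},c_{2}}\) is a single complex number stored in a \(C\times C\) array; the function name `four_cycle_final_sum` is hypothetical.

```python
import numpy as np

def four_cycle_final_sum(Z1, Z2, Z3, Z4):
    """Sum of Z1[c1,c2] * Z2[c2,c3] * Z3[c3,c4] * Z4[c4,c1] over all tuples of
    *distinct* colors (c1, c2, c3, c4), computed with C x C matrix products and
    the inclusion-exclusion corrections described above.  Each Z_i is a C x C
    complex array with Z_i[c, c'] = Z_i^{c,c'} (scalar, d = 1 case).
    """
    d1, d2, d3, d4 = np.diag(Z1), np.diag(Z2), np.diag(Z3), np.diag(Z4)

    # A[c1,c3] = sum over c2 not in {c1,c3} of Z1[c1,c2] * Z2[c2,c3].
    A = Z1 @ Z2 - d1[:, None] * Z2 - Z1 * d2[None, :]

    # B[c1,c3] = sum over c4 not in {c1,c3} of Z3[c3,c4] * Z4[c4,c1].
    B = (Z3 @ Z4).T - Z3.T * d4[:, None] - Z4.T * d3[None, :]

    # D[c1,c3] = sum over c not in {c1,c3} of Z1[c1,c]*Z2[c,c3]*Z3[c3,c]*Z4[c,c1],
    # i.e. the c2 = c4 terms that the product A * B over-counts.
    D_full = np.einsum('ac,cb,bc,ca->ab', Z1, Z2, Z3, Z4)
    D = (D_full
         - d1[:, None] * Z2 * Z3.T * d4[:, None]
         - Z1 * d2[None, :] * d3[None, :] * Z4.T)

    # Only pairs with c1 != c3 are wanted (the corrections above are only valid
    # off the diagonal anyway), so zero out c1 == c3 before summing.
    total = A * B - D
    np.fill_diagonal(total, 0.0)
    return total.sum()
```

The returned quantity is \(\mathcal{S}\); the estimator would then scale it by \(C^{4}/\bigl(C(C-1)(C-2)(C-3)\bigr)\) and divide by \(\operatorname{auto}(H)\), as in the output step of the algorithm.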
## 5 Conclusion
We have described three modifications to the [KMSS]-algorithm: we define one hash function \(\mathcal{X}_{i}\) for each half-edge of \(H\) rather than one for each vertex of \(H\); we assign colors to the vertices of \(G\) and restrict to distinctly color-compatible \(\vec{T}\); and we allow matrix-valued hash functions as an alternative to complex-valued hash functions. The first two modifications reduce the variance in each instance of the algorithm, and therefore reduce the number of instances needed. This in turn reduces the required storage and update time per edge. The third modification reduces only the update time per edge.
Suppose that the maximum degree \(\Delta\) of any vertex in \(G\) is at most \(m^{1/2-\alpha}\), where \(\alpha>0\), and suppose \(C\leq\min(m^{2\alpha},m^{1/3})\). For the original [KMSS]-algorithm, both the storage and update time per edge are \(O(m^{k}/(\#H)^{2})\). For our algorithm, we have shown that the update time
per edge is \(O(1)\), and the storage is \(O(C^{2}+m^{k}/(C^{2k-t-2}(\#H)^{2}))\), i.e., the storage has been reduced approximately by a factor of \(C^{2k-t-2}\).
|
2303.05445
|
Flooding with Absorption: An Efficient Protocol for Heterogeneous
Bandits over Complex Networks
|
Multi-armed bandits are extensively used to model sequential decision-making,
making them ubiquitous in many real-life applications such as online
recommender systems and wireless networking. We consider a multi-agent setting
where each agent solves their own bandit instance endowed with a different set
of arms. Their goal is to minimize their group regret while collaborating via
some communication protocol over a given network. Previous literature on this
problem only considered arm heterogeneity and networked agents separately. In
this work, we introduce a setting that encompasses both features. For this
novel setting, we first provide a rigorous regret analysis for a standard
flooding protocol combined with the classic UCB policy. Then, to mitigate the
issue of high communication costs incurred by flooding in complex networks, we
propose a new protocol called Flooding with Absorption (FwA). We provide a
theoretical analysis of the resulting regret bound and discuss the advantages
of using FwA over flooding. Lastly, we experimentally verify on various
scenarios, including dynamic networks, that FwA leads to significantly lower
communication costs despite minimal regret performance loss compared to other
network protocols.
|
Junghyun Lee, Laura Schmid, Se-Young Yun
|
2023-03-09T17:44:58Z
|
http://arxiv.org/abs/2303.05445v4
|
# Communication-Efficient Collaborative Heterogeneous Bandits in Networks
###### Abstract.
The multi-agent multi-armed bandit problem has been studied extensively due to its ubiquity in many real-life applications, such as online recommendation systems and wireless networking. We consider the setting where agents should minimize their group regret while _collaborating_ over a given _graph_ via some communication protocol and where each agent is given a _different set of arms_. Previous literature on this problem only considered one of the two desired features separately: agents with the same arm set communicate over a general graph, or agents with different arm sets communicate over a fully connected graph. In this work, we introduce a more general problem setting that encompasses all the desired features. For this novel setting, we first provide a rigorous regret analysis for the standard flooding protocol combined with the UCB policy. Then, to mitigate the issue of high communication costs incurred by flooding, we propose a new protocol called **Flooding with Absorption (FWA)**. We provide a theoretical analysis of the regret bound and intuitions on the advantages of using **FWA** over flooding. Lastly, we verify empirically that using **FWA** leads to significantly lower communication costs despite minimal regret performance loss compared to flooding.
bandits, multi-agent systems, collaborative, network, flooding
**Contributions.** In this work, we address this gap in the literature and formally introduce a novel setting of collaborative heterogeneous multi-agent multi-armed stochastic bandits over a communication network. Here, the goal of the agents (or nodes) is to minimize their _group_ regret and, by collaborating with one another through sharing information about arms pulled and rewards received, to speed up the overall learning process. To this end, we consider the approach that nodes individually run the standard UCB policy, given their own and others' observations. We first provide a theoretical analysis of the group regret in this setting, using a scenario where agents share their received rewards via the classic flooding protocol. Under this protocol, each message is forwarded to the neighbors of the "current" agent at every time step until its time-to-live expires.
We then identify a major issue with the classic flooding protocol, which is its high communication complexity due to the large number of messages sent. To address this problem, we introduce a new lightweight communication protocol, called Flooding with Absorption (FWA), which can exploit the intrinsic heterogeneity of our setting, resulting in a significantly reduced number of messages sent. In a nutshell, FWA stops message propagation once the message has been received by an agent that can use the information contained within. We provide both theoretical and experimental results to show that this protocol is not only highly communication-efficient, but also comes at a minimal regret performance loss for heterogeneous topologies, compared to the classic flooding. Our experiments show that for the considered instances, using FWA can avoid heavy link congestion in networks and cut the average number of sent messages approximately in half, while also providing performance that is almost on par with standard flooding. Even when message loss can occur with constant probability, our experimental findings show that FWA does not perform significantly worse than the usual flooding protocol, thus making it deployable in a wide range of network applications.
## 2. System Model
We now describe our setting of collaborative _heterogeneous_ multi-agent multi-armed stochastic bandits over a communication network. We assume that there are \(N\) agents over an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), with \(|\mathcal{V}|=N\). We denote \(\mathcal{N}_{\mathcal{G}}(v)\) to be the neighborhood of \(v\) in \(\mathcal{G}\)_not_ including \(v\), and for \(S\subset\mathcal{V}\), let \(\mathcal{G}[S]\) be the induced subgraph. Each agent \(v\in\mathcal{V}\) has access to a finite set \(\mathcal{K}_{v}\) of arms she can pull, and let \(\mathcal{K}=\cup_{v\in\mathcal{V}}\mathcal{K}_{v}\). The execution of all agents proceeds in a sequence of synchronous rounds \(t=1,2,\ldots\). In each round \(t\), all agents simultaneously (i) pull some arm, (ii) compute and send a message to their neighbors in the network, and (iii) receive and process all messages from their neighbors. From the perspective of agents, let us denote \(\mathcal{V}_{a}=\{v\in\mathcal{V}:a\in\mathcal{K}_{v}\}\subseteq\mathcal{V}\) to be the set of agents having arm \(a\), and let \(\mathcal{V}_{-a}\subseteq\mathcal{V}_{a}\) be the set of agents containing \(a\) as a _suboptimal_ arm.
Following (Sakai et al., 2016), let \(\mathcal{M}_{\sigma}\) be a set of \(\sigma\)-sub-Gaussian distributions. Each arm \(a\in\mathcal{K}\) is associated with an unknown reward distribution \(P_{a}\in\mathcal{M}_{\sigma}\), and let \(\mu:\mathcal{M}_{\sigma}\rightarrow\mathbb{R}\) be a function mapping each (reward) distribution to its mean. For simplicity, denote \(\mu_{a}:=\mu(P_{a})\). We note that \(P_{a}\) is independent of agents' identities, i.e., each agent \(v\), regardless of their arm set \(\mathcal{K}_{v}\), faces the same distribution of rewards for the same arm \(a\) (whenever \(\mathcal{K}_{v}\) contains \(a\)), and receives an i.i.d. sample from this distribution upon pulling this arm. We denote \(a^{v}_{\star}\) to be the best local arm for agent \(v\), and \(\mu^{v}_{\star}=\mu_{a^{v}_{\star}}\). The main difficulty of analyzing heterogeneous bandits is that even for the same arm \(a\), the suboptimality gap may be different across agents containing \(a\).
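To make the setting concrete, the following minimal Python sketch encodes the ingredients above (communication graph, per-agent arm sets, agent-independent Gaussian arm means). It is illustrative only; all names are ours and do not come from the authors' implementation.

```python
# Minimal sketch of the system model above; names and structure are illustrative,
# not taken from the authors' code.
import random

class HeterogeneousBanditInstance:
    def __init__(self, neighbors, arm_sets, means, sigma=1.0):
        self.neighbors = neighbors  # adjacency: agent -> list of neighboring agents (graph G)
        self.arm_sets = arm_sets    # K_v: agent -> set of arm indices available to that agent
        self.means = means          # mu_a: arm -> mean reward, identical for every agent
        self.sigma = sigma          # scale of the (sub-)Gaussian reward noise

    def pull(self, agent, arm):
        """i.i.d. reward for arm a; the distribution does not depend on the agent."""
        assert arm in self.arm_sets[agent]
        return random.gauss(self.means[arm], self.sigma)

    def gap(self, agent, arm):
        """Agent-specific suboptimality gap Delta_a^v = mu_star^v - mu_a."""
        best_local_mean = max(self.means[a] for a in self.arm_sets[agent])
        return best_local_mean - self.means[arm]
```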
Lastly, we remark that we do _not_ consider any collisions (Sakai et al., 2016; Sakai et al., 2016; Sakai et al., 2016), i.e., two neighbors pulling the same arm do not affect their observed rewards in any way. Rather, we focus on the collaborative setting where the agents are encouraged to cooperate with one another by sharing their own observations.
**Remark 1**.: _It is helpful to think of the arm distribution as a hypergraph \(\mathcal{H}=(\mathcal{K},\mathcal{F})\) with \(\mathcal{F}=\{\mathcal{K}_{o}:v\in\mathcal{V}\}\); see Figure 1. Here, two agents incident in \(\mathcal{H}\) with some arm \(a\) indicates that they will collaborate via \(a\), if they can communicate with one another._
### Goals
As done in the classic work on regret minimization in single-agent bandits (Bakai et al., 2016; Sakai et al., 2016; Sakai et al., 2016), our goal is to minimize the _group_ regret, which has been studied widely in the context of collaborative multi-agent bandits (Sakai et al., 2016; Sakai et al., 2016; Sakai et al., 2016). As the name suggests, the agents can (and should) collaborate with each other over some given communication network to minimize the group regret, defined as
\[\mathbb{E}[R(T)]:=\sum_{v\in\mathcal{V}}\left\{\mathbb{E}[R^{v}(T)]\triangleq \sum_{a\in\mathcal{K}_{v}}\Delta^{v}_{a}\mathbb{E}[N^{v}_{a}(T)]\right\}, \tag{1}\]
where \(\Delta^{v}_{a}=\mu^{v}_{\star}-\mu_{a}\) is the agent-specific gap of arm \(a\) and \(N^{v}_{a}(t)\) is the number of times agent \(v\) plays the arm \(a\) up to time \(t\).
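A hypothetical helper for evaluating Eq. (1) from simulation output might look as follows; `pull_counts[v][a]` is assumed to hold \(N^{v}_{a}(T)\).

```python
# Hypothetical evaluation of the group regret in Eq. (1) from recorded pull counts;
# pull_counts[v][a] plays the role of N_a^v(T).
def group_regret(arm_sets, means, pull_counts):
    total = 0.0
    for v, arms in arm_sets.items():
        mu_star_v = max(means[a] for a in arms)            # best local mean mu_star^v
        for a in arms:
            gap_va = mu_star_v - means[a]                  # Delta_a^v
            total += gap_va * pull_counts[v].get(a, 0)     # Delta_a^v * N_a^v(T)
    return total
```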
In multi-agent settings where agents collaborate with one another, designing effective (learning) algorithms with low communication complexity is of paramount importance, as we do not want the high communication complexity to overshadow the gain obtained from collaboration. Here, we define communication complexity as the total number of message exchanges, with each message consisting of an arm index, an observed reward, and possibly other information. In _homogeneous_ settings, where all agents are identical in that \(\mathcal{K}_{v}=\mathcal{K}\) for all \(v\in\mathcal{V}\), arm elimination-type algorithms (Sakai et al., 2016)
Figure 1. Communication network and arm heterogeneity. Agents are distributed on a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and arms are distributed on a hypergraph \(\mathcal{H}=(\mathcal{K},\mathcal{F})\).
as well as gossip-type protocols (Hernandez et al., 2017; Goh et al., 2018) have been shown to be communication efficient and effective in terms of group regret.
In our setting, there are two possible heterogeneities: one due to the underlying communication network \(\mathcal{G}\) in which, depending on the network topology and especially each agent's degree, the amount of benefit one receives from collaboration differs; another due to the heterogeneity of the arm sets of each agent in which the same arm \(a\) may be optimal for some agents but suboptimal for others. Of course, even with these heterogeneities, we expect that agents communicating and collaborating should lead to a speed-up compared to the case of every agent solving the problem for themselves without sharing any information. The question is then how much speed-up (in terms of the group regret) one could get from collaboration, taking into account these heterogeneities, as well as the communication complexity of the protocol used to facilitate such collaboration.
## 3. Algorithms
### UCB-Flooding
As is common in much of the previous literature, we will focus on agents that individually run the classic UCB policy (Ball and Rafter, 1998; Goh et al., 2018; Goh et al., 2018). In this process, agents can, and should, take advantage of the distributed setting by communicating received rewards amongst each other over the underlying communication network - which is not fully straightforward to implement or analyze, given that the agents do not all share the same set of arms. For instance, the instantaneous reward sharing (IRS) (Goh et al., 2018; Goh et al., 2018) in which information is shared only among the immediate neighbors may not lead to the desired speed-up, if the immediate neighbors do not share any arms.
One way of mitigating this issue is to make the agents share information via the standard flooding communication protocol, where the number of rounds of message forwarding is limited by time-to-live (TTL) \(\gamma\), also sometimes referred to as hop-limited flooding (Hernandez et al., 2017; Goh et al., 2018; Goh et al., 2018). This protocol does not require the nodes to have previous knowledge of the network topology. To account for potential loops in the network and avoid a broadcast storm, we explicitly use a variant of sequence number-controlled flooding (SNCF). We call this UCB-Flooding and refer to it simply as Flooding henceforth. The pseudocode, which is Algorithm 1 with \(absorb=False\), is presented in Appendix A. We note that Madhushani et al (Madhushani et al., 2018) also considered this type of flooding protocol and referred to this as "message passing". However, their algorithm requires that each agent knows its neighborhood in \(\mathcal{G}^{\gamma}\) (see Definition 4), which is a rather strong assumption. This very knowledge of \(\mathcal{G}^{\gamma}\) bypasses any difficulties arising from messages being delayed as they travel along different paths. We account for these factors, and additionally use SNCF in our algorithm to avoid both a broadcast storm and biasing the reward estimates.
Flooding proceeds as follows: In each round \(t\), each agent \(v\) pulls an arm \(a^{v}(t)\) that has the highest upper confidence bound. Note that \(M^{v}_{a}(t)\) is the number of pulls of arm \(a\) available to agent \(v\) by time \(t\), and \(\hat{\mu}^{v}_{a}(t)\) is the estimate of \(\mu_{a}\) made by agent \(v\) at time \(t\). In both estimates, agent \(v\) makes use of all observations available to \(v\) by time \(t\), including the messages relayed to her. Having received the corresponding reward \(X^{v}_{a^{v}(t)}(t)\) from pulling arm \(a^{v}(t)\), agent \(v\) creates a message
\[m=\langle a^{v}(t),X^{v}_{a^{v}(t)}(t),\textsc{Hash}(v,a^{v}(t),X^{v}_{a^{v}(t)}(t)),v,\gamma\rangle, \tag{2}\]
and pushes it to its current queue of messages to be sent, denoted by \(\mathcal{M}^{v}\). After UCB has been completed, each agent starts sending out (as well as receiving) messages to (from) its neighbors.
Our message \(m=\langle m(0),m(1),m(2),m(3),m(4)\rangle\) consists of the following components: \(m(0)\) and \(m(1)\) are the arm pulled by agent \(v\) and the reward received at time \(t\), respectively. \(m(2)\) is a hash value of the originating agent \(v\), the arm pulled, and the obtained reward that acts as a unique identifier of the message. Our protocol uses this message identifier to control flooding, which avoids routing loops that can lead to broadcast storms and improper bias in the reward estimations. Hence, each agent \(v\) keeps track of the hash values of messages that she has seen by time \(t\) via a _queue_ of size \(\gamma N\), denoted as \(\mathcal{H}^{v}\) in Algorithm 1. If an already known message comes in, that message is deleted on arrival. Setting the memory length to be \(\gamma N\) follows from the fact that all messages can be forwarded at most \(\gamma\) times, and the worst-case space complexity is when the agent has to keep track of all messages from all agents for the last \(\gamma\) time steps. \(m(3)\) is the agent that last forwarded the message; if the receiver \(w\) of the message \(m\) does not have the corresponding arm in her arm set \(\mathcal{K}_{w}\), she simply passes on \(m\) to her neighbors (except for the last forwarder \(m(3)\)) after replacing \(m(3)\) with \(w\). This prevents messages from echoing after just one hop, further reducing the communication cost.
\(m(4)\) keeps track of the remaining life span of the message, which is initialized to the _time-to-live_ (TTL) value \(\gamma\). It is decayed by \(1\) every time the message is forwarded by a different agent, and the message gets discarded when TTL reaches \(0\), i.e., the message can be forwarded for at most \(\gamma\) hops. We note that \(\gamma=1\) is equivalent to IRS (Goh et al., 2018), where each agent only sends its message to its neighbors, and any message containing arm \(a\) that is sent to agents not containing \(a\) becomes void.
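The pseudocode of Algorithm 1 is deferred to the appendix, so the sketch below is only our own paraphrase of the pull-and-send half of one UCB-Flooding round; the exact confidence width and the hash construction are assumptions on our side.

```python
# Our paraphrase of the pull-and-send half of one UCB-Flooding round (Algorithm 1
# is in the paper's appendix); the confidence width and the hashing are assumptions.
import hashlib
import math

def ucb_index(mu_hat, pulls, t, alpha=2.0, sigma=1.0):
    """Generic UCB index with exploration function f(t) = t^alpha."""
    if pulls == 0:
        return float("inf")
    return mu_hat + sigma * math.sqrt(2.0 * alpha * math.log(max(t, 2)) / pulls)

def pull_and_enqueue(agent, t, arm_set, reward_fn, est, cnt, outbox, gamma):
    # Pull the arm with the largest UCB index, using every sample seen so far
    # (own pulls plus rewards relayed by other agents).
    arm = max(arm_set, key=lambda a: ucb_index(est.get(a, 0.0), cnt.get(a, 0), t))
    x = reward_fn(arm)
    cnt[arm] = cnt.get(arm, 0) + 1
    est[arm] = est.get(arm, 0.0) + (x - est.get(arm, 0.0)) / cnt[arm]
    # Build the message m of Eq. (2); the hash is the SNCF identifier that lets
    # receivers drop duplicates.
    msg_id = hashlib.sha1(f"{agent}:{arm}:{x}:{t}".encode()).hexdigest()
    outbox.append((arm, x, msg_id, agent, gamma))  # (arm, reward, id, last forwarder, TTL)
    return arm, x
```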
#### 3.1.1. Group Regret Bound of Flooding
Given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), we recall some graph-theoretic quantities (Goh et al., 2018):
Definition 1.: _For \(v,w\in\mathcal{V}\), their_ **graph distance** _in \(\mathcal{G}\), denoted as \(d_{\mathcal{G}}(v,w)\), is the length of the shortest path connecting them._
Definition 2.: _The_ **clique covering number** _of \(\mathcal{G}\), denoted as \(\theta(\mathcal{G})\), is the smallest integer \(m\) such that \(\mathcal{V}\) can be partitioned into \(m\) subsets, each of which induces a clique. Any such partition (not necessarily minimum) is called a_ **clique cover**_._
Definition 3.: _The_ **independence number** _of \(\mathcal{G}\), denoted as \(\alpha(\mathcal{G})\), is the maximum size of a subset of \(\mathcal{V}\) that induces no edges. Any such set (not necessarily maximum) is said to be_ **independent**_._
Definition 4.: _For any integer \(\gamma\geq 0\), \(\mathcal{G}^{\gamma}\) is the_ **graph power of \(\mathcal{G}\)** _of \(\gamma\)-_**th order**_, which is a graph on \(\mathcal{V}\) such that \(\{v,w\}\) is an edge whenever \(d_{\mathcal{G}}(v,w)\leq\gamma\). The_ **diameter** _of \(\mathcal{G}\) is defined as the minimum value of \(\gamma\) such that \(\mathcal{G}^{\gamma}\) is isomorphic to a complete graph._
We then define our suboptimality gap \(\Delta_{a}^{\gamma}\) as follows:
\[\Delta_{a}^{\gamma}\coloneqq\max_{\mathcal{C}_{-a}}\left(\sum_{C\in\mathcal{C}_{-a}}\left(\frac{2}{\min_{v\in C}\Delta_{a}^{v}}-\frac{1}{\max_{v\in C}\Delta_{a}^{v}}\right)\right)^{-1}, \tag{3}\]
where \(\max_{\mathcal{C}_{-a}}\) is taken over all possible clique covers \(\mathcal{C}_{-a}\) of \([\mathcal{G}^{\gamma}]_{-a}:=\mathcal{G}^{\gamma}[\mathcal{V}_{-a}]\). If \(\mathcal{V}_{-a}=\emptyset\), then we simply set \(\Delta_{a}^{\gamma}=0\). Here, we recall that \(\Delta_{a}^{v}=\mu^{v}_{\star}-\mu_{a}\) is the agent-specific gap of arm \(a\).
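To unpack Eq. (3): for any single clique cover of \([\mathcal{G}^{\gamma}]_{-a}\) the inner sum is easy to evaluate, and \(\Delta_{a}^{\gamma}\) is the largest value obtainable over all covers. A small illustrative helper (our own, with a fixed cover supplied by the caller):

```python
# Evaluate the candidate value of Eq. (3) for one fixed clique cover of [G^gamma]_{-a};
# Delta_a^gamma is the maximum of this quantity over all clique covers.
def gap_candidate_from_cover(clique_cover, agent_gaps):
    """clique_cover: list of lists of agents, each inducing a clique in [G^gamma]_{-a}.
    agent_gaps: dict agent -> Delta_a^v for the fixed arm a."""
    inner = 0.0
    for clique in clique_cover:
        gaps = [agent_gaps[v] for v in clique]
        inner += 2.0 / min(gaps) - 1.0 / max(gaps)
    return 1.0 / inner
```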
We now present a regret upper bound for Flooding:
**Theorem 3.1**.: _Algorithm 1 with \(absorb=False\), \(f(t)=t^{\alpha}\), \(\alpha>\max\left(\frac{1}{2},\frac{2\sigma^{2}}{\gamma+1}\right)\), and \(\gamma\in\{0,1,\cdots,\operatorname{diam}(\mathcal{G})\}\) achieves the following group regret upper bound:_
\[\mathbb{E}\left[R(T)\right]\leq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \Delta_{a}^{\gamma}>0\end{subarray}}\frac{4\alpha\log T}{\Delta_{a}^{\gamma}} +b(\gamma)+\sum_{a\in\mathcal{K}}f_{a}(\gamma), \tag{4}\]
_where_
\[b(\gamma):=\left(\frac{\alpha+1/2}{\alpha-1/2}\right)^{2}\frac{8(\gamma+1)}{\log\frac{(\gamma+1)(\alpha+1/2)}{4\sigma^{2}}}\sum_{a\in\mathcal{K}}\sum_{v\in\mathcal{V}_{a}}\Delta_{a}^{v}\]
_and_
\[f_{a}(\gamma)=\sum_{v\in\mathcal{V}_{-a}}\Delta_{a}^{v}\min\left(2\gamma,\frac{4\alpha\log T}{(\Delta_{a}^{v})^{2}}\right).\]
The complete proof is provided in Appendix C, using a clique covering argument and an Abel transformation. See Appendix D.1 for more thorough discussions on the main technical challenges when proving the regret bound for our setting, compared to previously considered settings.
By choosing the minimum clique cover for each \(a\) in the above definition of \(\Delta_{a}^{\gamma}\), a simplified, asymptotic regret bound that explicitly shows the improved dependency on \(N\) can be deduced:
**Corollary 3.2**.: _When \(\max\left(b_{\alpha,\gamma,\sigma},\sum_{a\in\mathcal{K}}f_{a}(\gamma)\right)= \operatorname{o}(\log T)\),_
\[\limsup_{T\to\infty}\frac{\mathbb{E}\left[R(T)\right]}{\log T}\leq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \Delta_{a}^{\gamma}>0\end{subarray}}\frac{4\alpha}{\Delta_{a}^{\gamma}}\leq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \tilde{\Delta}_{a}>0\end{subarray}}\frac{8\alpha\,\theta\left(\left[\mathcal{G}^{\gamma}\right]_{-a}\right)}{\tilde{\Delta}_{a}}, \tag{5}\]
_where \(\tilde{\Delta}_{a}:=\min_{v\in\mathcal{V}_{-a}}\Delta_{a}^{v}\)._
Intuitively, \(\Delta_{a}^{\gamma}\) describes the difficulty of the given problem instance, which takes into account _both_ the arm heterogeneity and the underlying graph. Note that in the more simplified (yet looser) asymptotic regret bound given in the RHS of Eqn. (5), \(\tilde{\Delta}_{a}\) is the suboptimality gap introduced in (Gelman and Krapivsky, 2017).
Of course, compared to the setting without any collaboration, we obtain an improvement in \(N\). We now compare our bound with previous bounds in different collaborative settings. When the graph is fully connected, we recover the regret bound presented in (Gelman and Krapivsky, 2017) with matching \(\log T\) dependency and an improved leading coefficient1. Also, in the homogeneous setting with a general graph (Zhu and Zhang, 2017; Zhang et al., 2017), \(\Delta_{a}^{\gamma}\) reduces to \(\frac{\Delta_{a}^{Kolla}}{\theta(\mathcal{G}^{\gamma})}\), where \(\Delta_{a}^{Kolla}\) is the suboptimality gap as defined in (Zhu and Zhang, 2017), satisfying \(\Delta_{a}^{Kolla}:=\Delta_{a}^{v}\) for all \(v\in\mathcal{V}\). As \(\frac{1}{\theta(\mathcal{G}^{\gamma})}\) is independent of the arm \(a\), we've shown that our \(\Delta_{a}^{\gamma}\) successfully generalizes the suboptimality gap of (Zhu and Zhang, 2017). When \(\gamma=\operatorname{diam}(\mathcal{G})\), we have that \(\Delta_{a}^{\gamma}=\Delta_{a}^{Kolla}\) as \(\theta(\mathcal{G}^{\gamma})=1\), which results in the same regret bound as (Zhu and Zhang, 2017). When \(\gamma=1\) (IRS), it can be observed that IRS and FWA coincide, yet our bound is a bit worse compared to (Zhu and Zhang, 2017), whose bound depends on \(\alpha(\mathcal{G})\). Precisely, the gap in the regret bounds depends on the _covering gap_ \(\theta(\mathcal{G})-\alpha(\mathcal{G})\), which is known to be small for many classes of graphs and zero for perfect graphs; see (Zhu and Zhang, 2017; Zhang et al., 2017) for some recent advances.
Footnote 1: Theorem 2 of (Gelman and Krapivsky, 2017) requires \(\alpha>2\), and thus with proper scaling, it can be seen that our coefficient is \(8\alpha\) while their coefficient is \(24\alpha\).
#### 3.1.2. Drawback of Flooding: Communication Complexity
Classic (hop-limited) flooding algorithms (Zhu and Zhang, 2017), which disseminate a single message throughout the network as long as the time-to-live (TTL) of the message has not yet reached 0, lead to an optimal spread of information in terms of the _completion time_, the time in which global node outreach is achieved, i.e. when all (or some prescribed ratio of) nodes obtain the message. We remark that in our setting, the goal is quite different; instead of dealing with a single message, each agent creates a new message at every \(t\), and the goal is to pass those messages around in the network to facilitate collaboration.
Thus, although Flooding is naturally optimal in terms of information dissemination and significantly improves the group regret (Theorem 3.1) in our setting, it is usually very expensive in terms of communication complexity (CC), defined as the number of messages sent by all agents (Gelman and Krapivsky, 2017). Indeed, for \(\gamma=\operatorname{diam}(\mathcal{G})\), the worst-case CC is \(O(N\cdot|\mathcal{E}|\cdot\gamma)\), which is attained when every message created by every agent up to time \(T\) is being passed around at every edge. As with all flooding-based protocols, this issue can lead to severe problems in a wide range of networking applications. In particular, high link congestion on "sparse" links between dense network regions can not only lead to high latency, but also message losses and link failures due to limited bandwidth/network capacity.
One naive way of controlling the CC is to somehow tune the TTL, \(\gamma\); high \(\gamma\) means information is shared more but with higher CC, and vice versa. However, in our setting, the trade-off between CC and the group regret is not trivial due to the arm heterogeneity; for instance, IRS (Zhu and Zhang, 2017; Zhang et al., 2017), i.e., \(\gamma=1\), has a lower communication complexity \(O(N\cdot|\mathcal{E}|)\) but often does not result in good regret guarantees, as immediate neighbors may not share any arms.
On the other end of the communication protocol spectrum for bandit applications are (uniform) gossip algorithms (Gelman and Krapivsky, 2017; Zhang et al., 2017; Zhang et al., 2017), where messages are forwarded to only one random neighbor at a time until the TTL expires. However, on a network with sparse links, i.e. bottlenecks, it is well known that uniform gossiping suffers from large latencies (Gelman and Krapivsky, 2017; Zhang et al., 2017) as it may take longer to discover the "right" link to use by a random process. Using such a protocol in a setting like ours is hence problematic.
We now strive to find a simple communication protocol that has good regret guarantees when combined with the UCB policy (compared to UCB-Flooding) and low CC. In the following section, we introduce a new protocol that interpolates between the communication-efficient nature of IRS and the regret optimality of Flooding by using the intrinsic heterogeneity of the system.
### A New Efficient & Effective Protocol: UCB-Flooding with Absorption
To deal with the aforementioned issues, we propose a new approach, which we call **Flooding with Absorption (FWA)** (Figure 2), whose pseudocode is shown in Algorithm 1 with \(absorb=True\), presented in Appendix A. In contrast to Flooding, once a message hits an agent that has the arm in the message, the agent _absorbs_ the message, i.e., does not forward it any further. Additionally, as in Flooding, we retain the TTL \(\gamma\), meaning that if a message
originating at time \(t\) has not found an absorbing agent until \(t^{\prime}=t+\gamma\), it gets discarded. In case the message hits a "dead end", i.e., a leaf node, it is also discarded.
This seemingly small difference to Flooding is actually critical in ensuring low communication complexity, as it prevents messages from circulating for too long. We note that FWA is somewhat reminiscent of the well-studied replication-based epidemic and other controlled flooding algorithms (Graham et al., 2007; Kudzik et al., 2008; Kudzik et al., 2008; Kudzik et al., 2008), which were designed for ad-hoc mobile networks. Our FWA distinguishes itself by using the inherent heterogeneity of agents _without_ any explicit tuning or need for solving NP-hard combinatorial problems (Kudzik et al., 2008). Furthermore, the goal of FWA is to disseminate information to any node that uses it for its own learning, _not_ to route packets from a particular source to a particular destination.
**Remark 2**.: _For FWA to be valid, each agent \(v\in\mathcal{V}\setminus\mathcal{V}_{a}\) must have the capabilities of receiving and sending messages containing \(a\). Also, each agent \(v\) must have a sufficiently large memory buffer to store the messages to be sent in the next round, as well as previously seen message identifiers. As all messages expire after \(\gamma\) rounds, we note that this memory requirement is at most \(N\cdot\gamma\)._
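Since Algorithm 1 is deferred to the appendix, the following is only our reading of the receive step shared by Flooding and FWA; the single `absorb` flag is what separates the two protocols, and all variable names are ours.

```python
# Our reading of the receive step of Algorithm 1: identical for Flooding and FWA
# except for the `absorb` flag; variable names are ours.
def on_receive(agent, msg, arm_set, seen_ids, est, cnt, neighbors, absorb):
    arm, reward, msg_id, last_forwarder, ttl = msg
    if msg_id in seen_ids:            # SNCF: duplicates are dropped on arrival
        return []
    seen_ids.add(msg_id)
    if arm in arm_set:                # the relayed sample is useful to this agent
        cnt[arm] = cnt.get(arm, 0) + 1
        est[arm] = est.get(arm, 0.0) + (reward - est.get(arm, 0.0)) / cnt[arm]
        if absorb:                    # FWA: absorb the message, stop its propagation
            return []
    if ttl <= 1:                      # TTL exhausted: the message is discarded
        return []
    # Flooding (or FWA when the arm is not held): forward to all neighbors except
    # the agent that just forwarded the message, with the TTL decremented.
    return [(w, (arm, reward, msg_id, agent, ttl - 1))
            for w in neighbors[agent] if w != last_forwarder]
```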
#### 3.2.1. Group Regret Bound of FWA
As similar as FWA is to Flooding algorithm-wise, their regret bounds are somewhat similar as well. To formalize this, we first consider a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), and let \(c:\mathcal{V}\to 2^{\mathcal{K}}\) be a multi-coloring with overlap allowed, i.e., it may be that \(c(v)\cap c(w)\neq\emptyset\) for \(\{v,w\}\in\mathcal{E}\). Let \(a\in\mathcal{K}\) and \(v,w\in\mathcal{V}\) be such that \(a\in c(v)\cap c(w)\).
**Definition 5**.: _A path \(v_{0}v_{1}\cdots v_{m}\) (of length \(m\)) with \(v_{0}=v\) and \(v_{m}=w\) is said to be \(a\)_**-free** _if \(a\notin c(v_{i})\) for all \(i=1,\cdots,m-1\)._
**Definition 6**.: _For \(\gamma\geq 1\) and \(a\in\mathcal{K}\), we define the \((a,c)\)_**-non-blocking graph power of \(\gamma\)-th order of \(\mathcal{G}\)**, denoted as \(\mathcal{G}^{\gamma}_{(a,c)}\), as a graph on \(\mathcal{V}\) with the edge set \(\mathcal{E}^{\gamma}_{(a,c)}\) such that \(\{v,w\}\in\mathcal{E}^{\gamma}_{(a,c)}\) iff there exists an \(a\)-free path from \(v\) to \(w\) in \(\mathcal{G}\) of length at most \(\gamma\)._
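One possible way to construct \(\mathcal{G}^{\gamma}_{(a,c)}\) from Definitions 5 and 6 is a breadth-first search that never expands vertices colored with \(a\) (so such vertices can only appear as endpoints of a path); the sketch below uses `networkx` and is our own construction, not taken from the paper's code.

```python
# One possible construction of the (a, c)-non-blocking graph power of Definition 6:
# BFS from every vertex, never expanding vertices colored with a (they may only be
# path endpoints), up to depth gamma. Our own sketch, not from the paper's code.
from collections import deque
import networkx as nx

def non_blocking_graph_power(G, colors, a, gamma):
    """colors: dict vertex -> set of arms, i.e. the multi-coloring c (here v -> K_v)."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    for src in G.nodes():
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if dist[u] == gamma:
                continue
            for w in G.neighbors(u):
                if w in dist:
                    continue
                dist[w] = dist[u] + 1
                H.add_edge(src, w)          # reached by an a-free path of length <= gamma
                if a not in colors[w]:      # only a-free vertices may be interior vertices
                    queue.append(w)
    return H
```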
In our setting, \(v\mapsto\mathcal{K}_{v}\) is the multi-coloring to be considered, which we denote as \(c\). In this case, our new suboptimality gap \(\Delta^{FWA,\gamma}_{a}\) is defined as follows:
\[\Delta^{FWA,\gamma}_{a}:=\max_{\mathcal{C}^{\gamma}_{(a,c)}}\left(\sum_{C\in\mathcal{C}^{\gamma}_{(a,c)}}\left(\frac{2}{\min_{v\in C}\Delta^{v}_{a}}-\frac{1}{\max_{v\in C}\Delta^{v}_{a}}\right)\right)^{-1}, \tag{6}\]
where \(\max_{\mathcal{C}^{\gamma}_{(a,c)}}\) is over all possible clique covers \(\mathcal{C}^{\gamma}_{(a,c)}\) of \(\mathcal{G}^{\gamma}_{(a,c)}\). Then we have the following:
**Theorem 3.3**.: _With absorb \(=True\), Theorem 3.1 holds with \(\Delta^{\gamma}_{a}\) replaced by \(\Delta^{FWA,\gamma}_{a}\)._
Similarly, with appropriate choices of clique covers, we have the following simplified asymptotic regret bound:
**Corollary 3.4**.: _With the same assumption as in Theorem 3.1, we have that_
\[\limsup_{T\to\infty}\frac{\mathbb{E}\left[R(T)\right]}{\log T}\leq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \Delta^{FWA,\gamma}_{a}>0\end{subarray}}\frac{4\alpha}{\Delta^{FWA,\gamma}_{a}}\leq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \tilde{\Delta}_{a}>0\end{subarray}}\frac{8\alpha\,\theta\left(\left[\mathcal{G}^{\gamma}_{(a,c)}\right]_{-a}\right)}{\tilde{\Delta}_{a}}, \tag{7}\]
_where we recall that \(\tilde{\Delta}_{a}=\min_{v\in\mathcal{V}_{-a}}\Delta^{v}_{a}\)._
As \(\mathcal{G}^{\gamma}_{(a,c)}\) is always a subgraph of \(\mathcal{G}^{\gamma}\), it can be easily seen that the regret upper-bound of Flooding is always better than that of FWA. But as we will argue later, at the price of _slightly_ worse regret, FWA obtains significantly better communication complexity than Flooding. Corollaries 3.2 and 3.4 imply that when the \(\tilde{\Delta}_{a}\)'s are fixed, the gap in the asymptotic regret upper-bounds of Flooding and FWA roughly scales with \(\delta\coloneqq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \tilde{\Delta}_{a}>0\end{subarray}}\delta_{a}\), where we define \(\delta_{a}:=\theta\left(\left[\mathcal{G}^{\gamma}_{(a,c)}\right]_{-a}\right)-\theta\left(\left[\mathcal{G}^{\gamma}\right]_{-a}\right)\). Thus, the main question is: for which graph topologies and arm distributions is \(\delta_{a}\) small?
To see this, let us consider two extreme cases. First, suppose that the arms are so heterogeneous that no agents of distance at most \(\gamma\) share arm \(a\). In this case, we have that \(\mathcal{G}^{\gamma}_{(a,c)}=\mathcal{G}^{\gamma}\), and \(\delta_{a}=0\). Now suppose that all agents have the same arm set, i.e., \(\mathcal{K}_{v}=\mathcal{K}\) for all \(v\in\mathcal{V}\), in which case FWA is equivalent to IRS, i.e., messages do not get forwarded beyond the direct neighbors. From the definition, we have that \(\mathcal{G}^{\gamma}_{(a,c)}=\mathcal{G}\) for all \(\gamma\geq 1\) and \(a\in\mathcal{K}\), i.e., \(\delta_{a}=\theta(\left[\mathcal{G}\right]_{-a})-\theta(\left[\mathcal{G}^{\gamma}\right]_{-a})\). Thus, for small \(\gamma\)'s and \(\mathcal{G}\) with _large_ average path length (Bordes and Kudzik, 2008) between agents containing \(a\), \(\delta_{a}\) is small.
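The gap term \(\delta_{a}\) involves two clique covering numbers, which are NP-hard to compute exactly (the experiments in Section 5 use an exact ILP for this); a quick upper-bound estimate can be obtained by greedily coloring the complement graph, as in the sketch below (our own shortcut, not the paper's procedure).

```python
# Quick upper-bound estimate of a clique covering number theta(G): it equals the
# chromatic number of the complement graph, which we approximate with a greedy
# coloring (the paper's own computation of delta uses an exact ILP instead).
import networkx as nx

def clique_cover_number_ub(G):
    if G.number_of_nodes() == 0:
        return 0
    coloring = nx.coloring.greedy_color(nx.complement(G), strategy="largest_first")
    return max(coloring.values()) + 1

# delta_a can then be estimated as
#   clique_cover_number_ub(G_nb.subgraph(V_minus_a)) - clique_cover_number_ub(nx.power(G, gamma).subgraph(V_minus_a)),
# where G_nb is the non-blocking power from the previous sketch and V_minus_a is the set
# of agents for which arm a is suboptimal.
```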
### Advantages of Flooding with Absorption
We now informally argue the advantages of using Flooding-with-Absorption over other protocols such as Flooding or IRS.
#### 3.3.1. Interpolation between IRS and Flooding
FWA naturally interpolates between IRS and Flooding in terms of information propagation. To see this, consider a graph with dense and sparse regions, such as the one in Figure 3. On the one hand, node \(v\) shares an arm with a number of other nodes that branch off from it. In this part of the graph, FWA is equivalent to IRS: a message containing the shared arm and its reward gets absorbed immediately at \(w\). On the other hand, in regions of the network where the arm that \(v\) pulled is rare, FWA acts like Flooding with \(\gamma\gg 1\), thereby ensuring that agents (like \(y\)) get information that is relevant to them.
Figure 2. Flooding with Absorption (FWA). **a**, An agent (\(v_{1}\)) pulls one of its arms (\(a_{2}\)). **b**, \(v_{1}\) sends a message \(m\) to its neighbors, with a TTL \(\gamma\). **c**, Since one receiver of the message (\(v_{3}\)) does not have \(a_{2}\) in its arm set, they forward \(m\) to their neighbors except the originator \(v_{1}\). The other receiver (\(v_{2}\)) has arm \(a_{2}\) in their arm set, and thus it absorbs \(m\).
We additionally note that in FWA, setting the TTL to larger values will always be less costly than doing so in Flooding, as the probability of congestion is much smaller.
#### 3.3.2. Comparable Regret Guarantees
The fact that FWA acts as a mix of IRS and Flooding results in a regret that is intuitively bounded by IRS from above (where messages get absorbed in just one step) and Flooding from below (where messages are not absorbed until the TTL expires). Theorems 3.1 and 3.3 combined give us an expression that governs the gap between the regret upper bounds of Flooding and FWA. From this, we can conclude that for the regret gap between FWA and Flooding to be small, either the graph is so sparse that the average path length (Beng et al., 2015) between agents _containing_ \(a\) is large, or the graph is dense, yet the arm distribution is sparse enough such that the same property holds. We emphasize that although the gap may be nonzero, the exploding communication complexity of Flooding demonstrates a clear trade-off between performance and communication complexity.
On the other hand, it is also expected that FWA will outperform (uniform) gossiping, in which each node only sends messages to one neighbor (or a small subset of them). Considering a graph with sparse links connecting very dense network regions, the probability of gossip hitting the sparse link before the TTL expires can be arbitrarily small.
#### 3.3.3. Communication Efficiency: Preventing Link Congestion
Having messages absorbed by agents that can profit from their information implies that the FWA protocol completely falls back to the baseline Flooding algorithm only in the case of a network where particular arms are very rare. This means that if there is just one ball of radius \(\gamma\) in the network such that two agents in this ball contain the same arm, which should be likely in practice, the communication complexity of FWA will already be lower than \(O(N\cdot|\mathcal{E}|\cdot\gamma)\), the communication complexity of Flooding. In networks of high density and few arms, this complexity will approach that of IRS, i.e. be lowered by a factor \(\gamma\). Hence, FWA has the advantage of being able to prevent network overload and heavy link congestion without much overhead or the need to fall back on gossip models, which is particularly salient for applications such as wireless networking.
### Limitations of FWA
We note that our protocol's performance (in regret) is dependent on both the graph topology and the distribution of arms among agents, both of which influence \(\Delta^{FWA,\gamma}_{a}\) (see Eqn. (6)). Although empirically we've verified that FWA performs well on network topologies that have both dense and sparse regions, there are certain cases in which the performance gap between FWA and Flooding is high. Such cases arise when the information dissemination of a certain arm \(a\) is blocked by a set of agents containing \(a\), so that it takes certain agents longer to gather enough information about \(a\) (FWA), compared to the case where no blocking occurs (Flooding). We explain such cases using two simple network topologies as well as a specifically designed arm distribution. The examples are shown in Figure 4. In Figure 4(a) (star topology), note that the communication between agents \(u\) and \(w\) (leaf nodes) is blocked by agent \(v\) (center node). In Figure 4(b) (ring topology), note that the communication between agents \(u\) and \(w\) is blocked by agent \(v\), i.e., \(v\) is "locked in" from both sides. The only other possible communication between \(u\) and \(w\), which is along the opposite path, is also not feasible as \(\gamma=2\) is less than the length of that path. Thus, for both examples, even though agent \(v\) will send her information to \(u\) and \(w\), the total amount of collaboration will be diminished considerably compared to Flooding, in which every pair of agents (nodes) can communicate with one another.
## 4. Regret Lower Bound
We consider a decentralized2 policy \(\Pi=(\pi^{v})_{v\in\mathcal{V}}\), where \(\pi^{v}:[T]\rightarrow\mathcal{P}(\mathcal{K})\) is the agent-wise policy followed by agent \(v\), possibly affected by other policies and the history. Let us denote \(N_{a}(T)\coloneqq\sum_{v\in\mathcal{V}_{a}}N_{a}^{v}(T)\). For the regret lower bound, we consider a rather general class of policies satisfying the following property, which has been widely adopted in the bandit literature (Grover and Leskovec, 2007; Grover and Leskovec, 2007; Grover and Leskovec, 2007):
Footnote 2: see Appendix A of (Grover and Leskovec, 2007) for the measure-theoretic definition of decentralized policy.
Definition 7.: _\(\Pi\) is said to be_ **individually consistent** _if, for any agent \(v\) and any \(a\in\mathcal{K}_{v}\), we have that \(\mathbb{E}[N_{a}(T)]=o(T^{c}),\;\forall c>0\)._
We then have the following regret lower bound:
Theorem 4.1.: _For any individually consistent policy \(\Pi\), the following holds:_
\[\liminf_{T\rightarrow\infty}\frac{\mathbb{E}[R(T)]}{\log T}\geq\sum_{\begin{subarray}{c}a\in\mathcal{K}\\ \tilde{\Delta}_{a}>0\end{subarray}}\frac{\tilde{\Delta}_{a}}{\inf_{P\in\mathcal{M}_{\sigma}}\left\{D_{\mathrm{KL}}(P_{a},P):\mu(P)-\mu(P_{a})>\tilde{\Delta}_{a}\right\}}, \tag{8}\]
_where we recall that \(\mathcal{M}_{\sigma}\) is the set of \(\sigma\)-sub-Gaussian distributions, and \(\tilde{\Delta}_{a}=\min_{v\in\mathcal{V}_{-a}}\Delta_{a}^{v}\)._
Figure 4. Two examples of unfavorable topologies and arm distributions for FWA with \(\gamma=2\).
Figure 3. FWA unifies IRS and Flooding by acting as the former in very dense areas with lots of shared arms (\(v\) and \(w\)) while acting as the latter in areas where nodes do not share arms and connections are sparse (\(v\) and \(y\)).
_Especially when \(\mathcal{M}_{\sigma}=\left\{\mathcal{N}(\mu,\sigma^{2}):\mu\in\mathbb{R}\right\}\), we have:_
\[\liminf_{T\to\infty}\frac{\mathbb{E}[R(T)]}{\log T}\geq\sum_{ \begin{subarray}{c}a\in K\\ \tilde{\Delta}_{a}>0\end{subarray}}\frac{2\sigma^{2}}{\tilde{\Delta}_{a}}. \tag{9}\]
The proof is immediate from the change-of-measure argument for the cooperative multi-agent bandit setting (Gosse et al., 2016; Gosse et al., 2017). Note that this asymptotic lower bound matches our asymptotic regret upper bounds of both Flooding and FWA up to some graph topology- and arm distribution-dependent constants; see Appendix D.2 for more detailed discussions.
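As a quick check of the Gaussian specialization (our own one-line verification, not spelled out in the text): plugging the closed-form KL divergence between two Gaussians with common variance \(\sigma^{2}\) into Eqn. (8) gives

\[D_{\mathrm{KL}}\big(\mathcal{N}(\mu_{a},\sigma^{2}),\mathcal{N}(\mu,\sigma^{2})\big)=\frac{(\mu-\mu_{a})^{2}}{2\sigma^{2}},\qquad\inf_{\mu:\,\mu-\mu_{a}>\tilde{\Delta}_{a}}\frac{(\mu-\mu_{a})^{2}}{2\sigma^{2}}=\frac{\tilde{\Delta}_{a}^{2}}{2\sigma^{2}},\]

so each summand in Eqn. (8) becomes \(\tilde{\Delta}_{a}\big/\big(\tilde{\Delta}_{a}^{2}/(2\sigma^{2})\big)=2\sigma^{2}/\tilde{\Delta}_{a}\), which is exactly Eqn. (9).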
## 5. Experimental Results
We present several experimental results comparing the proposed algorithms. The code is available in our GitHub repository3.
Footnote 3: [https://github.com/nick-jibles/heterogeneous-network-bandits](https://github.com/nick-jibles/heterogeneous-network-bandits)
The experiments were conducted on three random graph models: the Erdos-Renyi model (ER) (Erdos and Renyi, 1996; Barabasi and Albert, 1997), the Barabasi-Albert model (BA) (Barabasi and Albert, 1997), and the binary stochastic block model (SBM) (Barabasi and Albert, 1997). In particular, the binary SBM represents networks with few sparse links connecting the two dense parts. We set the number of agents to \(N=20\), the total number of arms to \(K=20\), and the number of arms per agent to be \(k=10\). We sample sets of size \(k\) as arm sets for all the agents, uniformly at random, which results in a random \(k\)-uniform hypergraph with \(20\) hyperedges on \(20\) vertices (see Section 2 for the analogy of the arm distribution as a hypergraph). For our time horizon, we set \(T=10^{4}\). We also assume that arm rewards follow Gaussian distributions, with the corresponding means uniformly sampled from \([0.1,1.0]\) and variance \(\sigma^{2}=1\). We compare the baseline UCB with no cooperation between agents (baseline), Flooding, (uniform) Gossip, IRS, and our FWA. For the Gossip algorithm, we assume that each agent forwards messages to only one random neighbor at a time. All experiments are repeated \(10\) times, and we plot the results in Figure 5. (The hyperparameters for the random graph models are deferred to Appendix E.)
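A rough reconstruction of this setup in Python might look as follows; the ER edge probability (the paper's generator hyperparameters are in Appendix E) and the random seed are placeholders of ours.

```python
# Rough reconstruction of the experimental setup; the ER edge probability and the
# seed are placeholders (the paper's generator hyperparameters are in Appendix E).
import random
import networkx as nx

N, K, k, T, sigma = 20, 20, 10, 10_000, 1.0

G = nx.erdos_renyi_graph(N, p=0.3, seed=0)     # alternatively barabasi_albert_graph or an SBM
arm_sets = {v: set(random.sample(range(K), k)) for v in G.nodes()}   # uniformly random k-subsets
means = {a: random.uniform(0.1, 1.0) for a in range(K)}              # agent-independent arm means

def pull(arm):
    """Gaussian reward with variance sigma^2, identical for every agent holding the arm."""
    return random.gauss(means[arm], sigma)
```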
### Baseline Comparison: Group Regret and Communication Complexity
_Regret._ For regret comparison in the three network models, we fix the TTL to be relatively small, with \(\gamma=2\), and observe the evolution of the total regret over time (Figure 5). Here, we do not assume that messages can be lost, i.e. communication is always successful. We observe that while Flooding achieves the best regret out of the tested protocols, as expected, our FWA never performs much worse. Also, it is observed that FWA beats IRS, Gossip, and the baseline with no communication. In fact, in BA (Figure 5b) and binary SBM (Figure 5c), we find that FWA performs almost on par with Flooding. Even in ER (Figure 5a), the gap between Flooding and FWA is much smaller than between FWA and IRS.
To see that this aligns with our theoretical results (Theorems 3.1 and 3.3) as well as our intuitions (Section 3.3), for each network we compute4 \(\delta\), the quantity that governs the regret gap between FWA and Flooding (see Section 3.2). The values are shown in Figure 5. Observe that BA (ER) achieves the lowest (highest) \(\delta\), which aligns well with the empirical observation that the gap between FWA and Flooding is the smallest (largest) in BA (ER).
Footnote 4: For the computation, we used the integer linear programming-based implementation, available in this external GitHub repository: [https://github.com/somacdivad/grinry](https://github.com/somacdivad/grinry).
_Communication complexity._ Considering the cumulative communication complexity over time, i.e., the total number of messages sent, we find that our FWA protocol leads to very significant gains across all three topologies. Looking at the corresponding plots in Figure 5, we can see that we reduce communication by up to almost \(50\%\) by using FWA. Note that this gain comes at a very small loss in terms of regret, in contrast to gossiping and IRS: both might be able to reduce communication complexity further, but suffer from a significantly increased regret on all tested networks.
### Ablation Study on the TTL \(\gamma\)
Here, we perform an ablation study of the algorithms' performance w.r.t. the TTL \(\gamma\). We vary \(\gamma\in\{1,2,3,4\}\), which we have found to be sufficient to understand the system behavior with regards to this parameter (note that \(\gamma=1\) corresponds to IRS). We plot the results in the bottom row of Figure 5.
_Regret._ We observe that increasing the TTL decreases the regret for all tested protocols, which lets Flooding remain on top for the three network models. However, we find that for the BA model, an increased TTL lets our FWA algorithm perform at least on par with Flooding, suggesting that the protocol is particularly attractive to use on networks with hubs such as the Internet graph (Beng et al., 2016).
_Communication Complexity._ Furthermore, when considering cumulative communication complexity as a function of \(\gamma\), it becomes clear that this is where our protocol is at a large advantage: even when we increase the TTL, the communication complexity of FWA only increases very slowly in comparison to Flooding. This suggests a notable trade-off: given a fixed communication budget, we can use a much larger value of \(\gamma\) in FWA compared to Flooding, which then leads to improved regret. Hence, our protocol can even outperform Flooding by tuning the TTL appropriately.
### Robustness to Message Loss
As a next step, we consider that messages can be randomly dropped when they are sent on any link. We assume this message loss happens with probability \(1-p\) independently for any two connected nodes, and vary the messaging success probability \(p\in\{0.05,0.10,\ldots,1.00\}\) to understand how our protocol performs in this setting compared to the baseline algorithms. We again set TTL \(\gamma=2\), and plot the results in Figure 7, presented in Appendix E. Here, too, we find that FWA and Flooding behave very similarly, attaining lower regret as the communication success probability \(p\) increases. In terms of communication complexity, it comes as little surprise that while both FWA and Flooding show a linear dependence of their complexity on \(p\), FWA outperforms Flooding for every value of the success probability. This clearly makes FWA a good candidate to use even when message delivery is not fully reliable.
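The message-loss experiment can be emulated by filtering each (receiver, message) pair produced in a round with an independent Bernoulli trial, e.g. as in this small hypothetical helper:

```python
# Hypothetical link-level message loss: each transmission over any edge succeeds
# independently with probability p and is silently dropped otherwise.
import random

def deliver(forwards, p_success):
    """forwards: list of (receiver, message) pairs queued for sending this round."""
    return [(w, m) for (w, m) in forwards if random.random() < p_success]
```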
### Link Congestion
Finally, we consider the problem of congestion on sparse links in networks of inhomogeneous density. Flooding is optimal in information dissemination but leads to heavy congestion on links that connect different dense regions. Such congestion can lead to
Figure 5. Comparing regret and communication complexity across different topologies and different settings.
significantly decreased performance, as messages may be queued with limited memory in reliable link protocols, or automatically discarded in non-reliable link protocols once more messages are being sent than the link can handle, among other highly undesirable latency effects in real-life network applications. This becomes an issue with Flooding as shown in Figure 6, where we visualize the number of messages sent over one of two links that connect two dense clusters in the binary SBM, with TTL \(\gamma=3\). Comparing Flooding and FWA, we see that FWA results in a reduction of messages per round of around 50-60% on average. This implies that our protocol exhibits significant benefits in terms of network health.
## 6. Conclusion and Future Works
In this work, we have described a novel setting for distributed multi-armed bandits, where agents communicate on an underlying network and do not all share the same arm set. We assume that each agent runs a UCB algorithm to identify their local best arm and that they communicate the information they receive to their neighbors, with the goal of minimizing cumulative group regret. First, we have provided theoretical upper and lower regret bounds when agents use the standard Flooding protocol to disseminate information. To deal with the very large communication complexity that arises from using Flooding in our setting, we have then introduced a new communication protocol, Flooding with Absorption (FWA). With FWA, agents forward information only if it pertains to an arm they themselves do not have in their arm set, whereas they absorb a message that gives information about one of their own arms. Experimentally, we've shown that FWA incurs only minimal group regret performance loss compared to Flooding, even with message losses, while leading to significantly improved communication complexity. We've also shown that FWA can reduce network congestion on sparse links by around 50%.
Part of our future work will be devoted to replacing UCB with other algorithms, like arm elimination or arm recommendation, and to refining the scope and ambition of the theoretical and experimental analysis. Other interesting avenues include improving the scalability of our protocol w.r.t. large networks and analyzing and improving its resilience with respect to non-homogeneous link failures, and even malicious (Byzantine) agents.
Finally, we might think of different problem settings where FWA actually outperforms Flooding in terms of regret. One potential setting to investigate is a nonstationary setting, such as restless or rotting bandits. When messages can go "stale" in this way, FWA can _implicitly_ prevent agents from incorporating information that is of no value anymore, without putting in such explicit constraints. For instance, in the collaborative rotting bandits setting, when two neighboring agents do share the same arm, absorption at one of them can prevent the other from pulling a rotting arm.
###### Acknowledgements.
We thank Ulrich Schmid (TU Wien) and Jung-hun Kim (KAIST) for their helpful comments and suggestions. Junghyun Lee and Se-Young Yun were supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)). Laura Schmid received support from the Stochastic Analysis and Application Research Center (SAARC) under the National Research Foundation of Korea grant NRF-2019R1A5A1028324.
|
2310.02212
|
Neutrino Emissions of TXS 0506+056 caused by a Supermassive Binary Black
Hole Inspiral?
|
The IceCube neutrino observatory detected two distinct flares of high-energy
neutrinos from the direction of the blazar TXS 0506+056: a $\sim 300$ TeV
single neutrino on September 22, 2017 and a $3.5\sigma$ signature of a dozen
TeV neutrinos in 2014/2015. In a previous work, it was shown that these two
episodes of neutrino emission could be due to an inspiral of a supermassive
binary black hole (SMBBH) close to its merger at the core of TXS 0506+056. Such
an inspiral can lead to quasi-periodic particle emission due to jet precession
close to the final coalescence. This model made predictions on when the next
neutrino emission episode must occur. On September 18, 2022, IceCube detected
an additional, $\sim 170$ TeV neutrino in directional coincidence with the
blazar TXS 0506+056, being consistent with the model prediction. Additionally,
in April 2021, the Baikal Collaboration reported the detection of a $224\pm 75$
TeV neutrino, with TXS 0506+056 being in the uncertainty range of the event
direction. We show that these four distinct flares of neutrino emission from
TXS 0506+056 are consistent with a precessing jet scenario, driven by an
inspiraling SMBBH. Using improved modeling, we are now able to constrain the
total mass together with the mass ratio for the binary. We predict when the
next neutrino flares from TXS 0506+056 should be happening. Finally, we
estimate the detection potential of the Laser-interferometer Space Antenna
(LISA) for the merger in the future.
|
Ilja Jaroschewski, Julia Becker Tjus, Armin Ghorbanietemad, Imre Bartos, Emma Kun, Peter L. Biermann
|
2023-10-03T17:09:47Z
|
http://arxiv.org/abs/2310.02212v1
|
# Neutrino Emissions of TXS 0506+056 caused by a Supermassive Binary Black Hole Inspiral?
###### Abstract:
The IceCube neutrino observatory detected two distinct flares of high-energy neutrinos from the direction of the blazar TXS 0506+056: a \(\sim\) 300 TeV single neutrino on September 22, 2017 and a 3.5\(\sigma\) signature of a dozen TeV neutrinos in 2014/2015. In a previous work, it was shown that these two episodes of neutrino emission could be due to an inspiral of a supermassive binary black hole (SMBBH) close to its merger at the core of TXS 0506+056. Such an inspiral can lead to quasi-periodic particle emission due to jet precession close to the final coalescence. This model made predictions on when the next neutrino emission episode must occur. On September 18, 2022, IceCube detected an additional, \(\sim\) 170 TeV neutrino in directional coincidence with the blazar TXS 0506+056, being consistent with the model prediction. Additionally, in April 2021, the Baikal Collaboration reported the detection of a \(224\pm 75\) TeV neutrino, with TXS 0506+056 being in the uncertainty range of the event direction.
We show that these four distinct flares of neutrino emission from TXS 0506+056 are consistent with a precessing jet scenario, driven by an inspiraling SMBBH. Using improved modeling, we are now able to constrain the total mass together with the mass ratio for the binary. We predict when the next neutrino flares from TXS 0506+056 should be happening. Finally, we estimate the detection potential of the Laser-interferometer Space Antenna (LISA) for the merger in the future.
## 1 Introduction
Ever since the detection of a \(\sim 300\,\mathrm{TeV}\) neutrino from the direction of the blazar TXS 0506+056 at a \(3\sigma\) level by IceCube, and the coincident detection of a \(\mathrm{GeV}\) gamma-ray flare by the Fermi Large Area Telescope [1], this source has been one of the main candidates for a source of cosmic rays (CRs) and neutrinos. A blind analysis of \(10\,\mathrm{yr}\) of IceCube data from the same direction revealed another neutrino flare in 2014/2015 with \(\sim 10\,\mathrm{TeV}\) at a \(3.5\sigma\) level [2]. However, gamma-ray data at that time indicated that the source was in a quiescent state.
In September 2022, IceCube reported the detection of another \(\sim 170\,\mathrm{TeV}\) neutrino from the direction of the blazar [3], but with no associated gamma-ray detection. This track-like event has a directional uncertainty of \(\sim 3.6^{\circ}\) (90% containment) and, with a signalness (probability of being of astrophysical origin) of 42%, was classified as a "bronze alert". A reason for this high uncertainty could be that the track skimmed the edge of the IceCube detector and was not fully contained in it [3]. TXS 0506+056 lies in that uncertainty region, with a separation of \(\sim 3.06^{\circ}\) from the best-fit event position.
In addition, the Baikal Collaboration reported the detection of a \(224\pm 75\,\mathrm{TeV}\) neutrino in April 2021 with its uncertainty range covering the direction of TXS 0506+056 [4]. This neutrino is a cascade event and has an uncertainty of \(\sim 6^{\circ}\) (90% containment), while TXS 0506+056 has a separation of \(\sim 5.33^{\circ}\) from the best-fit event position. Its signalness was reported as 97.1%.
Though a combined multimessenger modeling of each of these neutrino flares in combination with gamma-ray data is challenging (see e.g. [5, 6]), the distinct arrival times of these flares can be explained by a supermassive binary black hole (SMBBH) close to its merger, which is located at the core of the blazar [7]. In its inspiral stage, the emission of gravitational waves (GWs) is the leading mechanism through which the binary loses orbital energy. As a consequence, due to spin-orbit coupling, the originally unaligned spins of both supermassive black holes (SMBHs) realign themselves, changing the orientation of the associated jets [8]. During this realignment, the jets perform a precessing movement, colliding with the surrounding matter and producing neutrinos due to proton-proton interactions [9]. If such a precessing jet points at Earth, a quasi-periodic neutrino signal will be received, one signal each time the jet completes a precession. The time between the signals shortens with each precession, which is why the signal is quasi-periodic rather than strictly periodic. The jet precession model was first developed in [10]. It was predicted that one flare should occur in the years 2022 and 2023. In [11], it was shown that the 2022 IceCube neutrino agrees with this prediction. For that, an extension of the model was used [12], which considers small mass ratios.
In the following, the jet precession model is extended to mass ratios up to and including unity by allowing contributions of the second spin and the orbital angular momentum, and it is applied to TXS 0506+056. Though the neutrino event detected by the Baikal Collaboration has a high chance of being of astrophysical origin, it remains unclear whether it originated from this source, as it has a large uncertainty region (being a cascade event). This is why we investigate with the updated model whether all four distinct neutrino flares could originate from TXS 0506+056 if it hosts an inspiraling SMBBH at its core. Alternatively, the possibility that only the IceCube neutrinos originated from the source is investigated.
## 2 The Jet precession Model
The model presented here is an extension of the jet precession model first described in [10]. That model focused on SMBBH mergers with the most common mass ratios, between \(q=1/3\) and \(q=1/30\), with \(q=m_{2}/m_{1}\) and the masses of the binary \(m_{1}\geq m_{2}\). Since the spin magnitude \(S_{i}\) is proportional to \(m_{i}^{2}\), the contribution of the second spin \(S_{2}\) is ignored in that model, as its magnitude, with \(S_{2}/S_{1}\approx q^{2}\), is much smaller than that of \(S_{1}\) [8].
In contrast, this new model also considers mass ratios smaller than \(1/30\) and higher than \(1/3\) up to and including a mass ratio of unity and includes the second spin as well. The schematic overview of the jet precession model is shown in Fig. 1. At a time \(t_{1}\), the supermassive binary black hole system enters the inspiral stage. The spin vectors \({\bf S}_{1}\) and \({\bf S}_{2}\) are most likely unaligned at this time due to the initial random orbital spin orientation of the SMBHs in the galaxy centers of the preceding galaxy merger [13]. In this stage, the spins couple with the orbit, so that they perform a precessional motion around the orbital angular momentum \({\bf L}\) with the angular velocity \(\Omega_{{\rm p},i}\)[8]:
\[\dot{\bf S}_{i}=\Omega_{{\rm p},i}\times{\bf S}_{i}=\frac{G(4+3q)}{2c^{2}r^{3} }{\bf L}\times{\bf S}_{i}. \tag{1}\]
Since the orientation of the total angular momentum vector \({\bf J}={\bf L}+{\bf S}_{1}+{\bf S}_{2}\) is constant in this motion, the precession can be described around \({\bf J}\), with \({\bf L}\) also performing a precession around it. During the emission of GWs, the magnitudes of \({\bf L}\) and \({\bf J}\) are shrinking, while the angle \(\alpha\) between \({\bf L}\) and \({\bf J}\) increases. At the same time, the angles \(\beta_{1}\) between \({\bf S}_{1}\) and \({\bf J}\) and \(\beta_{2}\) between \({\bf S}_{2}\) and \({\bf J}\) decrease, since \(\alpha+\beta_{1}\) and \(\alpha+\beta_{2}\) stay constant [8]. In the 2.5 post-Newtonian (PN) approximation, the direction of \({\bf J}\) stays constant, when the precessional angular velocity of the spins \(\Omega_{{\rm p},i}\) is larger than \(\dot{\alpha}\), which is the case during the inspiral stage.
At a later time \(t_{2}\) during the inspiral stage, the angle \(\alpha\) has increased and the angles \(\beta_{1}\) and \(\beta_{2}\) have decreased so much that, in this case, the spin \({\bf S}_{1}\) and thus the jet of the heavier black hole \(m_{1}\) could point at Earth. That is, if Earth lies inside the opening angle of the jet, indicated by the orange area in Fig. 1. Due to the precession of the jet around \({\bf J}\), the jet cone will move along the blue ring area and point at Earth again after a time \(\Delta t\). Since the angle \(\beta_{1}\) decreases, the blue ring will get smaller with time, shortening the time between the potential signals received at Earth. This continues until a time \(t_{3}\) is reached, at which Earth lies outside the blue area, so that the jet can no longer point at Earth.
The directional angle \(\phi\) of the precessing jet from the BH with the spin \({\bf S}_{1}\) can be determined by integrating its precessional velocity and delivers:
\[\phi(\Delta T_{\rm GW},M,q,\alpha,\beta_{1},\beta_{2}) = 2\left(4+3q\right)\,\left(\frac{5\,c}{32}\right)^{\frac{1}{4}} \,\left(\frac{\eta}{G\,M}\right)^{\frac{1}{4}}\,\left(\Delta T_{\rm GW}\right) ^{\frac{1}{4}}\,\left[q^{-1}\cos(\beta_{1})+q\cos(\beta_{2})\right] \tag{2}\] \[+ \frac{4(4+3q)}{3}\left(\frac{5}{32}\right)^{\frac{5}{8}}c^{\frac {9}{8}}\left(\frac{\eta}{G\,M}\right)^{\frac{3}{8}}\,(\Delta T_{\rm GW})^{\frac {3}{8}}\,\cos(\alpha)\] \[+ \psi(\tau,M,q,\alpha,\beta_{1},\beta_{2})\.\]
Here, \(G\) is the gravitational constant, \(M=m_{1}+m_{2}\) the total mass of the SMBBH, \(c\) the speed of light and \(\eta=q/(1+q)^{2}\). The remaining time until merger is \(\Delta T_{\rm GW}\). It is defined as [8]:
\[\Delta T_{\rm GW}=\frac{5\,G\ M}{32\,c^{3}}\varepsilon^{-4}\,\eta^{-1}\, \tag{3}\]
with the PN parameter \(\varepsilon\approx v/c\). A value of \(\varepsilon=10^{-3}\) denotes the beginning of the inspiral stage, while the value \(\varepsilon=10^{-1}\) marks its end. The factor \(\psi(\tau,M,q,\alpha,\beta_{1},\beta_{2})\) is an integration constant, which describes the initial direction of the jet at a time \(\tau\) during the inspiral stage.
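As a quick numerical illustration of Eq. (3), a minimal sketch follows (not taken from the paper; the total mass, mass ratio and PN parameter below are example values only):

```python
# Evaluate Eq. (3), the remaining time until merger, for illustrative input values.
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m/s]
M_SUN = 1.989e30   # solar mass [kg]
YEAR = 3.156e7     # one year [s]

def time_to_merger(M_total_solar, q, eps):
    """Delta T_GW = 5 G M / (32 c^3) * eps^(-4) * eta^(-1), with eta = q / (1 + q)^2."""
    eta = q / (1.0 + q) ** 2
    M = M_total_solar * M_SUN
    return 5.0 * G * M / (32.0 * c ** 3) * eps ** -4 / eta

# Example: a 7e8 solar-mass binary with q = 0.65 near the end of the inspiral (eps = 0.1)
print(time_to_merger(7e8, 0.65, 0.1) / YEAR, "years")
```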
Since the jet points at Earth every \(360^{\circ}\pm\zeta\), the following relationship can be established (see [12]):
\[\phi(\Delta T_{\rm GW},M,q,\alpha,\beta_{1},\beta_{2})=\phi(\Delta T_{\rm GW} -P_{\rm jet},M,q,\alpha,\beta_{1},\beta_{2})\pm\zeta\,. \tag{4}\]
The precession period \(P_{\rm jet}\) denotes the time that has passed between two signals from the same source. This relation describes that the jet angle is the same at a time \(\Delta T_{\rm GW}\) until merger and a later time \(\Delta T_{\rm GW}-P_{\rm jet}\) until merger, with a jet cone of \(\zeta\). That way, for a given parameter combination of \(M,q,\alpha,\beta_{1},\beta_{2}\), \(\zeta\) and a measured time between two flares from the same source \(P_{\rm jet}\), the time until the merger \(\Delta T_{\rm GW}\) of the source, in case it is an inspiral SMBBH, can be determined.
Exploiting the same relation, but with the time until merger \(\Delta T_{\rm GW}\) now determined, Eq. 4 can be used to calculate the next, shorter period \(P_{\rm jet,2}\) between the second time that the jet pointed at Earth and the future third time it will point at Earth:
\[\phi(\Delta T_{\rm GW},M,q,\alpha,\beta_{1},\beta_{2})=\phi(\Delta T_{\rm GW }-P_{\rm jet}-P_{\rm jet,2},M,q,\alpha,\beta_{1},\beta_{2})\pm 2\zeta\,. \tag{5}\]
This way, a prediction can be made when the next flare should occur. This relation can be expanded until an \(n\)-th flare from the SMBBH. The condition is that the binary did not merge until the flare and that Earth is still in the path of the jet, as can be seen in Fig. 1 at time \(t_{2}\).
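Numerically, Eqs. (4) and (5) amount to a one-dimensional root-finding problem once \(\phi\) from Eq. (2) is implemented for a fixed parameter set (the integration constant \(\psi\) cancels in the differences). The sketch below illustrates only this step; the toy \(\phi\), its coefficients, and the choice of taking the right-hand side as one full precession of \(360^{\circ}\) (i.e. \(\zeta\) set to zero) are placeholder assumptions for illustration, not the authors' implementation.

```python
# Schematic root-finding sketch for Eqs. (4)-(5). `phi` is a stand-in for Eq. (2) with
# fixed (M, q, alpha, beta_1, beta_2); it only mimics the DT^(1/4) + DT^(3/8) growth of
# the two terms, with made-up coefficients. Times are in years, angles in degrees.
from scipy.optimize import brentq

def phi(DT, A=2.0e3, B=4.0e2):
    return A * DT ** 0.25 + B * DT ** 0.375   # toy stand-in for Eq. (2), psi omitted

def time_to_merger_from_period(P_jet, target=360.0):
    # Eq. (4): find DT such that phi(DT) - phi(DT - P_jet) = target.
    f = lambda DT: phi(DT) - phi(DT - P_jet) - target
    return brentq(f, P_jet * 1.0001, 1.0e4)

def next_period(DT, P_prev, target=360.0):
    # Eq. (5): find P2 such that phi(DT - P_prev) - phi(DT - P_prev - P2) = target.
    f = lambda P2: phi(DT - P_prev) - phi(DT - P_prev - P2) - target
    return brentq(f, 1.0e-6, DT - P_prev - 1.0e-6)

DT = time_to_merger_from_period(P_jet=2.78)   # 2.78 yr between the 2014/15 and 2017 flares
print(DT, next_period(DT, 2.78))              # remaining time to merger, next (shorter) period
```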
## 3 Prediction of Neutrino Flares from TXS 0506+056 with the Baikal Neutrino
For the prediction of future neutrino flares from TXS 0506+056, the time between the neutrino detections in 2014/2015 [2] and 2017 [1] with \(P_{\rm jet,1}=2.78\pm 0.15\) years was taken as an input in the model. This way, the occurrence of the 2022 neutrino flare was predicted in [10] and confirmed in [11].
Here, the expanded model is used to test whether the 2021 neutrino detected by the Baikal collaboration [4] could also originate from TXS 0506+056 and be consistent with the jet precession model. For that, the model is tested with a large parameter set: the total mass is varied between \(7\cdot 10^{7}\,\mathrm{M}_{\odot}\), \(1\cdot 10^{8}\,\mathrm{M}_{\odot}\), \(3\cdot 10^{8}\,\mathrm{M}_{\odot}\), \(5\cdot 10^{8}\,\mathrm{M}_{\odot}\) and \(7\cdot 10^{8}\,\mathrm{M}_{\odot}\), and the half-opening angle \(\zeta\) between \(3^{\circ}\) and \(6^{\circ}\) in \(0.1^{\circ}\) steps. As for the other angles, \(\alpha\) is tested between \(75^{\circ}\) and \(90^{\circ}\), while \(\beta_{1}\) and \(\beta_{2}\) are assumed to lie between \(0^{\circ}\) and \(20^{\circ}\). The mass ratio \(q\) is set between 0.01 and unity.
Figure 1: Schematic overview of the jet precession model with two supermassive black holes close to their merger at the center. The jet direction of the heavier black hole \({\bf S}_{1}\), the lighter black hole \({\bf S}_{2}\), the orbital angular momentum \({\bf L}\) and the total angular momentum \({\bf J}\) are shown. Each time the orange area crosses Earth, a possible neutrino signal can be detected. Figure modified from [10].
Labeling the 2014/2015 neutrino detection as the first neutrino flare from TXS 0506+056 and the 2017 neutrino detection as the second, there are two possibilities for the Baikal neutrino flare: (i) either it is the third neutrino flare and the 2022 neutrino detection the 4th, or (ii) it is the 4th neutrino flare and the 2022 detection thus the 5th, with the third neutrino flare still hidden in the yet-to-be-analyzed IceCube data, as suggested in [11].
Performing a parameter study with the above-mentioned parameters yields that there is no parameter combination for which (i) applies. The main reason is that the predicted time between the third and 4th flare is larger than the actual time between the 2021 and 2022 neutrinos. However, the situation is different for case (ii). The parameter study showed that there are several cases possible for the Baikal neutrino to originate from this blazar in case of an inspiraling SMBBH at its core. The best-fit parameters are \(M=7\cdot 10^{8}\,\mathrm{M}_{\odot}\), \(\zeta=4.6^{\circ}\), \(\alpha=89^{\circ}\), \(\beta_{1}=\beta_{2}=20^{\circ}\) and \(q=0.65\). The prediction curves for these values as a function of the mass ratio \(q\) are shown in Fig. 2. The lower x-axis shows the time in years, the upper x-axis the time in MJD, and the y-axis the mass ratio. The gray area marks the occurrence of the 2014/2015 neutrino flare, while the dashed-dotted, dotted and solid vertical lines show the times of the 2017, 2021 and 2022 neutrino flares, respectively. In blue, the predicted time bands for the next neutrino flares are shown. The prediction for the next neutrino flare is highlighted in purple for better distinction. The green area indicates the time band during which the actual merger of the binary will occur. In orange, the observational window of the Laser-interferometer Space Antenna (LISA) is drawn, expected to lie between 2033 and 2043. It should be sensitive to GWs from SMBBH mergers up to a total mass of \(\sim 10^{8}\,\mathrm{M}_{\odot}\). Finally, the red crossed area marks mass ratios for which the model does not work; this entails the condition \(q\geq 0.26\) for the model to apply, because at smaller mass ratios the 2022 neutrino flare lies outside the prediction bands for the 5th flare.
As can be seen in Fig. 2, all four neutrino flares are in agreement with a jet precession origin if the mass ratio of the binary is above \(q=0.25\). However, at such mass ratios, the binary could merge as early as the year 2024 and in the year 2029/2030 at the latest. This is well before the first observational run of LISA, so that no GW detection of the merging binary would be possible.
Since the jet will most likely point at Earth again before the merger, a prediction of when the next neutrino flare might occur can be made. For the allowed mass ratios, in case all four neutrino flares originated from TXS 0506+056, the next neutrino flare is expected between November 2023 and November 2025 (purple area in Fig. 2). In that case, the third, not yet detected neutrino flare must have occurred between May 2019 and October 2020 and must still be hidden in the IceCube data.
## 4 Prediction of Neutrino Flares from TXS 0506+056 without the Baikal Neutrino
The same parameter study as described above has been performed with only the three neutrino flares detected by IceCube. Then, the 2022 neutrino is the 4th neutrino from the source. In this case, the best-fit parameters are \(M=3\cdot 10^{8}\,\mathrm{M}_{\odot}\), \(\zeta=4.5^{\circ}\), \(\alpha=83^{\circ}\), \(\beta_{1}=\beta_{2}=0^{\circ}\) and \(q=0.45\).
The respective prediction curves in dependence of the mass ratio \(q\) are illustrated in Fig. 3. The descriptions and axes are the same as in Fig. 2, with the exception that the red area indicates the mass ratio for which the merger of the binary will not occur inside the expected LISA window.
As seen in Fig. 3, all mass ratios are consistent with a currently ongoing SMBBH merger for this parameter combination. However, LISA will be able to detect the merger of such a binary only if it has a mass ratio between \(q=1\) and \(q\approx 0.15\). And even then, there is the possibility that the merger happens outside the assumed LISA observational window, as can be seen in Fig. 3.
Nevertheless, for such an SMBBH scenario, a third neutrino flare must have happened between November 2019 and May 2021. The next neutrino flare is then expected to happen between July 2023 and March 2027.
## 5 Conclusions
We expanded the analytical jet precession model, introduced in [10], by including the second spin and the orbital angular momentum in the equations. An application of the model to the three neutrino flares detected by IceCube from the direction of the blazar TXS 0506+056 and the one neutrino detected by the Baikal Collaboration shows that all neutrino flares are consistent with a precessing jet induced by an inspiraling SMBBH close to its merger. However, the actual merger of the binary will then occur before LISA is online and will thus not be detectable in GWs.
In case that only the IceCube neutrinos originated from the blazar, LISA will be able to detect GWs from the merger, if the mass ratio is between 1 and \(\sim 0.15\).
Figure 2: Prediction of the times for neutrino flares from TXS 0506+056 in case of an inspiral SMBBH close to its merger at its core and time of its merger in dependence of the mass ratio \(q\). The assumption is that all four distinct neutrino flares detected originated from the blazar.
In both cases, a neutrino flare should still be hidden in the as-yet unanalyzed IceCube data between 2019 and 2021, provided the conditions at the source were ideal for neutrino production. The next neutrino flare should happen between November 2023 and November 2025 if all four neutrino flares originated from the blazar, and between July 2023 and March 2027 in case only the IceCube neutrinos originated from it. Again, the requirement is that the conditions for neutrino production are fulfilled in the source environment.
It should be noted that although all four distinct neutrino flares investigated are consistent with the jet precession model, they do not constitute a confirmation of it. This is because the uncertainty regions of the 2021 Baikal neutrino and the 2022 IceCube neutrino are large (\(\sim 6^{\circ}\) [4] and \(\sim 3.6^{\circ}\) [3], respectively), so that the possibility remains that a source other than TXS 0506+056 is responsible for these two neutrino signals.
## Acknowledgments
We acknowledge support from the Deutsche Forschungsgemeinschaft DFG, within the Collaborative Research Center SFB1491 "Cosmic Interacting Matters - From Source to Signal" (project No. 445052434) and from the project "MICRO" (project No. 445990517).
|
2310.19154
|
A central limit theorem for Hilbert modular forms
|
For a prime ideal $\mathfrak{p}$ in a totally real number field $L$ with the
adele ring $\mathbb{A}$, we study the distribution of angles
$\theta_\pi(\mathfrak{p})$ coming from Satake parameters corresponding to
unramified $\pi_\mathfrak{p}$ where $\pi_\mathfrak{p}$ comes from a global
$\pi$ ranging over a certain finite set $\Pi_{\underline{k}}(\mathfrak{n})$ of
cuspidal automorphic representations of GL$_2(\mathbb{A})$ with trivial central
character. For such a representation $\pi$, it is known that the angles
$\theta_\pi(\mathfrak{p})$ follow the Sato-Tate distribution. Fixing an
interval $I\subseteq [0,\pi]$, we prove a central limit theorem for the number
of angles $\theta_\pi(\mathfrak{p})$ that lie in $I$, as
$\mathrm{N}(\mathfrak{p})\to\infty$. The result assumes $\mathfrak{n}$ to be a
squarefree integral ideal, and that the components in the weight vector
$\underline{k}$ grow suitably fast as a function of $x$.
|
Jishu Das, Neha Prabhu
|
2023-10-29T21:10:31Z
|
http://arxiv.org/abs/2310.19154v1
|
# A central limit theorem for Hilbert modular forms
###### Abstract.
For a prime ideal \(\mathfrak{p}\) in a totally real number field \(L\) with the adele ring \(\mathbb{A}\), we study the distribution of angles \(\theta_{\pi}(\mathfrak{p})\) coming from Satake parameters corresponding to unramified \(\pi_{\mathfrak{p}}\) where \(\pi_{\mathfrak{p}}\) comes from a global \(\pi\) ranging over a certain finite set \(\Pi_{\underline{k}}(\mathfrak{n})\) of cuspidal automorphic representations of \(\mathrm{GL}_{2}(\mathbb{A})\) with trivial central character. For such a representation \(\pi\), it is known that the angles \(\theta_{\pi}(\mathfrak{p})\) follow the Sato-Tate distribution. Fixing an interval \(I\subseteq[0,\pi]\), we prove a central limit theorem for the number of angles \(\theta_{\pi}(\mathfrak{p})\) that lie in \(I\), as \(\mathrm{N}(\mathfrak{p})\to\infty\). The result assumes \(\mathfrak{n}\) to be a squarefree integral ideal, and that the components in the weight vector \(\underline{k}\) grow suitably fast as a function of \(x\).
2020 Mathematics Subject Classification: Primary: 11F41, 11F72, Secondary: 11F30
## 1. Introduction
The statistics of eigenvalues of Hecke operators have been a topic of interest for a few decades. More recently, following the series of papers which settled the Sato-Tate conjecture in various settings, such as [10] and [1], the study of error terms in these theorems has received significant attention, see [14, 15, 16, 17] for example. This article investigates the statistics of the error term in the Sato-Tate theorem for Hilbert Modular Forms, building on a similar study conducted in [14], which we briefly describe. Let \(\mathcal{F}_{N,k}\) be the set of normalized non-CM cusp forms of weight \(k\) and level \(N\) that are also eigenforms for Hecke operators \(T_{n}\) acting on spaces of cusp forms \(S(N,k)\). In particular, any \(f\in\mathcal{F}_{N,k}\) has a Fourier expansion
\[f(z)=\sum_{n=1}^{\infty}n^{\frac{k-1}{2}}a_{f}(n)q^{n}\]
with \(a_{f}(1)=1\) and \(q=e^{2\pi iz}\). By the work of Deligne, it is known that for \(p\) prime with \(\gcd(p,N)=1\), the sequence \(a_{f}(p)\) lies in \([-2,2]\) and the Sato-Tate theorem reveals the distribution of this sequence. That is, for a fixed \(f\in\mathcal{F}_{N,k}\), if we set \(a_{f}(p)=2\cos\theta_{f}(p)\), and fix an interval \(I\subseteq[0,\pi]\) and let
\[N_{I}(f,x):=\#\{p\leq x:\gcd(p,N)=1,\theta_{f}(p)\in I\},\]
then
\[\lim_{x\to\infty}\frac{N_{I}(f,x)}{\pi(x)}=\int_{I}\mu_{\infty}(t)\;dt.\]
Here, \(\pi(x)\) denotes the number of primes not exceeding \(x\), and \(\mu_{\infty}(t)=\frac{2}{\pi}\sin^{2}t\) is the function associated with the Sato-Tate measure of the interval \(I\). In [14], it was shown that under suitable growth conditions of the weight \(k=k(x)\), the error term in the above asymptotic statement exhibits a Gaussian distribution when one averages over \(f\) in \(\mathcal{F}_{N,k}\). In this article, we prove the analogous result in the Hilbert modular setting.
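As a concrete illustration of the limiting measure (a worked example, not taken from [14]): for the middle half \(I=[\frac{\pi}{4},\frac{3\pi}{4}]\),
\[\int_{I}\mu_{\infty}(t)\;dt=\frac{2}{\pi}\int_{\pi/4}^{3\pi/4}\sin^{2}t\;dt=\frac{1}{\pi}\left[t-\frac{\sin 2t}{2}\right]_{\pi/4}^{3\pi/4}=\frac{1}{2}+\frac{1}{\pi}\approx 0.82,\]
so asymptotically about \(82\%\) of the angles \(\theta_{f}(p)\) land in this interval.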
Let \(L\) denote a totally real field of degree \(d\) over \(\mathbb{Q}\) and \(\mathcal{O}\) be its ring of integers. Let \(\nu=\nu_{\mathfrak{p}}\) be the discrete valuation associated with the prime ideal \(\mathfrak{p}\) in \(\mathcal{O}.\) The local field \(L_{\nu}\) is the completion of \(L\) with respect to the topology induced by \(\nu\), and \(\mathcal{O}_{\nu}\) denotes the ring of integers in the local field \(L_{\nu}.\) Let \(\mathbb{A}\) denote the Adele ring of \(\mathbb{Q}\) and \(\mathbb{A}_{f}\) denote the finite adeles. Let \(G\) denote the algebraic group that is the Weil restriction of scalars of \(\mathrm{GL}_{2/L}\) from \(L\) to \(\mathbb{Q}\). Let \(G_{f}=G(\mathbb{A}_{f})\) and \(G_{\infty}=G(\mathbb{R})\). For an integral ideal \(\mathfrak{n}\), let \(K_{0}(\mathfrak{n})\) be the congruence subgroup of level \(\mathfrak{n}.\) If the prime factorisation of \(\mathfrak{n}\) is given by \(\mathfrak{n}=\mathfrak{q}_{1}^{a_{1}}\ldots\mathfrak{q}_{r}^{a_{r}}\), then the congruence subgroup \(K_{0}(\mathfrak{n})\) is defined as
\[K_{0}(\mathfrak{n})=\Bigg{\{}\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\prod_{\nu}GL_{2}(\mathcal{O}_{\nu})\,\Big{|}\,c_{ \mathfrak{q}_{i}}\equiv 0\,\mathrm{mod}\,\mathfrak{q}_{i}^{a_{i}}\,\mathrm{for\; all}\,i=1,\ldots,r\Bigg{\}}.\]
Let the weight be given by the \(d\)-tuple \(\underline{k}=(k_{1},\ldots,k_{d})\) where \(k_{i}\) is an even integer and \(k_{i}\geq 4\) for all \(i=1,\ldots,d\). Let \(\mathcal{A}_{\underline{k}}^{\mathrm{cusp}}(G(\mathbb{Q})\backslash G(\mathbb{A}))\) denote the space of cuspidal automorphic forms on \(G(\mathbb{A})\) with trivial central character (see for instance [1, Section 4] for definitions). The group \(G(\mathbb{A})\) acts on \(\mathcal{A}_{\underline{k}}^{\mathrm{cusp}}(G(\mathbb{Q})\backslash G(\mathbb{ A}))\) by right translations. An irreducible representation \(\pi\) of \(G(\mathbb{A})\) is called a cuspidal automorphic representation if it is isomorphic to a subrepresentation of \(\mathcal{A}_{\underline{k}}^{\mathrm{cusp}}(G(\mathbb{Q})\backslash G( \mathbb{A})).\) We have the following decomposition of \(\pi=\pi_{f}\otimes\pi_{\infty}\) where \(\pi_{f}\) and \(\pi_{\infty}\) are representations of \(G_{f}\) and \(G_{\infty}\) respectively.
Let \(\Pi_{\underline{k}}(\mathfrak{n})\) be the set of cuspidal unitary automorphic representations \(\pi\) with respect to \(G\) such that
* \(\pi_{f}\) has a \(K_{0}(\mathfrak{n})\) fixed vector,
* \(\pi_{\infty}=\otimes_{i=1}^{d}D_{k_{i}-1}\), where \(D_{k}\) is the discrete series representation of \(\mathrm{GL}_{2}(\mathbb{R})\) with minimal \(K\)-type of weight \(k+1\).
The set \(\Pi_{\underline{k}}(\mathfrak{n})\) is finite (see (2.4)). Let \(\pi\in\Pi_{\underline{k}}(\mathfrak{n})\) and consider \(\mathfrak{p}\), a prime ideal in \(L\) for which \(\pi_{\mathfrak{p}}\) is unramified. The Satake parameter associated with \(\pi_{\mathfrak{p}}\) is a conjugacy class
\[\begin{pmatrix}e^{i\pi\theta_{\pi}(\mathfrak{p})}&\\ &e^{-i\pi\theta_{\pi}(\mathfrak{p})}\end{pmatrix}\in\mathrm{SU}(2)/\sim\]
with \(\theta_{\pi}(\mathfrak{p})\in[0,1].\) For the classical setting of \(L=\mathbb{Q}\) and a classical eigenform \(f\), let \(\pi(f)\) denote the automorphic representation associated with \(f\). The angles \(\theta_{f}(p)\) considered in [10] are exactly the \(\theta_{\pi}(\mathfrak{p})\) with \(\mathfrak{p}=\langle p\rangle\).
We set
\[\pi_{L}(x)=\#\{\mathfrak{p}:\mathfrak{p}\text{ prime ideal in }L\text{ with } \mathfrak{p}\nmid\mathfrak{n},\mathrm{N}(\mathfrak{p})\leq x\}.\]
For an interval \(I=[\alpha,\beta]\subseteq[0,\pi]\), the Sato-Tate theorem studies the distribution of
\[N_{I}(\pi,x):=\sum_{\begin{subarray}{c}\mathrm{N}(\mathfrak{p})\leq x\\ \mathfrak{p}\nmid\mathfrak{n}\end{subarray}}\chi_{I}\left(\theta_{\pi}(\mathfrak{p})\right)\]
and from the work of Barnet-Lamb, Gee and Geraghty [1], it is known that
\[N_{I}(\pi,x)\sim\pi_{L}(x)\mu_{\infty}(I). \tag{1.1}\]
We are interested in studying the statistics of the error term in this theorem. To be precise, let \(\phi\) be a complex-valued function defined on \(\Pi_{\underline{k}}(\mathfrak{n})\). We denote the average
\[\langle\phi(\pi)\rangle:=\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{ \pi\in\Pi_{\underline{k}}(\mathfrak{n})}\phi(\pi).\]
The task then, is to study the behaviour of the moments
\[\langle(N_{I}(\pi,x)-\pi_{L}(x)\mu_{\infty}(I))^{r}\rangle \tag{1.2}\]
for each \(r\in\mathbb{N}\). We prove
**Theorem 1.1**.: _Consider a family \(\Pi_{\underline{k}}(\mathfrak{n})\) for fixed squarefree level \(\mathfrak{n}\) and even weights \(\underline{k}=\underline{k}(x)\) such that \(\frac{\sum_{i=1}^{d}\log k_{i}}{\sqrt{x}\log x}\to\infty\) as \(x\to\infty\). Fix an interval \(I\subseteq[0,\pi]\). Then for any continuous real-valued function \(g\) on \(\mathbb{R}\), we have_
\[\lim_{x\to\infty}\left\langle g\left(\frac{N_{I}(\pi,x)-\pi_{L}(x)\mu_{\infty}(I)}{\sqrt{\pi_{L}(x)(\mu_{\infty}(I)-\mu_{\infty}(I)^{2})}}\right)\right\rangle=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}g(t)e^{-\frac{t^{2}}{2}}\ dt.\]
The motivation for the above result comes from ideas in probability theory. For a given interval \(I\subseteq[0,\pi]\) and a representation \(\pi\) in \(\Pi_{\underline{k}}(\mathfrak{n})\), consider a prime ideal \(\mathfrak{p}\) such that \(\pi_{\mathfrak{p}}\) is unramified. Note that since \(\mathfrak{p}\nmid\mathfrak{n}\), choosing \(\pi\) with a \(K_{0}(\mathfrak{n})\) fixed vector gives us that the representations \(\pi_{\mathfrak{p}}\) are unramified. Let
\[X_{\mathfrak{p},\pi}=\chi_{I}\left(\theta_{\pi}(\mathfrak{p})\right).\]
Consider a non-archimedean valuation \(\nu\) corresponding to \(\mathfrak{p}\) and set
\[d\mu_{\nu}(\theta)=\frac{\mathrm{N}(\mathfrak{p})+1}{\left(\mathrm{N}( \mathfrak{p})^{\frac{1}{2}}+\mathrm{N}(\mathfrak{p})^{-\frac{1}{2}}\right)^{2 }-4\cos^{2}\theta}d\mu_{\infty}(\theta),\]
where \(d\mu_{\infty}(\theta)=\frac{2}{\pi}\sin^{2}\theta\ d\theta.\) From the work of Li [11], we know that
\[\langle X_{\mathfrak{p},\pi}\rangle\sim\mu_{\nu}(I)\]
if we assume appropriate growth conditions on \(\underline{k}\) or \(\mathfrak{n}\). More generally, Lau-Li-Wang [12, Theorem 1.1] prove an effective joint distribution result for these angles. Using their result, we infer that if \(I_{1},\ldots,I_{h}\) are each intervals in \([0,\pi]\) and \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{h}\) are distinct prime ideals that are relatively prime to \(\mathfrak{n}\), then
\[\langle X_{\mathfrak{p}_{1},\pi}\cdots X_{\mathfrak{p}_{h},\pi}\rangle\sim\prod_{j=1}^{h}\mu_{\nu_{j}}(I_{j})\]
under suitable growth conditions on \(\underline{k}\). Here, \(\nu_{j}\) is the valuation corresponding to \(\mathfrak{p}_{j}\). Thus, viewed as random variables, the \(X_{\mathfrak{p},\pi}\) have distributions that depend on \(\mathfrak{p}\), but behave like independent random variables for large enough \(\underline{k}\).
It is then natural to investigate the distribution of the sum of random variables \(\sum_{\mathrm{N}(\mathfrak{p})\leq x}X_{\mathfrak{p},\pi}\). Firstly, we view \(\mu_{\nu}(I)\) to be \(\mathrm{E}[X_{\mathfrak{p},\pi}]\). Then a guess for the variance would be
\[\mathrm{Var}[X_{\mathfrak{p},\pi}]=\mathrm{E}[X_{\mathfrak{p},\pi}^{2}]- \mathrm{E}[X_{\mathfrak{p},\pi}]^{2}=\mu_{\nu}(I)-\mu_{\nu}(I)^{2}.\]
However, keeping (1.1) in mind, and observing that \(d\mu_{\nu}(\theta)\to d\mu_{\infty}(\theta)\) as \(\mathrm{N}(\mathfrak{p})\to\infty\), the quantity \(\pi_{L}(x)\mu_{\infty}(I)\) is a reasonable expression for the expectation of the sum of random variables \(\sum_{\mathrm{N}(\mathfrak{p})\leq x}X_{\mathfrak{p},\pi}\) for \(x\) large enough. Similarly, the variance of the sum of random variables is heuristically \(\pi_{L}(x)(\mu_{\infty}(I)-\mu_{\infty}(I)^{2})\) as \(x\to\infty\). With these ideas in mind, we are motivated to find necessary conditions for a central limit theorem to hold. This also adds to the literature on central limit theorems for Hecke eigenvalues in various settings, for example [13], [14], [15] and [16].
The proof of Theorem 1.1 adapts the underlying technique used in the proof of [12, Theorem 1.1], namely the method of moments applied to quantities approximating the moments (1.2), and the Eichler-Selberg trace formula. However, the proof of Theorem 1.1 is cleaner, and is achieved by two processes. First, we expand the approximating functions using a basis of Chebyshev polynomials. Secondly, we simplify the proof of the theorem pertaining to the calculation of higher moments. This proof is adapted from [12], who worked out a simplification of the theorem on higher moments in [12]. This is achieved by expressing the main term of the analogous trace formula, given in (2.3), as an integral involving Chebyshev polynomials (see Lemma 5.2).
The structure of the paper is as follows. Section 2 covers the properties of Chebyshev polynomials and Beurling-Selberg polynomials needed to follow the proof of the theorem. Section 3 gives the details of the first-moment calculation, culminating in the average Sato-Tate result for Hilbert Modular Forms. After explaining the outline of the proof of Theorem 1.1 in Section 4, we give the details of the simplified proof of higher moments in Section 5. We conclude the article by stating a smooth version of Theorem 4.1 that should hold, resulting in a smooth version of the central limit theorem with weaker growth conditions on \(\underline{k}=\underline{k}(x)\).
**Acknowledgements** The first named author is supported by a Ph.D. fellowship from CSIR. The second named author's research is supported by a DST-INSPIRE Faculty Fellowship from the Government of India and acknowledges Savitribai Phule Pune University, where she was working when this work began. The authors are grateful to Baskar Balasubramanyam and Kaneenika Sinha for their valuable comments on a previous version of this article.
## 2. Preliminaries
### Chebyshev polynomials
The Chebyshev polynomials of the second kind, denoted by \(U_{n}\), are defined by the recurrence relations
\[U_{0}(x) =1\] \[U_{1}(x) =2x\] \[U_{n}(x) =2x\,U_{n-1}(x)-U_{n-2}(x)\qquad\text{for $n\geq 2$}.\]
These polynomials form an orthonormal basis with respect to the Sato-Tate measure. More explicitly, the following holds. For non-negative integers \(m,n\)
\[\frac{2}{\pi}\int_{0}^{\pi}U_{m}(\cos\theta)U_{n}(\cos\theta)\sin^{2}\theta\ d \theta=\begin{cases}1&\text{ if }m=n\\ 0&\text{ otherwise.}\end{cases} \tag{2.1}\]
The following recursive relations also hold. For non-negative integers \(m\) and \(n\) with \(m\geq n\),
\[U_{m}(x)U_{n}(x)=\sum_{k=0}^{n}U_{m+n-2k}(x). \tag{2.2}\]
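A quick numerical sanity check of (2.1) and (2.2) (illustrative only, using SciPy's evaluator for \(U_{n}\)):

```python
# Verify the orthonormality (2.1) and the product formula (2.2) numerically.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyu   # U_n, Chebyshev polynomials of the second kind

def sato_tate_inner(m, n):
    # (2/pi) * integral over [0, pi] of U_m(cos t) U_n(cos t) sin^2(t) dt, cf. (2.1)
    integrand = lambda t: (eval_chebyu(m, np.cos(t)) * eval_chebyu(n, np.cos(t))
                           * (2.0 / np.pi) * np.sin(t) ** 2)
    return quad(integrand, 0.0, np.pi)[0]

print(sato_tate_inner(3, 3), sato_tate_inner(3, 5))   # ~1.0 and ~0.0

# Product formula (2.2) with m = 4, n = 2: U_4 U_2 = U_6 + U_4 + U_2
x = np.linspace(-1.0, 1.0, 11)
lhs = eval_chebyu(4, x) * eval_chebyu(2, x)
rhs = sum(eval_chebyu(4 + 2 - 2 * k, x) for k in range(0, 3))
print(np.allclose(lhs, rhs))                          # True
```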
### A trace formula
We state the version of Arthur's trace formula proved in [11] that we will frequently use. This also appears in [10].
**Proposition 2.1** ([10], Proposition 18).: _Let \(\mathfrak{p}_{1},\dots,\mathfrak{p}_{h}\) be distinct primes coprime to squarefree \(\mathfrak{n}\). Let \(\underline{m}=(m_{1},\dots,m_{h})\) be a tuple of non-negative integers, and \(\mathfrak{a}=\mathfrak{p}_{1}^{m_{1}}\cdots\mathfrak{p}_{h}^{m_{h}}\). Then_
\[\sum_{\pi\in\Pi_{\underline{k}}(\mathfrak{n})}\prod_{i=1}^{h}U_{m_{i}}(\cos \theta_{\pi}(\mathfrak{p}_{i}))=C_{L}\mathrm{N}(\mathfrak{n})\delta_{2| \underline{m}}\prod_{i=1}^{d}\frac{k_{i}-1}{4\pi}\mathrm{N}(\mathfrak{a})^{- \frac{1}{2}}+O\left(\mathrm{N}(\mathfrak{n})^{\epsilon}\mathrm{N}(\mathfrak{a })^{\frac{3}{2}}\right). \tag{2.3}\]
_Here,_
* \(C_{L}\) _is a constant that depends only on the number field_ \(L\)_._
* _The quantity_ \(\delta_{2|\underline{m}}\) _is equal to one if all the_ \(m_{i}\) _are even, and zero otherwise._
As an immediate consequence, we have
\[\#\Pi_{\underline{k}}(\mathfrak{n})=C_{L}\mathrm{N}(\mathfrak{n})\prod_{i=1} ^{d}\frac{k_{i}-1}{4\pi}+O\left(\mathrm{N}(\mathfrak{n})^{\epsilon}\right). \tag{2.4}\]
**Remark**.: _For simplicity, we stick to squarefree level \(\mathfrak{n}\) keeping Proposition 2.1 in mind. A similar result may be obtained by taking \(\mathfrak{n}\) not necessarily squarefree, using [11, Theorem 6.3]._
We also state well-known results on sums of powers of prime ideal norms that will be useful.
**Lemma 2.1**.: _We have_
\[\sum_{\mathrm{N}(\mathfrak{p})\leq x}\frac{1}{\mathrm{N}(\mathfrak{p})}=\log \log x+O_{L}(1) \tag{2.5}\]
_and_
\[\sum_{r\geq 2}\sum_{\mathrm{N}(\mathfrak{p})\leq x}\frac{1}{\mathrm{N}( \mathfrak{p})^{r}}=O_{L}(1). \tag{2.6}\]
Proof.: Equation (2.5) follows from Lemma 2.4 of [14]. For (2.6), recall that for a prime ideal \(\mathfrak{p}\), the value of \(\mathrm{N}(\mathfrak{p})=p^{t}\) where \(\langle p\rangle=\mathfrak{p}\cap\mathbb{Z}\) and \(1\leq t\leq d.\) Let the ideal counting function be given by
\[a_{l}=|\{\mathfrak{m}\subset\mathcal{O}:\mathfrak{m}\,\text{ ideal with }\mathrm{N}(\mathfrak{m})=l\}|.\]
Then, it is known that \(a_{l}\leq\tau(l)^{d-1}\) (see [12, equation (68)]), where \(\tau\) is the usual divisor function.
Therefore,
\[\sum_{r\geq 2}\sum_{\mathrm{N}(\mathfrak{p})\leq x}\frac{1}{\mathrm{N}(\mathfrak{p})^{r}} \leq\sum_{r\geq 2}\sum_{p\leq x}\sum_{t=1}^{d}\frac{a_{p^{t}}}{p^{tr}} \leq\sum_{r\geq 2}\sum_{p\leq x}\sum_{t=1}^{d}\frac{(t+1)^{d-1}}{p^{tr}}\] \[\leq\sum_{r\geq 2}\sum_{p\leq x}\frac{d(d+1)^{d-1}}{p^{r}}\ll_{L}\sum_{r\geq 2}\sum_{p\leq x}\frac{1}{p^{r}}=O_{L}(1).\]
### Beurling-Selberg polynomials
The main underlying technique used in this article is to approximate the characteristic function by appropriate trigonometric polynomials of finite degree. This technique has been used widely to obtain fluctuations in error terms in equidistribution theorems, following the work of [10]. Here, we state the properties that we need, and refer the reader to a detailed exposition in [14, Chapter 1]. Let \(J=[\alpha,\beta]\subseteq[-\frac{1}{2},\frac{1}{2}]\) and \(M\geq 1\) be an integer. There exist trigonometric polynomials \(S^{+}_{J,M}(x)\) and \(S^{-}_{J,M}(x)\) of degree not exceeding \(M\) such that for all \(x\in\mathbb{R}\),
\[S^{-}_{J,M}(x)\leq\chi_{J}(x)\leq S^{+}_{J,M}(x),\]
where \(\chi_{J}\) is the usual indicator function of the interval \(J\). While the explicit definition of these polynomials can be found in [14], we will heavily use properties of the coefficients in their Fourier expansions for our calculations. Before we describe these properties, we fix some notation. Let \(e(t):=e^{2\pi it}.\) Then, the Fourier expansion of \(\chi_{J}\) is given by
\[\chi_{J}(x)=\sum_{n\in\mathbb{Z}}\hat{\chi}_{J}(n)e(nx),\qquad\qquad\hat{\chi }_{J}(n)=\int_{J}e(-nt)\ dt.\]
Note that
\[\hat{\chi}_{J}(0)=\beta-\alpha,\qquad\hat{\chi}_{J}(n)=\frac{e(-n\alpha)-e(-n \beta)}{2\pi in}\quad\text{ for }|n|\geq 1. \tag{2.7}\]
The Beurling-Selberg polynomials \(S^{+}_{J,M}\) and \(S^{-}_{J,M}\) are good approximations of the indicator function and this is evident from the properties of its Fourier coefficients. For \(0\leq|m|\leq M\),
\[\hat{S}^{\pm}_{J,M}(m)=\hat{\chi}_{J}(m)+O\left(\frac{1}{M+1}\right) \tag{2.8}\]
and \(\hat{S}^{\pm}_{J,M}(m)=0\) for \(|m|>M\). Since we are interested in finer statistics of the sequence of angles \(\{\theta_{\pi}(\mathfrak{p})\}\), which are equidistributed with respect to the measure \(\frac{2}{\pi}\sin^{2}\theta\), it is useful to consider Fourier expansions of the polynomials using the orthonormal basis of Chebyshev polynomials \(U_{n}(\cos\theta)\). The following lemma accomplishes this.
**Lemma 2.2** ([10, Lemma 1.3]).: _Let \(I=[a,b]\subseteq[0,\pi]\), and let \(M\) be a positive integer. There exist trigonometric polynomials_
\[F^{\pm}_{I,M}(\theta)=\sum_{m=0}^{M}\hat{F}^{\pm}_{I,M}(m)U_{m}(\cos\theta)\]
_such that for \(0\leq\theta\leq\pi\), we have_
\[F^{-}_{I,M}(\theta)\leq\chi_{I}(\theta)\leq F^{+}_{I,M}(\theta).\]
The key idea in the proof is to take \(\alpha=\frac{a}{2\pi}\) and \(\beta=\frac{b}{2\pi}\) so that we now have an interval in \([-\frac{1}{2},\frac{1}{2}]\), and study the properties of
\[F^{\pm}_{I,M}(\theta)=S^{\pm}_{J,M}\left(\frac{\theta}{2\pi}\right)+S^{\pm}_ {J,M}\left(-\frac{\theta}{2\pi}\right).\]
The relation between the Fourier coefficients \(\hat{S}^{\pm}_{J,M}(m)\) and \(\hat{F}^{\pm}_{I,M}(m)\) is the following. For \(0\leq m\leq M\), let
\[\hat{\mathcal{S}}^{\pm}_{J,M}(m)=\hat{S}^{\pm}_{J,M}(m)+\hat{S}^{\pm}_{J,M}(- m). \tag{2.9}\]
Then
\[\hat{F}^{\pm}_{I,M}(m)=\hat{\mathcal{S}}^{\pm}_{J,M}(m)-\hat{\mathcal{S}}^{ \pm}_{J,M}(m+2). \tag{2.10}\]
Moreover, using (2.7) and (2.8) in (2.9), we get that for \(0<m\leq M\),
\[\hat{\mathcal{S}}^{\pm}_{J,M}(m) =\frac{\sin(2\pi m\beta)-\sin(2\pi m\alpha)}{m\pi}+O\left(\frac{1}{M+1}\right)\] \[=\frac{\sin(mb)-\sin(ma)}{m\pi}+O\left(\frac{1}{M+1}\right),\quad\text{ and } \tag{2.11}\] \[\hat{\mathcal{S}}^{\pm}_{J,M}(0) =2(\beta-\alpha)=\frac{b-a}{\pi}. \tag{2.12}\]
Henceforth, to simplify notation we will fix the interval \(I=[a,b]\) throughout the rest of the article, and write \(F_{M}^{\pm}\) and \(\hat{F}_{M}^{\pm}\) without mentioning \(I\) in the subscript.
We now record an important result about the Fourier coefficients \(\hat{F}_{M}^{\pm}(m)\).
**Proposition 2.2**.: \[\hat{F}_{M}^{\pm}(0)=\mu_{\infty}(I)+O\left(\frac{1}{M+1}\right). \tag{2.13}\] \[\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)^{2}=\mu_{\infty}(I)-\mu_{\infty}(I)^{2}+O\left(\frac{\log M}{M}\right). \tag{2.14}\]
Proof.: Equation (2.13) follows easily from noting that
\[\hat{F}_{M}^{\pm}(0) =\hat{\mathcal{S}}_{J,M}^{\pm}(0)-\hat{\mathcal{S}}_{J,M}^{\pm}(2)\] \[=\frac{b-a}{\pi}-\frac{\sin(2b)-\sin(2a)}{2\pi}+O\left(\frac{1}{M+1}\right)\] \[=\frac{2}{\pi}\int_{a}^{b}\sin^{2}\theta\ d\theta+O\left(\frac{1}{M+1}\right).\]
Equation (2.14) is less straightforward, and has been worked out in [14, Proposition 3.6.1].
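Numerically, (2.13) and (2.14) are easy to check once the \(O\left(\frac{1}{M+1}\right)\) corrections are dropped, i.e. with \(\hat{F}^{\pm}_{M}(m)\) replaced by the exact coefficients coming from (2.10)-(2.12); the interval below is arbitrary and the check is illustrative only:

```python
# Check (2.13)-(2.14) with the O(1/(M+1)) corrections dropped: use the closed-form
# coefficients from (2.10)-(2.12) and compare sum_{m=1}^{M} F(m)^2 with mu(I) - mu(I)^2.
import numpy as np

a, b = 0.6, 2.1                          # an arbitrary interval I = [a, b] in [0, pi]

def S(m):
    # hat{S}_{J,M}(m) without the O(1/(M+1)) term, cf. (2.11)-(2.12)
    if m == 0:
        return (b - a) / np.pi
    return (np.sin(m * b) - np.sin(m * a)) / (m * np.pi)

def F(m):
    # hat{F}_M(m) = hat{S}_{J,M}(m) - hat{S}_{J,M}(m + 2), cf. (2.10)
    return S(m) - S(m + 2)

mu = F(0)                                # equals mu_infinity(I), cf. (2.13)
for M in (10, 100, 1000):
    print(M, sum(F(m) ** 2 for m in range(1, M + 1)), mu - mu ** 2)
```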
## 3. First Moment
Throughout the rest of this article, whenever we write \(\mathrm{N}(\mathfrak{p})\leq x\) in a sum, we will assume that the sum runs over prime ideals \(\mathfrak{p}\nmid\mathfrak{n}\).
Approximating the quantity \(N_{I}(\pi,x)\) above and below by the Beurling-Selberg polynomials, we have
\[\sum_{\mathrm{N}(\mathfrak{p})\leq x}F_{M}^{-}(\theta_{\pi}(\mathfrak{p})) \leq N_{I}(\pi,x)\leq\sum_{\mathrm{N}(\mathfrak{p})\leq x}F_{M}^{+}(\theta_{ \pi}(\mathfrak{p})).\]
Writing
\[\sum_{\mathrm{N}(\mathfrak{p})\leq x}F_{M}^{\pm}(\theta_{\pi}(\mathfrak{p}))= \pi_{L}(x)\hat{F}_{M}^{\pm}(0)+\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)\sum_{\mathrm{ N}(\mathfrak{p})\leq x}U_{m}(\cos(\theta_{\pi}(\mathfrak{p}))),\]
and using (2.13), there exist constants \(C\) and \(D\) such that
\[D\frac{\pi_{L}(x)}{M+1}+F^{-}(M,\pi)(x)\leq N_{I}(\pi,x)-\pi_{L}(x)\mu_{\infty }(I)\leq F^{+}(M,\pi)(x)+C\frac{\pi_{L}(x)}{M+1}\]
where
\[F^{\pm}(M,\pi)(x):=\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)\sum_{\mathrm{N}( \mathfrak{p})\leq x}U_{m}(\cos\theta_{\pi}(\mathfrak{p})). \tag{3.1}\]
We compute the first moment by estimating
\[\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{\underline{k}}( \mathfrak{n})}F^{\pm}(M,\pi)(x).\]
We first focus on the upper bound:
\[N_{I}(\pi,x)\leq\pi_{L}(x)\hat{F}_{M}^{+}(0)+\sum_{m=1}^{M}\hat{F}_{M}^{+}(m) \sum_{\mathrm{N}(\mathfrak{p})\leq x}U_{m}(\cos(\theta_{\pi}(\mathfrak{p}))).\]
Since the first summand on the right-hand side is independent of \(\pi\), in order to compute the first moment we need to estimate
\[\sum_{m=1}^{M}\hat{F}_{M}^{+}(m)\sum_{\mathrm{N}(\mathfrak{p})\leq x}\frac{1} {\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{\underline{k}}(\mathfrak{ n})}U_{m}(\cos(\theta_{\pi}(\mathfrak{p})).\]
Using (2.3) we see that this is
\[=\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\begin{subarray}{ c}m=2\\ m\text{ even}\end{subarray}}^{M}\hat{F}_{M}^{+}(m)\sum_{\text{N}(\mathfrak{p})\leq x }\left(C_{L}\text{N}(\mathfrak{n})\prod_{i=1}^{d-1}\frac{k_{i}-1}{4\pi}\text{N} (\mathfrak{p})^{-\frac{m}{2}}+O(\text{N}(\mathfrak{p})^{\frac{3m}{2}}\text{N }(\mathfrak{n})^{\epsilon})\right)\] \[\quad+O_{L}\left(\sum_{m=1}^{M}\left|\hat{F}_{M}^{+}(m)\right| \sum_{\text{N}(\mathfrak{p})\leq x}\frac{\text{N}(\mathfrak{p})^{\frac{3m}{2}} \text{N}(\mathfrak{n})^{\epsilon}}{\prod_{i=1}^{d-1}(k_{i}-1)}\right).\]
Using Proposition 2.1 with \(h=1\), the estimate \(\hat{F}_{M}^{+}(m)\ll\frac{1}{m}\) and Lemma 2.1, we see that
\[\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{m=1}^{M}\hat{F}_{M}^{+}(m) \sum_{\text{N}(\mathfrak{p})\leq x}\sum_{\pi\in\Pi_{\underline{k}}(\mathfrak{ n})}U_{m}(\cos(\theta_{\pi}(\mathfrak{p}))\ll_{L}\log\log x+\frac{\pi_{L}(x)x^{ \frac{3}{2}M}}{\prod_{i=1}^{d-1}k_{i}}.\]
Keeping (2.13) in mind we therefore get,
\[\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{\underline{k}}( \mathfrak{n})}N_{I}(\pi,x)-\pi_{L}(x)\mu_{\infty}(I)\ll_{L}\log\log x+\frac{ \pi_{L}(x)x^{\frac{3}{2}M}}{\prod_{i=1}^{d-1}k_{i}}+\frac{\pi_{L}(x)}{M+1}.\]
On choosing \(M=\left[\frac{2\delta\sum_{i=1}^{d-1}\log k_{i}}{3\log x}\right]\) for some \(0<\delta<1\), we have proved the following proposition.
**Proposition 3.1**.: _Let \(\sum_{i=1}^{d-1}\log k_{i}\) be a function of \(x\). Then, for any \(I\subseteq[0,\pi]\) we have_
\[\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{\underline{k}} (\mathfrak{n})}N_{I}(\pi,x)=\pi_{L}(x)\mu_{\infty}(I)+O_{L}\left(\frac{\pi_{L} (x)\log x}{\sum_{i=1}^{d-1}\log k_{i}}+\log\log x\right).\]
**Remark:** The above proposition is the analogue of the effective average Sato-Tate theorem for holomorphic cusp forms [13, Proposition 4.1] and Maass forms [20, Theorem 1.1].
## 4. Outline of the proof of the main theorem.
We compute all higher moments by proving the following result.
**Theorem 4.1**.: _Let \(M=\lfloor\sqrt{\pi_{L}(x)}\log\log x\rfloor\) and \(F^{\pm}(M,\pi)(x)\) be as defined in (3.1). Suppose \(\frac{\sum_{i=1}^{d}\log k_{i}}{\sqrt{x}\log x}\to\infty\) as \(x\to\infty\). Then,_
\[\lim_{x\to\infty}\left\langle\left(\frac{F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)}}\right)^{n}\right\rangle=\begin{cases}0&\text{if $n$ is odd,}\\ \frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}\left(\mu_{\infty}(I)-\mu_{\infty}(I)^{2}\right)^{\frac{n}{2}}&\text{if $n$ is even.}\end{cases} \tag{4.1}\]
Following the strategy in [13, Section 6], and making the necessary modifications, i.e., using the trace formula in Proposition 2.1 instead of the Eichler-Selberg trace formula, one can prove the following analogue of [13, Proposition 6.2]. The proof is very similar, so we omit the details.
**Proposition 4.1**.: _Let \(I=[a,b]\subseteq[0,\pi]\) and \(M=\lfloor\sqrt{\pi_{L}(x)}\log\log x\rfloor\). Suppose \(\frac{\sum_{i=1}^{d}\log k_{i}}{\sqrt{x}\log x}\to\infty\) as \(x\to\infty\). Then,_
\[\lim_{x\to\infty}\left\langle\left|\frac{N_{I}(\pi,x)-\pi_{L}(x)\mu_{\infty}(I) -F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)(\mu_{\infty}(I)-\mu_{\infty}(I)^{2})}} \right|^{2}\right\rangle=0.\]
This implies that under the conditions of the above proposition, the quantity
\[\frac{F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)(\mu_{\infty}(I)-\mu_{\infty}(I)^{2})}}\]
converges in mean square to
\[\frac{N_{I}(\pi,x)-\pi_{L}(x)\mu_{\infty}(I)}{\sqrt{\pi_{L}(x)(\mu_{\infty}(I)- \mu_{\infty}(I)^{2})}} \tag{4.2}\]
as \(x\to\infty\). Since convergence in mean square implies convergence in distribution (see [10, Chapter 6, Theorems 5 and 7]), and the normal distribution is characterized by its moments, Theorem 4.1 immediately gives us that the quantity in (4.2) follows the Gaussian distribution. This completes the proof of Theorem 1.1.
## 5. Higher Moments
In this section we give the details of the proof of Theorem 4.1. In order to simplify calculations while computing higher moments, it is useful to express the main term of the trace formula (2.3) using an integral. The following lemma is Proposition 29.11 in [11], with the prime \(p\) replaced with \(\mathrm{N}(\mathfrak{p})\). We prove it here for completeness.
**Lemma 5.1**.: _For a non-archimedean valuation \(\nu\) corresponding to a prime ideal \(\mathfrak{p}\), we define_
\[d\mu_{\nu}(\theta)=\frac{\mathrm{N}(\mathfrak{p})+1}{\left(\mathrm{N}( \mathfrak{p})^{\frac{1}{2}}+\mathrm{N}(\mathfrak{p})^{-\frac{1}{2}}\right)^{2 }-4\cos^{2}\theta}d\mu_{\infty}(\theta), \tag{5.1}\]
_where \(d\mu_{\infty}(\theta)=\frac{2}{\pi}\sin^{2}\theta\ d\theta.\) Then,_
\[\int_{0}^{\pi}U_{m}(\cos\theta)d\mu_{\nu}(\theta)=\begin{cases}\mathrm{N}( \mathfrak{p})^{-\frac{m}{2}}&\text{ if $m$ is even}\\ 0&\text{ if $m$ is odd}.\end{cases} \tag{5.2}\]
Proof.: For \(|t|<1\) and \(x\in[-1,1]\), the generating function of the Chebyshev polynomials of the second kind is given by
\[\sum_{n=0}^{\infty}U_{n}(x)t^{n}=\frac{1}{1-2xt+t^{2}}. \tag{5.3}\]
Substituting \(t=\pm\mathrm{N}(\mathfrak{p})^{-\frac{1}{2}}\) and adding, we obtain
\[\sum_{n=0}^{\infty}U_{2n}(x)\mathrm{N}(\mathfrak{p})^{-n}=\frac{\mathrm{N}( \mathfrak{p})+1}{\left(\mathrm{N}(\mathfrak{p})^{\frac{1}{2}}+\mathrm{N}( \mathfrak{p})^{-\frac{1}{2}}\right)^{2}-4x^{2}}.\]
Denoting the above quantity by \(u_{\mathfrak{p}}(x)\), and letting \(x=\cos\theta\), we see that
\[\int_{0}^{\pi}U_{m}(\cos\theta)d\mu_{\nu}(\theta) =\int_{0}^{\pi}U_{m}(\cos\theta)\left(\sum_{n=0}^{\infty}U_{2n}(x )\mathrm{N}(\mathfrak{p})^{-n}\right)d\mu_{\infty}(\theta)\] \[=\begin{cases}\mathrm{N}(\mathfrak{p})^{-\frac{m}{2}}&\text{ if $m$ is even}\\ 0&\text{ if $m$ is odd},\end{cases}\]
using (2.1), the orthogonality of the Chebyshev polynomials.
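The identity (5.2) can also be checked directly by quadrature (an illustrative check; the prime norm below is an arbitrary sample value):

```python
# Numerical check of (5.2): integrate U_m(cos theta) against d(mu_nu) from (5.1).
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyu

Np = 7.0   # sample value of N(p)

def mu_nu_density(theta):
    # density of mu_nu with respect to d(theta), cf. (5.1)
    weight = (Np + 1.0) / ((Np ** 0.5 + Np ** -0.5) ** 2 - 4.0 * np.cos(theta) ** 2)
    return weight * (2.0 / np.pi) * np.sin(theta) ** 2

for m in range(5):
    val = quad(lambda t: eval_chebyu(m, np.cos(t)) * mu_nu_density(t), 0.0, np.pi)[0]
    expected = Np ** (-m / 2.0) if m % 2 == 0 else 0.0
    print(m, val, expected)
```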
As a consequence of Lemma 5.1, the following alternate expression for the trace formula (2.3) holds.
**Lemma 5.2**.: _Let \(\underline{m}=(m_{1},\ldots,m_{h})\) be a tuple of non-negative integers. Further, let \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{h}\) be distinct prime ideals coprime to the squarefree ideal \(\mathfrak{n}.\) Then,_
\[\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{\underline{k}}( \mathfrak{n})}\prod_{i=1}^{h}U_{m_{i}}(\cos\theta_{\pi}(\mathfrak{p}_{i}))= \prod_{i=1}^{h}\int_{0}^{\pi}U_{m_{i}}(\cos\theta)d\mu_{\nu_{i}}(\theta)+O_{ L,\mathfrak{n}}\left(\frac{\prod_{i=1}^{h}\mathrm{N}(\mathfrak{p}_{i})^{\frac{3m_{i}}{2}}}{ \prod_{i=1}^{d}k_{i}}\right). \tag{5.4}\]
Proof.: This follows easily by using (5.2) in the main term of trace formula in (2.3) and noting (2.4).
We now proceed to calculate the higher moments \(\left\langle\left(\frac{F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)}}\right)^{n}\right\rangle\). Observe that
\[\left\langle\left(\frac{F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)}}\right)^{n}\right\rangle=\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\sum_{\pi\in\Pi_{\underline{k}}(\mathfrak{n})}\left(\sum_{\mathrm{N}(\mathfrak{p})\leq x}\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)U_{m}(\cos\theta_{\pi}(\mathfrak{p}))\right)^{n} \tag{5.5}\]
Let \(Z_{M}^{\pm}(\theta)=\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)U_{m}(\cos\theta)\). Then
\[\left(\sum_{N(\mathfrak{p})\leq x}\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)U_{m}(\cos\theta_{\pi}(\mathfrak{p}))\right)^{n} =\left(\sum_{N(\mathfrak{p})\leq x}Z_{M}^{\pm}(\theta_{\pi}(\mathfrak{p}))\right)^{n}\] \[=\sum_{u=1}^{n}\sum_{(r_{1},\ldots,r_{u})}^{(1)}\frac{n!}{r_{1}!\cdots r_{u}!}\frac{1}{u!}\sum_{(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{u})}^{(2)}\prod_{i=1}^{u}Z_{M}^{\pm}(\theta_{\pi}(\mathfrak{p}_{i}))^{r_{i}},\]
where
* The sum \(\sum\limits_{(r_{1},\ldots,r_{u})}^{(1)}\) is taken over tuples of positive integers \((r_{1},\ldots,r_{u})\) such that \(r_{1}+\cdots+r_{u}=n\).
* The sum \(\sum\limits_{(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{u})}^{(2)}\) is taken over tuples of distinct prime ideals \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{u}\) coprime to \(\mathfrak{n}\) with \(\mathrm{N}(\mathfrak{p}_{i})\leq x\) for each \(i=1,\ldots,u\).
Let \([f(\theta)\bullet U_{n}(\cos\theta)]\) denote the \(n\)-th Fourier coefficient when \(f\) is expanded as a Fourier series using the orthogonal basis of Chebyshev polynomials of the second kind. Explicitly,
\[[f(\theta)\bullet U_{n}(\cos\theta)]=\frac{2}{\pi}\int_{0}^{\pi}f(\theta)\,U_ {n}(\cos\theta)\sin^{2}\theta\ d\theta.\]
Expanding each \(Z_{M}^{\pm}(\theta_{\pi}(\mathfrak{p}_{i}))^{r_{i}}\) in this way we have
\[\prod_{i=1}^{u}Z_{M}^{\pm}(\theta_{\pi}(\mathfrak{p}_{i}))^{r_{i}} =\prod_{i=1}^{u}\left(\sum_{m_{i}=1}^{Mr_{i}}\left[Z_{M}^{\pm}( \theta)^{r_{i}}\bullet U_{m_{i}}(\cos\theta)\right]U_{m_{i}}(\cos\theta_{\pi} (\mathfrak{p}_{i}))\right)\] \[=\sum_{(m_{1},\ldots,m_{u})}^{(3)}\prod_{i=1}^{u}\left[Z_{M}^{\pm }(\theta)^{r_{i}}\bullet U_{m_{i}}(\cos\theta)\right]U_{m_{i}}(\cos\theta_{\pi} (\mathfrak{p}_{i})).\]
Here, \(\sum\limits_{(m_{1},\ldots,m_{u})}^{(3)}\) denotes that the sum is taken over tuples where each \(m_{i}\) ranges from \(1\) to \(Mr_{i}\). Averaging over \(\pi\in\Pi_{\underline{k}}(\mathfrak{n})\), and using the trace formula given in equation (5.4),
\[\frac{1}{\#\Pi_{\underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{ \underline{k}}(\mathfrak{n})}\prod_{i=1}^{u}Z_{M}^{\pm}(\theta_{\pi}( \mathfrak{p}_{i}))^{r_{i}}\] \[=\sum_{(m_{1},\ldots,m_{u})}^{(3)}\prod_{i=1}^{u}\left[Z_{M}^{\pm }(\theta)^{r_{i}}\bullet U_{m_{i}}(\cos\theta)\right]\frac{1}{\#\Pi_{ \underline{k}}(\mathfrak{n})}\sum_{\pi\in\Pi_{\underline{k}}(\mathfrak{n})} \prod_{i=1}^{u}U_{m_{i}}(\cos\theta_{\pi}(\mathfrak{p}_{i}))\] \[=\sum_{(m_{1},\ldots,m_{u})}^{(3)}\prod_{i=1}^{u}\left[Z_{M}^{\pm }(\theta)^{r_{i}}\bullet U_{m_{i}}(\cos\theta)\right]\left(\prod_{i=1}^{u}\int _{0}^{\pi}U_{m_{i}}(\cos\theta)d\mu_{\nu_{i}}(\theta)+O_{L,\mathfrak{n}}\left( \frac{\prod_{i=1}^{u}\mathrm{N}(\mathfrak{p}_{i})^{\frac{3}{2}m_{i}}}{\prod_{i= 1}^{d}k_{i}}\right)\right)\] \[=\prod_{i=1}^{u}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r_{i}}\ d\mu_{ \nu_{i}}(\theta)+O_{L,\mathfrak{n}}\left(\frac{1}{\prod_{i=1}^{d}k_{i}}\prod_{ i=1}^{u}\sum_{m_{i}=1}^{Mr_{i}}\langle Z_{M}^{\pm}(\cdot)^{r_{i}},U_{m_{i}}(\cos \cdot)\rangle\mathrm{N}(\mathfrak{p}_{i})^{\frac{3}{2}m_{i}}\right).\]
Let us estimate the error term. Observe that \(Z_{M}^{\pm}(\theta)=F_{M}^{\pm}(\theta)-\hat{F}_{M}^{\pm}(0)\), so it is absolutely bounded. Therefore, \([Z_{M}^{\pm}(\theta)^{r_{i}}\bullet U_{m_{i}}(\cos\theta)]\ll m_{i}\) for each \(i=1,\ldots,u\) using the trivial bound \(|U_{n}(\cos\theta)|\leq n+1.\) We also know that \(\mathrm{N}(\mathfrak{p}_{i})\leq x\), so
\[\prod_{i=1}^{u}\sum_{m_{i}=1}^{Mr_{i}}\left[Z_{M}^{\pm}(\theta)^{r_{i}}\bullet U _{m_{i}}(\cos\theta)\right]\mathrm{N}(\mathfrak{p}_{i})^{\frac{3}{2}m_{i}}\ll_ {n}M^{2u}x^{\frac{3}{2}Mu}.\]
Therefore, the higher moments
\[\left\langle\left(\frac{F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)}}\right)^{n}\right\rangle\]
are given by
\[\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\sum_{u=1}^{n}\sum_{(r_{1},\dots,r_{u})}^{(1)}\frac{n!}{r_{1}!\cdots r_{u}!}\frac{1}{u!}\sum_{(\mathfrak{p}_{1},\dots,\mathfrak{p}_{u})}^{(2)}\prod_{i=1}^{u}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r_{i}}\ d\mu_{\nu}(\theta)+O_{L,n}\left(\frac{M^{2n}x^{\frac{3}{2}Mn}\pi_{L}(x)^{\frac{n}{2}}}{\prod_{i=1}^{d}k_{i}}\right). \tag{5.6}\]
We now analyze the main term in the above equation. First, we prove:
**Lemma 5.3**.: _Assume the notations introduced earlier. Then, the following hold._
\[\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r}\ d\mu_{\nu}(\theta) =\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r}\ d\mu_{\infty}(\theta)+O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right) \tag{5.7}\] \[\int_{0}^{\pi}Z_{M}^{\pm}(\theta)\ d\mu_{\nu}(\theta) =O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right) \tag{5.8}\] \[\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r}\ d\mu_{\nu}(\theta) =\begin{cases}\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)^{2}+O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right)&\text{ if }r=2\\ O(1)&\text{ for }r>2.\end{cases} \tag{5.9}\]
Proof.: Equation (5.7) follows from noting that \(Z_{M}^{\pm}(\theta)\) is bounded and that for any prime ideal \(\mathfrak{p}\),
\[\frac{2}{\pi}\frac{(\mathrm{N}(\mathfrak{p})+1)\sin^{2}\theta}{\left(\mathrm{ N}(\mathfrak{p})^{\frac{1}{2}}+\mathrm{N}(\mathfrak{p})^{-\frac{1}{2}}\right)^{2}-4 \cos^{2}\theta}=\frac{2}{\pi}\sin^{2}\theta+O\left(\frac{1}{\mathrm{N}( \mathfrak{p})}\right).\]
Equation (5.8) follows from letting \(r=1\) in (5.7) and the orthogonality of Chebyshev polynomials \(U_{m}(\cos\theta)\). Finally,
\[\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{2}\ d\mu_{\nu}(\theta) =\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{2}\ d\mu_{\infty}(\theta)+O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right)\quad\text{ using (5.7)}\] \[=\int_{0}^{\pi}\left(\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)U_{m}(\cos\theta)\right)^{2}\ d\mu_{\infty}(\theta)+O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right)\] \[=\int_{0}^{\pi}\sum_{m_{1},m_{2}=1}^{M}\hat{F}_{M}^{\pm}(m_{1})\hat{F}_{M}^{\pm}(m_{2})\sum_{k=0}^{\min\{m_{1},m_{2}\}}U_{m_{1}+m_{2}-2k}(\cos\theta)\ d\mu_{\infty}(\theta)+O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right)\] \[=\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)^{2}+O\left(\frac{1}{\mathrm{N}(\mathfrak{p})}\right)\]
using orthogonality. If \(r>2\), the claimed estimate follows because \(Z_{M}^{\pm}\) is bounded. This proves (5.9).
We now have the tools to work out the integrals in (5.6). We do so by dividing the partitions into three types.
**Case 1:** If \((r_{1},\dots,r_{u})=(2,\dots,2)\), i.e., each part is equal to \(2\), then \(n\) is even and \(u=n/2\). In this case, the sum corresponding to this partition is
\[\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}\sum_{(\mathfrak{p}_{1},\dots,\mathfrak{p}_{u})}^{(2)}\prod_{i=1}^{n/2}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{2}\ d\mu_{\nu}(\theta)\] \[=\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}\sum_{(\mathfrak{p}_{1},\dots,\mathfrak{p}_{u})}^{(2)}\prod_{i=1}^{n/2}\left(\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)^{2}+O\left(\frac{1}{\mathrm{N}(\mathfrak{p}_{i})}\right)\right)\] \[=\frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}\left(\sum_{m=1}^{M}\hat{F}_{M}^{\pm}(m)^{2}\right)^{\frac{n}{2}}+o(1).\]
Using equation (2.14) and noting that \(M\) is an increasing function of \(x\), we conclude
\[\lim_{x\to\infty}\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}\sum_{(\mathfrak{p}_{1},\dots,\mathfrak{p}_{u})}^{(2)}\prod_{i=1}^{n/2}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{2}\ d\mu_{\nu}(\theta)=\frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}(\mu_{\infty}(I)-\mu_{\infty}(I)^{2})^{\frac{n}{2}}. \tag{5.10}\]
**Case 2:** If \((r_{1},\ldots,r_{u})\) has \(\ell\) parts equal to \(1\) with \(1\leq\ell\leq n\). Then
\[n=r_{1}+\cdots+r_{u}\geq\ell+2(u-\ell).\]
Therefore in this case,
\[u-\ell\leq\frac{n-\ell}{2}\leq\frac{n-1}{2}.\]
Using equations (5.8) and (5.9),
\[\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\frac{n!}{r_{1}!\cdots r_{u}!}\frac{1}{u}\sum_{({\mathfrak{p}}_{1},\ldots,{\mathfrak{p}}_{u})}^{(2)}\prod_{i=1}^{u}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r_{i}}\ d\mu_{\nu}(\theta)\] \[\ll\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}(\log\log x)^{\ell}\pi_{L}(x)^{(u-\ell)}\] \[\ll\pi_{L}(x)^{-\frac{1}{2}}(\log\log x)^{\ell}.\]
**Case 3:** The remaining case is where \((r_{1},\ldots,r_{u})\) has all parts \(r_{i}\geq 2\) and at least one part greater than or equal to \(3\). In this case, it is easy to see that \(u\leq\frac{n}{2}-1\). So we have
\[\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\frac{n!}{r_{1}!\cdots r_{u}!}\frac{1}{u}\sum_{({\mathfrak{p}}_{1},\ldots,{\mathfrak{p}}_{u})}^{(2)}\prod_{i=1}^{u}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r_{i}}\ d\mu_{\nu}(\theta)\] \[\ll\pi_{L}(x)^{-1},\]
using (5.9) for each part. Therefore, for partitions \((r_{1},\ldots,r_{u})\) described in Case 2 and Case 3,
\[\lim_{x\to\infty}\frac{1}{\pi_{L}(x)^{\frac{n}{2}}}\frac{n!}{r_{1}!\cdots r_{u}!}\frac{1}{u}\sum_{({\mathfrak{p}}_{1},\ldots,{\mathfrak{p}}_{u})}^{(2)}\prod_{i=1}^{u}\int_{0}^{\pi}Z_{M}^{\pm}(\theta)^{r_{i}}\ d\mu_{\nu}(\theta)=0. \tag{5.11}\]
Gathering equations (5.10), (5.11) and choosing \(M=\lfloor\sqrt{\pi_{L}(x)}\log\log x\rfloor\), we have proved
\[\lim_{x\to\infty}\left\langle\left(\frac{F^{\pm}(M,\pi)(x)}{\sqrt{\pi_{L}(x)}}\right)^{n}\right\rangle=\begin{cases}0&\text{ if $n$ is odd},\\ \frac{n!}{2^{\frac{n}{2}}\left(\frac{n}{2}\right)!}\left(\mu_{\infty}(I)-\mu_{\infty}(I)^{2}\right)^{\frac{n}{2}}&\text{ if $n$ is even}.\end{cases}\]
This completes the proof.
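The limiting values above are precisely the moments of a centred Gaussian with variance \(\mu_{\infty}(I)-\mu_{\infty}(I)^{2}\). A short Monte Carlo check (illustrative only; the variance below is an arbitrary stand-in for \(\mu_{\infty}(I)-\mu_{\infty}(I)^{2}\)) confirms the closed form \(n!/(2^{n/2}(n/2)!)\,\sigma^{n}\):

```python
import math
import numpy as np

sigma2 = 0.21                      # stand-in for mu_inf(I) - mu_inf(I)^2 (arbitrary test value)
rng = np.random.default_rng(0)
samples = rng.normal(0.0, math.sqrt(sigma2), size=2_000_000)

for n in range(1, 7):
    exact = 0.0 if n % 2 else math.factorial(n) / (2 ** (n // 2) * math.factorial(n // 2)) * sigma2 ** (n // 2)
    print(n, round(exact, 5), round(float(np.mean(samples ** n)), 5))
# Odd moments are ~0 and even moments match n!/(2^(n/2) (n/2)!) sigma^n, as in the theorem.
```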
## 6. Improvements
We can obtain an improvement on the growth conditions on the weight vector \(\underline{k}=\underline{k}(x)\) in Theorem 4.1, if the indicator function \(\chi_{I}\) were to be replaced by a smooth test function, adapting the line of proof in [1, Theorem 1.6]. As in Theorem 1.1, \(\underline{k}(x)\) is a vector \((k_{1}(x),\ldots,k_{d}(x))\) where the components run over even integers \(\geq 4\). More precisely, the following holds.
**Theorem 6.1**.: _Let \(\Phi\in C^{\infty}(\mathbb{R})\) be a real-valued, even function in the Schwartz class and \(\widehat{\Phi}\) its Fourier transform. Fix a real number \(M\geq 1\) and define_
\[\phi_{M}(t)=\sum_{m\in\mathbb{Z}}\Phi(M(t+m))\text{ and }V_{\Phi,M}=\int_{0}^{1} \phi_{M}(t)^{2}\mu_{\infty}(t)dt-\left(\int_{0}^{1}\phi_{M}(t)\mu_{\infty}(t) dt\right)^{2}.\]
_For \(\pi\in\Pi_{\underline{k}}({\mathfrak{n}}),\) define_
\[N_{\Phi,M,\pi}(x)=\sum_{\begin{subarray}{c}\mathrm{N}({\mathfrak{p}})\leq x\\ {\mathfrak{p}}\nmid{\mathfrak{n}}\end{subarray}}\phi_{M}(\theta_{\pi}({\mathfrak{p}})).\]
**(a)**: _Suppose_ \(\widehat{\Phi}\) _is compactly supported and_ \(\underline{k}=\underline{k}(x)\) _satisfies_ \(\frac{\sum_{i=1}^{d}\log k_{i}}{\log x}\to\infty\) _as_ \(x\to\infty.\) _Then, for any integer_ \(r\geq 0,\) (6.1) \[\lim_{x\to\infty}\frac{1}{|\Pi_{\underline{k}}({\mathfrak{n}})|}\sum_{\pi\in\Pi_{\underline{k}}({\mathfrak{n}})}\left(\frac{N_{\Phi,M,\pi}(x)-\pi_{L}(x)\int_{0}^{1}\phi_{M}(t)\mu_{\infty}(t)dt}{\sqrt{\pi_{L}(x)V_{\Phi,M}}}\right)^{r}=\begin{cases}0&\text{ if $r$ is odd}\\ \frac{r!}{\left(\frac{r}{2}\right)!\,2^{r/2}}&\text{ if $r$ is even}.\end{cases}\]
**(b)**: _For fixed_ \(\lambda,\omega>0\)_, suppose the Fourier transform_ \(\widehat{\Phi}\) _satisfies_ \(\widehat{\Phi}(t)\ll e^{-\lambda|t|^{\omega}}\)_, as_ \(|t|\to\infty\)_. Then, the asymptotic_ (6.1) _holds if_ \(\underline{k}=\underline{k}(x)\) _satisfies_ \(\frac{\sum_{i=1}^{d}\log k_{i}}{(\log x)^{1+1/\omega}}\to\infty\) _as_ \(x\to\infty\)_._
|
2303.01623
|
Elasticity of spheres with buckled surfaces
|
The buckling instabilities of core-shell systems, comprising an interior
elastic sphere, attached to an exterior shell, have been proposed to underlie
myriad biological morphologies. To fully discuss such systems, however, it is
important to properly understand the elasticity of the spherical core. Here, by
exploiting well-known properties of the solid harmonics, we present a simple,
direct method for solving the linear elastic problem of spheres and spherical
voids with surface deformations, described by a real spherical harmonic. We
calculate the corresponding bulk elastic energies, providing closed-form
expressions for any values of the spherical harmonic degree (l), Poisson ratio,
and shear modulus. We find that the elastic energies are independent of the
spherical harmonic index (m). Using these results, we revisit the buckling
instability experienced by a core-shell system comprising an elastic sphere,
attached within a membrane of fixed area, that occurs when the area of the
membrane sufficiently exceeds the area of the unstrained sphere [C. Fogle, A.
C. Rowat, A. J. Levine and J. Rudnick, Phys. Rev. E 88, 052404 (2013)]. We
determine the phase diagram of the core-shell sphere's shape, specifying what
value of l is realized as a function of the area mismatch and the core-shell
elasticity. We also determine the shape phase diagram for a spherical void
bounded by a fixed-area membrane.
|
Yingzhen Tian, Megan McCarthy, Megan King, S. G. J. Mochrie
|
2023-03-02T23:13:49Z
|
http://arxiv.org/abs/2303.01623v1
|
# Elasticity of spheres with buckled surfaces
###### Abstract
The buckling instabilities of core-shell systems, comprising an interior elastic sphere, attached to an exterior shell, have been proposed to underlie myriad biological morphologies. To fully discuss such systems, however, it is important to properly understand the elasticity of the spherical core. Here, by exploiting well-known properties of the solid harmonics, we present a simple, direct method for solving the linear elastic problem of spheres and spherical voids with surface deformations, described by a real spherical harmonic. We calculate the corresponding bulk elastic energies, providing closed-form expressions for any values of the spherical harmonic degree (\(l\)), Poisson ratio, and shear modulus. We find that the elastic energies are independent of the spherical harmonic index (\(m\)). Using these results, we revisit the buckling instability experienced by a core-shell system comprising an elastic sphere, attached within a membrane of fixed area, that occurs when the area of the membrane sufficiently exceeds the area of the unstrained sphere [C. Fogle, A. C. Rowat, A. J. Levine and J. Rudnick, _Phys. Rev. E_**88**, 052404 (2013)]. We determine the phase diagram of the core-shell sphere's shape, specifying what value of \(l\) is realized as a function of the area mismatch and the core-shell elasticity. We also determine the shape phase diagram for a spherical void bounded by a fixed-area membrane.
## I Introduction
There has been longstanding interest in the mechanical instabilities of core-shell systems, comprising an elastic sphere on the inside, surrounded by and attached to an elastic exterior shell. Although idealized, such a model has been proposed to underlie myriad buckled or wrinkled biological morphologies, such as those of fruits and vegetables [1; 2; 3], insect eggs [4], pollen grains [5; 6], neutrophils and B cells [7; 8; 9], mammalian brains [10; 11], and growing tumors [12]. In addition to these biological examples, swelling gels often show similar mechanical instabilities [13; 14; 15; 16], as do inorganic core-shell systems [17; 18].
To fully discuss spherical core-shell systems, it is important to properly understand the elasticity of the spherical core. For an isotropic material with Poisson ratio, \(\nu\), in mechanical equilibrium, according to linear elasticity theory, the elastic displacement field, \(\mathbf{u}\), must satisfy
\[\nabla(\nabla\cdot\mathbf{u})+(1-2\nu)\nabla^{2}\mathbf{u}=0, \tag{1}\]
which is the statement that the force density is zero everywhere within the material of the spherical core. Eq. 1 plays an analogous role in elasticity theory to that played in electrostatics by Laplace's equation, whose solutions are well-known to be the regular and irregular solid harmonics, namely \(r^{l}Y_{l}^{m}(\theta,\phi)\) and \(r^{-l-1}Y_{l}^{m}(\theta,\phi)\), respectively. From this point of view, it is surprising that analytic solutions of Eq. 1 in near spherical situations have been little discussed. The corresponding elastic energies of these solutions also remain unknown, as far as we are aware. Ref. [19] sought to remedy this situation, by, first, solving Eq. 1 for an elastic sphere subject to the boundary condition that the sphere's surface is displaced radially with an amplitude given by a real spherical harmonic, and, then, by calculating the corresponding elastic energies. However, as described below, we disagree with Ref. [19]'s result that the elastic energy depends on the spherical harmonic index, \(m\).
The goal of this paper is threefold: (1) to find the displacement field both within a sphere, with a real-spherical-harmonic surface displacement, and outside a spherical void, with a real-spherical-harmonic surface displacement; (2) to calculate corresponding bulk elastic energies; and (3) to use the resultant elastic energy to determine the shape phase diagram both of a core-shell system, comprising an elastic sphere, attached within a membrane of fixed area [19], and of a spherical void, which is lined by a membrane of fixed area, that is attached to the surrounding elastic medium. A number of recent contributions have focused on post-buckling pattern selection in core-shell systems, which depends on non-linear effects [3; 6; 10; 20; 21; 22; 23; 20]. However, such phenomena lie beyond our scope, which is confined to linear elasticity only.
The outline of the paper is as follows. By exploiting well-known properties of the solid harmonics, we first present a straightforward, direct method for solving Eq. 1 in general, near-spherical situations, both for spheres (Sec. II) and spherical voids (Sec. III). Then, we fit the general solutions to boundary conditions corresponding to a spherical core (Sec. IV) or a spherical void (Sec. V), whose surface is displaced radially with an amplitude given by a real spherical harmonic. In Sec. VI, we calculate the bulk elastic energies corresponding to these boundary conditions. We provide analytic expressions for the energies for any value of the spherical harmonic
degree, \(l\), Poisson ratio, \(\nu\), and shear modulus, \(\mu\). The elastic energies are independent of the spherical harmonic index, \(m\). In Sec. VII, following Ref. [19], we revisit the buckling instability experienced by a core-shell system comprising an elastic sphere, attached within a membrane of fixed area, that occurs when the area of the membrane sufficiently exceeds the area of the unstrained sphere. We determine the phase diagram of the core-shell sphere's shape, specifying what value of \(l\) is realized as a function of area mismatch and sphere and membrane elasticity. Similarly, we also determine the analogous shape phase diagram for a spherical void bounded by a fixed-area membrane. A Mathematica notebook containing all of our calculations is available at Github [24].
## II Regular solution for spheres
To find solutions to Eq. 1, applicable to (slightly deformed) spheres, we first introduce two trial functions, that when summed together with appropriate relative weighting, indeed satisfy Eq. 1. To this solution, we then add an additional trial function that satisfies Eq. 1 on its own, yielding a final result, that can be conveniently matched to the applicable boundary conditions.
Trial function 1 takes the form
\[\mathbf{u_{1}}=ar^{2}\nabla(r^{l}Y_{l}^{m}), \tag{2}\]
where \(a\) is a constant. Eq. 2 converges at \(r=0\), and eventually will be part of the so-called regular solution. It follows from Eq. 2 that,
\[\begin{split}\nabla\cdot\mathbf{u_{1}}&=a(\nabla r ^{2})\cdot\nabla(r^{l}Y_{l}^{m})+ar^{2}\nabla^{2}(r^{l}Y_{l}^{m})\\ &=2lar^{l}Y_{l}^{m},\end{split} \tag{3}\]
and, in turn, that
\[\nabla(\nabla\cdot\mathbf{u_{1}})=2la\nabla(r^{l}Y_{l}^{m}). \tag{4}\]
We also have that
\[\nabla^{2}\mathbf{u_{1}}=2(2l+1)a\nabla(r^{l}Y_{l}^{m}). \tag{5}\]
Combining Eq. 4 and Eq. 5 yields
\[\nabla(\nabla\cdot\mathbf{u_{1}})+(1-2\nu)\nabla^{2}\mathbf{u_{1}}=(2l+2(1-2 \nu)(2l+1))a\nabla(r^{l}Y_{l}^{m}) \tag{6}\]
Thus, Eq. 1 produces a non-zero result for \(\mathbf{u_{1}}\), and another trial function is needed to cancel \(\mathbf{u_{1}}\) in order to satisfy Eq. 1.
To this end, we introduce trial function 2:
\[\mathbf{u_{2}}=\mathbf{b}r^{l+1}Y_{l+1}^{m}, \tag{7}\]
where \(\mathbf{b}=(b_{x},b_{y},b_{z})\) is a constant vector. Then,
\[\begin{split}\nabla\cdot\mathbf{u_{2}}&=\mathbf{b }\cdot\nabla(r^{l+1}Y_{l+1}^{m})\\ &=\alpha r^{l}Y_{l}^{m+1}+\beta r^{l}Y_{l}^{m}+\gamma r^{l}Y_{l}^ {m-1}\end{split} \tag{8}\]
where \(\alpha\), \(\beta\) and \(\gamma\) are all known quantities, given explicitly in the Appendix (Eq. A4, Eq. A5, and Eq. A6, respectively). Since \(\nabla^{2}\mathbf{u_{2}}=0\), we have that
\[\nabla(\nabla\cdot\mathbf{u_{2}})+(1-2\nu)\nabla^{2}\mathbf{u_{2}}=\nabla( \nabla\cdot\mathbf{u_{2}})=\alpha\nabla(r^{l}Y_{l}^{m+1})+\beta\nabla(r^{l}Y_ {l}^{m})+\gamma\nabla(r^{l}Y_{l}^{m-1}). \tag{9}\]
The terms on the right-hand side of Eq. 9 are of the same form as the right-hand side of Eq. 6, except for the appearance of additional terms with spherical harmonic indices equal to \(m\pm 1\). However, we can use a modified version of \(\mathbf{u_{1}}\), augmented to cancel all three terms arising from \(\mathbf{u_{2}}\). Because \(\nabla(r^{l}Y_{l}^{m})\) satisfies Eq. 1 on its own, we can also add additional terms of this form to \(\mathbf{u_{1}}\), with a view to the solution for a surface displacement given by a single spherical harmonic. Specifically, we can pick
\[\begin{split}\mathbf{u_{1}^{\prime}}=& a_{1}(r^{2}-R^{2}) \nabla(r^{l}Y_{l}^{m+1})\\ &+a_{0}(r^{2}-R^{2})\nabla(r^{l}Y_{l}^{m})\\ &+a_{-1}(r^{2}-R^{2})\nabla(r^{l}Y_{l}^{m-1}),\end{split} \tag{10}\]
where \(R\) is the radius of the undeformed sphere,
\[a_{1}=\frac{-\alpha}{2l+2(1-2\nu)(2l+1)}, \tag{11}\]
\[a_{0}=\frac{-\beta}{2l+2(1-2\nu)(2l+1)}, \tag{12}\]
and
\[a_{-1}=\frac{-\gamma}{2l+2(1-2\nu)(2l+1)}. \tag{13}\]
By construction, Eq. 1 is now satisfied by
\[\begin{split}\mathbf{u}_{lm}=&\mathbf{u}_{1}^{\prime}+ \mathbf{u}_{2}\\ =& a_{1}(r^{2}-R^{2})\nabla(r^{l}Y_{l}^{m+1})\\ &+a_{0}(r^{2}-R^{2})\nabla(r^{l}Y_{l}^{m})\\ &+a_{-1}(r^{2}-R^{2})\nabla(r^{l}Y_{l}^{m-1})\\ &+(b_{x},b_{y},b_{z})r^{l+1}Y_{l+1}^{m}.\end{split} \tag{14}\]
Using the expressions for \(\alpha\), \(\beta\), and \(\gamma\), given in the Appendix, we have
\[a_{1}=\frac{(b_{x}-ib_{y})\sqrt{\frac{(2l+3)(l-m+1)!}{(l+m+1)!}}}{2(l(8\nu-6)+ 4\nu-2)\sqrt{\frac{(2l+1)(l-m-1)!}{(l+m+1)!}}}, \tag{15}\]
\[a_{0}=\frac{b_{z}(l+m+1)\sqrt{\frac{(2l+3)(l-m+1)!}{(l+m+1)!}}}{(l(8\nu-6)+4 \nu-2)\sqrt{\frac{(2l+1)(l-m)!}{(l+m)!}}}, \tag{16}\]
and
\[a_{-1}=-\frac{(b_{x}+ib_{y})(l+m)(l+m+1)\sqrt{\frac{(2l+3)(l-m+1)!}{(l+m+1)!}} }{2(l(8\nu-6)+4\nu-2)\sqrt{\frac{(2l+1)(l-m+1)!}{(l+m-1)!}}}. \tag{17}\]
While \(\mathbf{u}_{1}^{\prime}\) involves spherical harmonics of degree \(l-1\), by contrast, \(\mathbf{u}_{2}\) involves spherical harmonics of degree \(l+1\). Thus, solutions to Eq. 1 necessarily involve at least one pair of values of \(l\) that differ by 2. We also see that solutions to Eq. 1 naturally involve three consecutive values of \(m\).
For \(r=R\), we see that only \(\mathbf{u_{2}}\) survives. Because \(\mathbf{u_{2}}\) involves a single spherical harmonic, this approach facilitates matching surface displacements, that are given by a spherical harmonic or a sum of spherical harmonics.
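The identities underlying this construction (Eqs. 3 and 5) are easy to verify symbolically for any particular solid harmonic. The sketch below is illustrative only; it uses sympy and the degree-3 harmonic polynomial \(S=xyz\) (a real combination of the \(r^{3}Y_{3}^{m}\)) to check both identities with \(l=3\):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
r2 = x**2 + y**2 + z**2
S = x*y*z        # harmonic and homogeneous of degree l = 3 (a combination of r^3 Y_3^m)
l = 3

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in coords])
div = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate(coords))
lap = lambda f: div(grad(f))

assert sp.simplify(lap(S)) == 0                         # S solves Laplace's equation
u1 = r2 * grad(S)                                       # trial function of Eq. 2 with a = 1
assert sp.simplify(div(u1) - 2*l*S) == 0                # Eq. 3: div(u1) = 2 l S
residual = (u1.applyfunc(lap) - 2*(2*l + 1)*grad(S)).applyfunc(sp.simplify)
assert residual == sp.zeros(3, 1)                       # Eq. 5: laplacian(u1) = 2(2l+1) grad(S)
print("Eqs. 3 and 5 verified for S = x*y*z (l = 3)")
```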
## III Irregular solution for spherical voids
Using an analogous procedure to that followed in Sec. II, we can also find a solution that remains finite as \(r\rightarrow\infty\), namely the irregular solution, which is applicable within elastic material surrounding a spherical void. To this end, we again introduce two trial functions,
\[\begin{split}\mathbf{v_{1}}=& a_{1}(r^{2}-R^{2})\nabla(r^{-l-1}Y_{l}^{m+1})\\ &+a_{0}(r^{2}-R^{2})\nabla(r^{-l-1}Y_{l}^{m})\\ &+a_{-1}(r^{2}-R^{2})\nabla(r^{-l-1}Y_{l}^{m-1}),\end{split} \tag{18}\]
and
\[\begin{split}\mathbf{v_{2}}&=\mathbf{b}r^{-l}Y_{l-1 }^{m}\\ &=(b_{x},b_{y},b_{z})r^{-l}Y_{l-1}^{m}.\end{split} \tag{19}\]
To ensure that \(\mathbf{v}_{1}+\mathbf{v}_{2}\) is a solution to Eq. 1, we must pick
\[a_{0} =\frac{b_{z}\sqrt{\frac{-1+2l}{3+2l}}\sqrt{l^{2}-m^{2}}}{2\sqrt{ \frac{1+2l}{3+2l}}(-2-3l+2\nu+4l\nu)}, \tag{20}\] \[a_{1} =-\frac{(b_{x}-ib_{y})\sqrt{\frac{-1+2l}{3+2l}}\sqrt{(l+m)(1+l+m) }}{4\sqrt{\frac{1+2l}{3+2l}}(-2-3l+2\nu+4l\nu)},\] (21) \[a_{-1} =\frac{(b_{x}+ib_{y})\sqrt{\frac{1+2l}{3+2l}}\sqrt{l+l^{2}-2lm+( -1+m)m}}{4\sqrt{\frac{1+2l}{3+2l}}(-2-3l+2\nu+4l\nu)}, \tag{22}\]
so that the contributions of the two trial functions to Eq. 1 cancel.
The irregular solution is
\[\begin{split}\mathbf{v}_{lm}=&\mathbf{v_{1}}+ \mathbf{v_{2}}\\ =& a_{1}(r^{2}-R^{2})\nabla(r^{-l-1}Y_{l}^{m+1})\\ &+a_{0}(r^{2}-R^{2})\nabla(r^{-l-1}Y_{l}^{m})\\ &+a_{-1}(r^{2}-R^{2})\nabla(r^{-l-1}Y_{l}^{m-1})\\ &+(b_{x},b_{y},b_{z})r^{-l}Y_{l-1}^{m}.\end{split} \tag{23}\]
Two values of \(l\) are involved in the irregular solution too, and only the coefficients in \(\mathbf{v_{2}}\) need be considered to fit boundary conditions at \(r=R\).
## IV Sphere with a spherical harmonic shape deformation
Next, we consider a (slightly deformed) sphere, whose shape deviates from a perfect sphere by a single, real spherical harmonic, \(Y_{lm}\), defined as
\[Y_{lm}=\frac{1}{\sqrt{2}}[Y_{l}^{m}+(-1)^{m}Y_{l}^{-m}] \tag{24}\]
for \(m>0\) and as \(Y_{l0}=Y_{l}^{0}\) for \(m=0\). The amplitude of the displacement of the elastic medium immediately behind the surface is proportional to the surface displacement. We furthermore suppose that this displacement is directed along the radial direction. Thus, the relevant boundary condition is that the displacement at the surface is
\[\mathbf{u}(R)=gRY_{lm}\hat{\mathbf{r}}, \tag{25}\]
where \(g\) is a dimensionless measure of the amplitude of the surface displacement. The radial unit vector, \(\hat{\mathbf{r}}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), may be expressed in terms of \(Y_{1}^{1}\), \(Y_{1}^{0}\) and \(Y_{1}^{-1}\):
Figure 1: Plot of the shape and displacement field within the \(xy\)-plane (left) and the \(xz\)-plane (right) of a sphere with a surface deformation given by a spherical harmonic with degree \(l=11\) and index \(m=11\) and \(\nu=0.3\). The direction of the arrows represents the direction of the elastic displacements within the sphere. The color of the arrows represents the magnitude of these displacements. The buckled shape, represented by the red curve, has a spherical harmonic amplitude of \(g=0.20\), corresponding to the excess area at the transition from the isotropically-expanded phase to the buckled phase (Sec. VII). The smaller blue circle represents the undeformed sphere. The larger green circle represents the isotropically-expanded sphere with the same surface area as the buckled shape.
Figure 2: Plot of the shape and displacement field within the \(xy\)-plane (left) and the \(xz\)-plane (right) of a spherical void with a surface deformation given by a spherical harmonic with degree \(l=12\) and index \(m=12\) and \(\nu=0.3\). The direction of the arrows represents the direction of the elastic displacements within the sphere. The color of the arrows represents the magnitude of these displacements. The buckled shape, represented by the red curve, has a spherical harmonic amplitude of \(g=0.25\), corresponding to the excess area at the transition from the isotropically-expanded phase to the buckled phase (Sec. VII). The smaller blue circle represents the undeformed sphere. The larger green circle represents the isotropically-expanded sphere with the same surface area as the buckled shape.
\[\hat{\mathbf{r}}=\left(-\sqrt{\frac{2\pi}{3}}\left(Y_{1}^{1}(\theta,\phi)-Y_{1}^{-1}(\theta,\phi)\right),\ i\sqrt{\frac{2\pi}{3}}\left(Y_{1}^{-1}(\theta,\phi)+Y_{1}^{1}(\theta,\phi)\right),\ 2\sqrt{\frac{\pi}{3}}Y_{1}^{0}(\theta,\phi)\right), \tag{26}\]
implying that Eq. 25 consists of pairwise products of spherical harmonics. It is well-known, however, that pairwise products of spherical harmonics may be expressed as a linear combination of spherical harmonics with degrees, indices, and weights specified by the Wigner 3-j symbols. Thus, we find that Eq. 25 contains spherical harmonics with degree \(l\pm 1\), and indices \(m\pm 1\) for the \(x\)- and \(y\)-components, and index \(m\) for the \(z\)-component, and the complex conjugates of these terms, which is a total of twelve spherical harmonics, each with a different combination of \(l\) and \(m\) than the others (Table 1). This form of Eq. 25 is given in the accompanying Mathematica notebook. To satisfy these boundary conditions, we must select a solution that is a superposition of twelve \(\mathbf{u}_{lm}\)'s containing the values of \(l\) and \(m\) needed, and we must set the components of \(\mathbf{b}\) for each \(\mathbf{u}_{lm}\) in the superposition equal to the coefficient of the corresponding spherical harmonic in Eq. 25. Thus, we find the following solution for spheres:
\[u_{x}=R\sum_{m^{\prime}=m\pm 1}(A_{m^{\prime}}(r^{2}-R^{2})r^{l-1}Y_{l-1}^{m^{ \prime}}+B_{m^{\prime}}r^{l+1}Y_{l+1}^{m^{\prime}}+C_{m^{\prime}}r^{l-1}Y_{l- 1}^{m^{\prime}})+\text{c.c.} \tag{27}\]
\[u_{y}=R\sum_{m^{\prime}=m\pm 1}(D_{m^{\prime}}(r^{2}-R^{2})r^{l-1}Y_{l-1}^{m^{ \prime}}+E_{m^{\prime}}r^{l+1}Y_{l+1}^{m^{\prime}}+F_{m^{\prime}}r^{l-1}Y_{l- 1}^{m^{\prime}})+\text{c.c.} \tag{28}\]
\[u_{z}=R\left(G(r^{2}-R^{2})r^{l-1}Y_{l-1}^{m}+Hr^{l+1}Y_{l+1}^{m}+Ir^{l-1}Y_{l- 1}^{m}\right)+\text{c.c.} \tag{29}\]
where the coefficients (\(A_{m^{\prime}}\), \(B_{m^{\prime}}\), _etc._) are all known functions of \(l,\ m,\ R,\ g\) and \(\nu\), and are given in Appendix C. It turns out that the coefficients vanish for all terms of the form \((r^{2}-R^{2})Y_{l-3}^{m^{\prime}}\), that would otherwise appear in Eqs. 27, 28, and 29.
The displacement field (\(\mathbf{u}\)) for a sphere with a spherical harmonic surface deformation with \(l=11\) and \(m=11\) is illustrated in Fig. 1 for \(g=0.22\) and \(\nu=0.3\). This representation shows how the interior of the original sphere (blue, smaller circle) is deformed to the buckled shape (red curve). The larger green circle has the same surface area as the buckled shape, and is included for reference.
## V Spherical voids with a spherical harmonic shape deformation
Similarly to Sec. IV, our solution for spherical voids is:
\[v_{x}=R\sum_{m^{\prime}=m\pm 1}(J_{m^{\prime}}(r^{2}-R^{2})r^{-2-l}Y_{l+1}^{m^{ \prime}}+K_{m^{\prime}}r^{-l}Y_{l-1}^{m^{\prime}}+L_{m^{\prime}}r^{-2-l}Y_{l+ 1}^{m^{\prime}})+\text{c.c.} \tag{30}\]
\[v_{y}=R\sum_{m^{\prime}=m\pm 1}(M_{m^{\prime}}(r^{2}-R^{2})r^{-2-l}Y_{l+1}^{m^{ \prime}}+N_{m^{\prime}}r^{-l}Y_{l-1}^{m^{\prime}}+O_{m^{\prime}}r^{-2-l}Y_{l+ 1}^{m^{\prime}})+\text{c.c.} \tag{31}\]
\[v_{z}=R\left(P(r^{2}-R^{2})r^{-2-l}Y_{l+1}^{m}+Qr^{-l}Y_{l-1}^{m}+Sr^{-2-l}Y_{ l+1}^{m}\right)+\text{c.c.} \tag{32}\]
where the coefficients here are given in Appendix D. The displacement field for a spherical void (\(\mathbf{v}\)) with a spherical harmonic surface deformation with \(l=12\) and \(m=12\) is similarly plotted in Fig. 2 for \(\nu=0.3\) and \(g=0.25\).
## VI Bulk elastic energies
Elasticity theory informs us that the elastic energy density, \(w\), can be directly calculated from the derivatives of the displacement \(\mathbf{u}\), namely the strains,
\(\frac{1}{2}(\partial_{i}u_{j}+\partial_{j}u_{i})\):
\[\begin{split} w=&\mu[\frac{1-\nu}{1-2\nu}(\epsilon_{xx}^{2}+\epsilon_{yy}^{2}+\epsilon_{zz}^{2})\\ &+\frac{2\nu}{1-2\nu}(\epsilon_{xx}\epsilon_{yy}+\epsilon_{yy}\epsilon_{zz}+\epsilon_{zz}\epsilon_{xx})\\ &+2(\epsilon_{xy}^{2}+\epsilon_{yz}^{2}+\epsilon_{zx}^{2})].\end{split} \tag{33}\]
Then, to find the total bulk energy, \(W\), we must integrate the energy density over the volume of the sphere (or over the volume outside the spherical void).
Using Eqs. 27, 28, and 29, in conjunction with Eqs. A4, A5, and A6 from Appendix A, we can calculate each strain component with the result that each strain component comprises a sum of up to twenty spherical harmonics:
\[\epsilon_{ij}=\sum_{\begin{subarray}{c}l^{\prime}=l-2,l\\ m^{\prime}=\pm m,\pm m\pm 1,\pm m\pm 2\end{subarray}}d_{l^{\prime},m^{ \prime}}Y_{l^{\prime}}^{m^{\prime}} \tag{34}\]
where \(d_{l^{\prime},m^{\prime}}\) are the coefficients of \(Y_{l^{\prime}}^{m^{\prime}}\) in \(\epsilon_{ij}\) and depend on cartesian coordinates \(i\) and \(j\) (Table 1).
The spherical harmonics are orthogonal and normalized, that is,
\[\int Y_{l}^{m}(Y_{l^{\prime}}^{m^{\prime}})^{*}d\Omega=\delta_{ll^{\prime}} \delta_{mm^{\prime}} \tag{35}\]
where \((Y_{l}^{m})^{*}\) is the complex conjugate of \(Y_{l}^{m}\). Since \((Y_{l}^{m})^{*}=(-1)^{m}Y_{l}^{-m}\), it follows that
\[\int Y_{l}^{m}Y_{l^{\prime}}^{-m^{\prime}}d\Omega=(-1)^{m}\delta_{ll^{\prime}} \delta_{mm^{\prime}}. \tag{36}\]
We can use this result to facilitate integration of the energy density over angles by first representing \(\epsilon_{ij}\) as two vectors, each of 10 components, one corresponding to spherical harmonics of degree \(l\) and the other corresponding to spherical harmonics of degree \(l-2\) (\(l\) and \(l+2\) for irregular solution):
\[\mathbf{d}_{\epsilon_{ij}}=(d_{l^{\prime},m+2},d_{l^{\prime},m+1},d_{l^{\prime },m},d_{l^{\prime},m-1},d_{l^{\prime},m-2},d_{l^{\prime},-m+2},d_{l^{\prime},-m +1},d_{l^{\prime},-m},d_{l^{\prime},-m-1},d_{l^{\prime},-m-2}) \tag{37}\]
where \(l^{\prime}=l-2,\ l\). Next, for each of \(l\) and \(l-2\), we construct a \(10\times 10\) matrix, whose entries derive from the left hand side of Eq. 36:
\[M=\left(\begin{array}{cccccccccc}\delta_{-2,m}&0&-\delta_{-1,m}&0&\delta_{0,m}&0&0&0&0&(-1)^{m}\\ 0&\delta_{-1,m}&0&-\delta_{0,m}&0&0&0&0&(-1)^{m+1}&0\\ -\delta_{-1,m}&0&\delta_{0,m}&0&-\delta_{1,m}&0&0&(-1)^{m}&0&0\\ 0&-\delta_{0,m}&0&\delta_{1,m}&0&0&(-1)^{m+1}&0&0&0\\ \delta_{0,m}&0&-\delta_{1,m}&0&\delta_{2,m}&(-1)^{m}&0&0&0&0\\ 0&0&0&0&(-1)^{m}&\delta_{2,m}&0&-\delta_{1,m}&0&\delta_{0,m}\\ 0&0&0&(-1)^{m+1}&0&0&\delta_{1,m}&0&-\delta_{0,m}&0\\ 0&0&(-1)^{m}&0&0&-\delta_{1,m}&0&\delta_{0,m}&0&-\delta_{-1,m}\\ 0&(-1)^{m+1}&0&0&0&0&-\delta_{0,m}&0&\delta_{-1,m}&0\\ (-1)^{m}&0&0&0&0&\delta_{0,m}&0&-\delta_{-1,m}&0&\delta_{-2,m}\end{array}\right). \tag{38}\]
It then follows that the required integrals over angles now correspond to matrix multiplication:
\[\int\epsilon_{ij}\epsilon_{pq}d\Omega=\mathbf{d}_{\epsilon_{ij}}M\,\mathbf{d}_{\epsilon_{pq}}^{T}.\]
For general values of \(l\), the expression for the bulk elastic energy appears unwieldy (as can be seen from the Mathematica notebook). However, for any specific value of \(l\), the elastic energy reduces to a remarkably simple form. Examination of this energy for values of \(l\) from 1 to 25, using Mathematica's FindSequenceFunction, indicates that the bulk elastic energies are given for general \(l\) by
\[W=g^{2}\mu R^{3}\left(\frac{\left(2l^{2}-3l-1\right)\nu-\left(2l^{2}-l+1\right) }{2(2l+1)\nu-(3l+1)}\right) \tag{42}\]
for spheres, and
\[W=g^{2}\mu R^{3}\left(\frac{\left(4+7l+2l^{2}\right)\nu-\left(4+5l+2l^{2} \right)}{2(1+2l)\nu-(2+3l)}\right) \tag{43}\]
for spherical voids. Eq. 42 and Eq. 43 are key results of this paper.
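Eqs. 42 and 43 are simple enough to tabulate directly. The following Python helper is an illustrative transcription of those two closed forms (not part of the paper's Mathematica notebook), returning the bulk energies in units of \(g^{2}\mu R^{3}\):

```python
def W_sphere(l, nu):
    """Bulk elastic energy of a deformed sphere, Eq. 42, in units of g^2 * mu * R^3."""
    return ((2*l**2 - 3*l - 1)*nu - (2*l**2 - l + 1)) / (2*(2*l + 1)*nu - (3*l + 1))

def W_void(l, nu):
    """Bulk elastic energy outside a deformed spherical void, Eq. 43, in units of g^2 * mu * R^3."""
    return ((2*l**2 + 7*l + 4)*nu - (2*l**2 + 5*l + 4)) / (2*(2*l + 1)*nu - (3*l + 2))

if __name__ == "__main__":
    for l in (2, 5, 10, 20):
        print(l, round(W_sphere(l, 0.3), 4), round(W_void(l, 0.3), 4))
```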
Fig. 3 and Fig. 4 present the energy density, averaged over angles, within a shell at radius \(r\) for spheres and spherical voids, respectively. Inspection of Fig. 3 and Fig. 4 makes it clear that for increasing \(l\), most of the energy density, displacement and strain is confined to an increasingly narrow near-surface layer. In Fig. 3 for spheres, each curve displays a peak at a radius less than \(R\), which appears progressively closer to the surface for progressively larger \(l\) values. By contrast, in Fig. 4, the curves for spherical voids appear to decrease monotonically as \(r\) increases.
For boundary conditions, described by the sum of two spherical harmonics, \(Y_{lm}\) and \(Y_{l^{\prime}m^{\prime}}\), the solution for the displacement \(\mathbf{u}\) is the sum of the two solutions, satisfying boundary conditions described by \(Y_{lm}\) and \(Y_{l^{\prime}m^{\prime}}\) separately. This result is inevitable given that Eq. 1 is linear in \(\mathbf{u}\). We furthermore find that the corresponding bulk elastic energy is also additive, _i.e._ the energy for the two-spherical-harmonic boundary condition, \(Y_{lm}+Y_{l^{\prime}m^{\prime}}\), is the sum of the energy for boundary condition, \(Y_{lm}\), and the energy for boundary condition, \(Y_{l^{\prime}m^{\prime}}\). The reason is clear for cases in which \(l\) and \(l^{\prime}\) are far apart, because from Eq. 34, \(\mathbf{u}_{lm}\) and \(\mathbf{u}_{l^{\prime}m^{\prime}}\) then have no spherical harmonics in common. However, even in cases where \(l-l^{\prime}=2\), so that the same spherical harmonics may appear in both \(\mathbf{u}_{lm}\) and \(\mathbf{u}_{l^{\prime}m^{\prime}}\), we find that the energy is additive.
Finally, as the alternatives to a buckled sphere and a buckled spherical void, we consider the elastic energy of an isotropically expanded sphere and an isotropically expanded spherical void. In the case of an isotropically expanded sphere, the displacement is \(\mathbf{u}=g\mathbf{r}\) (which also satisfies Eq. 1), so that \(u_{x}=gx,u_{y}=gy,u_{z}=gz\), and \(\epsilon_{xx}=\epsilon_{yy}=\epsilon_{zz}=g\) while \(\epsilon_{ij}=0\) for \(i\neq j\). Substituting these results for the strains into Eq. 33, we find, for the energy density,
\[w=3\mu g^{2}\frac{1+\nu}{1-2\nu} \tag{44}\]
and, for the total elastic energy of an isotropically expanded sphere,
\[E_{\mathrm{isotropic}}=4\pi R^{3}\mu g^{2}\frac{1+\nu}{1-2\nu}. \tag{45}\]
In the case of an isotropically expanded spherical void, the displacement is \(\mathbf{u}=g\frac{R^{3}}{r^{2}}\mathbf{r}\). The corresponding energy density is
\[w=\frac{6\mu g^{2}R^{6}}{r^{6}} \tag{46}\]
and the corresponding total energy is
\[E_{\mathrm{isotropic}}=8\pi\mu g^{2}R^{3}. \tag{47}\]
## VII Core-shell system
In this section, we revisit the buckling instability that occurs in a spherical core-shell system, when the area mismatch between a stiff shell and a soft core exceeds a critical value, corresponding to the elastic energy of
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline & \(l\) (sphere) & \(m\) (sphere) & \(l\) (void) & \(m\) (void) \\ \hline Shape & \(l\) & \(\pm m\) & \(l\) & \(\pm m\) \\ \hline \(u_{x,y}(R),v_{x,y}(R)\) & \(l\pm 1\) & \(\pm m\pm 1\) & \(l\pm 1\) & \(\pm m\pm 1\) \\ \hline \(u_{z}(R),v_{z}(R)\) & \(l\pm 1\) & \(\pm m\) & \(l\pm 1\) & \(\pm m\) \\ \hline \(u_{x,y},v_{x,y}\) & \(l\pm 1\) & \(\pm m\pm 1\) & \(l\pm 1\) & \(\pm m\pm 1\) \\ \hline \(u_{z},v_{z}\) & \(l\pm 1\) & \(\pm m\) & \(l\pm 1\) & \(\pm m\) \\ \hline \(\epsilon_{xx,xy,yx,yy}\) & \(l,l-2\) & \(\pm m\pm 2,\pm m\) & \(l,l+2\) & \(\pm m\pm 2,\pm m\) \\ \hline \(\epsilon_{xz,zx,yz,zy}\) & \(l,l-2\) & \(\pm m\pm 1\) & \(l,l+2\) & \(\pm m\pm 1\) \\ \hline \(\epsilon_{zz}\) & \(l,l-2\) & \(\pm m\) & \(l,l+2\) & \(\pm m\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Spherical harmonics components of shape, displacement and strain
an isotropically expanded state exceeding the elastic energy of a buckled state. To generally treat a core-shell system, composed of two materials with different elastic properties, in addition to the regular solution applicable within the core, we would also need the solution to Eq. 1 within a spherical shell. The solution within a shell is the superposition of the regular and the irregular solutions, which must then together be matched to the appropriate boundary conditions at the inner radius, where the core and the shell meet, and at the outer radius of the shell. With these solutions in hand, we would then calculate the strains and elastic energies.
Instead of this route, we follow Ref. [19] and consider the limiting case that the shell can be described as a thin membrane of fixed area, \(A\), and bending stiffness, \(\kappa\). The surface energy is calculated by integrating the square of the mean curvature, \(H\), over the surface:
\[E_{\rm surface}=\frac{\kappa}{2}\oint_{r=R}H^{2}dS. \tag{48}\]
Then, when the shape of the membrane is described by a real spherical harmonic, \(Y_{lm}\), the \(l\)-dependent part of the membrane elastic energy is
\[E_{\rm surface}=\frac{\kappa}{8}g^{2}[l(l+2)(l^{2}-1)], \tag{49}\]
independent of \(m\)[19; 25].
In the context of a fixed-area membrane, the buckling instability is controlled by relative excess area, namely the difference between the area of the membrane and the area of the spherical core, normalized by the area of the core:
\[\Delta=\frac{A}{4\pi R^{2}}-1. \tag{50}\]
Therefore, we must relate the buckling amplitude, \(g\), to the relative excess area, \(\Delta\). For buckled shapes, described by real spherical harmonics, \(Y_{lm}\), Ref. [19] showed that
\[g=\sqrt{\Delta}(\frac{8\pi}{l(l+1)+2})^{1/2}. \tag{51}\]
In this case, the energies of both the core and the shell are proportional to \(g^{2}\). Therefore, the total energy of the core-shell system with shape \(Y_{lm}\) is proportional to \(\Delta\).
Combining Eq. 42, Eq. 49, and Eq. 51 and introducing \(\alpha\), given by
\[\alpha=\frac{\mu R^{3}}{\kappa} \tag{52}\]
we find that the total core-shell energy for spheres is
\[E_{\rm total}=\frac{8\kappa\Delta}{l^{2}+l+2}\pi\left(\alpha\frac{\left(2l^{2 }-3l-1\right)\nu-\left(2l^{2}-l+1\right)}{2(2l+1)\nu-(3l+1)}+\frac{1}{8}(l-1) l(l+1)(l+2)\right). \tag{53}\]
Similarly, the total energy for spherical voids, with a membrane surrounding the void, is
\[E_{\rm total}=\frac{8\kappa\Delta}{l^{2}+l+2}\pi\left(\alpha\frac{\left(4+7l+ 2l^{2}\right)\nu-\left(4+5l+2l^{2}\right)}{2(1+2l)\nu-(2+3l)}+\frac{1}{8}(l-1 )l(l+1)(l+2)\right). \tag{54}\]
Fig. 5 and Fig. 6 plot these energies for \(\nu=0.2\). In these plots, each line represents the energy associated with a particular value of \(l\). It is clear from these figures that the value of \(l\) corresponding to the lowest energy steps from one value to the next as \(\alpha\) increases. The \(l\) value of the lowest total energy state is plotted versus \(\alpha\) in Fig. 7 for spheres and in Fig. 8 for spherical voids. As \(\alpha\) increases, the minimum-energy \(l\) increases. We pick four values of Poisson's ratio \(\nu\) to illustrate the trend. Poisson's ratio is the material property describing the deformation of a material in directions perpendicular to the direction of loading, which lies between \(-1\) and \(\frac{1}{2}\) for a stable, isotropic, linear elastic material. A Poisson's ratio of \(\frac{1}{2}\) means that the material is incompressible.
To further make sense of Fig. 7, we consider the limit of large \(l\) and treat \(l\) as a continuous variable. Then, for spheres
\[E_{\rm total}\simeq 8\kappa\Delta\left(\frac{2\alpha(1-\nu)}{(3-4\nu)l}+\frac{l ^{2}}{8}\right), \tag{55}\]
and we can find the value of \(l\) that minimizes the total energy (\(l^{*}\)). The result is
\[l^{*}=2\left(\frac{\alpha(1-\nu)}{3-4\nu}\right)^{\frac{1}{3}} \tag{56}\]
The value of \(l^{*}\) varies as \(\alpha^{\frac{1}{3}}\), consistent with the behavior apparent in Fig. 7. The elastic energy corresponding to \(l^{*}\) is
\[E_{\rm total}^{*}=12\kappa\Delta\left(\frac{\alpha(1-\nu)}{3-4\nu}\right)^{\frac{2}{3}}, \tag{57}\]
reminiscent of the minimum energy envelope in Fig. 5.
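In practice, the preferred mode can be found by scanning integer values of \(l\) in Eq. 53 and comparing with the continuum estimate of Eq. 56. The following sketch is illustrative only; the values of \(\alpha\) and \(\nu\) are arbitrary choices:

```python
import math

def E_total_sphere(l, alpha, nu):
    """Total core-shell energy of Eq. 53, in units of kappa * Delta."""
    bulk = alpha * ((2*l**2 - 3*l - 1)*nu - (2*l**2 - l + 1)) / (2*(2*l + 1)*nu - (3*l + 1))
    bending = (l - 1) * l * (l + 1) * (l + 2) / 8
    return 8 * math.pi / (l**2 + l + 2) * (bulk + bending)

def l_star(alpha, nu):
    """Continuum (large-l) estimate of the preferred mode, Eq. 56."""
    return 2 * (alpha * (1 - nu) / (3 - 4*nu)) ** (1/3)

alpha, nu = 600.0, 0.2
best_l = min(range(2, 200), key=lambda l: E_total_sphere(l, alpha, nu))
print("integer minimiser:", best_l, "   asymptotic l*:", round(l_star(alpha, nu), 2))
```

Because the common prefactor of Eq. 53 does not depend on which \(l\) wins, the location of the minimum is insensitive to it, and the integer minimiser tracks the \(\alpha^{1/3}\) scaling of Eq. 56.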
For the isotropically expanded state, \(g=\Delta/2\) to linear order. Therefore, in contrast to the linear-in-\(\Delta\) buckled state energy, the energy of an isotropically expanded sphere is,
\[E_{\rm isotropic}=\pi R^{3}\mu\Delta^{2}\frac{1+\nu}{1-2\nu}, \tag{58}\]
proportional to \(\Delta^{2}\). Thus, for small \(\Delta\), the isotropic state inevitably has a lower energy than the buckled state, while for large \(\Delta\), the opposite is true.
To find the critical value of \(\Delta\) at which the core-shell system transitions from isotropically expanded to buckled, we set the energies for both cases to be equal, and then solve for the corresponding value of \(\Delta\), namely \(\Delta_{c}\):
\[\Delta_{c}=\frac{E_{\rm total}}{\pi R^{3}\mu\frac{1+\nu}{1-2\nu}\Delta_{c}}= \frac{E_{\rm total}/(\kappa\Delta_{c})}{\pi\alpha\frac{1+\nu}{1-2\nu}}. \tag{59}\]
Since \(E_{\rm total}/(\kappa\Delta_{c})\) is independent of \(\Delta_{c}\), the right-hand side of Eq. 59 is the desired solution for \(\Delta_{c}\). The analogous result for spherical voids with an interior shell is
\[\Delta_{c}=\frac{E_{\rm total}/(\kappa\Delta_{c})}{2\pi\alpha}. \tag{60}\]
The deformations shown in Fig. 1 and Fig. 2 for spheres and spherical voids, respectively, both correspond to \(\Delta_{c}\) for \(\alpha=600\). We plot \(\Delta_{c}\) as a function of \(\alpha\) as the curved lines in Fig. 9 for spheres and in Fig. 10 for spherical voids. The region below the \(\Delta_{c}\)-versus-\(\alpha\) curve corresponds to an isotropically expanded phase, while the region above is the buckled phase. The vertical lines in these figures separate buckled phases with different \(l\) values. Thus, Fig. 9 and Fig. 10 represent shape phase diagrams.
In general, a larger value of \(\alpha\) requires a lower relative excess area in order for there to be a transition into the buckled phase above the \(\Delta_{c}\) curve. A larger value of \(\alpha\) also gives rise to a larger \(l\) value in the buckled phase. Clearly, the vertical lines separating buckled states with different values of \(l\) do not align at the same values of \(\alpha\) for different Poisson ratios. In the large-\(l\) limit, for a core-shell system, we have that
\[\Delta_{c}=\frac{12}{\pi}\alpha^{-\frac{1}{3}}\left(\frac{1-2\nu}{1+\nu} \right)\left(\frac{1-\nu}{3-4\nu}\right)^{\frac{2}{3}}. \tag{61}\]
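One way to trace the phase boundary of Fig. 9 is to follow Eqs. 53 and 59 literally: for each \(\alpha\), minimize Eq. 53 over integer \(l\) and insert the result into Eq. 59. The self-contained sketch below does this with arbitrary illustrative parameter values (it is not the paper's Mathematica code):

```python
import math

def e_total_over_kappa_delta(l, alpha, nu):
    """Total core-shell energy of Eq. 53 in units of kappa * Delta."""
    bulk = alpha * ((2*l**2 - 3*l - 1)*nu - (2*l**2 - l + 1)) / (2*(2*l + 1)*nu - (3*l + 1))
    return 8 * math.pi / (l**2 + l + 2) * (bulk + (l - 1)*l*(l + 1)*(l + 2)/8)

def phase_boundary(alpha, nu):
    """Return (Delta_c, l) from Eq. 59, minimising Eq. 53 over the buckling mode l."""
    best_l = min(range(2, 400), key=lambda l: e_total_over_kappa_delta(l, alpha, nu))
    e_best = e_total_over_kappa_delta(best_l, alpha, nu)
    return e_best / (math.pi * alpha * (1 + nu) / (1 - 2*nu)), best_l

for alpha in (100.0, 600.0, 3000.0):
    dc, l_min = phase_boundary(alpha, 0.2)
    print(f"alpha = {alpha:6.0f}   Delta_c = {dc:.3f}   buckling mode l = {l_min}")
```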
## VIII Conclusion
By applying linear elasticity theory and exploiting well-known properties of the solid harmonics, we have described how to find the displacements either inside solid spheres or outside spherical voids, assuming in both cases that the surface of the sphere or the void shows a radial surface deformation, whose amplitude is given by a real spherical harmonic. Using the displacements so-obtained, we then calculated the corresponding bulk elastic energies, providing closed-form expressions for these energies, for any values of the spherical harmonic degree (\(l\)), Poisson ratio, and shear modulus. We found that the elastic energies are independent of the spherical harmonic index (\(m\)), consistent with expectations based on symmetry considerations. These collected results represent an important addition to our knowledge of the linear elasticity of systems with (near) spherical symmetry. In addition to their relevance to the buckling/wrinkling transitions of core-shell systems, because any shape can be described as a superposition of spherical harmonics, our results will be valuable for researchers broadly interested in the elasticity of spheres or spherical voids that experience surface shape deformations. We also revisited the buckling instability experienced by a core-shell system comprising an elastic sphere, attached within a membrane of fixed area, that occurs when the area of the membrane sufficiently exceeds the area of the unstrained sphere. By finding the state that possesses the smallest total energy, namely the sum of the bulk and surface elastic energies within linear elasticity, we determined the phase diagram of the core-shell sphere's shape, specifying what value of \(l\) is realized as a function of the area mismatch and the core-shell elasticity. Similarly, we also determined the shape phase diagram for a spherical void bounded by a fixed-area membrane.
## Supplementary Material
A Mathematica notebook that performs the calculations described is available as supplementary material [24].
## Acknowledgements
This work was supported by an Allen Distinguished Investigator Award, a Paul G. Allen Frontiers Group advised grant of the Paul G. Allen Family Foundation. We are especially grateful to David Poland for finding the simple form of the bulk energy for general values of \(l\), and Nick Read for invaluable discussions.
## Appendix A Properties of regular solid harmonics
We summarize some useful results:
\[\nabla^{2}(r^{l}Y_{l}^{m})=0, \tag{62}\]
\[\nabla\cdot(r^{2}\nabla(r^{l}Y_{l}^{m})) =(\nabla r^{2})\cdot\nabla(r^{l}Y_{l}^{m}) \tag{63}\] \[=2\mathbf{r}\cdot\nabla(r^{l}Y_{l}^{m})\] \[=2r\frac{\partial}{\partial r}(r^{l}Y_{l}^{m})\] \[=2l(r^{l}Y_{l}^{m}),\]
\[\nabla^{2}(r^{2}\nabla(r^{l}Y_{l}^{m}))=2(2l+1)\nabla(r^{l}Y_{l}^{m}), \tag{64}\]
\[\frac{\partial}{\partial x}(r^{l}Y_{l}^{m})=\frac{1}{2}r^{l-1}\sqrt{\frac{(2l+1)(l-m )!}{(l+m)!}}\left(\frac{Y_{l-1}^{m+1}}{\sqrt{\frac{(2l-1)(l-m-2)!}{(l+m)!}}}- \frac{(l+m-1)(l+m)Y_{l-1}^{m-1}}{\sqrt{\frac{(2l-1)(l-m)!}{(l+m-2)!}}}\right), \tag{10}\]
\[\frac{\partial}{\partial y}(r^{l}Y_{l}^{m})=-\frac{1}{2}ir^{l-1}\sqrt{\frac{(2 l+1)(l-m)!}{(l+m)!}}\left(\frac{(l+m-1)(l+m)Y_{l-1}^{m-1}}{\sqrt{\frac{(2l-1)(l-m )!}{(l+m-2)!}}}+\frac{Y_{l-1}^{m+1}}{\sqrt{\frac{(2l-1)(l-m-2)!}{(l+m)!}}} \right), \tag{11}\]
and
\[\frac{\partial}{\partial z}(r^{l}Y_{l}^{m})=\frac{(l+m)r^{l-1}\sqrt{\frac{(2 l+1)(l-m)!}{(l+m)!}}Y_{l-1}^{m}}{\sqrt{\frac{(2l-1)(l-m-1)!}{(l+m-1)!}}}. \tag{12}\]
It follows that
\[\alpha=\frac{(b_{x}-ib_{y})\sqrt{\frac{(2l+3)(l-m+1)!}{(l+m+1)!}}}{2\sqrt{ \frac{(2l+1)(l-m-1)!}{(l+m+1)!}}}, \tag{13}\]
\[\beta=\frac{b_{z}(l+m+1)\sqrt{\frac{(2l+3)(l-m+1)!}{(l+m+1)!}}}{\sqrt{\frac{(2 l+1)(l-m)!}{(l+m)!}}}, \tag{14}\]
## Appendix B Properties of irregular solid harmonics
\[\frac{\partial}{\partial x}(r^{-l-1}Y_{l}^{m})=\frac{1}{2}\sqrt{\frac{2l+1}{ 2l+3}}\left(\sqrt{(l+m+1)(l+m+2)}r^{-l-2}Y_{l+1}^{m+1}-\sqrt{(l-m+1)(l-m+2)}r ^{-l-2}Y_{l+1}^{m-1}\right) \tag{15}\]
\[\frac{\partial}{\partial y}(r^{-l-1}Y_{l}^{m})=-\frac{1}{2}i\sqrt{\frac{2l+1}{ 2l+3}}\left(\sqrt{(l-m+1)(l-m+2)}r^{-l-2}Y_{l+1}^{m-1}+\sqrt{(l+m+1)(l+m+2)}r ^{-l-2}Y_{l+1}^{m+1}\right) \tag{16}\]
\[\frac{\partial}{\partial z}(r^{-l-1}Y_{l}^{m})=-\sqrt{\frac{2l+1}{2l+3}}\sqrt {(l-m+1)(l+m+1)}r^{-l-2}Y_{l+1}^{m} \tag{17}\]
## Appendix C Regular solution coefficients
\[A_{m+1}=\left\{\begin{array}{ll}0&l<m+2\\ \frac{g(l+1)(2l+3)e^{2i\pi(l+m)}R^{-l-1}\Gamma(l-m+1)}{4\sqrt{2}(l(4\nu-3)+2 \nu-1)\sqrt{(4l^{2}-1)(l-m)!}\Gamma(l-m-1)}&\mbox{otherwise}\end{array}\right. \tag{18}\]
\[A_{m-1}=\left\{\begin{array}{ll}\frac{g(-1)^{-2l}\sqrt{\frac{(2l+3)(l+m)}{ 8l^{2}-2}}R^{-l-1}\left(-(l+1)(2l+3)e^{2i\pi(2l+m)}\right)}{4(l(4\nu-3)+2\nu-1 )}&l+m\geq 2\\ 0&\mbox{otherwise}\end{array}\right. \tag{19}\]
\[B_{m+1}=-\frac{1}{2}g(-1)^{2(l+m)}\sqrt{\frac{(l+m+1)(l+m+2)}{8l(l+2)+6}}R^{-l-1} \tag{10}\]
\[B_{m-1}=\frac{1}{2}g(-1)^{2(l+m)}\sqrt{\frac{(l-m+1)(l-m+2)}{8l(l+2)+6}}R^{-l-1} \tag{11}\]
\[C_{m+1}=\left\{\begin{array}{ll}\frac{1}{2}g(-1)^{-2l}\sqrt{\frac{(l-m-1)(l- m)}{8l^{2}-2}}R^{l+1}\left(R^{2}\right)^{-l}&l\geq m+2\\ 0&\text{otherwise}\end{array}\right. \tag{12}\]
\[C_{m-1}=\left\{\begin{array}{ll}-\frac{1}{2}g(-1)^{-2l}\sqrt{\frac{(l+m-1)( l+m)}{8l^{2}-2}}R^{l+1}\left(R^{2}\right)^{-l}&l+m\geq 2\\ 0&\text{otherwise}\end{array}\right. \tag{13}\]
\[D_{m+1}=\left\{\begin{array}{ll}\frac{1}{8(2l+1)^{2}(l(4\nu-3)+2\nu-1)\sqrt {l(8l(l+1)-6)\Gamma(l-m-1)}}(ig(-1)^{m}e^{i\pi(3l+m)}(-R)^{-l-1}\\ (l^{2}\sqrt{l(2l+1)^{3}(2l+3)\Gamma(l-m+1)}+l(2m\sqrt{l(2l+1)^{3}(2l+3)\Gamma (l-m+1)}\\ +3\sqrt{l(2l+1)^{3}(2l+3)\Gamma(l-m+1)}+2\sqrt{l(2l+1)^{3}(2l+3)(l-m+1)\Gamma (l-m+2)}\\ +2\sqrt{l(2l+1)(2l+3)(l-m+1)(l-m+2)\Gamma(l-m+3)})+2\sqrt{l(2l+1)^{3}(2l+3) \Gamma(l-m+1)}\\ +2\sqrt{l(2l+1)^{3}(2l+3)(l-m+1)\Gamma(l-m+2)}+m(m\sqrt{l(2l+1)^{3}(2l+3) \Gamma(l-m+1)}\\ +3\sqrt{l(2l+1)^{3}(2l+3)\Gamma(l-m+1)}+2\sqrt{l(2l+1)^{3}(2l+3)(l-m+1)\Gamma (l-m+2)})\\ +\sqrt{l(2l+1)(2l+3)(l-m+1)(l-m+2)\Gamma(l-m+3)}))\\ 0\end{array}\right. \tag{14}\]
\[D_{m-1}=\left\{\begin{array}{ll}\frac{ig(l+1)(2l+3)e^{i\pi(3l+2m)}\sqrt{ \frac{(l+m-1)(l+m)}{8l^{2}-2}}(-R)^{-l-1}}{4(l(4\nu-3)+2\nu-1)}&l+m\geq 2\\ 0&\text{otherwise}\end{array}\right. \tag{15}\]
\[E_{m+1}=\frac{1}{2}ig(-1)^{2(l+m)}\sqrt{\frac{(l+m+1)(l+m+2)}{8l(l+2)+6}}R^{- l-1} \tag{16}\]
\[E_{m-1}=\frac{1}{2}ig(-1)^{2(l+m)}\sqrt{\frac{(l-m+1)(l-m+2)}{8l(l+2)+6}}R^{- l-1} \tag{17}\]
\[F_{m+1}=\left\{\begin{array}{ll}-\frac{1}{2}ig(-1)^{-2l}\sqrt{\frac{(l-m-1)( l-m)}{8l^{2}-2}}R^{l+1}\left(R^{2}\right)^{-l}&l\geq m+2\\ 0&\text{otherwise}\end{array}\right. \tag{18}\]
\[F_{m-1}=\left\{\begin{array}{ll}-\frac{1}{2}ig(-1)^{-2l}\sqrt{\frac{(l+m-1)( l+m)}{8l^{2}-2}}R^{l+1}\left(R^{2}\right)^{-l}&l+m\geq 2\\ 0&\text{otherwise}\end{array}\right. \tag{19}\]
\[I=\left\{\begin{array}{ll}g(-1)^{-2l}\sqrt{\frac{(l-m)(l+m)}{8l^{2}-2}}R^{l+1} \left(R^{2}\right)^{-l}&l\geq m+1\\ 0&\text{otherwise}\end{array}\right. \tag{10}\]
\[H=g(-1)^{2(l+m)}\sqrt{\frac{(l-m+1)(l+m+1)}{8l(l+2)+6}}R^{-l-1} \tag{11}\]
\[G=\left\{\begin{array}{ll}\frac{1}{4\sqrt{2}(l(4\nu-3)+2\nu-1) \sqrt{(2l-1)\Gamma(l-m)\Gamma(l+m)}}(g(-1)^{2(l+m)}R^{-l-1}\\ (((l+m+1)(l+m+2))^{3/2}\Gamma(l+m+1)\sqrt{\frac{\Gamma(l-m+1)}{(2l+1)\Gamma(l+m +3)}}\\ +\sqrt{\frac{(l-m+1)(l-m+2)\Gamma(l-m+3)\Gamma(l+m+1)}{2l+1}}\\ +2\sqrt{\frac{(l-m+1)(l+m+1)\Gamma(l-m+2)\Gamma(l+m+2)}{2l+1}}))\\ 0&\text{otherwise}\end{array}\right. \tag{12}\]
## Appendix D Irregular solution coefficients
\[J_{m+1}= \frac{1}{8(4l\nu-3l+2\nu-2)}g(-1)^{m-l}\sqrt{\frac{(l+m+1)(l+m+2) }{4l(l+2)+3}}R^{l}(\sqrt{l\left(4l^{2}-1\right)\left(l-m-1\right)(l-m)} \tag{13}\] \[\left(\begin{array}{ll}\left(-1\right)^{-l-m}\sqrt{\frac{(l-m-1 )(l-m)}{8l^{3}-2l}}&l\geq m+2\\ 0&\text{otherwise}\end{array}\right)\] \[+\sqrt{2}\sqrt{l\left(4l^{2}-1\right)\left(l-m\right)(l+m)}\left( \begin{array}{ll}\left(-1\right)^{-l-m}\sqrt{-\frac{(l-m)(l+m)}{l-4l^{3}}}&l \geq m+1\\ 0&\text{otherwise}\end{array}\right)\] \[+\sqrt{l\left(4l^{2}-1\right)\left(l+m-1\right)(l+m)}\left( \begin{array}{ll}\left(-1\right)^{-l-m}\sqrt{\frac{(l+m-1)(l+m)}{8l^{3}-2l} }&l+m\geq 2\\ 0&\text{otherwise}\end{array}\right)\right)\]
\[J_{m-1}=\left\{\begin{array}{ll}\frac{g(-1)^{-2l}(1-2l)\sqrt{\frac{(l-m+1)( l-m+2)}{8l(l+2)+6}}R^{l}}{4(4l\nu-3l+2\nu-2)}&l\geq m+2\\ -\frac{g(-1)^{-2l}\sqrt{\frac{(l-m+1)(l-m+2)}{8l(l+2)+6}}(3l-m-1)(l+m)R^{l}}{ 8l(4l-3l+2\nu-2)}&l\geq m+1\wedge l+m\geq 2\\ -\frac{g(-1)^{-2l}(l-m)\sqrt{\frac{(l-m+1)(l+m+2)}{4l(l+2)+6}}(1+m)R^{l}}{4(4l \nu-3l+2\nu-2)}&l\geq m+1\\ -\frac{g(-1)^{-2l}\sqrt{\frac{(l-m+1)(l+m+2)}{8l(l+2)+6}}(l+m-1)(l+m)R^{l}}{8 (4l\nu-3l+2\nu-2)}&l+m\geq 2\end{array}\right. \tag{14}\]
\[K_{m+1}=\left\{\begin{array}{ll}\frac{1}{2}g(-1)^{-2l}\sqrt{\frac{(l-m-1)(l -m)}{8l^{2}-2}}R^{l}&l\geq m+2\\ 0&\text{otherwise}\end{array}\right. \tag{15}\]
\[K_{m-1}=\left\{\begin{array}{ll}-\frac{1}{2}g(-1)^{-2l}\sqrt{\frac{(l+m-1)( l+m)}{8l^{2}-2}}R^{l}&l+m\geq 2\\ 0&\text{otherwise}\end{array}\right. \tag{16}\]
\[L_{m+1}=-\frac{1}{2}g(-1)^{2(l+m)}\sqrt{\frac{(l+m+1)(l+m+2)}{8l(l+2)+6}}R^{l+2} \tag{49}\]
\[L_{m-1}=\frac{g(-1)^{2(l+m)}R^{l+2}}{2\sqrt{\frac{8l(l+2)+6}{(l-m+1)(l-m+2)}}} \tag{50}\]
\[M_{m+1}= -\frac{1}{8(4l\nu-3l+2\nu-2)}ig(-1)^{m-l}\sqrt{\frac{(l+m+1)(l+m+2)}{4l( l+2)+3}}R^{l}(\sqrt{l(4l^{2}-1)\left(l-m-1\right)(l-m)} \tag{51}\] \[\left(\left\{\begin{array}{ll}(-1)^{-l-m}\sqrt{\frac{(l-m-1)(l-m )}{8l^{3}-2l}}&l\geq m+2\\ 0&\mbox{otherwise}\end{array}\right.\right)\] \[+\sqrt{2}\sqrt{l\left(4l^{2}-1\right)(l-m)(l+m)}\left(\left\{ \begin{array}{ll}(-1)^{-l-m}\sqrt{-\frac{(l-m)(l+m)}{l-d^{3}}}&l\geq m+1\\ 0&\mbox{otherwise}\end{array}\right.\right)\] \[+\sqrt{l\left(4l^{2}-1\right)(l+m-1)(l+m)}\left(\left\{ \begin{array}{ll}(-1)^{-l-m}\sqrt{\frac{(l+m-1)(l+m)}{8l^{3}-2l}}&l+m\geq 2 \\ 0&\mbox{otherwise}\end{array}\right.\right)\] \[M_{m-1}= -\frac{1}{8(4l\nu-3l+2\nu-2)}ig(-1)^{m-l}R^{l}(\sqrt{\frac{l(2l- 1)(l-m-1)(l-m)(l-m+1)(l-m+2)}{2l+3}}\] (52) \[\left(\left\{\begin{array}{ll}(-1)^{-l-m}\sqrt{\frac{(l-m-1)(l -m)}{8l^{3}-2l}}&l\geq m+2\\ 0&\mbox{otherwise}\end{array}\right.\right)\] \[+\sqrt{2}\sqrt{\frac{l(2l-1)(l-m)(l-m+1)(l-m+2)(l+m)}{2l+3}}\left( \left\{\begin{array}{ll}(-1)^{-l-m}\sqrt{-\frac{(l-m)(l+m)}{l-d^{3}}}&l\geq m +1\\ 0&\mbox{otherwise}\end{array}\right.\right)\] \[+\sqrt{\frac{l(2l-1)(l-m+1)(l-m+2)(l+m-1)(l+m)}{2l+3}}\left( \left\{\begin{array}{ll}(-1)^{-l-m}\sqrt{\frac{(l+m-1)(l+m)}{8l^{3}-2l}}&l+m \geq 2\\ 0&\mbox{otherwise}\end{array}\right.\right)\] \[N_{m+1}= \left\{\begin{array}{ll}-\frac{1}{2}ig(-1)^{-2l}\sqrt{\frac{(l- m-1)(l-m)}{8l^{2}-2}}R^{l}&l\geq m+2\\ 0&\mbox{otherwise}\end{array}\right.\] (53) \[N_{m-1}= \left\{\begin{array}{ll}-\frac{1}{2}ig(-1)^{-2l}\sqrt{\frac{(l +m-1)(l+m)}{8l^{2}-2}}R^{l}&l+m\geq 2\\ 0&\mbox{otherwise}\end{array}\right.\] (54) \[O_{m+1}= \frac{1}{2}ig(-1)^{2(l+m)}\sqrt{\frac{(l+m+1)(l+m+2)}{8l(l+2)+6}}R ^{l+2} \tag{55}\]
\[O_{m-1}=\frac{ig(-1)^{2(l+m)}R^{l+2}}{2\sqrt{\frac{8l(l+2)+6}{(l-m+1)(l-m+2)}}} \tag{56}\]
\[S=g(-1)^{2(l+m)}\sqrt{\frac{(l-m+1)(l+m+1)}{8l(l+2)+6}}R^{l+2} \tag{57}\]
\[Q=\left\{\begin{array}{ll}g(-1)^{-2l}\sqrt{\frac{(l-m)(l+m)}{8l^{2}-2}}R^{l}&l\geq m +1\\ 0&\text{otherwise}\end{array}\right. \tag{101}\]
\[P=\left\{\begin{array}{ll}-\frac{g(-1)^{-2l}l(2l-1)\sqrt{\frac{(l-m+1)(l+m+1)} {8l(l+2)+6}}R^{l}}{2(4l\nu-3l+2\nu-2)}&l\geq m+2\\ -\frac{g(-1)^{-2l}(3l-m-1)(l+m)\sqrt{\frac{(l-m+1)(l+m+1)}{8l(l+2)+6}}R^{l}}{4(4 l\nu-3l+2\nu-2)}&l\geq m+1\wedge l+m\geq 2\\ -\frac{g(-1)^{-2l}(l-m)(l+m)\sqrt{\frac{(l-m+1)(l+m+1)}{8l(l+2)+6}}R^{l}}{2(4 l\nu-3l+2\nu-2)}&l\geq m+1\\ -\frac{g(-1)^{-2l}(l+m-1)(l+m)\sqrt{\frac{(l-m+1)(l+m+1)}{8l(l+2)+6}}R^{l}}{4( 4l\nu-3l+2\nu-2)}&l+m\geq 2\end{array}\right. \tag{102}\]
|
2301.08718
|
Reversing The Twenty Questions Game
|
Twenty questions is a widely popular verbal game. In recent years, many
computerized versions of this game have been developed in which a user thinks
of an entity and a computer attempts to guess this entity by asking a series of
boolean-type (yes/no) questions. In this research, we aim to reverse this game
by making the computer choose an entity at random. The human aims to guess this
entity by quizzing the computer with natural language queries which the
computer will then attempt to parse using a boolean question answering model.
The game ends when the human is successfully able to guess the entity of the
computer's choice.
|
Parth Parikh, Anisha Gupta
|
2023-01-19T12:51:59Z
|
http://arxiv.org/abs/2301.08718v1
|
# Reversing the Twenty Questions game
###### Abstract
Twenty questions is a widely popular verbal game. In recent years, many computerized versions of this game have been developed in which a user thinks of an entity and a computer attempts to guess this entity by asking a series of boolean-type (yes/no) questions. In this research, we aim to reverse this game by making the computer choose an entity at random. The human aims to guess this entity by quizzing the computer with natural language queries which the computer will then attempt to parse using a boolean question answering model. The game ends when the human is successfully able to guess the entity of the computer's choice.
_Keywords:_ Twenty Questions game, Query Reformulation, Passage Retrieval, Boolean Question-Answering Model, Natural Language Inference
## 1 Introduction
For our course project, we aim to reverse the roles of the computer and human, such that _the computer will act as an answerer and a human as a questioner_. In the past, no such study has been conducted as this problem presented sophisticated challenges of Natural Language Inference and Textual Entailment. However, with the advent of transformer-based machine learning techniques such as BERT [1], RoBERTa [2], GPT-2 [3], and datasets such as BoolQ [4], such a model can be constructed.
As this problem has not been formally defined, our goal is to formalize it and present preliminary results regarding the same. Furthermore, while there are several pre-trained question-answering models that select the start and end points of an answer span within a corpus, a simple yes/no answering task is surprisingly challenging and complex. A model for such a task would have to examine entailment as well as investigate if the corpus makes a positive answer to the question unlikely, even if it doesn't directly state a negative answer [4]. Our reverse Akinator model could be used as a factual checker to examine whether a statement is true, given a knowledge corpus.
## 2 Methodology
### History
Historically, Twenty Questions has been a popular multi-player parlour game wherein some participants would act as the _questioners_ and the others would be the _answerers_. The answerers would come up with a random entity which the questioners would then try and deduce by asking a series of yes/no questions. A \(19^{th}\) century rule-book [5] details the format of the game and introduces the concept of umpires (who resolve any dispute) and captains (an official spokesperson). Interestingly, though the rule book never constrained the _subject_ (guess), every Sunday, it was mandatory for the participants to pick an object, person, or thing mentioned in the Bible.
Constrained versions of the game soon became popular and a variant known as _animal, vegetable, mineral_ was widely played in parlours. As constraints made the problem more tractable, one of the earliest computerized implementations of this game solely used _Animals_ as its subject [6]. This game was part of the _101 BASIC Computer Games_ (1973). Around 1988, _20Q_, created by Robin Burgener, emerged. This version used an artificial neural network to answer questions based on a human's interpretation of that question. Today, popular internet-based variants such as _Akinator_ deal with a wide range of entities and include _Probably_, _Probably not_ and _Don't know_ as potential answers for a human.
### Entity Formulation and Pronoun Resolution
Our proposed model starts by selecting a random Wikipedia page of a named entity. This entity acts as our model's main entity - _the guess_. These random Wikipedia pages can be extracted by passing SPARQL queries to Wikidata [7]. The model then accepts natural language queries from a user. As the first step, each of these queries undergoes a basic pronoun resolution wherein a pronoun gets replaced with the model's main entity. For example, the model is likely to predict better results if we formulate the query in the following manner -
Is **it** an animated character? \(\rightarrow\) Is **Mickey Mouse** an animated character?
This step ensures that our model does not easily get confused when it sees another entity with a similar context.
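A minimal implementation of this step is a whole-word substitution of third-person subject pronouns with the entity name. The snippet below is an illustrative sketch; the pronoun list and regular expression are illustrative choices rather than a fixed part of the method:

```python
import re

# Simple third-person subject pronouns; extend as needed (illustrative list).
PRONOUN_PATTERN = re.compile(r"\b(it|he|she|they)\b", flags=re.IGNORECASE)

def resolve_pronouns(query: str, entity: str) -> str:
    """Replace simple third-person subject pronouns in a boolean query with the main entity."""
    return PRONOUN_PATTERN.sub(entity, query)

print(resolve_pronouns("Is it an animated character?", "Mickey Mouse"))
# -> Is Mickey Mouse an animated character?
```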
### Paragraph/Sentence Retrieval
To obtain a relevant passage from the entity's Wikipedia text, we require a passage retrieval phase. Here, _relevance_ can be defined as a passage from the main entity's text-body which unambiguously answers a boolean-type query. For example -
Is _Mickey Mouse_ a comic book character?
"Beginning in 1930, **Mickey has also been featured extensively in comic strips and comic books**. The Mickey Mouse comic strip, drawn primarily by Floyd Gottfredson, ran for 45 years. Mickey has also appeared in comic books such as Mickey Mouse, Disney Italy's Topolino and MM - Mickey Mouse Mystery Magazine, and Wizards of Mickey."
- _From the Wikipedia page of Mickey Mouse (paragraph 3)_
As mentioned in [8], a trivial solution to this problem would be to perform sentence segmentation on the entire Wikipedia page and pass all the sentences to the question answering model. However, this can significantly affect the computational complexity, as certain components of BERT, such as the _multi-headed attention layer_, require \(n^{2}\cdot d+n\cdot d^{2}\) operations (here \(n\) is the sequence length and \(d\) is the depth) [9].
A sophisticated variant would be to rank the passages based on the query and retrieve the first \(N\) passages. We can use a ranking function such as Okapi BM25 [10] for such a task. However, as [10] uses a bag-of-words-based approach, its rankings can be too literal and devoid of any implicit context. To resolve this, we introduce a hybrid approach wherein a large subset of \(N_{1}\subseteq P\) passages is retrieved using BM25 and a much smaller subset \(N_{2}\subseteq N_{1}\) is then obtained using _Siamese BERT-Networks_ [11]. Here, sentences/paragraphs are mapped to a dense vector representation using transformer networks such as BERT, which can then be compared using cosine similarity. We plan on embedding the query \(Q\) and comparing it against the embeddings of each \(n\in N_{2}\), keeping track of the top \(N\) passages. A Python library, _Sentence Transformers_ [12], provides pre-trained models for this task.
The above mentioned model uses a _sparse-first search_ mechanism wherein we retrieve the \(N_{1}\) documents using a statistical approach which is followed by a neural model. The drawback of this is that we may propagate errors from the document retrieval phase. That is, if we retrieve the wrong documents then it might affect the performance of the Transformer models. To mitigate this, Facebook Research developed _Dense Passage Retrieval_[13] which uses the concept of indexing phrases using a dual-encoder framework. Here, they enumerate a document for all phrases in that document and use a phrase encoder to embed each phrase in vector space. The queries are mapped to the same vector space and Nearest Neighbour Search is used to obtain the most relevant answers.
### Boolean Question Answering Model
To guess the boolean-type response, we propose a transformer-based model which takes as its input a query and \(N_{2}\) relevant paragraphs. We plan on experimenting with a BERT model pre-trained on entailment tasks and fine-tuned using the BoolQ dataset [4]. [4] showed that the highest accuracy is obtained when we pre-train models on entailment tasks that have large datasets (such as MultiNLI [14] and SNLI [15]) and fine-tuning them on BoolQ's dataset.
While playing games with Akinator, we observed that a certain class of questions can be answered using knowledge repositories such as Wikidata and DBpedia [16]. These questions involve highly distinguishing characteristics of the entity such as its gender, species, hypernyms, and significant others.
## 3 Experiments
As mentioned in Report 1's evaluation section, we verified our model's performance by playing it against the pre-existing Akinator using the Python library _akinatorpy_[17]. This library acts as the original Akinator, posing questions to our model and trying to guess which entity our model has in mind. The number of questions asked by the Akinator is not constrained in our experiments. We only stop the game once the Akinator guesses an entity with a probability greater than 80%.
### Akinator API
The Akinator API [17] allows us to access the Akinator's top guesses at a particular time, each with a guess probability and a rank. The first guess is used to evaluate whether the Akinator won (that is, whether the Akinator was able to guess the answer correctly). The API also allows us to go back to a previous question and change our answers. Furthermore, we are able to select the nature of the entities that we want to guess. This comprises language options (such as English, Chinese, German) and entity types (like animals, characters, and objects).
### Baseline model
Our initial baseline model answers the Akinator's questions at random with _Yes_, _Probably_, _I don't know_, _Probably not_ and _No_. However, when we performed our experiments, we observed that too many _I don't know_ or _Probably/Probably not_ responses would make the Akinator guess something along the lines of _a guy who plays randomly_ (this is one of the Akinator's named entities, which it assigns to anyone who guesses randomly). We therefore allocated these responses a much lower probability of \(0.05\) each and distributed the remaining probability uniformly among the rest of the answer options, such that the baseline model could make a probabilistic random choice.
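For reference, a minimal sketch of this probabilistic baseline is shown below; the exact answer strings are assumptions, but the weights follow the allocation described above (0.05 for each uncertain option and the remainder split between _Yes_ and _No_).

```
# Probabilistic random baseline: uncertain answers get probability 0.05 each,
# the remaining 0.85 is split evenly between "yes" and "no".
import random

ANSWERS = ["yes", "no", "i don't know", "probably", "probably not"]
WEIGHTS = [0.425, 0.425, 0.05, 0.05, 0.05]

def random_answer():
    return random.choices(ANSWERS, weights=WEIGHTS, k=1)[0]
```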
An entity only shows up when it is within the top few guesses of the Akinator. From our experiments on our initial baseline model, we hardly ever see the desired item show up in the list of top few guesses of the Akinator.
From the results shown in Figure 1, we see that the Akinator's guess converges to a _Sharktopus_ with a final probability \(>80\%\). However, the guess is incorrect, as is expected, since it's a random model. The desired animal (_Cheetah_) never features in the Akinator's guess list. In this model, the correct answer can only show up in the list of top guesses by chance, and this happens very rarely.
We performed some preliminary analysis using anaphora resolution on the questions asked by the Akinator. However, in some cases (e.g., Table 2), the extracted answer excerpts become less related to the question after applying anaphora resolution to it. As part of our preliminary analysis, we also explored the BERT Question Answering model. However, based on manual inspection of the results, the excerpts extracted using the BERT Question Answering model are less relevant to the question than those extracted using our pipeline. This is supported by Reimers et al.'s work [11], where they show that averaging the [CLS] tokens for the BERT embeddings "...yields rather bad sentence embeddings, often worse than averaging GloVe embeddings".
### Improved Model
For our improved model, we implemented the _Okapi-BM25/SBERT_ pipeline proposed in Section 2.3. We fixed \(N_{1}\) to 100 and \(N_{2}\) to 5. For our current experiments, our pipeline outputs these top \(N_{2}\)_most similar_ excerpts that answers the Akinator's question at each step, and lets the human developer answer a _Yes/No_ based on these top five excerpts.
An example output of the same is shown in Figure 2.
#### 3.3.1 Constraining the domain
Our initial experiments using the aforementioned pipeline did not produce good results for general entities, including movie characters such as _Harry Potter_, as can be observed from the example in Figure 3. This is often because the information about such characters is more complex, mixing real-life and reel-life data as well as information about many other characters/persons documented in the Wikipedia articles. Given the lack of access to knowledge graphs, trivia questions are more difficult for our model to answer. We thus constrain our domain to English animal names.
The Wikipedia articles corresponding to a certain animal usually only talks about this animal and does not have a lot of content on other animals or information that requires a knowledge base for answering questions, thus making our problem more tractable for our purposes.
#### 3.3.2 Simple Wikipedia pipeline
In a lot of cases, our model was unable to distinguish between excerpts that referred to the actual animal and cultural references to that animal. For instance, when asked the question _'Does your animal [cheetah] still exist?'_, the following text excerpt is extracted from our cheetah wikipedia corpus with a very high confidence score:
_The Bill Thomas Cheetah American racing car, a Chevrolet-based coupe first designed and driven in 1963, was an attempt to challenge Carroll Shelby's Shelby Cobra in American sports car competition of the 1960s era. Because only two dozen or fewer chassis were built, with only a dozen complete cars, the Cheetah was never homologated for competition beyond prototype status; its production ended in 1966._
Based on this excerpt, the yes/no model answers _'No'_, indicating that cheetahs are extinct, which immediately throws off the Akinator and it starts thinking of types of dinosaurs. However, we do not wish to completely disregard cultural references - these excerpts are helpful when questions such as _'Is there a car named after your animal?'_ are posed by
the Akinator. To avoid confusing our pipeline with such cultural references that do not directly relate to the animal in general, we ask the same question to the Simple Wikipedia corpus for our animal, and append the answer we get from here to the answer excerpt we get from the original Wikipedia article. If the average text confidence score for the Simple Wikipedia and original Wikipedia articles is less than one standard deviation of the average negative-sample scores on the same question, the pipeline outputs 'idk' as a response. Otherwise, we output yes/no based on our boolean answer model prediction on a combination of text answers from Simple Wikipedia and original Wikipedia.

Figure 1: Random baseline model results
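A hedged sketch of this fallback rule is given below; it assumes the confidence scores have already been computed, and it interprets the threshold as the combined score falling more than one standard deviation below the mean negative-sample score, which is one possible reading of the rule described above.

```
# Simple-Wikipedia fallback: answer "idk" when the combined confidence is much
# lower than what random (negative-sample) animals achieve on the same question.
import statistics

def answer_with_fallback(simple_score, full_score, neg_scores, boolq_answer):
    avg_score = (simple_score + full_score) / 2.0
    neg_mean = statistics.mean(neg_scores)
    neg_std = statistics.stdev(neg_scores)
    if avg_score < neg_mean - neg_std:
        return "idk"
    # Otherwise trust the BoolQ model run on the combined Simple + full excerpts.
    return boolq_answer
```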
#### 3.3.3 Detecting comparisons
For certain questions such as _'Is your animal smaller than a human?'_ or _'Is your animal bigger than your hand?'_, the model requires real-world knowledge to provide accurate answers - _How tall is a regular human?_ and _How big is an average human hand?_ Handling such cases is challenging and beyond the scope of this project. However, to mitigate the consequences of answering these questions incorrectly, we inspect the question for _'comparison'_ words included in NLTK's comparative_sentences dictionary, such as _'smaller'_, _'shorter'_, etc. If the question contains such comparison words, the pipeline outputs an 'idk' response. If a correct answer to this question would have boosted the probability of our animal in the Akinator's guess list, answering 'idk' might reduce that probability slightly, but not as dramatically as an incorrect answer would.
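The guard itself can be as simple as the sketch below; the hard-coded word list stands in for the comparison keywords taken from NLTK's comparative_sentences corpus.

```
# Comparison-question guard: questions that require real-world size comparisons
# are answered with "idk" instead of a confident but possibly wrong yes/no.
COMPARISON_WORDS = {"bigger", "smaller", "larger", "shorter", "taller",
                    "heavier", "lighter", "faster", "slower", "than"}

def needs_real_world_comparison(question):
    tokens = question.lower().replace("?", "").split()
    return any(tok in COMPARISON_WORDS for tok in tokens)

# Usage:
# answer = "idk" if needs_real_world_comparison(question) else pipeline_answer
```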
#### 3.3.4 Converting answer excerpts to Yes/No
Multilayer Perceptron ClassifierFor Report 1, we designed a baseline model for this classification task. We trained a Multilayer Perceptron Classifier on the BoolQ dataset [4] to predict a Yes/No answer, given a question and an answer excerpt from a passage. Each question and answer excerpt was first converted to an embedding vector by computing the GloVe embeddings of each token and averaging these over all the tokens. NLTK's TweetTokenizer [18] was used for word tokenization. The question and excerpt embeddings were then averaged to obtain a semantic embedding representing the question-answer pair, which was passed as an input to our classifier. The results of this model are shown in Figure 4.
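A rough sketch of this baseline is shown below, assuming the BoolQ question/excerpt pairs and labels have already been loaded; the GloVe model name and the MLP hyperparameters are illustrative choices.

```
# Baseline classifier: average GloVe vectors of question and excerpt, average the
# two embeddings, and train an MLP on the BoolQ yes/no labels.
import numpy as np
import gensim.downloader as api
from nltk.tokenize import TweetTokenizer
from sklearn.neural_network import MLPClassifier

glove = api.load("glove-wiki-gigaword-100")   # 100-dimensional GloVe vectors
tok = TweetTokenizer()

def embed(text):
    vecs = [glove[w] for w in tok.tokenize(text.lower()) if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

def qa_feature(question, excerpt):
    return (embed(question) + embed(excerpt)) / 2.0

# X = np.stack([qa_feature(q, a) for q, a in train_pairs])
# clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(X, train_labels)
```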
DistilBERTFrom our results we see that the model has a low F1 score for prediction of _No_. For Report 2, we improved upon the Multilayer Perceptron Classifier model by architecturing an entailment model and fine-tuning it on the BoolQ dataset. The authors of the BoolQ paper observed their best performance by using the pretrained BERT-large
transformer model and fine-tuning it on their dataset. For our model, we experimented with DistilBERT - a lighter version of BERT that retains 97% of its language-understanding capability while being 60% faster. To train this model, we utilized its SequenceClassification variant with a batch size of 32, a learning rate of \(10^{-5}\) and Adam optimization for stochastic gradient descent with gradient clipping. This model was fine-tuned on the BoolQ dataset. We trained it for three different numbers of epochs - 5 (35 minutes), 10 (110 minutes) and 20 (230 minutes) - and observed that 5 epochs severely overfitted on the "Yes" response. However, 10 epochs reduced the overfitting, decreased the training loss to nearly 10% and provided a dev accuracy of 73.3%. With 20 epochs, we experienced severe overfitting on BoolQ, with the model having difficulty converging due to the high learning rate. Figures 5 and 6 detail these results.

Figure 2: Sample results using improved pipeline

Figure 3: Answer excerpts extracted for fictional character Harry Potter

Figure 4: Baseline results obtained after converting answer excerpts to Yes/No labels for the BoolQ dataset
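A hedged sketch of this fine-tuning setup is shown below (batch size 32, learning rate \(10^{-5}\), Adam with gradient clipping); the dataset field names, sequence length and the bare training loop are assumptions for illustration, not the exact training script we used.

```
# Fine-tuning DistilBERT for boolean question answering on BoolQ (sketch).
import torch
from datasets import load_dataset
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

dataset = load_dataset("boolq")
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def encode(batch):
    enc = tokenizer(batch["question"], batch["passage"],
                    truncation=True, padding="max_length", max_length=256)
    enc["labels"] = [int(a) for a in batch["answer"]]
    return enc

train = dataset["train"].map(encode, batched=True)
train.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train()
for epoch in range(10):                      # 10 epochs worked best in our runs
    for batch in loader:
        optimizer.zero_grad()
        out = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["labels"])
        out.loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping
        optimizer.step()
```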
#### 3.3.5 Negative Sampling
Usually, if an animal does not possess a certain characteristic, it is not mentioned in the Wikipedia article for that animal. In such cases, the results obtained by BM25 and cosine similarity might be misleading. Despite computing similarity scores for the relevant answers, the threshold determining which answer is appropriate for the question could be difficult to determine. For instance, if the entity is a _cheetah_ and we want to find out if _it is an animal that can be used in shows_, the most relevant answer from the Wikipedia article for _cheetah_ is _The cheetah has been widely portrayed in a variety of artistic works_. However, this does not answer the original question in the sense in which it was asked. To tackle this challenge, if we do not get _Yes_ as an answer to our question on the correct animal, we propose a negative sampling technique where we design a taxonomy of animals and select one entity at random from each broad category and treat these as negative samples to our model. The taxonomy uses a sample of well-known animals from ten broad categories - amphibians, birds, carnivores, domestic, fish, herbivores, invertebrates, mammals, primates and reptiles. We ask the same question with respect to all these negative samples and select the top-most ranking answer excerpts for each animal. We then compare the scores of these top answers with our current animal.
If the score of a negative sample is more than one standard deviation above that of our top answer (for the correct animal), it reflects a low probability of finding the answer in the Wikipedia article for our correct animal. In this case, we check whether the score of our top answer is within one standard deviation of the mean score of all the negative samples considered - if not, it indicates that the score of our top answer is really low and there is no mention of the answer in the Wikipedia article, which means that the model does not know the answer and should output _idk_. Otherwise, if the top answer score is not within one standard deviation of the best negative sampling score but is greater than the mean score, we output _probably yes_ if the BoolQ yes/no answering model outputs _yes_, or _probably no_ if it outputs _no_. An example of how this works is shown in Table 1. An example game excerpt incorporating negative sampling with the BoolQ outputs is shown in Figure 9.
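The decision rule can be summarized by the sketch below; the exact thresholds reflect one reading of the description above and are not meant as the authoritative logic.

```
# Negative-sampling decision rule: compare the top-answer score for the correct
# animal against the score statistics of randomly chosen negative-sample animals.
import statistics

def decide(top_score, neg_scores, boolq_answer):
    best_neg = max(neg_scores)
    neg_mean = statistics.mean(neg_scores)
    neg_std = statistics.stdev(neg_scores)

    if best_neg <= top_score + neg_std:
        # No negative sample clearly outranks the correct animal: trust the model.
        return boolq_answer
    if top_score < neg_mean - neg_std:
        # The correct animal's excerpt scores far below the negatives: the answer
        # is probably not in its article at all.
        return "idk"
    # A negative sample outranks us, but our score is still comparable to the mean
    # of the negatives: soften the answer.
    return "probably yes" if boolq_answer == "yes" else "probably no"
```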
#### 3.3.6 Training improved Yes/No model using negative samples
To further leverage answers extracted from the randomly selected negative samples, we hand-annotated 250 questions asked by the Akinator, for a list of 15 animals. We recorded the yes/no answers generated by our automated question answering pipeline, as well as the text score statistics (average, best and standard deviation) of the negative samples on the same question. We tried to train a model that identifies situations where the initial yes/no answer must be modified when the negative sample scores hint that the answer may not be present in the corpus for our animal. Given the limited number of hand-annotated samples, we used simple models like MLP, SVC and decision trees. However, most of the yes/no answers (>78%) matched the human annotations and did not need any correction, resulting in the model overfitting on the initial yes/no answer and not utilizing the negative sample score statistics to make an improved prediction. We believe that an increased number of hand-annotated samples will improve the predictive performance of such models and can be incorporated as an improvement step after obtaining the initial yes/no answer from the model.
#### 3.3.7 Detecting and fixing a detour
Based on our experiments, we noticed that the Akinator's guess list is extremely volatile and sensitive to all answers. Even if the correct animal shows up in the guess list with the highest probability, the answer to the immediate next question can reduce its probability drastically, to the point of it getting eliminated entirely from the guess list. Fortunately, the Akinator has a weak long term memory, giving more importance to recent answers. This helps the Akinator return to animals similar to the correct animal after taking a long detour, and the answer often converges to the correct animal after the Akinator recovers from the detour. However, this might take a long time, and we might hit the maximum
number of questions (80) after which the Akinator throws an error. We want to detect such detours early without allowing our pipeline to peek into the Akinator's guess list at any given time. This is challenging, given that we do not know where the Akinator's guesses are headed at any given time, and we are not aware of whether our past answers are correct or incorrect. We propose a technique to detect misleading answers using negative sampling results, and bring the Akinator back using positive samples - animals that are most similar to the correct animal.

Figure 8: Training loss and dev accuracy after fine-tuning RoBERTa for 20 epochs
Negative sampling to detect a detourTo judge which animals are similar/dissimilar to the correct animal, we extract the word embeddings for each animal in the negative sampling list and our vocabulary of animals, and compare these with the word embedding for the correct animal. We expect the embeddings for animals such as 'dog' and 'cat' to be more similar to each other and different from 'crocodile' and 'giraffe'. We consider a fixed negative sampling list (sampled randomly) for the entire game. For each question that the Akinator asks, we answer yes/no for all the animals in our negative sampling list, as well the correct animal. We store W most recent yes/no answers for all animals in the negative sampling list and for the correct animal. After answering each question, we check to see if our last W yes/no answers have been too similar to an animal in the negative sampling list that is very dissimilar to the correct animal. If so, we report a detour.
Fixing a detourIf we detect a detour, we inspect our animal vocabulary to identify N animals that are most similar to the correct animal. We call this our positive sampling list. We answer the next question with a majority yes/no vote from these positive samples. We do not answer every question this way because the Akinator is not likely to converge to the correct animal if we answer specific questions such as '_Does your animal have spots?_' incorrectly. Once we have fixed a detour, we empty the past W yes/no answers list for all animals in the negative samples - we do not want to apply this technique too early.
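The two steps above can be sketched as follows; the window size \(W\), the similarity cut-off and the agreement threshold are illustrative values, and `embed` is assumed to be a word-embedding lookup such as a word2vec model.

```
# Detour detection: flag a detour when our recent yes/no answers agree too closely
# with a negative-sample animal that is very dissimilar to the correct animal.
import numpy as np
from collections import deque

W, SIM_CUTOFF, AGREEMENT = 6, 0.4, 0.8

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class DetourDetector:
    def __init__(self, correct, negatives, embed):
        self.correct, self.embed = correct, embed
        self.histories = {a: deque(maxlen=W) for a in negatives + [correct]}

    def update(self, answers):
        # answers: dict mapping each animal to its "yes"/"no" answer this turn
        for animal, ans in answers.items():
            self.histories[animal].append(ans)

    def detour(self):
        ours = list(self.histories[self.correct])
        if len(ours) < W:
            return False
        for animal, hist in self.histories.items():
            if animal == self.correct:
                continue
            sim = cosine(self.embed(animal), self.embed(self.correct))
            agree = sum(a == b for a, b in zip(hist, ours)) / W
            if sim < SIM_CUTOFF and agree >= AGREEMENT:
                return True  # answering like a very dissimilar animal
        return False
```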
Figure 9: Example game for animal cheetah
## 4 Model Evaluation
We use accuracy, recall, precision and F1 score on the BoolQ test set as the evaluation metrics for the submodel used to convert extracted answers to _Yes/No_. We can also evaluate the submodels on pre-existing benchmarks. GLUE [19] contains several tasks, such as similarity, paraphrasing and inference tasks, and can be used to evaluate the quality of the sentence embeddings used in our model. SuperGLUE [20] can be used to test our question answering model. The QNLI [19] dataset can be used to determine whether our selected answer excerpt contains the answer to the question posed by the Akinator. WNLI [19] can be used to evaluate our model's anaphora resolution performance, if we include this as a component of our final model.
We hand-annotated answers to 250 questions and compared these answers to the yes/no outputs of our pipeline. The answers matched with an accuracy of 78.69% and F1 scores 81.67% (class no) and 74.61% (class yes). Since it requires a lot of manual effort to hand-annotate these answers and play long games with the Akinator, we devised an approximate answering technique that guesses the correct yes/no answer for each question. This technique can only be applied to answers that result in the correct animal appearing in the Akinator's guess list. If the probability of the correct animal in the Akinator's guess list increases after answering a question, we estimate that answer to be a correct answer (correct answer equals the pipeline's output). Otherwise, we mark the answer as incorrect (correct answer is the opposite of the pipeline's output). Using this estimated correct answer, we labeled another 264 question-answer pairs that were automatically generated in games with the Akinator. 62.88% questions were answered correctly, considering
the expected correct answer as the ground truth. However, there might be inconsistencies in this ground truth. For instance, there have been instances of cheetahs being tamed in human history, and a cheetah is technically not able to roar - but the Akinator reduces the probability of 'cheetah' when it asks these questions and the pipeline answers correctly based on the Wikipedia article. So a probability reduction in the guess list may not always be indicative of an incorrect answer. The Akinator, at the end of the game, asks for the actual answer if it fails to identify the entity that the user had in mind, suggesting that it updates its knowledge by some sort of crowdsourcing, which may result in these anomalous results.

\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline
Cheetah & They have been widely depicted in art, literature, advertising, and animation. & 0.17 & Yes \\ \hline
Cheetah & An open area with some cover, such as diffused bushes, is probably ideal for the cheetah because it needs to stalk and pursue its prey over a distance. & 0.10 & Yes \\ \hline
Dog & In conformation shows, also referred to as breed shows, a judge familiar with the specific dog breed evaluates individual purebred dogs for conformity with their established breed type as described in the breed standard. & 0.26 & No \\ \hline
Dog & In 2015, a study found that pet owners were significantly more likely to get to know people in their neighborhood than non-pet owners. Using dogs and other animals as a part of therapy dates back to the late 18th century, when animals were introduced into mental institutions to help socialize patients with mental disorders. & 0.17 & No \\ \hline
Frog & It is typically used when the frog has been grabbed by a predator and may serve to distract or disorient the attacker so that it releases the frog. & 0.19 & No \\ \hline
Frog & Frogs are used for dissections in high school and university anatomy classes, often first being injected with coloured substances to enhance contrasts among the biological systems. & 0.15 & No \\ \hline
Penguin & Several species are found in the temperate zone, and one species, the Galapagos penguin, lives near the Equator. & 0.11 & No \\ \hline
Penguin & In the 60s Batman TV series, as played by Burgess Meredith, he was one of the most popular characters, and in Tim Burton's reimagining of the character in the 1992 film Batman Returns, he employed an actual army of penguins (mostly African penguins and king penguins). & 0.09 & No \\ \hline
Snail & Snails have considerable human relevance, including as food items, as pests, and as vectors of disease, and their shells are used as decorative objects and are incorporated into jewelry. & 0.15 & No \\ \hline
Snail & Land snails are known as an agricultural and garden pest but some species are an edible delicacy and occasionally household pets. & 0.11 & No \\ \hline
\end{tabular}
\end{table}
Table 1: Negative Sampling
For further evaluation, we designed a couple of metrics - the number of questions it takes the Akinator to reconsider the correct animal after being thrown off by an incorrect answer (_detour recovery time_) and the best probability of the animal in the Akinator's guess list over the entire game (_best guess probability_).
Detour recovery timeWe measure the time (measured by the number of questions) taken by the Akinator to recover from an incorrect answer that knocks off the correct animal from the guess list to the point where it is reintroduced in the Akinator's guess list. Evaluating on our automated game results, we observe an average span of approximately 8 questions before the Akinator is able to come back on track. This gives us an intuition of how fast the model is able to redirect the Akinator's focus - the lesser the detour recovery time, the better. A longer detour recovery time would indicate that the pipeline has answered incorrectly multiple times in succession, which might cause the Akinator to drift further away from the actual answer. The Akinator is able to come back on track eventually most of the time because it does not seem to have a strong long term memory and focuses more on recent answers.
Best guess probabilityWe record the highest probability with which the correct animal features in the Akinator's guess list over the course of a single game. The average best guess probability for an experiment on 15 animals was 25.91%. This is a relatively high probability, given that most of the times when the Akinator considers an animal in the guess list, it starts off with a probability of less than 1%.
We propose an additional metric for future implementation to get a better understanding of our pipeline's performance - convergence rate. This metric could consider the initial probability (from the time that the correct animal shows up in the Akinator's guess list) and the final probability (highest probability achieved by the Akinator for the correct animal), and the rate of this increase over the number of questions asked between the initial and final probability timestamps. If the correct animal disappears from the Akinator's guess list, the convergence rate metric would be reset to zero. If an item does not converge, the convergence rate for that game would be zero.
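A possible implementation of this metric, under the reset-to-zero convention described above, is sketched below; the per-question probability trace is assumed to contain 0 whenever the correct animal is absent from the guess list.

```
# Convergence rate: probability gain per question between the correct animal's
# first appearance in the guess list and its peak probability; reset on dropout.
def convergence_rate(prob_trace):
    start_q = start_p = best_q = best_p = None
    for q, p in enumerate(prob_trace):
        if p == 0.0:                      # dropped out of (or absent from) the list
            start_q = start_p = best_q = best_p = None
            continue
        if start_q is None:
            start_q, start_p = q, p       # first (re)appearance
        if best_p is None or p > best_p:
            best_q, best_p = q, p         # running peak probability
    if start_q is None or best_q == start_q:
        return 0.0
    return (best_p - start_p) / (best_q - start_q)
```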
## 5 Limitations
As we defined a new problem in NLP and provided preliminary results for it, we observed some significant shortcomings in the problem definition, the current state of transformer models, our primary dataset BoolQ, the use of Wikipedia as our primary corpus, and word2vec models. While working with general entities, our baseline models failed to understand subtleties, as they seemed to require a vast amount of global information to decisively answer 'no'. Hence, to make the problem tractable, we modified the problem definition to only include animal names as our 'guess' words. Furthermore, the transformer models we worked with - DistilBERT and RoBERTa - showed difficulty in performing comparison and counting tasks. For example, our model would often fail when presented with questions such as 'Is it smaller than a monkey?' (comparative type) and 'Does it have 8 legs?' (counting type). While a human can visually comprehend such tasks, it is difficult to find sentences in a corpus that explicitly validate such facts. Moreover, as future scope in the Computer Vision domain, one can build a multimodal pipeline which combines ours with one that performs question answering by observing an image.
Another limitation of the transformer model is that the negative results are hard to guess - as mentioned in the BoolQ paper [4], the subtlety of negation lies in understanding that 'a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question'. As mentioned in RoBeRTa and DistilBERT sections, another limitation we observed was overfitting during our finetuning on the BoolQ dataset.
We observed that while BoolQ dataset is modelled to solve a yes/no problem, the subtleties between their and our problem definitions add up significantly. For instance, almost all of our questions start with the word 'is', however, more than 50% of our training data (5234 examples) consists of questions not starting with 'is'. Furthermore, as mentioned before, many 'animal' related questions required prior knowledge of other animals to answer correctly - however, the training corpus was largely devoid of questions from our problem domain. We also observed that both the Spacy and Gensim word2vec models had difficulty understanding the relationship between an animal and its parent class - for example, a 'tiger' had a higher correlation with a reptile, than with a carnivore or a mammal. This made it significantly difficult to perform positive sampling, requiring us to utilize UCI's zoo dataset [21] for obtaining the parent-child
relationships for positive/negative sampling. Lastly, we would like to stress that, in spite of the vast sea of resources in Wikipedia articles, we found many instances in which neither the Simple Wikipedia nor the full Wikipedia article contained a relevant sentence. For example, while tigers can swim well, their Wikipedia article makes no reference to it, which in turn confuses our model, since it depends on a strong textual reference to base its answer on.
## 6 Applying in practice
The biggest prerequisite to applying this approach in practice would be to fine-tune the yes/no model on a transformer trained on a larger dataset, such as GPT-2 (which has 1.5 billion parameters and was trained on a dataset of 8 million web pages) [22]. Another prerequisite would be to build a vast taxonomy to improve the performance of the positive/negative sampling stages of the pipeline. We also propose using a hybrid corpus consisting of answers from Wikipedia and domain-specific knowledge graphs. We observed that knowledge graphs such as DBpedia [16] heavily borrow their content from Wikipedia, making them less effective for this task. Moreover, if the domain problem requires a broader category of entities, we highly suggest creating a custom dataset for the task, instead of relying too heavily on BoolQ due to its limitations (as mentioned in the Limitations section). Lastly, if one expects the questions to include more than one pronoun, we encourage building a pronoun resolution model - starting with a baseline (like Hobbs' algorithm) [23] and eventually experimenting with Google's GAP dataset [24].
\begin{table}
\begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline
**Animal** & **Coreference resolution** & **Excerpt extracted** & **Probability** \\ \hline
Cheetah & No & They have been widely depicted in art, literature, advertising, and animation. & 0.17 \\ \hline
Cheetah & No & An open area with some cover, such as diffused bushes, is probably ideal for the cheetah because it needs to stalk and pursue its prey over a distance. & 0.10 \\ \hline
Cheetah & Yes & Generally, the female can not escape on her own; the males themselves leave after they lose interest in her. & 0.41 \\ \hline
Cheetah & Yes & Interaction with humans: The cheetah shows little aggression toward humans, and can be tamed easily, as it has been since antiquity. & 0.41 \\ \hline
Monkey & No & Some are kept as pets, others used as model organisms in laboratories or in space missions. & 0.24 \\ \hline
Monkey & No & They are used primarily because of their relative ease of handling, their fast reproductive cycle (compared to apes) and their psychological and physical similarity to humans. & 0.16 \\ \hline
Monkey & Yes & The most common monkey species found in animal research are the grivet, the rhesus macaque, and the crab-eating macaque, which are either wild-caught or purpose-bred. & 0.49 \\ \hline
Monkey & Yes & Some are kept as pets, others used as model organisms in laboratories or in space missions. & 0.45 \\ \hline
Elephant & No & In the past, they were used in war; today, they are often controversially put on display in zoos, or exploited for entertainment in circuses. & 0.26 \\ \hline
Elephant & No & It can be used for delicate tasks, such as wiping an eye and checking an orifice, and is capable of cracking a peanut shell without breaking the seed. & 0.13 \\ \hline
Elephant & Yes & Zoos and circuses: Elephants were historically kept for display in the menageries of Ancient Egypt, China, Greece, and Rome. & 0.50 \\ \hline
Elephant & Yes & In the past, they were used in war; today, they are often controversially put on display in zoos, or exploited for entertainment in circuses. & 0.44 \\ \hline
\end{tabular}
\end{table}
Table 2: An example showing the Coreference Resolution Dilemma
## 7 Future Work
It would be helpful if we could detect questions that require real world knowledge to answer. These questions are often in the form of comparisons to other objects/animals such as _Is your animal bigger than a human_? As future work, it would be interesting to identify questions that present a comparison-type query and answer these questions with an _idk_ to avoid confusing the model with confident but incorrect answers. The original Akinator tends to guess _a guy who answers randomly_ if the model answers _idk_, _probably_ or _probably not_ too many times. We could maintain a penalty for such answers that increases every time the model outputs an uncertain answer and decreases with every definite answer that the model outputs.
|
2302.11719
|
Shield Model Predictive Path Integral: A Computationally Efficient
Robust MPC Approach Using Control Barrier Functions
|
Model Predictive Path Integral (MPPI) control is a type of sampling-based
model predictive control that simulates thousands of trajectories and uses
these trajectories to synthesize optimal controls on-the-fly. In practice,
however, MPPI encounters problems limiting its application. For instance, it
has been observed that MPPI tends to make poor decisions if unmodeled dynamics
or environmental disturbances exist, preventing its use in safety-critical
applications. Moreover, the multi-threaded simulations used by MPPI require
significant onboard computational resources, making the algorithm inaccessible
to robots without modern GPUs. To alleviate these issues, we propose a novel
(Shield-MPPI) algorithm that provides robustness against unpredicted
disturbances and achieves real-time planning using a much smaller number of
parallel simulations on regular CPUs. The novel Shield-MPPI algorithm is tested
on an aggressive autonomous racing platform both in simulation and using
experiments. The results show that the proposed controller greatly reduces the
number of constraint violations compared to state-of-the-art robust MPPI
variants and stochastic MPC methods.
|
Ji Yin, Charles Dawson, Chuchu Fan, Panagiotis Tsiotras
|
2023-02-23T00:51:48Z
|
http://arxiv.org/abs/2302.11719v1
|
Shield Model Predictive Path Integral: A Computationally Efficient Robust MPC Approach Using Control Barrier Functions
###### Abstract
Model Predictive Path Integral (MPPI) control is a type of sampling-based model predictive control that simulates thousands of trajectories and uses these trajectories to synthesize optimal controls on-the-fly. In practice, however, MPPI encounters problems limiting its application. For instance, it has been observed that MPPI tends to make poor decisions if unmodeled dynamics or environmental disturbances exist, preventing its use in safety-critical applications. Moreover, the multi-threaded simulations used by MPPI require significant onboard computational resources, making the algorithm inaccessible to robots without modern GPUs. To alleviate these issues, we propose a novel (Shield-MPPI) algorithm that provides robustness against unpredicted disturbances and achieves real-time planning using a much smaller number of parallel simulations on regular CPUs. The novel Shield-MPPI algorithm is tested on an aggressive autonomous racing platform both in simulation and using experiments. The results show that the proposed controller greatly reduces the number of constraint violations compared to state-of-the-art robust MPPI variants and stochastic MPC methods.
## I Introduction
As robotics technologies develop, autonomous robots are expected to carry out more challenging tasks reliably. To accomplish these tasks in the presence of complex underlying dynamics and unknown environmental conditions, control methods are required to take into account the dynamics along with other user-specified safety constraints. Receding horizon control, also known as Model Predictive Control (MPC), is a control method that has been applied to generate optimal controls for constrained robotic systems [8]. Unlike more traditional PID or LQR controllers, MPC considers the future evolution of the system's behavior given the current observation of the states, thus achieving more robust planning [9, 10]. We refer the interested reader to [3] for a brief review of various categories of MPC algorithms.
Model Predictive Path Integral (MPPI) control is a sampling-based MPC method that relies on forward simulation of randomly sampled trajectories to synthesize an optimal control [11]. Compared with other MPC approaches, MPPI allows for more general forms of cost functions, including non-convex and even non-smooth costs. Typically, MPPI samples a large number of trajectories using a GPU, utilizing the GPU's parallel-computing ability to plan in real time with a sufficiently high control update frequency. Despite its attractive properties (e.g., simplicity and support for general nonlinear dynamics and cost functions), MPPI encounters several practical issues when deployed on actual hardware.
First, there exists a gap between the theory of MPPI and its practical implementation. Theoretically, given unlimited computational resources, MPPI will find the globally optimal control sequence, i.e., the algorithm is globally optimal if its planning horizon and trajectory sample budget are infinite. In practice, however, the available computational power is always limited. In the past, this problem has been mitigated with the use of GPUs using multi-threaded sampling. However, the majority of existing robots still do not have onboard GPUs due to their large size, high cost, and increased power consumption compared to CPUs.
Second, a limited computational budget means that MPPI becomes essentially a local search method. As a result, it requires good-quality samples in order to achieve satisfactory performance. Sampling trajectories close to the optimal solutions will significantly improve the performance of the baseline MPPI, just as the quality of initialization affects the performance of any local optimization method. A bad set of simulated trajectories with no feasible solutions can cause MPPI to make erroneous control decisions, leading to safety violations. In most cases, unexpected dynamical and environmental disturbances cause unsatisfactory behavior, as demonstrated in Fig. 1(a) and 1(b). In Fig. 1(a), the autonomous vehicle has a desirable sampling distribution inside the track, but the vehicle ends up in a state far from the simulated next state due to unexpected disturbances, which may lead to divergence as shown in Fig. 1(b).
Third, the baseline MPPI does not consider uncertainty in the environment or the dynamics, and thus neglects potential risks. Specifically, the original MPPI algorithm assumes deterministic dynamics in its trajectory sampling process and imposes a penalty in the cost function as a soft constraint rather than enforcing hard constraints. This use of cost penalties causes two implementation issues. First, the cost function has to be carefully tuned and weighted between rewards and penalties, creating the possibility that the algorithm can exploit loopholes in the cost design to make undesirable decisions (so-called "reward hacking" [12]). Secondly, MPPI has no firm guarantees of safety, which can be problematic for many time- and safety-critical applications, such as autonomous driving.
### _Related Work_
Many variants of MPPI have been proposed to address the previous practical limitations. These variants fall into three general categories. The first category includes methods designed to address potential planning risks by adding an
extra penalty to the sampled trajectories that come close to areas of high uncertainties or risk, pushing the resulting optimal trajectory to high confidence, safer regions, as demonstrated in Fig. 1(c). For example, [1] uses a data-driven approach to identify uncertainties and avoid potential dangers. The authors of [2] propose a method to generate risk-averse controls by evaluating the risk in real time and accounting for systematic uncertainties. The major drawback of this category of algorithms is that they may still generate infeasible solutions if none of the sampled trajectories is feasible.
The second category of MPPI variants achieves robust planning by adjusting the distribution of the simulated trajectories to improve sampling efficiency, as described in Fig. 1(d). Reference [3] utilizes covariance steering theory to accomplish flexible trajectory distribution control for MPPI, introducing the final state covariance as a hyper-parameter to adjust the sampling distribution. Other similar methods include [5], which uses a control barrier function to create trust regions for reliable samples, and [4], which uses a normalizing flow to produce efficient sampling distributions. The limitation of these controllers is that their distribution generation method may be biased due to insufficient training data, leading to poor performance. In addition, the constraints on the sampling distribution may limit exploration and lead to sub-optimal plans.
The third category of MPPI extensions addresses systematic uncertainties by closing the gap between MPPI simulations and the actual system [6, 7] using an additional complementary controller, such as iLQG, to track the MPPI optimal trajectory, as demonstrated in Fig. 1(e). These approaches perform well when the sim-to-real gap is small; however, they do not explicitly address risk and they provide no guarantees of safety when the environment changes. Such cases are common in autonomous car and drone racing.
### _Contributions_
In this work, we combine control barrier functions with MPPI to develop a safe control approach for general nonlinear systems. Barrier functions are a commonly used verification approach for safety-critical systems that have gained popularity in recent years due to their ability to ensure safety for a wide variety of dynamical systems with safety constraints [13, 14, 15].
We integrate the discrete-time control barrier functions (DCBF [16]) with the MPPI algorithm. The resulting Shield-MPPI controller uses a DCBF as a shield to guarantee safety, by filtering the control actions chosen by MPPI to ensure that safety constraints are not violated. Our approach is inspired from the use of similar safety shields in reinforcement learning [17], as demonstrated by Fig. 1(f). The proposed Shield-MPPI possesses two properties that ensure robust planning. First, the control actions generated by the Shield-MPPI controller render the specified safe sets forward-invariant, i.e., a Shield-MPPI agent starting inside the safe set will always remain safe. Second, if the agent exits the safe set (for example, due to unexpectedly large disturbances), its state will converge back to the safe set, recovering safety as will be discussed in Section III. We will discuss these properties in more detail in Sections III and IV before providing an experimental characterization of our system in Section VI. In our experiments, the proposed Shield-MPPI controller reduced the chances of a potential car crash to almost zero, while achieving approximately \(10\!-\!15\%\) speed improvement with less than \(0.5\%\) of the trajectory samples used by MPPI.
## II Model Predictive Path Integral Control
Consider a general, discrete nonlinear system,
\[x_{k+1}=f(x_{k},u_{k}), \tag{1}\]
where \(x_{k}\in\mathcal{D}\subseteq\mathbb{R}^{n_{x}}\) is the system state and \(u_{k}\in\mathbb{R}^{n_{u}}\) is the control input at time step \(k=0,\ldots,K-1\). It is assumed that, given some mean control \(v_{k}\in\mathbb{R}^{n_{u}}\) and covariance matrix \(\Sigma_{\epsilon}\in\mathbb{R}^{n_{u}\times n_{u}}\), the actual control follows a Gaussian distribution according to \(u_{k}\sim\mathcal{N}(v_{k},\Sigma_{\epsilon})\). Consequently, the control sequence \(\mathbf{u}=(u_{0},\ldots,u_{K-1})\) has distribution \(\mathbb{Q}\) with density function,
\[\mathbf{q}(\mathbf{u})=((2\pi)^{n_{u}}|\Sigma_{\epsilon}|)^{-\frac{1}{2}} \prod_{k=0}^{K-1}e^{-\frac{1}{2}(u_{k}-v_{k})^{\intercal}\Sigma_{\epsilon}^{ -1}(u_{k}-v_{k})}. \tag{2}\]
Fig. 1: Comparison of different MPPI variants in the presence of unexpected disturbances. (a)-(b) Environmental disturbances may cause the baseline MPPI to diverge; (c) Some MPPI variants [1, 2] penalize trajectories that enter uncertain regions, but provide no guarantees of feasibility; (d) Others variants [3, 4, 5] tune the sampling distribution to avoid infeasible states, but these methods can be sub-optimal due to limited exploration or can be biased due to insufficient training data; (e) Variants like [6, 7] pair MPPI with a robust tracking controller, which provides good performance when environmental uncertainty is small but do not formally guarantee safety; (f) The proposed Shield-MPPI guarantees safety even when all trajectory samples deviate from the safe regions, generating feasible solutions shown by the yellow trajectory.
Define the objective function
\[J(\mathbf{v})=\mathbb{E}_{\mathbb{Q}}\left[\phi(x_{K})+\sum_{k=0}^{K-1}\left(q(x_{ k})+\frac{\lambda}{2}v_{k}^{T}\Sigma_{\epsilon}^{-1}v_{k}\right)\right], \tag{3}\]
where \(q(x_{k})\) and \(\phi(x_{K})\) are the state-dependent step cost and terminal cost, respectively. As shown in [11], the optimal distribution \(\mathbb{Q}^{*}\) that achieves the minimal value of (3) has a density function given by
\[\mathbf{q}^{*}(\mathbf{u})=\frac{1}{\mu}e^{-\frac{1}{\lambda}\left(\phi(x_{K}) +\sum_{k=0}^{K-1}q(x_{k})\right)}\,\mathbf{p}(\mathbf{u}), \tag{4}\]
where \(\mathbf{p}(\mathbf{u})\) is the density function of an (uncontrolled) base distribution \(\mathbb{P}\) resulting from a zero-mean control sequence (\(\mathbf{v}=0\)), and,
\[\mu=\int e^{-\frac{1}{\lambda}\left(\phi(x_{K})+\sum_{k=0}^{K-1}q(x_{k}) \right)}\,\mathbf{p}(\mathbf{u})\,\mathrm{d}\mathbf{u}. \tag{5}\]
Consequently, the problem of optimizing (3) is converted to minimizing the KL divergence between (4) and (2). Applying importance sampling, the resulting optimal controls \(v_{k}^{+}\) can be evaluated using the distribution \(\mathbb{Q}\) as,
\[v_{k}^{+}=\mathbb{E}_{\mathbb{Q}}[u_{k}w(\mathbf{u})], \tag{6}\]
where,
\[w(\mathbf{u})=\frac{1}{\eta}e^{-\frac{1}{\lambda}S(\mathbf{u})}. \tag{7}\]
and the trajectory cost \(S(\mathbf{u})\) is given by
\[S(\mathbf{u})=\phi(x_{K})+\sum_{k=0}^{K-1}q(x_{k})+\lambda\sum_{k=0}^{K-1}v_{ k}^{\intercal}\Sigma_{\epsilon}^{-1}u_{k}. \tag{8}\]
The denominator \(\eta\) in (7) is
\[\eta=\int e^{-\frac{1}{\lambda}S(\mathbf{u})}\,\mathrm{d}\mathbf{u}. \tag{9}\]
In practice, (6) can be calculated using Monte-Carlo sampling as follows. Let \(u_{k}=v_{k}+\epsilon_{k}^{m}\), where \(\epsilon_{k}^{m}\sim\mathcal{N}(0,\Sigma_{\epsilon})\) is the sampled control noise for the \(m^{\text{th}}\) simulated trajectory at the \(k^{\text{th}}\) time step. The control update law (6) can then be converted to,
\[v_{k}^{+}=\mathbb{E}_{\mathbb{Q}}[(v_{k}+\epsilon_{k})w(\mathbf{u})]\approx v _{k}+\sum_{m=1}^{M}\omega_{k}^{m}\epsilon_{k}^{m}/\sum_{m=1}^{M}\omega_{k}^{m}, \tag{10}\]
where \(\omega_{k}^{m}\) is the weight for \(\epsilon_{k}^{m}\) given by (7), which can be evaluated as,
\[\omega^{m}=\text{exp}\left(-\frac{1}{\lambda}\left(S^{m}-\min_{m=1,\ldots,M}S ^{m}\right)\right), \tag{11}\]
where the hyper-parameter \(\lambda\) can be used to determine how selective the MPPI algorithm is for the sampled trajectories. For simplicity, in (11) we use \(S^{m}\) in place of \(S(\mathbf{u}^{m})\) to denote the cost of the \(m^{\text{th}}\) simulated trajectory, and the term \(\min_{m=1,\ldots,M}S^{m}\) is introduced to ensure numerical stability without changing the solution. It follows from (8) that the cost of the \(m^{\text{th}}\) trajectory sample is evaluated as,
\[S^{m}=\phi(x_{K}^{m})+\sum_{k=0}^{K-1}\left[q(x_{k}^{m})+\lambda\,v_{k}^{\intercal}\Sigma_{\epsilon}^{-1}\left(v_{k}+\epsilon_{k}^{m}\right)\right]. \tag{12}\]
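For illustration, the control update in Eqs. (10)-(12) can be written in a few lines of NumPy as below; the dynamics \(f\), the costs \(q\) and \(\phi\), and all dimensions are placeholders rather than the models used later in the paper.

```
# Monte-Carlo MPPI update: sample control noise, roll out M trajectories,
# weight them by exp(-S/lambda), and shift the mean control sequence.
import numpy as np

def mppi_step(x0, v, f, q, phi, Sigma_eps, lam, M):
    """x0: state, v: (K, n_u) mean controls, f/q/phi: dynamics and costs."""
    K, n_u = v.shape
    Sigma_inv = np.linalg.inv(Sigma_eps)
    eps = np.random.multivariate_normal(np.zeros(n_u), Sigma_eps, size=(M, K))
    S = np.zeros(M)
    for m in range(M):                     # forward-simulate M trajectories
        x = x0
        for k in range(K):
            u = v[k] + eps[m, k]
            S[m] += q(x) + lam * v[k] @ Sigma_inv @ u   # running + control cost
            x = f(x, u)
        S[m] += phi(x)                                  # terminal cost
    w = np.exp(-(S - S.min()) / lam)                    # Eq. (11)
    w /= w.sum()
    return v + np.einsum("m,mku->ku", w, eps)           # Eq. (10)
```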
## III Discrete-time Control Barrier Function
Let \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a Lipschitz continuous function, and define the safe set \(\mathcal{S}\subseteq\mathcal{D}\subset\mathbb{R}^{n}\) as
\[\mathcal{S}\coloneqq\{x\in\mathcal{D}|h(x)\geq 0\}. \tag{13}\]
Let \(\mathcal{U}\) denote the set of feasible controls. The function \(h\) is a DCBF for system (1) if, for all \(x\in\mathcal{D}\), there exists a control \(v\in\mathcal{U}\), such that,
\[h(f(x,v))-h(x)\geq-p(h(x)), \tag{14}\]
for a class-\(\kappa\) function \(p:\mathbb{R}\rightarrow\mathbb{R}\). In this work, we use the specific form of class-\(\kappa\) function as follows
\[p(r)=\beta\,r,\quad\beta\in(0,1). \tag{15}\]
**Property III.1**: _Given an initial condition \(x_{0}\in\mathcal{S}\) and a control sequence \(\{v_{k}\}_{k=0}^{\infty}\) such that all (\(x_{k}\), \(v_{k}\)) pairs satisfy (14), then \(x_{k}\in\mathcal{S}\) for all \(k\in\mathbb{Z}_{\geq 0}\)._
Condition (14) implies that \(h(x_{k})\geq(\mathrm{Id}-\beta)\circ h(x_{k-1})\), where \(\circ\) denotes function composition and \(\mathrm{Id}\) denotes the identity function [16]. Since \(h(x_{1})\geq(\mathrm{Id}-\beta)\circ h(x_{0})\), it follows that,
\[h(x_{k})\geq(\mathrm{Id}-\beta)^{k}\circ h(x_{0}). \tag{16}\]
Since \((\mathrm{Id}-\beta)\) is a class-\(\kappa\) function for \(\beta\in(0,1)\), it follows from \(h(x_{0})\geq 0\) that \(h(x_{k})\geq 0\). Hence, the set \(\mathcal{S}\) is forward invariant.
**Property III.2**: _Let \(x_{0}\in\mathcal{D}\setminus\mathcal{S}\) and let \(\{v_{k}\}_{k=0}^{\infty}\) be a control sequence such that, for all \(k\in\mathbb{Z}_{\geq 0}\), the pair (\(x_{k}\), \(v_{k}\)) satisfies (14). Then, the state \(x_{k}\) converges to the safe set \(\mathcal{S}\) asymptotically._
Note that, as \(k\rightarrow\infty\), \((\mathrm{Id}-\beta)^{k}\circ h(x_{0})\to 0\). Hence, (16) yields \(\liminf_{k\rightarrow\infty}h(x_{k})\geq 0\), that is, the state asymptotically approaches the safe set \(\mathcal{S}\).
## IV Double-layer Safety Shield using a DCBF
Integrating safety constraints into an MPPI controller is non-trivial. Since the transition from safe to unsafe states can be abrupt, the controller must consider a sufficiently long planning horizon in order to ensure that the system remains safe far into the future. Unfortunately, the need to consider a long planning horizon increases the computation required to evaluate the controller, particularly when the MPPI controller also needs to consider a large number of these long trajectories in order to find near-optimal actions.
To reduce the computational burden required to implement a version of safe MPPI, we make two key modifications to the baseline MPPI controller. First, to allow the controller to preserve safety while using a shorter planning horizon, we integrate a control barrier function term into our cost function; this CBF enables the controller to determine whether an action is safe or not, while only considering a handful of steps into the future. However, even including a CBF term in the cost may not be enough to ensure safety if the MPPI controller does not consider a large enough sample of trajectories (as this can result in sub-optimal behavior and violation of the CBF's safety guarantee). To mitigate this issue and allow the controller to maintain safety even when considering only a small population of trajectories, we combine the CBF-augmented MPPI controller with a local repair step, as shown in Fig. 2.
### _Safe Shielding by Modified Trajectory Costs_
The first component of the proposed control architecture is a standard MPPI sampling process with a state-dependent barrier function term included in the costs of the sampled trajectories. To this end, letting \(\alpha=1-\beta\in(0,1)\), we define the DCBF constraint-violation penalty cost
\[C_{\text{cbf}}(x_{k},x_{k-1})=C\,\max\{-h(x_{k})+\alpha h(x_{k-1}),0\}, \tag{17}\]
where \(C\) is a parameter that determines how much penalty cost should be applied in proportion to the amount of constraint violation. In order to augment the CBF constraint into the MPPI cost, we introduce, for each \(k=1,\ldots,K\) the augmented state \(z_{k}=(z_{k}^{(1)},z_{k}^{(2)})=(x_{k},x_{k-1})\in\mathbb{R}^{2n_{x}}\) and the corresponding augmented state system
\[z_{k+1}=\begin{bmatrix}z_{k+1}^{(1)}\\ z_{k+1}^{(2)}\end{bmatrix}=\begin{bmatrix}x_{k+1}\\ x_{k}\end{bmatrix}=\begin{bmatrix}f(z_{k}^{(1)},u_{k})\\ z_{k}^{(1)}\end{bmatrix}=\tilde{f}(z_{k},u_{k}). \tag{18}\]
In the new coordinates, equation (17) takes the form
\[C_{\text{cbf}}(z_{k})=C\,\max\{-h(z_{k}^{(1)})+\alpha h(z_{k}^{(2)}),0\}. \tag{19}\]
The new terminal and running costs corresponding to the augmented system (18) are then defined as \(\tilde{\phi}(z_{K})=\phi(z_{K}^{(1)})+C_{\text{cbf}}(z_{K})\) and \(\tilde{q}_{k}(z_{k})=q(z_{k}^{(1)})+C_{\text{cbf}}(z_{k})\), respectively. Using the augmented system, the cost of the \(m^{\text{th}}\) simulated trajectory \(S^{m}\) in (12) is modified as,
\[\tilde{S}^{m}=S^{m}+\sum_{k=0}^{K}C_{\text{cbf}}(z_{k}^{m}), \tag{20}\]
where for simplicity, we assume that \(z_{0}^{(2)}=x_{-1}=x_{0}\). If the barrier function constraint (14) is satisfied it follows that \(-h(x_{k})+\alpha h(x_{k-1})\leq 0\) and the cost term (19) becomes zero, and hence the system will remain safe. Otherwise, the augmented cost \(\tilde{q}_{k}\) penalizes the simulated trajectories that violate condition (14), so that they are weighted less during the synthesis of the MPPI control sequence.
In short, in this step, the MPPI algorithm is applied to system (18) with cost
\[\min_{\mathbf{v}}J(\mathbf{v})=\\ \mathbb{E}\left[\tilde{\phi}(z_{K})+\sum_{k=0}^{K-1}\left(\tilde{q }(z_{k})+\frac{\lambda}{2}v_{k}^{\intercal}\Sigma_{\epsilon}^{-1}v_{k}\right) \right], \tag{21}\]
to yield a sequence of "near-optimal" nominal controls \(\mathbf{v}^{+}=(v_{0}^{+},v_{1}^{+},\ldots,v_{K-1}^{+})\).
### _Control Shielding Using Gradient-based Optimization_
The MPPI optimization process is not guaranteed to find a solution with zero CBF violation with limited trajectory samples. To guard against this case, we add a "local repair" step where we seek to locally optimize the output control sequence \(\mathbf{v}^{+}\) and minimize its violation of the CBF condition, solving the optimization problem,
\[v_{0:N}^{\text{safe}}=\operatorname*{argmax}_{v_{0:N}}\,\,\sum_{k=0}^{N}\min\{h(x_{k+1})-\alpha h(x_{k}),0\}, \tag{22}\]
subject to (1), where \(x_{0}\) is the current state and \(N\) is the planning horizon for the local repair (typically smaller than the MPPI control horizon \(K\)). If the CBF condition \(h(x_{k+1})-\alpha h(x_{k})\geq 0\) is satisfied for \(k=0,1,\ldots,N\), then the objective of this problem will be \(0\), and it will be negative when the CBF condition is not satisfied. We solve this nonlinear problem locally using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [18]. The BFGS is a first-order, gradient-based optimizer with a time complexity of \(\mathcal{O}(n^{2})\), which is significantly faster compared to Newton's method which is of order \(\mathcal{O}(n^{3})\). Due to the real-time constraints on the controller, we do not run this optimization until convergence but instead run it for a fixed number of steps, thus sacrificing any guarantees of local optimality but providing an effective heuristic to ensure safety. This approach is illustrated in Algorithm 1.
```
Given: Model \(f\), repair steps \(n_{s}\), MPPI horizon \(K\), repair horizon \(N<K\), step size \(\delta\);
Input: Current state \(x_{0}\), control sequence \(\mathbf{v}^{+}\);
Output: Safe control \(v_{0:N}^{\text{safe}}\)
1  \(v_{0:N}^{\text{safe}}\gets v_{0:N}^{+}\);
2  for \(n_{s}\) steps do
3      \(v_{0:N}^{\text{safe}}\gets v_{0:N}^{\text{safe}}+\delta\,\nabla_{v_{0:N}^{\text{safe}}}\sum_{k=0}^{N}\min\{h(f(x_{k},v_{k}^{\text{safe}}))-\alpha h(x_{k}),0\}\);
4  end
```
**Algorithm 1**Safety Shield
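A rough Python rendering of this repair step, using SciPy's BFGS with a capped number of iterations, might look as follows; the dynamics \(f\), the barrier \(h\), and the use of finite-difference gradients are simplifying assumptions rather than the implementation used in the experiments.

```
# Local repair of the MPPI control sequence: maximize the summed CBF-violation
# terms of Eq. (22) with a fixed BFGS iteration budget.
import numpy as np
from scipy.optimize import minimize

def shield_repair(x0, v_plus, f, h, alpha, N, n_u, max_iter=5):
    def neg_objective(v_flat):
        v = v_flat.reshape(N, n_u)
        x, total = x0, 0.0
        for k in range(N):
            x_next = f(x, v[k])
            total += min(h(x_next) - alpha * h(x), 0.0)  # only violations count
            x = x_next
        return -total                                    # maximize -> minimize
    res = minimize(neg_objective, np.asarray(v_plus)[:N].reshape(-1),
                   method="BFGS", options={"maxiter": max_iter})
    return res.x.reshape(N, n_u)
```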
Fig. 2: Shield-MPPI control architecture
## V Shield-MPPI Algorithm
The proposed Shield-MPPI is described in Algorithm 2. Line 2 computes the estimate of the current system state \(x_{0}\). Lines 3 to 13 describe the trajectory sampling and cost evaluation process, where Line 4 sets the initial conditions, Line 5 samples the \(m^{\text{th}}\) control noise sequence \(\mathbf{\epsilon}^{m}\), Line 7 sums the mean control \(v_{k}\) and sampled control noise and Line 8 uses the resulting input \(u_{k}^{m}\) to propagate system state. Lines 10 and 12 evaluate the modified trajectory cost \(\tilde{S}^{m}\) with the DCBF constraint violation penalty (19) following (12) and (20). Line 14 calculates the optimal control \(\mathbf{v}^{+}\) using the update law (10). To guarantee safety, Line 15 solves the nonlinear optimization problem (22) from Algorithm 1 and obtains the safe control sequence \(\mathbf{v}^{\text{safe}}\). Finally, Line 16 executes the safe controls and Line 17 sets \(\mathbf{v}^{+}\) as the mean control sequence for "warm starting" the next control iteration.
```
Given: Shield-MPPI costs \(q(\cdot),\phi(\cdot)\), parameters \(\gamma,\Sigma_{\epsilon}\);
Input: Initial control sequence \(\mathbf{v}\)
1  while task not completed do
2      \(x_{0}\leftarrow\text{GetStateEstimate}()\);
3      for \(m\gets 0\) to \(M-1\) in parallel do
4          \(x_{0}^{m}\gets x_{0}\), \(z_{0}^{m}\leftarrow[x_{0}^{\intercal},x_{0}^{\intercal}]^{\intercal}\), \(\tilde{S}^{m}\gets 0\);
5          Sample \(\mathbf{\epsilon}^{m}\leftarrow\{\epsilon_{0}^{m},\ldots,\epsilon_{K-1}^{m}\}\);
6          for \(k\gets 0\) to \(K-1\) do
7              \(u_{k}^{m}\gets v_{k}+\epsilon_{k}^{m}\);
8              \(x_{k+1}^{m}\gets f(x_{k}^{m},u_{k}^{m})\);
9              \(z_{k+1}^{m}\leftarrow[(x_{k+1}^{m})^{\intercal},(x_{k}^{m})^{\intercal}]^{\intercal}\);
10             \(\tilde{S}^{m}\leftarrow\tilde{S}^{m}+q(x_{k}^{m})+\gamma v_{k}^{\intercal}\Sigma_{\epsilon}^{-1}u_{k}^{m}+C_{\text{cbf}}(z_{k}^{m})\);
11         end for
12         \(\tilde{S}^{m}\leftarrow\tilde{S}^{m}+\phi(x_{K}^{m})+C_{\text{cbf}}(z_{K}^{m})\);
13     end for
14     \(\mathbf{v}^{+}\leftarrow\text{OptimalControl}(\{\tilde{S}^{m}\}_{m=0}^{M-1},\{\mathbf{u}^{m}\}_{m=0}^{M-1})\);
15     \(\mathbf{v}^{\text{safe}}\leftarrow\text{SafetyShield}(x_{0},\mathbf{v}^{+})\);
16     ExecuteCommand(\(v_{0}^{\text{safe}}\));
17     \(\mathbf{v}\leftarrow\mathbf{v}^{+}\);
18 end while
```
**Algorithm 2**Shield-MPPI Algorithm
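As a reading aid, the sampling-and-cost loop of Lines 3-13 can be sketched in Python as follows; all callables (f, q, phi, C_cbf) and array shapes are illustrative assumptions, and the real controller evaluates the M rollouts in parallel rather than in a Python loop.

```python
import numpy as np

def rollout_costs(x0, v, eps, f, q, phi, C_cbf, gamma, Sigma_eps_inv):
    """Penalized trajectory costs S~^m for M sampled noise sequences (Lines 3-13 of Algorithm 2).

    v: (K, nu) mean controls; eps: (M, K, nu) sampled noise; x0: current state.
    """
    M, K, _ = eps.shape
    S = np.zeros(M)
    for m in range(M):                               # run in parallel in practice
        x = x0.copy()
        z = np.concatenate([x0, x0])                 # z_0^m = [x_0; x_0]
        for k in range(K):
            u = v[k] + eps[m, k]                     # Line 7: perturbed control
            x_next = f(x, u)                         # Line 8: propagate dynamics
            z_next = np.concatenate([x_next, x])     # Line 9: stacked state pair for the DCBF penalty
            S[m] += q(x) + gamma * v[k] @ Sigma_eps_inv @ u + C_cbf(z)   # Line 10
            x, z = x_next, z_next
        S[m] += phi(x) + C_cbf(z)                    # Line 12: terminal cost and penalty
    return S
```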
## VI Simulation and Experiments
In this section, we present simulation and experimental results obtained from running the proposed Shield-MPPI controller on an autonomous racing platform. Specifically, we discuss the choice of the DCBF function \(h(x)\) along with its corresponding safe set \(\mathcal{S}\), and the underlying dynamical system used in these experiments.
### _AutoRally Racing Platform_
We use the AutoRally racing platform [19] for simulation as well as experiments. The AutoRally is an electric autonomous robot at \(1/5\) the scale of an actual vehicle, approximately 1 m in length and 0.4 m in width, and weighing about 22 kg [19]. We model the dynamics of the AutoRally vehicle using a discrete-time system as in (1), based on the single-track bicycle model described in [20], where the system state is \(x=[v_{x},v_{y},\dot{\psi},\omega_{F},\omega_{R},e_{\psi},e_{y},s]^{\intercal}\), and the state variables represent the longitudinal velocity, lateral velocity, yaw rate, front-wheel speed, rear-wheel speed, yaw angle error, lateral deviation, and distance progress made along the track centerline, respectively. The control input is \(u=[\delta,T]^{\intercal}\), where \(\delta\) is the steering angle and \(T\) is the throttle.
### _Safe Set_
Assuming that the racing track has constant track width \(2w_{\text{T}}\), it is desirable that the vehicle's lateral deviation \(e_{y}\) from the track centerline is bounded by \(|e_{y}|\leq w_{\text{T}}\), such that the vehicle avoids collision with the track boundaries. To this end, we define the function,
\[h(x)=w_{\text{T}}^{2}-e_{y}^{2}, \tag{23}\]
that fulfills the DCBF constraint (14), that is, \(h(x)\geq 0\) if and only if the vehicle is inside the track boundaries. It follows from (13) that the safe set \(\mathcal{S}\) consists of all states inside the racing track, and Property 3.1 indicates that any control policy satisfying (14) renders \(\mathcal{S}\) forward-invariant. In addition, Property 3.2 guarantees asymptotic convergence to \(\mathcal{S}\) in the case when system state is not in \(\mathcal{S}\).
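A minimal sketch of this barrier and the pointwise DCBF check, assuming the state ordering given above (so that \(e_y\) is the seventh state component); the indices and names are illustrative only.

```python
def h_track(x, w_T):
    """DCBF for track-boundary safety: non-negative iff |e_y| <= w_T."""
    e_y = x[6]                      # lateral deviation from the track centerline
    return w_T**2 - e_y**2

def dcbf_satisfied(x_k, x_next, w_T, alpha):
    """Discrete CBF condition h(x_{k+1}) - alpha*h(x_k) >= 0, with 0 < alpha <= 1."""
    return h_track(x_next, w_T) - alpha * h_track(x_k, w_T) >= 0.0
```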
### _Controller Cost Design_
In the trajectory cost (12), the state-dependent running cost \(q(x_{k}^{m})\) can be arbitrary. In our simulations and experiments, we used the following state-dependent cost,
\[q(x_{k}^{m})=(x_{k}^{m}-x_{g})^{\intercal}Q(x_{k}^{m}-x_{g})+\mathbf{1}(x_{k}^ {m}), \tag{24}\]
where \(Q=\mathrm{diag}(q_{v_{x}},q_{v_{y}},q_{\dot{\psi}},q_{\omega_{F}},q_{\omega_{R}},q_{e_{\psi}},q_{e_{y}},q_{s})\) contains the cost weights, \(x_{g}=[v_{g},0,\ldots,0]^{\intercal}\) sets the target velocity, and,
\[\mathbf{1}(x_{k}^{m})=\left\{\begin{array}{ll}0,&\text{if $x_{k}^{m}$ is within the track},\\ C_{\text{obs}},&\text{otherwise}.\end{array}\right. \tag{25}\]
is the collision penalty cost.
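A sketch of this running cost, again taking the lateral deviation as the seventh state component; the weight vector and penalty constant are placeholders, not the values used in our experiments.

```python
import numpy as np

def running_cost(x, x_g, q_weights, w_T, C_obs):
    """Quadratic state cost plus the collision indicator penalty of Eqs. (24)-(25)."""
    dx = x - x_g
    cost = float(dx @ (np.asarray(q_weights) * dx))   # dx^T Q dx with Q = diag(q_weights)
    if abs(x[6]) > w_T:                               # outside the track boundaries
        cost += C_obs
    return cost
```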
### _Cost Sensitivity Comparison_
A common problem among optimization algorithms is that the cost functions need to be carefully tuned for specific tasks. This is also the case for most MPC controllers, including MPPI. In this section, we investigate the proposed Shield-MPPI's ability to guard against poor control decisions made by MPPI, by running both controllers with \(M=10^{4}\) trajectory samples evaluated in parallel on a GPU. Normally, the cost weights in (24) need to be carefully designed empirically so that the original MPPI controller achieves satisfactory performance. For the vehicle system (1), the weight on lateral deviation, \(q_{e_{y}}\) in (24), has a significant impact on the autonomous vehicle's maneuvers. While small \(q_{e_{y}}\) values allow the vehicle to perform aggressive and more time-efficient maneuvers such as cutting corners, a large \(q_{e_{y}}\) makes the system stay close to the track centerline, reducing the chances of a collision with the track boundaries, but at the cost of less efficient trajectories. To this end, we tested the original MPPI controller together with the proposed Shield-MPPI controller in simulation, and compared their performance over a wide range of \(q_{e_{y}}\) values.
We define a crash to be the situation where the vehicle deviates far from the track centerline and comes to a complete stop after hitting the track boundaries, and a collision to be the case where the vehicle slightly scrapes the track boundaries but does not halt. The first row of plots in Fig. 3 shows the crash rates within one lap, and the second row of plots shows the number of collisions. The third row illustrates the lap time, which is the time until a crash occurs or the time spent finishing one lap without a crash. The fourth row illustrates the average velocity achieved by the vehicle. For a cost interval \(q_{e_{y}}\in[0,50]\), as shown in Fig. 3, the original MPPI's crash rate and number of collisions increase as the target velocity increases, while the proposed Shield-MPPI controller maintains zero crashes and collisions throughout. Consequently, the plots in the third row of Fig. 3 indicate that MPPI tends to crash and fail earlier than the Shield-MPPI. Another observation is that the proposed Shield-MPPI achieves safety at higher velocities than the original MPPI, implying that the proposed approach generates more efficient maneuvers. We visualize trajectories produced by both controllers at \(q_{e_{y}}=30\) with target velocity \(v_{g}=\) 7 m/s in Fig. 4. A portion of the blue trajectories stops abruptly at the track boundaries, indicating crashes caused by MPPI. Some other MPPI trajectories slightly cross the track boundaries and cause minor collisions. The trajectories generated by the proposed Shield-MPPI exhibit safe and more efficient driving maneuvers, including cutting corners to avoid losing speed and to shorten the distance traveled, without any collisions.
### _Simulations with Limited Computational Resources_
As discussed in Section I, the quality of trajectory samples plays an important role in the optimal control generation for all MPPI-type algorithms. Typically, MPPI and its variants rely on the parallel computing capabilities of modern GPUs to sample as many simulated trajectories as possible in order to find optimal solutions for successful motion generation. However, most robots are not equipped with GPUs due to their size, cost, and power requirements. For this reason, the application of MPPI controllers has been restricted to relatively expensive, large-scale robotic systems, while robots designed for affordability, with limited power and size, lack the onboard computational resources required to sample a sufficient number of trajectories in real time.
To study the proposed algorithm's performance under limited computational resources, we run the original MPPI controller and the proposed Shield-MPPI controller in simulation using as few simulated trajectories as possible with short control horizons on a CPU. From the simulation results
Fig. 4: Shield-MPPI and MPPI trajectory visualization.
Fig. 5: Comparison of MPPI and Shield-MPPI using CPU implementation.
Fig. 3: Cost sensitivity comparison between Shield-MPPI and MPPI. Each column is obtained by running the controllers using a different target velocity \(v_{g}\). The blue curves show the performance of the standard MPPI controller, while the orange curves indicate the proposed Shield-MPPI controller. The curves represent average performance with the shaded tubes showing the \(95\%\) confidence intervals.
illustrated in Fig. 5, it is shown that all controllers achieve lower collision rates by increasing the number of trajectory samples and the control horizon. The blue curve in the figure indicates that the standard MPPI has the highest collision rate. The MPPI using only the DCBF cost modification as described by the orange module in Fig. 2 achieves lower collision rates compared to the standard MPPI, while the proposed two-layer Shield-MPPI, shown in green, results in the minimum number of collisions throughout the entire parameter interval studied.
To further investigate the influence of the second layer safety shield described by Algorithm 1 used in Shield-MPPI, we created a heat map, as shown in Fig. 6(a), to demonstrate the amount of crash rate reduction as a result of Algorithm 1, using the same data as in Fig. 5. The negative numbers in the figure indicate collision rate reduction, where a darker color means more improvement owing to the safety shield. It can be observed that the safety shield in Algorithm 1 tends to provide more protection against potential crashes when the control horizon \(K\) and the number of trajectory samples \(M\) are small, with darker cells appearing in the top-left triangle and lighter ones at the bottom-right corner. Fig. 6(b) shows the absolute collision rates resulting by the proposed Shield-MPPI controller, indicating that the proposed approach achieves zero collisions with merely \(50\) samples and about 1.5 s control horizon.
### _Comparison with other Robust MPC Methods_
To validate the robustness of the proposed Shield-MPPI controller, we compared it with other state-of-the-art controllers that take uncertainties into account during planning. In simulations, we model external disturbances by adding some Gaussian noise \(w_{k}\) to the nominal system (1). It follows that the disturbed system is given by,
\[x_{k+1}=f(x_{k},u_{k})+w_{k}. \tag{26}\]
We ran simulations using the Risk-aware MPPI (RA-MPPI) in [2] and the Covariance Steering Stochastic MPC (CS-SMPC) described in [8] to compare with our proposed approach. In addition, we also used a hypothetical Perfect Tracking MPPI (PT-MPPI) that ensures that the actual next state of the agent is the same as the predicted next state from the MPPI, regardless of any disturbances. The PT-MPPI assumes perfect trajectory tracking with zero tracking error. It is therefore an ideal controller that gives an estimate of the performance upper bound of the tracking-based robust MPPI variants demonstrated in Fig. 1(e), including the Tube-MPPI [6] and Robust-MPPI [7], etc. To thoroughly test the robustness of the controllers, we use a poor cost design that tends to cause more collisions, and all controllers share the same objective function and control horizon. All MPPI variants sample \(10^{4}\) trajectories at each optimization iteration to ensure a fair comparison. Table I summarizes the simulation results, which show that the Shield-MPPI controller achieves the lowest crash rate and number of collisions at relatively high velocities.
Another important observation from Table I is that while the tracking-based MPPI variants can alleviate the impact of unmodelled disturbances, they are not robust using poorly designed costs due to their lack of risk consideration.
### _AutoRally Experiment_
We also investigated the robustness of the proposed Shield-MPPI controller by running it on the real AutoRally vehicle [19] in the presence of unmodelled external disturbances. In our experiments, we tested the controllers on a track subject to disturbances as shown in Fig. 7(b), using a dynamical system (1) calibrated with the original disturbance-free track as shown in Fig. 7(a). Please refer to the video1 for the experimental demonstration. The results are summarized in Table II, where the controller MPPI(a) and the Shield-MPPI(a) use GPU to sample \(10^{4}\) simulated trajectories at a frequency of approximately 150 Hz and 55 Hz, respectively, while the MPPI(b) as well as the Shield-MPPI(b) sample 20 trajectories at about 235 Hz and 220 Hz on a CPU. The AutoRally vehicle is equipped with the Intel Skylake Quad-core i7 CPU, and an Nvidia GTX 1080ti GPU.
Footnote 1: [https://youtu.be/aKMWE09wfJ4](https://youtu.be/aKMWE09wfJ4)
From Table II, we see that the proposed Shield-MPPI controller can achieve a \(10.78\%\) speed improvement with merely \(0.2\%\) the number of trajectory samples compared to the standard MPPI controller, with no collisions observed during the experiments.
| **Controller** | **Samples** | **Max. Speed (m/s)** | **Avg. Speed (m/s)** |
| --- | --- | --- | --- |
| MPPI(a) | \(10^{4}\) | 6.31 | 4.30 |
| Shield-MPPI(a) | \(10^{4}\) | 7.21 | 4.78 |
| MPPI(b) | 20 | 4.50 | 2.60 |
| Shield-MPPI(b) | 20 | 6.99 | 4.61 |

TABLE II: AutoRally Experiment Results
| **Controller** | **Crash Rate** | **Collisions/lap** | **Avg. Speed (m/s)** |
| --- | --- | --- | --- |
| Shield-MPPI | 0.02 | 0.13 | 5.039 |
| CS-SMPC | 0.08 | 0.14 | 4.724 |
| RA-MPPI | 0.15 | 0.38 | 5.130 |
| PT-MPPI | 0.31 | 0.74 | 4.942 |
| MPPI | 0.46 | 1.02 | 4.899 |

TABLE I: Performance Comparison with other Stochastic MPC Approaches
Fig. 6: Collision rate reduction and absolute collision rate of Shield-MPPI controller. Each grid shows the average collision rate reduction or the absolute collision rate over 100 simulations.
## VII Conclusions And Future Work
In this paper, we proposed the novel Shield-MPPI controller that uses a control barrier function as a shield to guard against unsafe control decisions and guarantee safety. In our simulations and experiments, the proposed algorithm significantly reduced the number of safety constraint violations compared to other state-of-the-art robust MPPI variants and stochastic MPC methods. In addition, the Shield-MPPI offers comparable, and often better, performance than the baseline MPPI while running on a CPU instead of an expensive GPU, whose necessity has long been a major limitation for applications of MPPI-based algorithms.
In the future, we propose to improve the Shield-MPPI controller using learned certificates as described in [13], to develop safety shields in more flexible forms, and deploy the resulting algorithms to more complicated control scenarios, such as multi-agent planning [21]. The proposed safety shield in Shield-MPPI can also be integrated with existing MPC methods, such as MPPI variants [2, 3] or robust MPCs [22], to further improve their performance and ensure safety.
## VIII Acknowledgement
The authors thank Jacob Knaup for his assistance with the AutoRally platform simulations and experiments. This work was funded by NSF under awards CNS-2219755 and CCF-2238030 and by ONR under award N00014-18-1-2828. C. Dawson acknowledges support by the NSF Graduate Research Fellowship Program under grant 1745302.
|
2308.04429
|
Non steady-state thermometry with optical diffraction tomography
|
Measurement of local temperature using label-free optical methods has gained
importance as a pivotal tool in both fundamental and applied research. Yet,
most of these approaches are limited to steady-state measurements of planar
heat sources. However, the time taken to reach steady-state is a complex
function of the volume of the heated system, the size of the heat source, and
the thermal conductivity of the surroundings. As such, said time can be
significantly longer than expected and many relevant systems involve 3D heat
sources, thus compromising reliable temperature retrieval. Here, we
systematically study the thermal landscape in a model system consisting of
optically excited gold nanorods (AuNRs) in a microchamber using optical
diffraction tomography (ODT) thermometry. We experimentally unravel the effect
of thermal conductivity of the surroundings, microchamber height, and pump
pulse duration on the thermodynamics of the microchamber. We benchmark our
experimental observations against 2D numerical simulations and quantitative
phase imaging (QPI) thermometry. We also demonstrate the advantage of ODT
thermometry by measuring thermal landscapes inaccessible by QPI thermometry in
the form of non-planar heat sources embedded in complex environments such as
biological cells. Finally, we apply ODT thermometry to a complex dynamic system
consisting of colloidal AuNRs in a microchamber.
|
Adarsh B Vasista, Bernard Ciraulo, Jaime Ortega Arroyo, Romain Quidant
|
2023-08-08T17:58:04Z
|
http://arxiv.org/abs/2308.04429v1
|
# Non steady-state thermometry with optical diffraction tomography
###### Abstract
Measurement of local temperature using _label-free_ optical methods has gained importance as a pivotal tool in both fundamental and applied research. Yet, most of these approaches are limited to steady-state measurements of planar heat sources. However, the time taken to reach steady-state is a complex function of the volume of the heated system, the size of the heat source, and the thermal conductivity of the surroundings. As such, said time can be significantly longer than expected and many relevant systems involve 3D heat sources, thus compromising reliable temperature retrieval. Here, we systematically study the thermal landscape in a model system consisting of optically excited gold nanorods (AuNRs) in a microchamber using optical diffraction tomography (ODT) thermometry. We experimentally unravel the effect of thermal conductivity of the surroundings, microchamber height, and pump pulse duration on the thermodynamics of the microchamber. We benchmark our experimental observations against 2D numerical simulations and quantitative phase imaging (QPI) thermometry. We also demonstrate the advantage of ODT thermometry by measuring thermal landscapes inaccessible by QPI thermometry in the form of non-planar heat sources embedded in complex environments such as biological cells. Finally, we apply ODT thermometry to a complex dynamic system consisting of colloidal AuNRs in a microchamber.
## 1 Introduction
Measuring temperature reliably at the nano- and micro-scale is not only key to answering fundamental thermodynamic questions at these scales, but also to a variety of applications such as photothermal cancer therapy [1, 2, 3], drug delivery [4], photocatalysis [5, 6], thermal lensing [7, 8], microfluidics [9, 10, 11, 12], and vibrational spectroscopy using mid-infrared photothermal microscopy [13, 14, 15]. Nonetheless, the non-propagative nature of heat poses a challenge to reliably and accurately measuring temperature at these scales, especially in non steady-state conditions.
Various optical thermometry techniques have recently emerged to address this need, and we can broadly categorize them into _label-based_ and _label-free_ methods. The working principle of label-based methods relies on measuring a temperature-sensitive emission signature such as Raman scattering [16, 17], fluorescence anisotropy [18], fluorescence intensity [19, 20], fluorescence spectra [21, 22], or photoluminescence lifetime [23] from a set of molecular probes. While these methods can measure temperature in non steady-state conditions, they face drawbacks such as slow read-out rates [21], low sensitivity [24], lack of reliability [19, 20], and, most importantly, the need to place the molecular probes in the system, which is not always feasible. To circumvent these issues, label-free methods such as infra-red imaging [25], X-ray absorption spectroscopy [26], quantitative phase imaging (QPI) [27], and mid-infrared photothermal microscopy [13, 14, 15] have been proposed. Among label-free approaches, QPI is one of the most promising ones, due to its ease of implementation in commercial microscopes, its speed, and its high-resolution temperature retrieval.
QPI thermometry is based on measuring the optical path differences of a probe beam as a result of the small temperature-induced changes in the refractive index of a material. Several different implementations of QPI exist, either in the form of inline [28], off-axis [9, 12] or shearing-based holography [27, 29], yet all extract an optical path length difference from the measured phase change. Nonetheless, retrieving temperature profiles from these optical path length changes relies on algorithms that assume the system is in steady-state and the temperature profile follows a \(\frac{1}{r}\) decay from a planar heat source, where \(r\) is the radial coordinate [27]. Unfortunately, these assumptions restrict the range of systems to which QPI-thermometry can be applied.
As a promising alternative to QPI-thermometry, ODT has been widely used to accurately determine 3D refractive index maps of biological systems [30, 31, 32, 33, 34, 35, 36], to study material anisotropy [37], and to perform vibrational spectroscopy based on mid-infrared photothermal microscopy [14]. Recently, ODT-based thermometry has been experimentally demonstrated in the steady-state [38] without the need for any assumptions other than a look-up table that relates the measured 3D refractive index change of a thermo-optical material to a temperature change. As a result, this technique has the potential to study more complex temperature-dependent phenomena inaccessible to QPI, namely systems that do not satisfy the steady-state and planar-heat-source assumptions.
In this work, we systematically studied, experimentally and numerically, a non steady-state system in the form of a microchamber undergoing photothermal conversion by gold nanorods, to assess the performance of ODT-based thermometry. We specifically use QPI to benchmark the conditions that push the system away from steady-state and to identify the mechanism responsible for this behavior. We find that, for our system, heat accumulation extends the time to reach steady-state and can be tuned by either the height of the microchamber or the thermal conductivity of the surroundings. Under non steady-state conditions we validate ODT thermometry against simulations, and show that QPI, despite accurately retrieving the temperature gradient, underestimates the absolute value of the local temperature. Finally, we apply ODT thermometry to three representative non steady-state systems, where photothermal conversion is achieved by non-planar heat sources in the form of colloidal gold nanoparticles freely diffusing in aqueous media, nanoparticle clusters embedded in a 3D hydrogel, and AuNRs internalised by cells. As such, our work highlights a promising approach to address the knowledge gap of heat propagation and thermometry at the nano and micro length scales.
## 2 Results and discussions
### Working principle
Figure 1 depicts the working principle of the experiment. As nano sources of heat, we used AuNRs immobilised on a glass substrate (S1 methods). A microfluidic chamber was, then, prepared by sandwiching a thermo-optical material, water in our case, between the glass substrate and a sapphire superstrate (Figure 1a). A pump beam of wavelength close to the absorption maximum of the AuNRs (785 nm) excites the substrate and heats the sample. This in turn changes the temperature-dependent refractive index of water, thereby spatially encoding the thermal profile in the form of wavefront changes. Measuring these temperature-induced wavefront changes form the basis of either phase-based temperature techniques: QPI and ODT thermometry.
When the AuNRs are heated using a pump laser, we can define two characteristic timescales that describe the thermodynamic state of the system. (i) The timescale to reach the local steady-state in the immediate vicinity of the heated nanorods. This timescale can be understood as the time for the thermal gradient in the sample to become established and is of the order of a few ms for a typical beam size of \(\sim 10\,\mu\)m. (ii) The timescale for the entire microchamber system to reach steady-state. This parameter is a complex function of the volume of the thermo-optical material, which translates to the chamber height, and of the thermal conductivity of the surroundings.
If the microchamber is not in thermal contact with a heat sink, then the chamber thermalizes with the outside environment through natural air convection, as shown in figure 1 (b). In this case, local thermalization happens quickly, setting up the thermal gradient. However, the lower efficiency of the natural air convection limits the dynamics of the heat transfer and results in the build-up of heat inside the microchamber. This heat build-up in the chamber manifests as a constant increase in the temperature profile. Hence, if one
monitors the temporal evolution of the thermal profile, though the microchamber reaches a local steady-state quickly, establishing the thermal gradient (\(\Delta T_{grad}\)), it experiences a continuous increase of the thermal floor (\(\Delta T_{DC}\)).
On the other hand, the thermodynamics of the heated microchamber can be drastically altered by providing access to a thermal sink by substituting the air-immersion objective with an oil-immersion one. In such a case, the refractive index matching oil acts as a thermal bridge between the microchamber and the metallic body of the objective lens and its respective optomechanical elements, as shown in figure 1 (c). As the microchamber is in thermal contact with a large thermal sink, the time taken to reach the global steady-state is considerably less, thus avoiding the continuous increase of the thermal floor (\(\Delta T_{DC}\)).
### Time evolution of phase maps
To understand the thermodynamics of the microchamber, we study the temporal evolution of phase difference maps with pump-probe phase imaging in an off-axis holography configuration (Figure 2, SI methods). The AuNRs in the microchamber were heated upon irradiation with a time modulated pump laser of wavelength 785 nm (close to the absorption maximum of the nanorods). The optical path difference, OPD, due to optical pumping and thereby local heating was measured by the difference between pump ON and pump OFF states of a probe beam with wavelength 465 nm [9].
For quasi-infinite systems, at \(r=0\) the steady-state is reached on a timescale given by \(\tau\sim\frac{D^{2}}{4a_{s}}\)[7], where \(D\) is the diameter of the heat source and \(a_{s}\) is the thermal diffusivity of the medium. In the case of photothermal conversion of nanoparticle ensembles, i.e. this experiment, the diameter of the heat source
Figure 1: _The sample and the technique._ (a) Schematic representation of the microchamber sample formed by a gold nanorod (AuNR) functionalised glass substrate and sapphire superstrate separated by a silicone spacer. The chamber is filled with water as the thermo-optical material with a known refractive index. A 785 nm laser beam resonantly excites the AuNRs, causing the medium surrounding the AuNRs to heat up and change its refractive index. A probe laser at 465 nm measures the phase shift due to the altered refractive index. (b,c) Schematics representing the phenomena of heat transfer in the microchamber when the chamber is probed using air-immersion and oil-immersion objectives respectively and its effect on the temporal evolution of the thermal profiles. In the case of the air objective, the primary mechanism of heat transfer is natural air convection resulting in the continuous increase of the thermal floor (\(\Delta T_{DC}\)) unlike the oil-immersion case where the immersion oil acts as a thermal bridge between the chamber and the metallic case of the objective lens.
corresponds to the size of the pump beam. Therefore, for a pump beam size of 10 \(\mu\)m, \(\tau\sim\) 175 \(\mu\)s. However, this timescale does not hold for positions away from the centre of the heat source nor for finite systems such as a microchamber. Instead the time to reach steady-state is a complex function of: the height of the chamber, the thermal conductivities of the substrate and the superstrate, the position away from the heat source, and the heat transfer properties of the thermal sink.
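For reference, the quoted value follows directly from the diffusive estimate; the thermal diffusivity of water used below is an assumed literature value.

```python
# tau ~ D^2 / (4 a_s) for a D = 10 um pump spot in water
D = 10e-6        # heat-source (beam) diameter, m
a_s = 1.43e-7    # thermal diffusivity of water, m^2/s (assumed literature value)
tau = D**2 / (4 * a_s)
print(f"tau = {tau * 1e6:.0f} us")   # ~175 us, as quoted above
```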
To follow the thermal dynamics of the system, we tracked the temporal evolution of the phase of the probe beam, which is a suitable metric for finite real-world systems given the relation between OPD and temperature. Figure 2(a) shows a representative phase difference map between pump ON (hot) and pump OFF (cold) states, where a significant phase dip at the center (laser excitation spot) is observed, as expected due to the negative thermo-optical coefficient of water. To understand the temporal phase response, we defined two important parameters: (i) \(\Delta\phi_{max}\) - the maximum phase shift acquired by the probe beam due to heating, and (ii) \(\Delta\phi_{grad}\) - the maximum phase gradient in the image, calculated by subtracting the phase difference value at the edge of the phase image (\(\Delta\phi_{min}\)) from the phase change induced at the center of the image due to heating. Intuitively, \(\Delta\phi_{max}\) and \(\Delta\phi_{grad}\) report on the absolute temperature change and the thermal gradient in the microchamber, respectively, as represented in figure 2 (a). For instance, at steady
Figure 2: _Assessing steady-state dynamics by phase imaging._ (a) (_top_) A representative phase difference image measured by subtracting the phase image with the heating laser switched ON from a reference phase image measured with heating laser switched OFF and (_bottom_) its corresponding thermal profile properties. The maximum and minimum phase shift within the field of view, acquired due to the heating of nanorods, is termed \(\Delta\phi_{max}\) and \(\Delta\phi_{min}\), respectively. The difference between the \(\Delta\phi_{max}\) and \(\Delta\phi_{min}\) determines the phase gradient (\(\Delta\phi_{grad}\)) of the phase image. The scale bar is 15 \(\mu\)m. The maximum phase shift accumulated (\(\Delta\phi_{max}\)) corresponds, in a closed system, to the absolute increase of the temperature (\(\Delta T_{max}\)) and the phase gradient (\(\Delta\phi_{grad}\)) corresponds to the thermal gradient in the image (\(\Delta T_{grad}\)). (b,c) Time evolution of \(\Delta\phi_{grad}\) and \(\Delta\phi_{max}\) for a chamber height of 500 \(\mu\)m probed using air and oil immersion objectives respectively.
state, when the temperature no longer changes within the sample, we expect the phase to converge to a constant value.
To understand the temperature dynamics of the microchamber as a function of the thermal conductivity of the surroundings, we followed the evolution of \(\Delta\phi_{max}\) and \(\Delta\phi_{grad}\) in a microchamber of height 500 \(\mu\)m probed in two different configurations: in the absence (air objective) and presence (oil-immersion objective) of a thermal sink (figures 2(b) and (c)). In the absence of a thermal sink, measurements with an air objective (figure 2(b)), the dynamics of the \(\Delta\phi_{max}\) did not saturate within the timescale of the experiment, whereas, \(\Delta\phi_{grad}\) saturated shortly after the pump pulse was switched ON. This indicates on the one hand that the system had not reached a steady-state. On the other hand, even though the system was not in steady-state, the shape of the thermal profile remained the same, as indicated by the stabilization of the phase gradient.
Figure 2(c) shows the time evolution of \(\Delta\phi_{max}\) and \(\Delta\phi_{grad}\) of the microchamber in the presence of a thermal sink by using an oil immersion objective lens. Here \(\Delta\phi_{max}\) saturated within the timescale of the experiment, while \(\Delta\phi_{grad}\) showed oscillatory behaviour before saturating. We suggest that the oscillatory behaviour in \(\Delta\phi_{grad}\) may result from the interplay between the different timescales involved as heat diffuses across multiple finite-sized materials with different thermal conductivities. For instance, the large thermal conductivity of the metal of the objective accentuates the diffusion along the axial direction, resulting in the short-lived oscillatory behaviour in \(\Delta\phi_{grad}\) (figure S8, SI). Comparing figures 2 (b) and (c), we can conclude that coupling the system to a heat sink via the immersion oil modifies the thermal dynamics by pushing the system towards the steady-state much faster compared to the air immersion configuration.
Further, to understand the effect of the height of the microchamber on the temperature dynamics, we studied the temporal evolution of the phase for three different chamber heights at a fixed pump beam size and power in the air immersion configuration (figures 3 (a)-(c)). For the 100 \(\mu\)m chamber, the dynamics of \(\Delta\phi_{max}\) and \(\Delta\phi_{grad}\) saturated within the timescales of the experiment, suggesting that the system had reached steady-state within the pulse duration of the pump beam (600 ms). Upon increasing the chamber height to either 300 \(\mu\)m or 500 \(\mu\)m, \(\Delta\phi_{max}\) no longer saturated, whereas \(\Delta\phi_{grad}\) did so shortly after the pump pulse was switched ON. As the height of the microchamber was increased, so did the distance between the heat source and the high-thermal-conductivity sapphire superstrate, which acts as a heat sink, thus affecting the thermodynamics of the chamber. These results, again, indicate that the system with increased chamber heights had not reached steady-state; however, the shape of the temperature profile remained the same, as indicated by the stabilisation of the thermal gradient. In other words, the difference between the temperature probed by \(\Delta\phi_{max}\) and \(\Delta\phi_{grad}\) corresponded to a uniform temperature shift, a DC offset, within the imaged area. It is also important to note that the thermal relaxation dynamics (cooling) also critically depended on the chamber height, and the relaxation timescale was slower for larger chamber heights. Thus, to probe the non steady-state dynamics of the microchamber, we specifically used the air immersion objective
Figure 3: _Assessing steady-state dynamics by phase imaging._ (a)-(c) show the time evolution of the phase gradient \(\Delta\phi_{grad}\) and maximum phase shift accumulated \(\Delta\phi_{max}\) for chamber heights of 100 \(\mu\)m, 300 \(\mu\)m, and 500 \(\mu\)m probed using air objective respectively.
lens configuration for the rest of the experiments detailed here.
To study the effect of material properties of the superstrate (thermal conductivity, in particular) on temperature dynamics, we changed the superstrate of the microchambers from sapphire (\(\kappa_{saph}\)=30 W/mK) to glass (\(\kappa_{glass}\)=0.9 W/mK). Overall, the thermal conductivity of the superstrate had a minimal effect on the temperature dynamics for the chambers heights of 100 \(\mu\)m and 500 \(\mu\)m, and only showed a marginal effect when the chamber height was 300 \(\mu\)m (Figure S7 in SI).
To further understand the complex relationship between the chamber height and the temporal dynamics of temperature, we performed 2D numerical simulations using COMSOL Multiphysics for a fixed pump spot size of 10 \(\mu\)m (SI Section S4). We calculated the temporal evolution of temperature in two extreme cases of the chamber height: 50 \(\mu\)m and 500 \(\mu\)m. Numerical simulations revealed that the time to reach steady-state for a 500 \(\mu\)m chamber was about 17 minutes, while for a 50 \(\mu\)m chamber it was about 7.5 minutes. However, it should be noted that the conclusions drawn here are limited to the cases where natural air convection is the primary mechanism by which the microchamber interacts with the environment. Under such conditions, heat accumulates within the system and leads to a rise in the global temperature.
Furthermore, we can conclude that the time to reach steady-state, in the case of microchambers and closed systems in general imaged using air objectives, depends significantly on the chamber height (which translates to the volume of water) and is orders of magnitude longer than the expected theoretical value of 175 \(\mu\)s for a spot size of \(\sim\)10 \(\mu\)m. As such, this case highlights the overall need to exercise caution when estimating the steady-state dynamics of the system, as well as the need to take into account the system as a whole, including its surroundings, to obtain an accurate picture of the underlying thermodynamics. However, it has to be noted that the conclusions drawn here apply to closed microchambers with a finite height and cannot be extended to quasi-infinite systems, as \(\Delta\phi_{max}\) diverges for an infinite system.
Figure 4: _Application of **ODT** thermometry to planar heat sources. (a) - (c) Experimental thermal map at \(Z=0\) measured using ODT for the chamber heights of 50 \(\mu\)m, 100\(\mu\)m, and 500 \(\mu\)m respectively. (d)-(f) Comparison of the line profile plotted across Y=0 (shown as a dashed line in (a)) with the numerically calculated thermal profile. The pump duration was fixed to 80 ms and the camera frame rate to 10 Hz. The scale bar is 5 \(\mu\)m.
### Non steady-state thermometry: planar heat source
We first applied ODT thermometry to understand the thermal profiles of a planar heat source and systematically probed the microchambers in a pump-probe manner with three different chamber heights, keeping the beam size (12 \(\mu\)m) and pump pulse duration (80 ms) constant. In these microchambers, AuNRs were anchored to the glass substrate forming a planar heat source when excited by a pump laser, similar to those used to measure the phase maps in figure 2. Figures 4 (a)-(c) show the cross section of measured temperature profile (at z=0) for chamber heights of 50 \(\mu\)m, 100 \(\mu\)m, and 500 \(\mu\)m respectively. Numerical simulations using COMSOL were carried out to corroborate the experimental data. To compare the experimental data with simulations, we plotted the line profile along y=0, represented as the dashed black line in figure 4 (a).
We found an excellent agreement between numerical simulations and experiments for chamber heights of 50 \(\mu\)m and 100 \(\mu\)m, as shown by the line profiles in figures 4(d) and (e). However, there is a mismatch for the 500 \(\mu\)m chamber, which we attributed to the slower relaxation dynamics. In detail, given that the measurements were performed in a pump-probe scheme, there is an intrinsic assumption that the system cools sufficiently fast that the pump OFF state does not retain any residual heat from the previous heating cycle. For the 80 ms pump duration, this condition was not satisfied, as the pump duration was comparable to the frame time of the camera (T\({}_{pmp}\)=80 ms and T\({}_{cam}\)=100 ms). Hence the residual heat in the system interfered with the measurement and appeared in the form of deviations from the theoretically expected \(\frac{1}{r}\) profile. To verify this hypothesis we probed the 500 \(\mu\)m chamber with different pump pulse durations (5 ms, 20 ms, and 80 ms) whilst keeping the beam size constant. As expected, the line profiles extracted for the shorter pump pulse durations of 5 ms and 20 ms matched well with the numerically calculated profiles (figure S9, SI).
To establish the non steady-state nature of the temperature dynamics, we benchmarked the ODT measurements against QPI thermometry. As mentioned earlier, QPI thermometry, in its most general form, assumes steady-state and presupposes the \(\frac{1}{r}\) decay in the temperature profile. The thermal profiles extracted using QPI match very well with that of ODT thermometry up to a DC shift due to the global heat accumulation as predicted by the temporal evolution of the phase gradient (figure 2, figure S10 SI). As expected, the value of the constant shift between the thermal profiles extracted from QPI and ODT depends on the chamber height.
Overall, by systematically studying the temperature profiles in microchambers with multiple chamber heights and pump durations and benchmarking the results with numerical simulations we show that ODT thermometry can be applied to study a wide class of non steady-state thermodynamic systems.
### ODT thermometry: Non-planar, spatially fixed heat sources
So far we have studied thermal maps of AuNRs anchored to the surface of a glass substrate acting as a planar source of heat. A unique advantage of ODT is that it can measure temperature profiles originating from 3D heat sources, for instance from an ensemble of nanoparticle clusters distributed in a 3D matrix [38]. To validate the versatility of ODT for non-planar heat sources, we immobilised nanoparticle clusters in polyacrylamide gels cast in microchambers of height 100 \(\mu\)m (SI section S1). Figure 5(a) shows the reconstructed phase image of the nanoparticle clusters within the sample volume obtained through digital hologram propagation [9]. These nanoclusters were then excited using the same pump laser, and Figure 5(b) shows the resulting thermal maps retrieved with ODT. The overlay of the 3D phase maps with the corresponding thermal map in figure 5(b) highlights the inherent ability of ODT to colocalise the local temperature landscape with the 3D spatial distribution of complex objects.
To further understand the applicability of ODT thermometry to systems with refractive index inhomogeneities, we probed a A549 lung cancer cell that had previously internalised AuNRs. We quantified the increase in local temperature caused by photothermal conversion from the AuNRs inside the cell upon resonant excitation (see section S1 SI). Figure 5 (c) and (d) show the 3D tomogram of a representative single cell, alongside the induced 3D thermal profile upon irradiation with a 50 ms pump pulse respectively. Here the cell-internalized AuNRs act as non-planar heat sources embedded inside a complex refractive index environment, represented here by the cell. In this particular experiment, though the AuNRs were distributed throughout the volume of the cell, the much smaller illumination laser spot defined the size of the heat source. The spatial distribution inside the cell (particularly _xz_ cross-cut) shows that the temperature reached a maximum at a location away from the substrate; confirming that the system corresponds to a non-planar heat
source (see section S6 SI).
### ODT thermometry: Nanorod colloids
Apart from spatially fixed heat sources (AuNRs anchored on a glass substrate or clusters dispersed in polyacrylamide gels/biological cells), we applied ODT thermometry to measure the temperature profile arising from dynamic environments. As a model system, we selected colloidal AuNR solutions. In detail, we prepared microchambers of height 100 \(\mu\)m filled with solutions of AuNRs of varying concentration, and probed them using ODT thermometry. In systems where the sources of heat can freely diffuse, large-scale spontaneous migration due to the formed temperature gradient, termed thermophoresis [39], represents a major bottleneck in the retrieval of temperature. When the nanoparticles are heated, they move in response to the temperature gradient. However, the thermophoretic mobility depends on the size and composition of the nanoparticles and also on the duration of heating [40, 41, 42, 11]. To avoid large-scale thermophoresis in the sample we
Figure 5: _Nanothermometry with spatially fixed 3D heat sources._ (a) 3D Phase image of nanoparticle clusters. The dashed circles act as a guide to the eye showing that the clusters are located in different planes. (b) Corresponding 3D temperature profile superimposed on the phase image when the nanoparticle cluster is excited. (c) 3D refractive index map of a fixed A549 lung cancer cell that has ingested AuNRs. (d) Corresponding 3D temperature profile when the cell was pumped with a 785 nm laser. The scale bars are 5 \(\mu\)m.
kept the pump pulse duration at 20 ms. We also tracked the phase shift induced in the probe beam across multiple pump cycles to ensure that there was minimal change in the phase difference profile, thereby showing that the heat-source density per pump pulse did not vary drastically (figure S12, SI).
Figure 6 (a) shows the 3D thermal profile measured at an AuNR concentration of 300 pM. To further characterize the temperature increase in colloidal nanoparticles, we systematically studied the temperature increase as a function of AuNR concentration and input irradiance. Figures 6(b) and (c) depict the maximum temperature reached in the colloidal system as a function of concentration and input irradiance, respectively. The linear dependence of the maximum temperature both on the concentration (at constant irradiance, 151 \(\mu\)W/\(\mu\)m\({}^{2}\)) and on the irradiance (at constant concentration, 300 pM) establishes the reliability of the temperature retrieval and shows that there is no large-scale thermophoresis due to the localized heating and thermal gradient.
Together these three proof-of-principle experiments establish the power and advantage of ODT thermometry over other existing methodologies. Namely, ODT delivers 3D thermal profile distributions from complex
Figure 6: _Nanothermometry of Au nanorod colloids._ (a) 3D temperature of a colloidal sample in the microchannel of height 100 \(\mu\)m excited with a laser retrieved using ODT thermometry. The pump duration was kept at 20 ms and the frame rate at 10 Hz. _Inset_ represents the schematic of the temperature retrieval in colloidal nanoparticles using ODT. (b) The maximum rise in temperature as a function of the concentration of Au NRs in the chamber. The input irradiance was fixed at 151 \(\mu W/\mu m^{2}\). _Inset_ shows the line profiles of the temperature plotted along y=0; z=0. (c) The maximum rise in temperature as a function of the input irradiance. The concentration of Au NRs was fixed at 300 pM. The scale bar is 5 \(\mu\)m.
non-planar heat sources under transient thermodynamic regimes, i.e. not in steady-state. However, such advantages come at the cost of a relatively more complex experimental setup and data-processing pipeline, as well as longer acquisition times, compared to other QPI thermometries.
### Conclusions
To summarize, we have studied experimentally and numerically the thermodynamics of a model non steady-state system consisting of an optically heated microchamber. We unraveled an important relationship between the time to reach steady-state, the chamber height, and the thermal conductivity of the environment. We showed that ODT thermometry accurately retrieves the 3D temperature profile under non steady-state conditions and benchmarked these results against numerical simulations and QPI thermometry. We showed the versatility of the ODT thermometry technique by applying it to systems with non-planar heat sources. We further presented a promising application of ODT thermometry by imaging the induced temperature profiles within biological cells upon plasmonic photothermal treatment. We also demonstrated its compatibility with retrieving temperatures from colloidal systems. We believe that the work presented here will have an impact on multiple areas of research where accurate measurement of temperature is key, such as photothermal therapy, photocatalysis, thermal lensing, and microfluidic optical traps. We also anticipate that the conclusions drawn in this article will stimulate further experimental and theoretical investigations on the development of more accurate and faster thermometry techniques, which will represent a significant step forward in better understanding heat-related processes at the nano and micro-scales.
## Acknowledgements
The authors thank Helena Villuendas for their help in sample preparation and Guillaume Baffou for fruitful discussions.
## S1 Methods
### Sample preparation
Gold nanorods were synthesized using the method described in Nikoobakht et al. [43]. The glass substrate was uniformly coated with the synthesized gold nanorods using a standard functionalization protocol described in detail in [2]. The microchambers were then prepared by placing silicone gaskets of predetermined thickness (50 \(\mu\)m, 100 \(\mu\)m, 300 \(\mu\)m, and 500 \(\mu\)m) on the nanorod-coated glass substrate. The gap in the silicone gasket was filled with \(\sim\)10 \(\mu\)l of DI water and the chamber was closed with a sapphire superstrate. The inner diameter of the silicone gasket was about 8 mm in all cases.
To immobilize nanoparticle clusters in polyacrylamide gels, we followed the standard operating protocol of preparing gels outlined by BioRAD[44].
For nanothermometry experiments with biological cells, we used lung cancer cells (ATCC CCL-185\({}^{TM}\)) which had internalized gold nanorods. The protocol for the sample preparation is as follows. Petri dishes were cleaned in 70% ethanol and sterilized under UV light for 10-20 min. We placed two silicone wells (gaskets) in a petri dish with a seeding concentration of about 5000 and 10000 cells/well. Then 500 \(\mu\)l of complete medium (Dulbecco's modified Eagle medium (DMEM) + 10% fetal bovine serum (FBS) + 1% penicillin-streptomycin (PS)) was added to the cell droplet, and the cells were left to attach overnight. The wells were then washed with medium without FBS. We added 4 ml of 2 nM gold nanorod solution to the wells and allowed them to incubate overnight. The wells were then washed again with medium without FBS. Later, the cells were washed twice with PBS, incubated for 5 min in 4% PFA and for 15 min in 2% PFA, and kept in 1% PFA. The microchamber was prepared by removing the 1% PFA, adding \(\sim\)10 \(\mu\)l of DI water, closing the chamber with another glass substrate on top, and gently applying pressure to fix the silicone wells.
Figure S1 shows a schematic of the experimental setup used to measure phase maps in an off-axis holography configuration in a pump-probe manner. The probe laser (465 nm) was split into reference and object beams using a fiber splitter. In the object path, the probe laser was focused onto the back aperture of the objective lens (50x, 0.5 NA) to generate a wide-field illumination using a combination of lenses L2 (\(f_{L2}\)=30 mm), L3 (\(f_{L3}\)=150 mm), and L4 (\(f_{L4}\)=150 mm). The probe light was collected in transmission configuration by a 40x 0.65 NA objective lens and the collected light was projected onto the camera using lens L5 (\(f_{L5}\)=250 mm), creating a magnification of 55 (\(\frac{f_{L5}}{f_{objective}}\)). The angle of illumination at the sample plane was controlled by the wedge prism WP. The reference beam was projected onto the camera at a small angle with respect to the optic axis of the microscope with the help of mirrors M3 and M4. The polarization of both the object and reference paths was fixed using polarizing beam splitters PBS1 and PBS2. The path length of the reference beam was adjusted by placing the fiber source module on an adjustable stage so as to match that of the object path. The gold nanorods were excited using a pump beam of wavelength 785 nm through the 40x objective lens. The pump laser was not expanded, and a long focal-length lens, L6 (\(f_{L6}\)=500 mm), was used to focus the pump laser onto the back aperture of the objective lens to generate a spot size of \(\sim\)10 \(\mu\)m. An FPGA card was used to synchronize the pump and probe lasers with the camera acquisition. The frame rate of the camera was set at 10 Hz for all experiments and the pump and probe pulse durations were varied according to the requirements.
The measured holograms were processed by first taking the Fourier transform, which revealed the real, twin, and zero-order images in \(k\)-space. The real image was filtered using a hard aperture selection followed by frequency demodulation [45]. The demodulated image was then inverse Fourier transformed to obtain the complex electric field, of which the modulus provided the amplitude image and the argument the phase image. This step was repeated for all angles of illumination.
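The off-axis demodulation described above can be sketched as follows; the carrier position, aperture radius, and helper names are illustrative assumptions, not the actual processing code.

```python
import numpy as np

def field_from_hologram(hologram, carrier, radius):
    """Recover the complex object field from one off-axis hologram.

    carrier: (row, col) of the real-image order in k-space; radius: hard-aperture size in pixels.
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    rows, cols = np.indices(F.shape)
    mask = (rows - carrier[0])**2 + (cols - carrier[1])**2 <= radius**2
    F_sel = np.where(mask, F, 0.0)                      # hard-aperture selection of the real image
    shift = (F.shape[0] // 2 - carrier[0], F.shape[1] // 2 - carrier[1])
    F_demod = np.roll(F_sel, shift, axis=(0, 1))        # frequency demodulation: recentre the order
    U = np.fft.ifft2(np.fft.ifftshift(F_demod))
    return np.abs(U), np.angle(U)                       # amplitude and phase images
```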
For all angles of incidence, we measured the phase maps for pump ON state and OFF state to generate a phase difference image. This step was repeated multiple times (50 phase images) and the resulting phase difference maps were averaged to increase the signal to noise ratio and improve the phase sensitivity.
## S2 Thermal Imaging using optical diffraction tomography (ODT)
Optical diffraction tomography (ODT) has been widely used as an imaging tool to map the refractive index in 3D. ODT relies on calculating the refractive index profile from multiple phase and amplitude images, measured by changing the angle of illumination, using the Fourier diffraction theorem. According to this theorem, for any refractive index distribution n(r) immersed in a medium of refractive index \(n_{m}\) we can write
\[\hat{F}(K_{x},K_{y},K_{z})=\frac{ik_{z}}{\pi}\hat{U_{s}}(k_{x},k_{y};z=0) \tag{1}\]
where \(\hat{F}\) is the 3D Fourier transform of the object function \(f=-\frac{2\pi n_{m}}{\lambda^{2}}((\frac{n(r)}{n_{m}})^{2}-1)\), with \(\lambda\) the wavelength of illumination. \(\hat{U}_{s}\) is the 2D Fourier transform of the scattering wave calculated using the Rytov approximation. According to the Rytov approximation, we can express the scattering wave (\(U_{s}\)) as \(U_{s}(x,y)\)=\(ln(\frac{U(x,y)}{U_{back}(x,y)})\), with \(U(x,y)\) and \(U_{back}(x,y)\) the retrieved complex electric field in the presence of the sample and the background electric field, respectively. In the case of ODT thermometry, \(U(x,y)\) and \(U_{back}(x,y)\) correspond to the complex electric fields measured with the pump ON and OFF, respectively. \(k_{z}\) is related to the lateral spatial frequencies \((k_{x},k_{y})\) as \(\sqrt{(\frac{2n_{m}\pi}{\lambda})^{2}-k_{x}^{2}-k_{y}^{2}}\). For each illumination angle the spatial frequencies of the incident wave vector change, and so does \((K_{x},K_{y},K_{z})\). Thus, one can map different regions of the \(k\)-space of the object function by measuring multiple 2D complex electric field images while changing the angle of incidence. The 3D object mapped in \(k\)-space in this way is usually referred to as the _Ewald sphere_. Finally, the inverse Fourier transform of \(\hat{F}\) yields the 3D profile of the object function in real space, which can readily be translated into the 3D refractive index profile.
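A small sketch of the mapping onto the Ewald sphere, i.e. computing \(k_z\) from the lateral frequencies for a given illumination wavelength and medium index (the names are illustrative only):

```python
import numpy as np

def kz_on_ewald_sphere(kx, ky, n_m, wavelength):
    """Axial frequency k_z = sqrt((2*pi*n_m/lambda)^2 - kx^2 - ky^2); NaN outside the sphere."""
    k_m = 2 * np.pi * n_m / wavelength
    arg = k_m**2 - kx**2 - ky**2
    return np.sqrt(np.where(arg > 0, arg, np.nan))
```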
It is important to note that, due to the limited numerical aperture of the collection objective lens, it is not possible to fill the whole _Ewald sphere_ with experimental data. This limitation is usually referred to as the _missing cone_ problem. There are multiple methods to address the _missing cone_ problem, among which constrained iterative regularization is widely used. For ODT thermometry, the missing points on the _Ewald sphere_ were filled by imposing two physical constraints:
* The refractive index of water at an elevated temperature cannot be greater than that at ambient temperature, as water has a negative thermo-optical coefficient.
* The refractive index of the glass substrate is constant, as the heat-induced refractive index change in glass is negligible under the experimental conditions.
Then the 3D refractive index maps were transformed to thermal maps using an empirical equation:
\[n(T)=\sum_{j=0}^{P}b_{j}T^{j} \tag{2}\]
where \(T\) is the temperature and the \(b_{j}\) are expansion coefficients. We consider terms up to \(P=4\), with values given in [46]: \(b_{0}\)=1.34359, \(b_{1}\)=-1.0514x10\({}^{-4}\), \(b_{2}\)=-1.5692x10\({}^{-6}\), \(b_{3}\)=5.7538x10\({}^{-9}\), \(b_{4}\)=-1.2873x10\({}^{-11}\).
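A minimal sketch of this look-up step, assuming the temperature \(T\) in equation 2 is expressed in degrees Celsius, and inverting the polynomial numerically on a grid:

```python
import numpy as np

# Expansion coefficients b_j of n(T) for water (equation 2)
b = [1.34359, -1.0514e-4, -1.5692e-6, 5.7538e-9, -1.2873e-11]

def n_water(T):
    return sum(bj * T**j for j, bj in enumerate(b))

def temperature_from_n(n_measured, T_min=0.0, T_max=100.0):
    """Invert the look-up relation; n(T) decreases monotonically over this range."""
    T_grid = np.linspace(T_min, T_max, 10001)
    return T_grid[np.argmin(np.abs(n_water(T_grid) - n_measured))]

# Example: a 10 K rise above 20 deg C lowers n by roughly 1.7e-3
dn = n_water(30.0) - n_water(20.0)
```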
### Benchmarking ODT
As a control experiment, we benchmarked the ODT technique by measuring the refractive index of silica microparticles dispersed in a PDMS matrix. In this case \(U(x,y)\) was the complex electric field map measured in the presence of the particle and \(U_{back}(x,y)\) was the background electric field map of the bare PDMS matrix (without particles) (see equation 1). The initial estimate of the refractive index of the microparticles was improved using the iterative regularization step with the constraint that the refractive index of the beads is greater than the refractive index of PDMS. Figure S2 shows the schematic of the tomographic reconstruction of the refractive index profile of silica microparticles dispersed in a PDMS matrix and also indicates the effect of the iterative regularization.
### Phase and temperature sensitivity
To further characterize the sensitivity of thermal imaging using the ODT technique, we performed a statistical analysis of the phase map as well as of the retrieved refractive index map. The phase sensitivity, which is the minimum distinguishable phase change, was calculated by plotting the histogram of values in the background area (inside a square defined along the corner of the image) of the phase map, as shown in figures S3(a) and (b). The histogram values were fit to a Gaussian distribution and the sensitivity, \(\sigma_{phase}\), was found to be 6 mrad. In a typical RI tomogram we use 20 such phase maps obtained by changing the input angles, as shown in Figure S3(c). The RI sensitivity was calculated in a similar manner as that of the phase and was found to be \(\sigma_{RI}\)=1.21x10\({}^{-4}\), which corresponds to a temperature sensitivity of \(\delta T\)=0.7 K (see figures S3 (e) and (f)).
## S3 Thermal imaging using QPI
The algorithm to retrieve the temperature map from an optical phase difference map has been outlined in detail in ref[4]. The measured phase difference map was converted into an optical path difference (OPD) map. Then, the temperature retrieval was done in three steps:
1. First we assume that the system has reached a steady-state distribution of the form \(\frac{P_{0}}{r}\), where \(P_{0}\) is the absorbed power and \(r\) is the radial coordinate. Then we define the Green's function for the OPD as \(G_{OPD}=\beta_{1}sinh^{-1}(\frac{h}{\sqrt{x^{2}+y^{2}}})\) where \(h\) is the height of the microchamber and \(\beta_{1}\) corresponds to the first-order thermo-optical coefficient of water. We estimated the heat source density (HSD) by deconvolution of the OPD map with \(G_{OPD}\) using the Tikhonov deconvolution method.
2. We then convolved the obtained HSD map with the Green's function for the Laplace equation, \(G_{Th}=\frac{1}{4\pi\kappa_{water}r}\), where \(\kappa_{water}\) is the thermal conductivity of water, to get an initial estimate of the temperature.
3. To account for the higher-order thermo-optical coefficients, we iteratively minimized the difference between the measured OPD map, \(l_{0}\)(x,y), and a path difference map calculated using \(\int_{0}^{h}\delta n(x,y,z)dz\),
where \(\delta n(x,y,z)\) is the change in refractive index of water calculated using the retrieved temperature map by considering higher orders (up to 4) of the thermo-optical coefficients of water (see equation 2).
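The first two steps above can be sketched numerically as follows. This is an illustration only: the grid size, pixel size, chamber height, regularization strength and the FFT-based periodic (de)convolution are simplifying assumptions, and `opd` is a placeholder for the measured map.

```python
import numpy as np

N, dx, h = 256, 0.2e-6, 50e-6             # pixels, pixel size (m), chamber height (m)
beta1, kappa_w = -1.0514e-4, 0.6           # first-order dn/dT of water (1/K), conductivity (W/m/K)
opd = np.zeros((N, N))                     # measured optical path difference map (placeholder)

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2) + dx / 4          # avoid the singularity at the origin

G_opd = beta1 * np.arcsinh(h / r)          # Green's function for the OPD
G_th  = 1.0 / (4 * np.pi * kappa_w * r)    # Green's function of the Laplace equation

def fft_deconvolve(signal, kernel, eps=1e-3):
    """Tikhonov-regularised deconvolution in Fourier space (periodic boundaries)."""
    S, K = np.fft.fft2(signal), np.fft.fft2(np.fft.ifftshift(kernel))
    return np.real(np.fft.ifft2(S * np.conj(K) / (np.abs(K)**2 + eps)))

def fft_convolve(signal, kernel):
    S, K = np.fft.fft2(signal), np.fft.fft2(np.fft.ifftshift(kernel))
    return np.real(np.fft.ifft2(S * K))

hsd    = fft_deconvolve(opd, G_opd * dx**2)     # step 1: heat source density
T_rise = fft_convolve(hsd, G_th * dx**2)        # step 2: first temperature estimate
# step 3 would iteratively refine T_rise until the recomputed OPD matches `opd`.
```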
For thermal imaging using QPI, we averaged twenty different phase maps measured by changing the angle of incidence in a manner similar to the building of _Ewald's sphere_, but in 2D. The resulting OPD map was employed in the temperature retrieval.
### Effect of chamber height on temperature retrieval
As explained in the previous subsection, the retrieval of the temperature map is based on the assumption that the temperature has reached a steady state and is of the form \(\frac{P_{0}}{r}\). This is strictly true only in the case where the medium of heat transfer is quasi-infinite in nature. However, when the height of the chamber is relatively small, one has to take the thermal properties of the superstrate into account, as discussed in ref [5]. However, it is tricky to obtain a closed-form expression for the HSD through deconvolution of the OPD maps if one considers the complete 3-layer model. The deviation of the temperature from the quasi-infinite model gets reflected in the iterative minimization of the OPD error (step 3 in the temperature retrieval), as shown in figure S4. The deviation of the reconstructed OPD map from the measured one decreases as the chamber height increases from 50 \(\mu\)m to 500 \(\mu\)m.
## S4 Temperature transients
The time taken to reach steady state for a quasi-infinite system is defined by its diffusivity, \(a_{s}\), and the diameter of the heat source, \(D\), and is of the form \(\frac{D^{2}}{4a_{s}}\). Most experimental systems, e.g. microchambers, are far from being quasi-infinite systems because of their limited chamber height. In such cases, the temperature dynamics depend crucially on the chamber height. To understand the temperature dynamics in microchambers we performed finite element method (FEM) based numerical simulations using COMSOL Multiphysics. The lateral diameter of the chamber was fixed to 10 mm and the height of the chamber was varied. The substrate was defined as glass (\(\kappa_{glass}\)=0.9 W/mK) and the superstrate as sapphire (\(\kappa_{saph}\)=30W/mK). To mimic the experimental situation the chamber was heated by a Gaussian heat source with a diameter of 10 \(\mu\)m. Natural air convection was assumed at the boundaries (both glass side and sapphire side), as in the experiments. Figure S5 shows the numerically calculated evolution of the maximum temperature as a function of pump duration for chamber heights of 50 \(\mu\)m and 500 \(\mu\)m. In both cases the time taken to reach steady state (7.4 min for the chamber of height 50 \(\mu\)m and 16.7 min for the chamber of height 500 \(\mu\)m) was significantly different from the value predicted by \(\frac{D^{2}}{4a_{s}}\).
A striking feature to note is that the temperature reaches an intermediary plateau, a quasi steady state, before reaching the actual steady state. This can be understood by considering two thermalizing regimes of the system: _local_ and _global_. When the ensemble of nanoparticles is heated in a thermodynamically closed system like a microchamber, the heated nanoparticles thermalize quickly with their local environment and reach an intermediary steady state, which is termed the _local_ thermalization regime (the plateau in figure S5).
The rate at which the entire microchamber thermalized was determined primarily by the natural convection at the boundaries. This _global_ thermalization happened at a slower pace and at a higher temperature, which was a function of the volume of liquid inside the chamber. Such temperature evolution has been observed in the past in the case of multi-nanoparticle heating [47]. It has to be noted that the conclusions drawn here strictly apply to the case where natural air convection is the mechanism by which the microchambers interact with the surrounding environment. If the substrate is in contact with any kind of thermal sink, e.g., the metallic casing of the objective lens, the dynamics of the system will be altered.
To understand the effect of substrate thickness on the temporal evolution of the thermal profile, we performed numerical calculations considering two different substrate thicknesses: 100 \(\mu\)m and 1 mm. Figure S6 shows the calculated thermal profiles of a microchamber of height 500 \(\mu\)m with 1 mm and 100 \(\mu\)m thick substrates respectively. The beam size was kept at 10 \(\mu\)m. The microchamber with the thinner substrate reaches the steady state faster, albeit at a higher temperature, compared to a chamber with a thicker substrate. The increase in the time to reach steady state for the microchambers with the thicker substrate was due to the increase in the volume of material to thermalize compared to the chambers with the thin substrate. As the sources of heat were located at the glass-water interface, the thicker substrate introduces a longer path in glass for the heat to diffuse and decay. This makes the effect of air convection (heat accumulation in the chamber) comparatively smaller than in the thin-substrate case.
### Effect of superstrate on temperature dynamics
To understand the effect of the thermal conductivity of the superstrate on the temperature dynamics, we probed microchambers with glass and sapphire as superstrates, keeping glass as the substrate. Figure S7 shows the evolution of the normalized phase accumulation (\(\Delta\phi_{max}\)) for three different chamber heights. For both the 100 \(\mu\)m and 500 \(\mu\)m chambers the phase evolution was almost independent of the superstrate, while there was a marginal change for the 300 \(\mu\)m chamber. For the chamber height of 100 \(\mu\)m, the volume of liquid itself is small enough that the introduction of sapphire as the superstrate does not quicken the temporal evolution of the temperature, at least within the experimental setting. Also, if the superstrate is very far (the 500 \(\mu\)m case) from the heat source, the thermodynamics is mostly defined by natural convection and the thermal conductivity of the superstrate does not have a major role to play. However, with an intermediary height (300 \(\mu\)m), sapphire plays a role in quickening the time evolution by acting as an efficient heat sink.
### Effect of thermal conductivity of the surroundings on the temperature dynamics
Figure S8 shows the temporal evolution of phase gradient (\(\Delta\phi_{grad}\)) and maximum phase accumulation (\(\Delta\phi_{acc}\)) for a microchannel of height 500 \(\mu\)m when probed using an oil immersion objective lens for a pump duration of 1 s. The phase accumulation saturates within the time duration of the experiment while the phase gradient shows an oscillatory behaviour before saturating (similar to figure 2).
## S5 Non-steady state thermometry of planar heat sources with ODT
To better understand the deviation of the thermal profile of the microchannel of height 500 \(\mu\)m from the numerically calculated one, we probed the microchannel as a function of pump pulse duration. Figures S9 (a)-(c) show the comparison of line profiles of temperature retrieved by ODT with numerical calculations using COMSOL for pump pulse durations of 5 ms, 20 ms, and 80 ms respectively. The camera frame rate was fixed at 10 Hz. The thermal profiles retrieved by ODT match very well with the numerical simulations for pump pulse durations of 5 ms and 20 ms, as there was minimal residual heat interference in the pump-probe cycle. However, in the case of an 80 ms pump pulse, there is considerable interference of the residual heat, resulting in the deviation of the thermal profile from the numerically calculated one.
We utilized QPI thermometry to benchmark the thermal profiles retrieved by the ODT technique. In the case of microchambers, due to the accumulation of heat, the system experiences a constant increase in the thermal floor while keeping the shape of the thermal profile the same, thus pushing the system out of the steady state. As mentioned in the previous sections, QPI thermometry is based on two important assumptions: (i) the system has reached the steady state and the temperature profile follows a \(\frac{1}{r}\) decay; (ii) the sources of heat are all located in one plane. Figures S10 (a)-(c) show the comparison of the line profile of the temperature along Y=0 for microchannels of height 50 \(\mu\)m, 100 \(\mu\)m, and 500 \(\mu\)m respectively. The pump pulse duration was kept at 80 ms and the camera frame rate at 10 Hz. A constant value was added to the temperature profiles retrieved by QPI for better visualization. The shape of the thermal profiles retrieved by the ODT technique matched very well with QPI, except for the constant value of the thermal floor, as predicted by the phase imaging, showing that the microchamber was out of steady state.
To further confirm the non-planar nature of the Au NRs ingested by the biological cell, we plot a line profile of the temperature along the axial direction, as shown in figure S11. We can see that the distribution of temperature reached its maximum value away from the substrate (z=0). Additionally, the line profile deviated from the \(\sim\frac{1}{r}\) decay from the substrate, unlike a planar heat source.
## S7 ODT thermometry of Au nanorod colloids
The main concern in the thermometry of colloids is the large-scale movement of the nanoparticles due to the generated temperature gradient, termed thermophoresis. To avoid thermophoresis, we kept the pump duration at 20 ms. We further investigated the phase difference map for different pump cycles. The phase difference map gives an estimate of the heat source density generated per pump cycle. Figure S12 shows the measured phase difference maps for pump cycles 1 and 25 as well as the calculated difference between them. Notwithstanding multiple pump-probe cycles, the phase difference maps did not alter considerably, demonstrating that the heat source density generated per pump pulse remained the same.
|
2306.14396
|
Finitely Based Congruence Varieties
|
We show that for a large class of varieties of algebras, the equational
theory of the congruence lattices of the members is not finitely based.
|
Ralph Freese, Paolo Lipparini
|
2023-06-26T03:12:42Z
|
http://arxiv.org/abs/2306.14396v2
|
[
###### Abstract
We show that for a large class of varieties of algebras, the equational theory of the congruence lattices of the members is not finitely based.
congruence lattice, congruence variety, finite (equational) basis, projective lattices, higher Arguesian identities.
06B15, 06C05, 08B99.
## 1 Introduction
Let \(\mathcal{V}\) be a variety of algebras and let
\[\mathbf{Con}(\mathcal{V})=\{\mathbf{Con}(\mathbf{A}):\mathbf{A}\in\mathcal{V}\}. \tag{1.1}\]
The variety of lattices, \(\mathbf{V}\mathbf{Con}(\mathcal{V})\), generated by the congruence lattices of the members of \(\mathcal{V}\), is called the _congruence variety_ associated with \(\mathcal{V}\). Congruence varieties originated with Nation in his thesis [27]; he showed, among other things, that the lattice variety generated by \(\mathbf{N}_{5}\) (the 5 element nonmodular lattice) is not a congruence variety; see [12, Theorem 6.99].
A lattice is _meet semidistributive_ if it satisfies the (universally quantified) implication
\[x\wedge y=x\wedge z\,\rightarrow\,x\wedge y=x\wedge(y\lor z)\]
It is _join semidistributive_ if it satisfies the dual condition and it is _semidistributive_ if it satisfies both.
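For orientation (a standard example added here, not part of the original text), the five-element lattice \(\mathbf{M}_{3}\), with atoms \(a\), \(b\), \(c\) any two of which join to \(1\) and meet to \(0\), fails both implications:
\[a\wedge b=0=a\wedge c\quad\text{while}\quad a\wedge(b\lor c)=a\wedge 1=a\neq 0,\]
and dually for the join condition. By contrast, \(\mathbf{N}_{5}\) is semidistributive.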
In [10] Freese and Jonsson proved that every modular congruence variety actually satisfies the arguesian identity. Since the arguesian identity is properly stronger than the modular law (as witnessed by the lattice of subspaces of any nonarguesian projective plane), this implies, for example, that the variety of all modular lattices is not a congruence variety. That the variety of all arguesian lattices is not a congruence variety was shown in [8].
In [19, Problem 9.12] B. Jonsson asked if any nontrivial congruence variety could be finitely based other than the variety of distributive lattices
and the variety of all lattices. For congruence modular varieties this question was completely answered by the first author with the following theorem.
Theorem 1.1 ([7, Theorem 4]): _There is no nontrivial finitely based modular congruence variety other than the variety of distributive lattices._
In this paper we come close to answering Jonsson's problem by extending Theorem 1.1 with the following theorem:
Theorem 1.2: _Let \(\mathcal{V}\) be a variety of algebras such that \(\mathbf{Con}\)\((\mathcal{V})\) satisfies a nontrivial lattice identity. Then, if \(\mathbf{Con}\)\((\mathcal{V})\) has a finite basis for its equations, \(\mathbf{Con}(\mathcal{V})\) is semidistributive._
We now outline how we prove Theorem 1.2. For each field \(\mathbf{F}\) with at least \(3\) elements Haiman [15] has constructed a sequence of modular lattices \(\mathbf{H}_{n}(\mathbf{F})\), \(n\geq 3\). When \(n\geq 4\) these lattices are Arguesian but cannot be represented as lattices of permutable equivalence relations. In proving Theorem 1.1 we showed that for every \(n\geq 3\) and every field \(\mathbf{F}\) with \(|\mathbf{F}|>2\),
1. \(\mathbf{H}_{n}(\mathbf{F})\) lies in no modular congruence variety.
2. For any modular, nondistributive congruence variety \(\mathcal{K}\) there is a field \(\mathbf{F}\) such that a nonprincipal ultraproduct of the \(\mathbf{H}_{n}(\mathbf{F})\)'s is in \(\mathcal{K}\).
To prove Theorem 1.2 we strengthen these statements as follows. By a _proper_ congruence variety we mean one that is not the variety of all lattices.
Theorem 1.3: _Let \(\mathbf{H}_{n}(\mathbf{F})\) be Haiman's lattices for \(\mathbf{F}\), \(|\mathbf{F}|>2\), and \(n\geq 3\). Then_
1. \(\mathbf{H}_{n}(\mathbf{F})\) _lies in no congruence variety, except the variety of all lattices._
2. _For any congruence variety_ \(\mathcal{K}\) _that is not join semidistributive, there is a field_ \(\mathbf{F}\) _such that every nonprincipal ultraproduct of the_ \(\mathbf{H}_{n}(\mathbf{F})\)_'s is in_ \(\mathcal{K}\)_._
That these statements imply Theorem 1.2 is standard; see [12, Theorem 8.52].
The ideas proving \((2^{\prime})\) also yield interesting results about embedding lattices into members of \(\mathbf{Con}(\mathcal{V})\) for \(\mathcal{V}\) a variety with a weak difference term which is not congruence meet semidistributive. Such varieties admit a large class of modular lattices, as is shown in §6.
## 2 Preliminaries
Before we begin the proof of \((1^{\prime})\) and \((2^{\prime})\) we prove some basic facts and introduce some notation. First we will (usually) use \(\mathcal{V}\) to denote a variety of algebras. \(\mathcal{K}\) denotes a variety of lattices, usually a congruence variety. This convention is essentially the opposite of the one used in [7], but is more standard nowadays. We let \(\mathbf{H}\), \(\mathbf{S}\), \(\mathbf{P}\), and \(\mathbf{V}=\mathbf{H}\mathbf{S}\mathbf{P}\) be the usual class operators as defined in §4.10 of [26]. As mentioned in (1.1) above, for \(\mathcal{V}\) a class of algebras, we let \(\mathbf{Con}(\mathcal{V})\) simply be the set of all congruence
lattices of members of \(\mathcal{V}\) and use the class operators for the congruence variety: \(\mathbf{V}\mbox{\bf Con}(\mathcal{V})=\mathbf{H}\mbox{\bf SP} \mbox{\bf Con}(\mathcal{V})\), and also for the _congruence prevariety_\(\mbox{\bf SP}\mbox{\bf Con}(\mathcal{V})\). Note by the next lemma that the congruence variety is \(\mbox{\bf HS}\mbox{\bf Con}(\mathcal{V})\) and the congruence prevariety is \(\mbox{\bf S}\mbox{\bf Con}(\mathcal{V})\).
In most of our results we assume that \(\mathcal{V}\) is a variety of algebras but in almost all cases it is enough to assume it is closed under \(\boldsymbol{S}\) and \(\boldsymbol{P}\).
**Lemma 2.1**.: _For every variety \(\mathcal{V}\) of algebras, the following hold._
* \(\mbox{\bf P}\mbox{\bf Con}\;(\mathcal{V})\subseteq\mbox{\bf S}\mbox{\bf Con} \;(\mathcal{V})\)_._
* \(\mbox{\bf V}\mbox{\bf Con}\;(\mathcal{V})=\mbox{\bf H}\mbox{\bf S}\mbox{\bf Con }\;(\mathcal{V})\)_._
* _If_ \(\mathcal{U}\) _is the idempotent reduct of_ \(\mathcal{V}\)_, then_ \(\mbox{\bf V}\mbox{\bf Con}\;(\mathcal{U})=\mbox{\bf V}\mbox{\bf Con}\;( \mathcal{V})\)_._
Proof.: (i) is the first statement in Proposition 4.5 of [11] (notice that the proof does not require congruence modularity).
(ii) is immediate from (i).
(iii) follows from the well known Pixley-Wille algorithm [28, 30], which shows that the property that a variety of algebras satisfies a given congruence identity is equivalent to a weak Mal'cev condition witnessed by idempotent terms: see for example [12, Theorem 6.111].
The centrality relation and the commutator
The _centrality relation_ on an algebra \(\mathbf{A}\) is denoted \(\mathrm{C}(\alpha,\beta;\delta)\) for congruences on \(\mathbf{A}\); see [13, Definition 11.3 and Lemma 11.4] for its definition and basic properties. We say \(\alpha\)_centralizes \(\beta\) modulo \(\delta\)_ whenever the relation \(\mathrm{C}(\alpha,\beta;\delta)\) holds.
The (term condition) _commutator_, \([\alpha,\beta]\), is defined as the least \(\delta\) such that \(\mathrm{C}(\alpha,\beta;\delta)\). The centrality relation and the commutator have become standard tools in universal algebra. \(\mathbf{M}_{3}\) denotes the five-element modular, nondistributive lattice.
**Lemma 2.2**.: _Suppose \(\delta\leq\alpha,\beta,\gamma\leq\mu\) are congruences on an algebra \(\mathbf{A}\) that form a copy of \(\mathbf{M}_{3}\). Then \([\alpha,\alpha]\leq\delta\), \([\beta,\beta]\leq\delta\), and \([\gamma,\gamma]\leq\delta\)._
Proof.: By [13, Lemma 11.4(vi)], \(C(\alpha,\gamma;\alpha\wedge\gamma)\) holds. Since \(\delta=\alpha\wedge\gamma\), we have \(C(\alpha,\gamma;\delta)\). Similarly \(C(\beta,\gamma;\delta)\). By [13, Lemma 11.4(iv) and (i)], \(\mathrm{C}(\alpha\vee\beta,\gamma;\delta)=\mathrm{C}(\mu,\gamma;\delta)\), and so \(\mathrm{C}(\gamma,\gamma;\delta)\). Using the definition of the commutator, this yields \([\gamma,\gamma]\leq\delta\). The other two inequalities follow from symmetry.
Weak difference terms
Let \(\mathbf{A}\) be an algebra. A term \(d(x,y,z)\) in the signature of \(\mathbf{A}\) is a _weak difference term_ for \(\mathbf{A}\) if, for all \(a\), \(b\in A\) and all \(\theta\in\mbox{\bf Con}\;(\mathbf{A})\) with \(\langle a,b\rangle\in\theta\),
\[a\;[\theta,\theta]\;d(a,b,b)\quad\mbox{ and }\quad d(a,a,b)\;[\theta,\theta]\;b. \tag{2.1}\]
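A familiar special case (recalled here only as an illustration; it is not needed in the sequel): in any variety of groups the term \(d(x,y,z)=xy^{-1}z\) satisfies
\[d(a,b,b)=ab^{-1}b=a\quad\text{and}\quad d(a,a,b)=aa^{-1}b=b\]
for all elements, so the conditions in (2.1) hold with equality and \(d\) is a weak difference term.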
Most of the properties of the modular commutator hold in varieties with a weak difference term. The next lemma is an illustration. It is a special case of more general permutability results given in [24]; see for example Theorem 3.5(i) of that paper.
**Lemma 2.3**.: _Let \(\mathbf{A}\) be an algebra with a weak difference term \(d\), and let \(\alpha\) and \(\beta\in\mathbf{Con}\;(\mathbf{A})\). Then._
1. _if_ \([\alpha,\alpha]\leq\beta\) _and_ \([\beta,\beta]\leq\alpha\)_, then_ \(\alpha\) _and_ \(\beta\) _permute;_
2. _if_ \(\alpha\) _and_ \(\beta\) _are atoms of a sublattice of_ \(\mathbf{Con}\;(\mathbf{A})\) _isomorphic to_ \(\mathbf{M}_{3}\)_, then_ \(\alpha\) _and_ \(\beta\) _permute._
Proof.: For (1) suppose \(a\;\alpha\;b\;\beta\;c\). Then
\[a\;[\alpha,\alpha]\;d(a,b,b)\;\beta\;d(a,b,c)\]
and since \([\alpha,\alpha]\leq\beta\), we have \(a\;\beta\;d(a,b,c)\). A symmetric argument shows \(c\;\alpha\;d(a,b,c)\), and so \(a\;\beta\;d(a,b,c)\;\alpha\;c\). From this it follows that \(\alpha\) and \(\beta\) permute. (2) follows from (1) and Lemma 2.2.
#### 2.3.1 Mal'tsev conditions and Mal'tsev classes
A class of varieties defined by a Mal'tsev condition is called a _Mal'tsev class_. Examples include the classes of congruence modular, congruence distributive, and congruence semidistributive varieties. Other examples include varieties with a Taylor term and varieties with a Hobby-McKenzie term. We are particularly concerned with varieties having a weak difference term. By Kearnes and Szendrei [23] this is a Mal'tsev class; see [13, Theorem 11.59]. Having a weak difference term is a relatively weak property: all of the other conditions mentioned, except having a Taylor term, imply the existence of a weak difference term. We are interested in varieties whose congruence variety is not the variety of all lattices; that is, varieties that satisfy a nontrivial congruence identity. These varieties form a Mal'tsev class; in fact it is the class of varieties with a Hobby-McKenzie term [21, Theorem A.2(11)].
An interval \(I[\beta,\alpha]\) is _abelian_ if \(\mathrm{C}(\alpha,\alpha;\beta)\). This implies \([\alpha,\alpha]\leq\beta\) but the converse is false. However, if \(\mathcal{V}\) has a weak difference term, \(I[\beta,\alpha]\) is abelian if and only if \([\alpha,\alpha]\leq\beta\). Consequently, subintervals of abelian intervals are abelian in such varieties, [24, Proposition 4.2]; see also [13, Theorem 11.29].
The _solvable series for \(\alpha\)_ is defined by \([\alpha]^{0}=\alpha\), \([\alpha]^{n+1}=[[\alpha]^{n},[\alpha]^{n}]\). We say \(\alpha\) is _solvable_ if \([\alpha]^{n}=0\) for some \(n\). For \(\beta\leq\alpha\) we say the interval between \(\beta\) and \(\alpha\) is _solvable_ in case there exists some \(n\) such that \([\alpha]^{n}\leq\beta\).
The next theorem records some facts we need from [24] on the behavior of the commutator in varieties with a weak difference term. The first is from [24, Theorem 5.1] and the second is from [24, p. 197]; see also [13, Theorem 11.87].
**Theorem 2.4**.: _Suppose that the algebra \(\mathbf{A}\) has a weak difference term. Then the following hold in \(\mathbf{Con}\;(\mathbf{A})\)._
1. _(Abelian and solvable intervals are preserved under transpositions) If the interval_ \(I[\beta,\alpha]\) _is abelian (solvable), then the intervals_ \(I[\beta\wedge\gamma,\alpha\wedge\gamma]\) _and_ \(I[\beta\vee\delta,\alpha\vee\delta]\) _are abelian (solvable)._
2. _(Solvable intervals are intervals of permuting equivalence relations) If the interval_ \(I[\beta,\alpha]\) _is solvable and_ \(\gamma,\delta\in I[\beta,\alpha]\)_, then_ \(\gamma\circ\delta=\delta\circ\gamma\)_. Hence_ \(I[\beta,\alpha]\) _is modular._
We list in the next theorem the results we need from [23, 21]. For \(\alpha\), \(\beta\) and \(\gamma\) elements of a lattice, define \(\beta^{0}=\beta\), \(\gamma^{0}=\gamma\),
\[\beta^{m+1}=\beta\wedge(\alpha\vee\gamma^{m})\qquad\gamma^{m+1}=\gamma\wedge( \alpha\vee\beta^{m}) \tag{2.2}\]
**Theorem 2.5**.: _Suppose that \(\mathcal{V}\) is a variety and that the congruence variety \(\mathcal{K}\) associated with \(\mathcal{V}\) is not the variety of all lattices. Then the following hold._
1. (Kearnes and Szendrei [23, Corollary 4.12]) \(\mathcal{V}\) _has a weak difference term._
2. (Kearnes and Kiss [21, Theorem 8.3]) _There exists a positive integer_ \(m\) _such that the congruence identity_ \(\beta^{m}=\beta^{m+1}\) _holds in_ \(\mathcal{K}\)_._
3. (Kearnes and Kiss [21, Theorem 8.5]) _Whenever_ \(\mathbf{A}\in\mathcal{V}\) _and_ \(\alpha\)_,_ \(\beta\)_,_ \(\gamma\) _are congruences of_ \(\mathbf{A}\) _such that_ \(\alpha\vee\beta=\alpha\vee\gamma\) _then the interval between_ \(\alpha\vee(\beta\wedge\gamma)\) _and_ \(\alpha\vee\beta\) _is abelian._
## 3 Haiman's lattices and higher order Arguesian identities
In [15] M. Haiman studied a chain of stronger and stronger higher order Arguesian identities (D\({}_{n}\)), given below. He showed that \(\mathbf{H}_{n}(\mathbf{F})\) witnesses that the congruence identities (D\({}_{n}\)) are properly increasing in strength. In Theorem 3.2 we present an identity, (D\({}_{n}^{*}\)), equivalent to (D\({}_{n}\)) but which more closely resembles Jonsson's Arguesian identity and is easier to work with. The equivalence of these identities is of independent interest.
**Lemma 3.1**.: _Let \(x_{0}\), \(x_{0}^{\prime}\), \(A\), \(B\) be elements of a modular lattice such that \(x_{0}\wedge x_{0}^{\prime}\leq B\) and \(x_{0}\lor x_{0}^{\prime}\geq A\). Then \(A\leq x_{0}^{\prime}\vee(x_{0}\wedge B)\) if and only if \(x_{0}\wedge(x_{0}^{\prime}\lor A)\leq B\)._
Proof.: \(A\leq x_{0}^{\prime}\vee(x_{0}\wedge B)\) is equivalent to \(x_{0}^{\prime}\lor A\leq x_{0}^{\prime}\vee(x_{0}\wedge B)\), which implies \(x_{0}\wedge(x_{0}^{\prime}\lor A)\leq x_{0}\wedge(x_{0}^{\prime}\vee(x_{0} \wedge B))=(x_{0}\wedge x_{0}^{\prime})\vee(x_{0}\wedge B)\leq B\) by modularity. The converse is the dual argument (with \(B\) in place of \(A\) and \(x_{0}\) in place of \(x_{0}^{\prime}\)).
The next theorem generalizes a result first obtained by A. Day and D. Pickering [5] in the particular case \(n=3\). By Corollary 1 on page 104 of [12], (D\({}_{3}^{*}\)) is equivalent to the Arguesian identity.
**Theorem 3.2**.: _Let \(y_{i}=(x_{i}\lor x_{i+1})\wedge(x_{i}^{\prime}\lor x_{i+1}^{\prime})\) where the indices are computed modulo \(n\) so \(y_{n-1}=(x_{n-1}\lor x_{0})\wedge(x_{n-1}^{\prime}\lor x_{0}^{\prime})\). Then the following two equations are equivalent._
\[x_{0}\wedge(x_{0}^{\prime}\vee\bigwedge_{i=1}^{n-1}(x_{i}\lor x_{i}^{\prime}) )\leq x_{1}\vee[(x_{0}^{\prime}\lor x_{1}^{\prime})\wedge\bigvee_{i=1}^{n-1} y_{i}]\] (D \[{}_{n}\] )
_and_
\[\bigwedge_{i=0}^{n-1}(x_{i}\lor x_{i}^{\prime})\leq x_{0}^{\prime}\vee(x_{0} \wedge(x_{1}\vee[(x_{0}^{\prime}\lor x_{1}^{\prime})\wedge\bigvee_{i=1}^{n-1} y_{i}]))\] (D \[{}_{n}^{*}\] )
Proof.: Both of these equations imply modularity: for \((\mathrm{D}_{n}^{*})\) we make the substitutions \(x_{0}\mapsto x\), \(x_{i}\mapsto y\lor z\) for all \(i>0\), and \(x_{i}^{\prime}\mapsto y\wedge z\) for all \(i\). A similar substitution works for \((\mathrm{D}_{n})\).
We claim \(x_{0}\wedge(x_{0}^{\prime}\vee\bigwedge_{i=1}^{n-1}(x_{i}\lor x_{i}^{\prime}))= x_{0}\wedge(x_{0}^{\prime}\vee\bigwedge_{i=0}^{n-1}(x_{i}\lor x_{i}^{\prime}))\). Indeed, modularity gives
\[x_{0}\wedge\big{(}x_{0}^{\prime}\vee\bigwedge_{i=0}^{n-1}(x_{i} \lor x_{i}^{\prime})\big{)} =x_{0}\wedge\big{(}x_{0}^{\prime}\vee[(x_{0}\lor x_{0}^{\prime}) \wedge\bigwedge_{i=1}^{n-1}(x_{i}\lor x_{i}^{\prime})]\big{)}\] \[=x_{0}\wedge(x_{0}\lor x_{0}^{\prime})\wedge[x_{0}^{\prime}\vee \bigwedge_{i=1}^{n-1}(x_{i}\lor x_{i}^{\prime})]\] \[=x_{0}\wedge\big{(}x_{0}^{\prime}\vee\bigwedge_{i=1}^{n-1}(x_{i} \lor x_{i}^{\prime})\big{)}\]
Now apply Lemma 3.1 with
\[A=\bigwedge_{i=0}^{n-1}(x_{i}\lor x_{i}^{\prime})\quad\text{and}\quad B=x_{1} \vee[(x_{0}^{\prime}\lor x_{1}^{\prime})\wedge\bigvee_{i=1}^{n-1}y_{i}].\]
Notice that \(x_{0}\wedge x_{0}^{\prime}\leq B\), since \(x_{0}\wedge x_{0}^{\prime}\leq y_{n-1}=(x_{n-1}\lor x_{0})\wedge(x_{n-1}^{ \prime}\lor x_{0}^{\prime})\), and clearly \(x_{0}\lor x_{0}^{\prime}\geq A\). Thus the hypotheses of Lemma 3.1 hold. Using the claim we see the conclusion of that lemma is exactly the statement that \((\mathrm{D}_{n})\) holds if and only if \((\mathrm{D}_{n}^{*})\) holds, as desired.
Actually, the proof of Theorem 3.2 gives something more:
**Corollary 3.3**.: _If \(\,\mathbf{L}\) is a modular lattice and \(x_{i}\), \(x_{i}^{\prime}\in\mathbf{L}\), then \(x_{i}\), \(x_{i}^{\prime}\) satisfy \((\mathrm{D}_{n})\) if and only if \(x_{i}\), \(x_{i}^{\prime}\) satisfy \((\mathrm{D}_{n}^{*})\)._
Haiman showed that \((\mathrm{D}_{n})\) holds in any lattice of permuting equivalence relations, but we require the following strengthening of that fact.
**Lemma 3.4**.: _If \(\,\mathbf{L}\) is a sublattice of the lattice of equivalence relations on a set \(A\), and if \(\alpha_{i}\), \(\alpha_{i}^{\prime}\) are elements of \(\,\mathbf{L}\), and for each \(i\), \(\alpha_{i}\) and \(\alpha_{i}^{\prime}\) permute, then the instance of \((\mathrm{D}_{n}^{*})\) in which \(\alpha_{i}\) and \(\alpha_{i}^{\prime}\) are substituted for \(x_{i}\), \(x_{i}^{\prime}\), \(i=0,\ldots,n-1\), holds in \(\mathbf{L}\)._
Proof.: Suppose that \(\langle a,b\rangle\in\bigwedge_{i=0}^{n-1}(\alpha_{i}\vee\alpha_{i}^{\prime})\), that is, \(\langle a,b\rangle\in\alpha_{i}\vee\alpha_{i}^{\prime}\), for every \(i\). Since \(\alpha_{i}\) and \(\alpha_{i}^{\prime}\) permute for every \(i\), this implies there exists \(c_{i}\) such that \(\langle a,c_{i}\rangle\in\alpha_{i}\) and \(\langle c_{i},b\rangle\in\alpha_{i}^{\prime}\). Now let \(\gamma_{i}=(\alpha_{i}\vee\alpha_{i+1})\wedge(\alpha_{i}^{\prime}\vee\alpha_{ i+1}^{\prime})\), the element of \(L\) corresponding to \(y_{i}\). It follows that \(\langle c_{i},c_{i+1}\rangle\in\gamma_{i}\). The indices are computed modulo \(n\) so when \(i=n-1\) we get \(\langle c_{n-1},c_{0}\rangle\in\gamma_{n-1}=(\alpha_{n-1}\vee\alpha_{0}) \wedge(\alpha_{n-1}^{\prime}\vee\alpha_{0}^{\prime})\). These relations are indicated in Figure 1. Hence \(c_{1}\)\(\gamma_{1}\)\(c_{2}\)\(\gamma_{2}\)\(c_{3}\cdots c_{n-1}\)\(\gamma_{n-1}\)\(c_{0}\), and so \(\langle c_{0},c_{1}\rangle\in\gamma_{1}\vee\cdots\vee\gamma_{n-1}\). Thus \(\langle c_{0},c_{1}\rangle\in(\alpha_{0}^{\prime}\vee\alpha_{1}^{\prime}) \wedge\bigvee_{i=1}^{n-1}\gamma_{i}\) and so \(\langle a,c_{0}\rangle\in\alpha_{0}\wedge\big{(}\alpha_{1}\vee[(\alpha_{0}^{ \prime}\vee\alpha_{1}^{\prime})\wedge\bigvee_{i=1}^{n-1}\gamma_{i}]\big{)}\). Since \(\langle b,c_{0}\rangle\in\alpha_{0}^{\prime}\), we get \(\langle b,a\rangle\in\alpha_{0}^{\prime}\vee\big{(}\alpha_{0}\wedge\big{(} \alpha_{1}\vee[(\alpha_{0}^{\prime}\vee\alpha_{1}^{\prime})\wedge\bigvee_{i=1}^ {n-1}\gamma_{i}]\big{)}\big{)}\), proving the lemma.
From Lemmas 2.3 and 3.4 we immediately get the following corollary.
**Corollary 3.5**.: _Let \(\mathbf{A}\) be an algebra with a weak difference term. Suppose \(\alpha_{i}\) and \(\alpha^{\prime}_{i}\in\mathbf{Con}\)\((\mathbf{A})\), \(i<n\), and also assume there is a congruence \(\theta_{i}\) such that \(\alpha_{i}\), \(\alpha^{\prime}_{i}\) and \(\theta_{i}\) are the atoms of a sublattice of \(\mathbf{Con}\)\((\mathbf{A})\) isomorphic to \(\mathbf{M}_{3}\). Then \((\mathrm{D}_{n}^{*})\) holds when \(\alpha_{i}\) is substituted for \(x_{i}\) and \(\alpha^{\prime}_{i}\) for \(x^{\prime}_{i}\)._
If \(\mathbf{F}\) is a skew field we let \(\mathcal{M}_{\mathbf{F}}\) be the variety of left vector spaces over \(\mathbf{F}\) and \(\mathcal{M}_{\mathbf{F}}^{\mathrm{fd}}\) be the class of all finite dimensional left vector spaces over \(\mathbf{F}\).
**Lemma 3.6**.: _Let \(\mathbf{F}\) be a field with \(|F|>2\) and let \(\mathbf{H}_{n}(\mathbf{F})\) be Haiman's lattice, \(n\geq 3\)._
1. _Every proper sublattice of_ \(\mathbf{H}_{n}(\mathbf{F})\) _can be embedded into the lattice of subspaces of a_ \(2n\)_-dimensional vector space over_ \(\mathbf{F}\)_._
2. _Every sublattice of_ \(\mathbf{H}_{n}(\mathbf{F})\) _generated by less than_ \(n\) _elements is proper._
3. _There exist elements_ \(x_{i}\) _and_ \(x^{\prime}_{i}\in\mathbf{H}_{n}(\mathbf{F})\)_,_ \(i<n\)_, that generate_ \(\mathbf{H}_{n}(\mathbf{F})\) _and witness the failure of_ \((\mathrm{D}_{n}^{*})\) _in_ \(\mathbf{H}_{n}(\mathbf{F})\)_. Moreover, there are elements_ \(x^{\prime\prime}_{i}\in\mathbf{H}_{n}(\mathbf{F})\) _such that_ \(x_{i}\)_,_ \(x^{\prime}_{i}\) _and_ \(x^{\prime\prime}_{i}\) _are the atoms of a sublattice isomorphic to_ \(\mathbf{M}_{3}\)_._
4. _Let_ \(\mathbf{P}\) _be the prime subfield of_ \(\mathbf{F}\)_. Then any nonprincipal ultraproduct of_ \(\{\mathbf{H}_{n}(\mathbf{F}):n=3,4,\ldots\}\) _lies in_ \(\mathbf{S}\mathbf{Con}\)\((\mathcal{M}_{\mathbf{P}})\)_._
Proof.: (1) and (2) are Theorems 2 and 3 of [15], respectively. Haiman's paper defines \(x_{i}\) and \(x^{\prime}_{i}\) in \(\mathbf{H}_{n}(\mathbf{F})\) to be atoms of an interval, \(I[p_{i},r_{i}]\), isomorphic to the lattice of subspaces of a three dimensional vector space over \(\mathbf{F}\). In such an interval the join of any two distinct atoms contains a third atom and (3) follows. Haiman deals with the identity \((\mathrm{D}_{n})\), instead, but we can equivalently use \((\mathrm{D}_{n}^{*})\) in view of Corollary 3.3 and since Haiman's lattices \(\mathbf{H}_{n}(\mathbf{F})\) are modular.
\(\mathbf{S}\mathbf{Con}\)\((\mathcal{M}_{\mathbf{P}})\) is a quasivariety, that is, it is defined by quasi-equations: by Lemma 2.1 it is closed under \(\mathbf{S}\) and \(\mathbf{P}\). In addition this class is closed
under ultraproducts; see [17] and [25]. See the proof of Theorem 2.1(2) of [22] for a very approachable proof. Consequently by Theorem 8.105 of [12], \(\mathbf{S}\mathbf{Con}\) (\(\mathcal{M}_{\mathbf{P}}\)) is a quasivariety and so is defined by a set \(\Phi\) of quasi-equations.1 Let \(\phi\in\Phi\). By (1) and (2) of this lemma and the fact that the lattice of subspaces of a vector space over \(\mathbf{F}\) is embedded into the lattice of subspaces of a vector space over its prime subfield, we see that if \(\phi\) has at most \(n\) variables, then it holds in \(\mathbf{H}_{m}(\mathbf{F})\) for all \(m>n\). So \(\phi\) holds in all but finitely many \(\mathbf{H}_{m}(\mathbf{F})\)'s. Thus \(\phi\) holds in any nonprincipal ultraproduct of the \(\mathbf{H}_{m}(\mathbf{F})\)'s, proving (4).
Footnote 1: It is not always the case that \(\mathbf{S}\mathbf{Con}\) (\(\mathcal{V}\)) is a quasivariety for a variety \(\mathcal{V}\): Kearnes and Nation [22] show, for example, that if \(\mathcal{V}\) has a Taylor term but does not have a Hobby-McKenzie term, then \(\mathbf{S}\mathbf{Con}\) (\(\mathcal{V}\)) is not a quasivariety. So for example the class of lattices embeddable into the congruence lattice of a semilattice is not elementary.
## 4 Projectivity of \(\mathbf{M}_{3}\) in congruence varieties
In order to use the results from the previous sections to prove that no congruence variety, other than the variety of all lattices, contains any \(\mathbf{H}_{n}(\mathbf{F})\) (that is to prove Theorem 1.2(1\({}^{\prime}\))), we will show that \(\mathbf{M}_{3}\) is projective for every proper congruence variety. To make this precise we need a definition. If \(\mathbf{L}\) is any lattice, we say that \(\mathbf{M}_{3}\)_is projective for \(\mathbf{L}\)_ if and only if whenever \(\varphi:\mathbf{L}\twoheadrightarrow\mathbf{M}_{3}\) is an epimorphism, then \(\mathbf{L}\) contains a sublattice isomorphic to \(\mathbf{M}_{3}\) which maps to \(\mathbf{M}_{3}\) under \(\varphi\). We say that \(\mathbf{M}_{3}\) is projective for a class \(\mathcal{X}\) of lattices if \(\mathbf{M}_{3}\) is projective for every member of \(\mathcal{X}\). If \(\mathbf{L}\) does not have \(\mathbf{M}_{3}\) as homomorphic image, then \(\mathbf{M}_{3}\) is projective for \(\mathbf{L}\) vacuously. \(\mathbf{M}_{3}\) is projective for the class of modular lattices as follows from Dedekind's description of the free modular lattice on three generators, [6]. On the other hand, \(\mathbf{M}_{3}\) is not projective for the class of all lattices. This section will show that \(\mathbf{M}_{3}\) is projective for every congruence variety except for the variety of all lattices.
**Theorem 4.1**.: _Let \(\mathcal{K}\) be the congruence variety of a variety \(\mathcal{V}\). Then \(\mathbf{M}_{3}\) is projective for \(\mathcal{K}\) if and only if \(\mathcal{K}\) is not the variety of all lattices._
Proof.: First assume \(\mathcal{K}\) is the variety of all lattices. Then it contains the free lattice on three generators, which has a homomorphism onto \(\mathbf{M}_{3}\). But \(\mathbf{M}_{3}\) is not a sublattice of the free lattice, see [9], so it is not projective for \(\mathcal{K}\).
Now assume that \(\mathcal{K}\) is not the variety of all lattices. Let \(\mathbf{L}\in\mathcal{K}\) and suppose \(\varphi:\mathbf{L}\twoheadrightarrow\mathbf{M}_{3}\) is an epimorphism. By Lemma 2.1(ii) there is a lattice \(\mathbf{L}^{\prime}\in\mathbf{S}\mathbf{Con}(\mathbf{A})\) and an epimorphism \(\psi:\mathbf{L}^{\prime}\twoheadrightarrow\mathbf{L}\), for some \(\mathbf{A}\in\mathcal{V}\). Summarizing:
\[\mathbf{L}^{\prime}\stackrel{{\psi}}{{\twoheadrightarrow}} \mathbf{L}\stackrel{{\varphi}}{{\twoheadrightarrow}}\mathbf{M}_{3} \quad\text{with}\quad\mathbf{L}^{\prime}\leq\mathbf{Con}(\mathbf{A}).\]
Let \(a\), \(b\) and \(c\) be the atoms of \(\mathbf{M}_{3}\). Choose \(\alpha\), \(\beta\) and \(\gamma\in\mathbf{L}^{\prime}\) to be pre-images of \(a\), \(b\) and \(c\) under \(\psi\circ\varphi\).
By Theorem 2.5(ii) (and the symmetry between \(\beta\) and \(\gamma\)) there is an \(m\) such that \(\beta^{m+1}=\beta^{m}\) and \(\gamma^{m+1}=\gamma^{m}\), where \(\beta^{n}\) and \(\gamma^{n}\) are defined by (2.2).
Then
\[\beta^{m}=\beta^{m+1}=\beta\wedge(\alpha\vee\gamma^{m})\leq\alpha\vee\gamma^{m},\]
and so \(\alpha\vee\beta^{m}\leq\alpha\vee\gamma^{m}\), and by symmetry, \(\alpha\vee\beta^{m}=\alpha\vee\gamma^{m}\). Now an easy induction shows \((\psi\circ\varphi)(\beta^{n})=b\) and \((\psi\circ\varphi)(\gamma^{n})=c\) for all \(n\). So, changing notation, we may assume \(\alpha\), \(\beta\) and \(\gamma\) are pre-images of \(a\), \(b\) and \(c\) and that \(\alpha\vee\beta=\alpha\vee\gamma\). Now let \(\alpha^{\prime}=\alpha\vee(\beta\wedge\gamma)\) and note that \(\alpha^{\prime}\) is a pre-image of \(a\) and that \(\alpha^{\prime}\vee\beta=\alpha^{\prime}\vee\gamma\) so, with another change in notation, we may assume our pre-images satisfy
\[\alpha\vee\beta=\alpha\vee\gamma\ \ \text{and}\ \ \beta\wedge\gamma\leq\alpha.\]
By Theorem 2.5(iii) the interval \(I[\alpha,\alpha\vee\beta\vee\gamma]\) is abelian.
By Theorem 2.5(i) \(\mathcal{V}\) has a weak difference term so by Theorem 2.4(i) meeting with \(\beta\) we have that \(I[\alpha\wedge\beta,(\alpha\vee\beta\vee\gamma)\wedge\beta]=I[\alpha\wedge \beta,\beta]\) is abelian. Similarly, \(I[\alpha\wedge\gamma,\gamma]\) is abelian. Joining the first of these with \(\alpha\wedge\gamma\) yields that \(I[(\alpha\wedge\beta)\vee(\alpha\wedge\gamma),\beta\vee(\alpha\wedge\gamma)]\) is abelian. Joining the second with \(\beta\) gives \(I[\beta\vee(\alpha\wedge\gamma),\beta\vee\gamma]\) is abelian. So we have the chain
\[(\alpha\wedge\beta)\vee(\alpha\wedge\gamma)\leq\beta\vee(\alpha\wedge\gamma) \leq\beta\vee\gamma,\]
and it follows that the interval \(I[(\alpha\wedge\beta)\vee(\alpha\wedge\gamma),\beta\vee\gamma]\) is solvable and so by Theorem 2.4(ii) it is modular. Let
\[\alpha^{\prime}=\alpha\wedge(\beta\vee\gamma),\] \[\beta^{\prime}=\beta\vee(\alpha\wedge\gamma),\] \[\gamma^{\prime}=\gamma\vee(\alpha\wedge\beta),\]
and note these lie in the interval and are pre-images of \(a\), \(b\) and \(c\), respectively. Now the desired result follows from the projectivity of \(\mathbf{M}_{3}\) in the variety of modular lattices. Explicitly, defining
\[\alpha^{\prime\prime}=(\alpha^{\prime}\wedge(\beta^{\prime}\vee \gamma^{\prime}))\vee(\beta^{\prime}\wedge\gamma^{\prime})=\alpha^{\prime} \vee(\beta^{\prime}\wedge\gamma^{\prime})\] \[\beta^{\prime\prime}=(\beta^{\prime}\wedge(\alpha^{\prime}\vee \gamma^{\prime}))\vee(\alpha^{\prime}\wedge\gamma^{\prime})=\beta^{\prime} \wedge(\alpha^{\prime}\vee\gamma^{\prime})\] \[\gamma^{\prime\prime}=(\gamma^{\prime}\wedge(\alpha^{\prime} \vee\beta^{\prime}))\vee(\alpha^{\prime}\wedge\beta^{\prime})=\gamma^{\prime} \wedge(\alpha^{\prime}\vee\beta^{\prime})\]
we have that \(\alpha^{\prime\prime}\), \(\beta^{\prime\prime}\) and \(\gamma^{\prime\prime}\) generate a sublattice which is isomorphic to \(\mathbf{M}_{3}\) and maps onto \(\mathbf{M}_{3}\) under \(\psi\circ\varphi\).
## 5 Proof of Theorem 1.3(\(1^{\prime}\))
Let \(\mathbf{L}=\mathbf{H}_{n}(\mathbf{F})\) be one of Haiman's lattices and suppose \(\mathbf{L}\) lies in the congruence variety \(\mathcal{K}\) associated with a variety \(\mathcal{V}\) and that \(\mathcal{K}\) is not the variety of all lattices. By Lemma 2.1(ii) \(\mathbf{L}\) is a homomorphic image of a lattice \(\mathbf{L}^{\prime}\) which is a sublattice of \(\mathbf{Con}(\mathbf{A})\) for some \(\mathbf{A}\in\mathcal{V}\). Let \(\psi:\mathbf{L}^{\prime}\twoheadrightarrow\mathbf{L}\) be this epimorphism. By Lemma 3.6(iii) there are elements \(x_{i}\), \(x^{\prime}_{i}\) and \(x^{\prime\prime}_{i}\), \(i<n\), of \(\mathbf{L}\) which are atoms of a sublattice isomorphic to \(\mathbf{M}_{3}\) such that \(x_{i}\) and \(x^{\prime}_{i}\) generate \(\mathbf{L}\) and witness a failure of \((\mathrm{D}_{n}^{*})\) in \(\mathbf{L}\). Since \(\mathbf{M}_{3}\) is projective for \(\mathcal{K}\) by Theorem 4.1 there are elements \(\overline{x_{i}}\), \(\overline{x_{i}}^{\prime}\) and \(\overline{x_{i}}^{\prime\prime}\) of \(\mathbf{L}^{\prime}\) which map under \(\psi\) to \(x_{i}\), \(x^{\prime}_{i}\) and \(x^{\prime\prime}_{i}\) and are the atoms of a sublattice isomorphic to \(\mathbf{M}_{3}\). Corollary 3.5 implies
(D\({}_{n}^{*}\)) holds for these \(\overline{x_{i}}\) and \(\overline{x_{i}}^{\prime}\), \(i<n\). Applying \(\psi\) we get that (D\({}_{n}^{*}\)) holds in \(\mathbf{L}\) for \(x_{i}\) and \(x_{i}^{\prime}\). This contradiction completes the proof.
As mentioned just before Theorem 3.2, (D\({}_{3}^{*}\)) is equivalent to the arguesian law so, in a nondesarguesian projective plane, there are points \(x_{i}\) and \(x_{i}^{\prime}\), \(i=0,1,2\), which witness the failure of (D\({}_{3}^{*}\)). Since every line in a projective plane has at least 3 points, there are points \(x_{i}^{\prime\prime}\), \(i=0,1,2\), such that \(x_{i}\), \(x_{i}^{\prime}\) and \(x_{i}^{\prime\prime}\) are the atoms of a sublattice isomorphic to \(\mathbf{M}_{3}\) in the lattice of subspaces of the plane. Now using the arguments just above we get the following theorem.
**Theorem 5.1**.: _Let \(\mathbf{L}\) be the lattice of subspaces of a nonarguesian projective plane. Then \(\mathbf{L}\) lies in no congruence variety other than the variety of all lattices. _
The arguments above also give the following theorem which has a weaker hypothesis but also a weaker conclusion. It should be compared to the concept of omitted lattices of §4.3 of [21].
**Theorem 5.2**.: _Let \(\mathbf{L}\) be one of Haiman's lattices or the lattice of subspaces of a nonarguesian projective plane. Let \(\mathcal{V}\) be a variety with a weak difference term. Then \(\mathbf{L}\notin\mathbf{S}\mathbf{Con}(\mathcal{V})\). _
## 6 Sublattices of \(\mathbf{Con}(\mathcal{V})\)
Before getting to the proof of Theorem 1.3(2\({}^{\prime}\)) we prove some interesting embedding theorems. For example we show there is a large class \(\mathcal{K}_{\infty}\) of modular lattices, all of whose members can be embedded into a member of \(\mathbf{Con}(\mathcal{V})\) as long as \(\mathcal{V}\) is not congruence meet semidistributive and has a weak difference term.
**Theorem 6.1**.: _Let \(\mathcal{V}\) be a variety having a weak difference term, and assume that \(\mathcal{V}\) is not congruence meet semidistributive. Then there is a prime field \(\mathbf{P}\) such that, for every finite \(n\), the lattice of subspaces of a vector space of dimension \(n\) over \(\mathbf{P}\) lies in \(\mathbf{S}\mathbf{Con}(\mathcal{V})\)._
Proof.: A variety is called _congruence neutral_ if \([\alpha,\beta]=\alpha\wedge\beta\) holds in every algebra in the variety. Equivalently \([\alpha,\alpha]=\alpha\). By [23, Corollary 4.7], see also [13, Theorem 11.37] a variety is congruence neutral if and only if it is congruence meet semidistributive. Since we are assuming \(\mathcal{V}\) is not congruence meet semidistributive, there is a congruence \(\alpha\) on an algebra \(\mathbf{A}\in\mathcal{V}\) with \([\alpha,\alpha]<\alpha\). By [4, Theorem 6.2] there is an element \(\psi\) of \(\mathrm{Con}(\mathbf{A})\) with \([\alpha,\alpha]\leq\psi\) but \(\alpha\not\leq\psi\), and such that \(\psi\) is completely meet irreducible with unique cover \(\psi\vee\alpha\). By Theorem 2.4(i) the interval \(I[\psi,\psi\vee\alpha]\) is abelian. So, using the basic properties of the commutator ([13, Lemma 11.4(viii)]), \(\mathbf{A}/\psi\) is subdirectly irreducible with an abelian monolith.
Changing notation, we may assume there is a subdirectly irreducible algebra \(\mathbf{A}\in\mathcal{V}\) with an abelian monolith \(\alpha\). Let \(\mathbf{A}^{n}(\alpha)\) be the subalgebra of \(\mathbf{A}^{n}\) with universe
\[\{\langle a_{0},a_{1},\ldots,a_{n-1}\rangle\in A^{n}:a_{i}\;\alpha\;a_{j}\; \text{for all $i$ and $j$}\}.\]
When \(n=2\), \(\mathbf{A}^{2}(\alpha)=\mathbf{A}(\alpha)\) which is described on pages 96-97 in [12]. Let \(\overline{\alpha}\in\operatorname{Con}(\mathbf{A}^{n}(\alpha))\) be such that \(\langle a_{0},\ldots,a_{n-1}\rangle\)\(\overline{\alpha}\)\(\langle b_{0},\ldots,b_{n-1}\rangle\) provided all these elements are \(\alpha\) related. Let \(\eta_{i}\) be the kernel of the \(i^{\text{th}}\) projection of \(\mathbf{A}^{n}(\alpha)\) onto \(\mathbf{A}\). Of course \(\mathbf{A}^{n}(\alpha)/\eta_{i}\cong\mathbf{A}\) and under this isomorphism \(\overline{\alpha}\) corresponds to \(\alpha\). Since \([\alpha,\alpha]=0\) in \(\mathbf{A}\), it follows using Lemma 11.4(viii) of [13] that \(C(\overline{\alpha},\overline{\alpha};\eta_{i})\) holds for each \(i\). Part (v) of that Lemma gives \(C(\overline{\alpha},\overline{\alpha};\bigwedge_{i}\eta_{i})=C(\overline{ \alpha},\overline{\alpha};0)\). Hence \(\overline{\alpha}\) is an abelian congruence of \(\mathbf{A}^{n}(\alpha)\). Theorem 2.4(ii) implies the interval \(I[0,\overline{\alpha}]\) in \(\operatorname{\mathbf{Con}}(\mathbf{A}^{n}(\alpha))\) is a modular lattice. Let \(\mathbf{L}_{n}\) denote this lattice. Since \(\eta_{i}\prec\overline{\alpha}\), \(\mathbf{L}_{n}\) has length \(n\) and its least element is the meet of its coatoms. By (4.3) of [4], each \(\mathbf{L}_{n}\) is a complemented modular lattice.
For \(i\neq j\) the interval \(I[\eta_{i}\wedge\eta_{j},\overline{\alpha}]\) has length \(2\). We claim this interval contains an element which is a complement of \(\eta_{i}\) and of \(\eta_{j}\) and so contains a sublattice isomorphic to \(\mathbf{M}_{3}\). We can prove this with \(n=2\) (so \(\mathbf{A}^{2}(\alpha)=\mathbf{A}(\alpha)\)) and then apply the Correspondence Theorem, [26, Theorem 4.12]. Let \(\Delta\) be the congruence on \(\mathbf{A}(\alpha)\) generated by
\[\{\langle\langle a,a\rangle,\langle b,b\rangle\rangle:a\;\alpha\;b\}.\]
Easy element-wise calculations show \(\Delta\vee\eta_{i}=\overline{\alpha}\), \(i=0,1\). Using the weak difference term it is not hard to show \(\Delta\wedge\eta_{i}=\eta_{0}\wedge\eta_{1}=0\). But actually these properties of \(\Delta\) can be proved with the weaker assumption that \(\mathcal{V}\) has a Taylor term, as was shown by Kearnes and Kiss; see [21, Claim 3.25].
By elementary modular lattice theory the existence of these \(\mathbf{M}_{3}\)'s forces \(\mathbf{L}_{n}\) to be simple. Classical coordinatization theorems of Artin [1], Birkhoff [3] and Frink [14], see Chapter 13 of [4] and also [20], show that, for \(n\geq 4\), \(\mathbf{L}_{n}\) is isomorphic to the lattice of subspaces of a vector space over a skew field \(\mathbf{F}\).
If \(\mathbf{P}\) is the prime subfield of \(\mathbf{F}\) then the lattice of subspaces of an \(n\)-dimensional vector space over \(\mathbf{P}\) can be embedded via a cover-preserving map into the lattice of subspaces of an \(n\)-dimensional vector space over \(\mathbf{F}\). This completes the proof of Theorem 6.1.
**Remark 6.2**.: The embedding of Theorem 6.1 can be assumed to be cover-preserving, as the proof shows.
Recall that \(\mathcal{M}_{\mathbf{F}}\) is the variety of vector spaces over \(\mathbf{F}\) and \(\mathcal{M}_{\mathbf{F}}^{\text{fd}}\) is the class of all finite dimensional vector spaces over \(\mathbf{F}\).
If \(\mathbf{P}\) is the prime field of characteristic \(p\), a prime or \(0\), we let \(\mathcal{M}_{p}=\mathcal{M}_{\mathbf{P}}\) and \(\mathcal{M}_{p}^{\text{fd}}=\mathcal{M}_{\mathbf{P}}^{\text{fd}}\). Let
\[\mathcal{K}_{\infty}=\bigcap_{\text{$p$ a prime or $0$}}\boldsymbol{S} \operatorname{\mathbf{Con}}(\mathcal{M}_{p}^{\text{fd}})\]
be the class of all modular lattices that, for each \(p\) a prime or \(0\), can be embedded into the lattice of subspaces of some finite dimensional vector space over the prime field of characteristic \(p\).
**Corollary 6.3**.: _If \(\mathcal{V}\) is a variety with a weak difference term but which is not congruence meet semidistributive, then \(\mathcal{K}_{\infty}\subseteq\mathbf{S}\operatorname{\mathbf{Con}}(\mathcal{V})\). _
While we do not have a characterization of the lattices in \(\mathcal{K}_{\infty}\), it is a broad class which seems to include finite-dimensional, breadth-two modular lattices and more. Two members of this class are drawn in Figure 2.
On the other hand, if \(\mathbf{L}\) is the lattice of subspaces of a vector space of dimension \(n\geq 3\) over a field \(\mathbf{K}\), then \(\mathbf{L}\in\mathbf{S}\mathbf{Con}(\mathcal{M}_{p}^{\mathrm{fd}})\) if and only if \(\mathbf{K}\) has characteristic \(p\). In fact, if the characteristic of \(\mathbf{K}\) is not \(p\), then \(\mathbf{L}\notin\mathbf{H}\mathbf{S}\mathbf{Con}(\mathcal{M}_{p}^{\mathrm{fd}})\); see [16] or [18].
## 7 Proofs of Theorem 1.3(\(2^{\prime}\)) and Theorem 1.2
**Lemma 7.1**.: _Let \(\mathbf{F}\) and \(\mathbf{K}\) be skew fields._
1. _The variety of lattices generated by_ \(\mathbf{Con}(\mathcal{M}_{\mathbf{F}}^{\mathrm{fd}})\) _equals the congruence variety of_ \(\mathcal{M}_{\mathbf{F}}\)_. That is,_ \[\mathbf{V}\mathbf{Con}(\mathcal{M}_{\mathbf{F}}^{\mathrm{fd}})=\mathbf{V} \mathbf{Con}(\mathcal{M}_{\mathbf{F}}).\]
2. \(\mathbf{F}\) _and_ \(\mathbf{K}\) _have the same characteristic if and only if_ \[\mathbf{S}\mathbf{Con}(\mathcal{M}_{\mathbf{F}})=\mathbf{S}\mathbf{Con}( \mathcal{M}_{\mathbf{K}}).\] _Moreover, if_ \(\mathbf{F}\) _and_ \(\mathbf{K}\) _have different characteristics then_ \[\mathbf{V}\mathbf{Con}(\mathcal{M}_{\mathbf{F}})\nsubseteq\mathbf{V}\mathbf{ Con}(\mathcal{M}_{\mathbf{K}}).\]
Proof.: Let \(\mathbf{V}\) be a vector space over \(\mathbf{F}\) and let \(\mathbf{L}=\mathbf{Sub}(\mathbf{V})\). Of course, \(\mathbf{Con}(\mathbf{V})\cong\mathbf{Sub}(\mathbf{V})\) so \(\mathbf{L}\in\mathbf{Con}(\mathcal{M}_{\mathbf{F}})\). Let \(\mathbf{L}^{c}\) be the join semilattice of compact elements of \(\mathbf{L}\). Since \(\mathbf{L}\) is an algebraic lattice, the lattice of ideals of \(\mathbf{L}^{c}\) is isomorphic to \(\mathbf{L}\). But the compact elements of \(\mathbf{L}\) are the finite dimensional subspaces and these form a _lattice_ since the intersection of finite dimensional subspaces is finite dimensional. But then \(\mathbf{L}\) lies in the variety generated by the lattice \(\mathbf{L}^{c}\). This is a folklore theorem; a proof appears in [29], and the stronger result, \(\mathbf{L}\in\mathbf{H}\mathbf{S}\mathbf{P}_{\mathbf{u}}(\mathbf{L}^{c})\), is proved in [2]. This shows \(\mathbf{Con}(\mathcal{M}_{\mathbf{F}})\subseteq\mathbf{V}\mathbf{Con}( \mathcal{M}_{\mathbf{F}}^{\mathrm{fd}})\), and hence, \(\mathbf{V}\mathbf{Con}(\mathcal{M}_{\mathbf{F}})\subseteq\mathbf{V}\mathbf{Con }(\mathcal{M}_{\mathbf{F}}^{\mathrm{fd}})\). Since \(\mathcal{M}_{\mathbf{F}}^{\mathrm{fd}}\subseteq\mathcal{M}_{\mathbf{F}}\), the opposite inequality also holds. This proves (1).
For (2) first assume \(\mathbf{F}\leq\mathbf{K}\). Then
\[\mathbf{S}\,\mathbf{Con}(\mathcal{M}_{\mathbf{K}})\subseteq\mathbf{S}\,\mathbf{ Con}(\mathcal{M}_{\mathbf{F}}). \tag{7.1}\]
Indeed, if \({}_{\mathbf{K}}\mathbf{V}\) is a vector space over \(\mathbf{K}\) then its reduct to \(\mathbf{F}\), \({}_{\mathbf{F}}\mathbf{V}\), is a vector space over \(\mathbf{F}\). By Lemma 6.8 of [12], \(\mathbf{Con}({}_{\mathbf{K}}\mathbf{V})\leq\mathbf{Con}({}_{\mathbf{F}} \mathbf{V})\), and hence \(\mathbf{Con}({}_{\mathbf{K}}\mathbf{V})\in\mathbf{S}\,\mathbf{Con}(\mathcal{M} _{\mathbf{F}})\). So (7.1) holds.
Since (7.1) is the only part of part (2) of the lemma required in this paper, we will only sketch the rest of the proof. The opposite containment of (7.1) can be proved using tensor products: if \(\mathbf{V}\) is a vector space over \(\mathbf{F}\), then \(\mathbf{V}\otimes_{\mathbf{F}}\mathbf{K}\) is a vector space over \(\mathbf{K}\) and, since \({}_{\mathbf{F}}\mathbf{K}\) is flat, the map \(\mathbf{U}\mapsto\mathbf{U}\otimes_{\mathbf{F}}\mathbf{K}\) embeds the lattice of subspaces of \(\mathbf{V}\), \(\mathbf{Sub}(\mathbf{V})\), into that of \(\mathbf{V}\otimes_{\mathbf{F}}\mathbf{K}\).
The last statement of (2) follows from [18, Theorems 4 and 5].
Let \(\mathcal{K}\) be the congruence variety of a variety \(\mathcal{V}\) and assume \(\mathcal{K}\) is not join semidistributive. To prove Theorem 1.3(2\({}^{\prime}\)) we need to show that there is a field \(\mathbf{F}\) such that any nonprincipal ultraproduct of \(\{\mathbf{H}_{n}(\mathbf{F}):n\geq 3\}\) lies in \(\mathcal{K}\). This is trivial if \(\mathcal{K}\) is the variety of all lattices so we may assume \(\mathcal{V}\) has a nontrivial (pure lattice) congruence identity. We know \(\mathcal{K}\) is not join semidistributive which implies \(\mathcal{V}\) is not congruence join semidistributive, as was shown by Kearnes and Kiss in [21, Theorem 8.14 (1) \(\Leftrightarrow\) (8)]. Now (1) \(\Leftrightarrow\) (6) of that same theorem implies \(\mathcal{V}\) is not congruence meet semidistributive. Also, by the Kearnes-Szendrei result Theorem 2.5(i) above, \(\mathcal{V}\) has a weak difference term.
Theorem 6.1 can be interpreted to say that there is a prime field \(\mathbf{P}\) such that \(\mathbf{Con}(\mathcal{M}_{\mathbf{P}}^{\mathrm{fd}})\subseteq\mathbf{S}\, \mathbf{Con}(\mathcal{V})\). Now by Lemma 7.1(1) we have that the congruence variety of \(\mathcal{V}\), namely \(\mathcal{K}\), contains the congruence variety of \(\mathcal{M}_{\mathbf{P}}\). Let \(\mathbf{F}\) be a field whose prime subfield is \(\mathbf{P}\). (If \(\mathbf{P}\) is the two element field, \(\mathbf{F}\) should have at least 4 elements.) By Lemma 3.6(4) any nonprincipal ultraproduct of \(\{\mathbf{H}_{n}(\mathbf{F}):n\geq 3\}\) lies in \(\mathbf{S}\,\mathbf{Con}(\mathcal{M}_{\mathbf{P}})\) and so in \(\mathcal{K}\).
This completes the proof of Theorem 1.3(2\({}^{\prime}\)) and hence of Theorem 1.3.
To see Theorem 1.2 let \(\mathcal{V}\) be a variety of algebras such that \(\mathbf{Con}\) (\(\mathcal{V}\)) satisfies a nontrivial lattice identity and suppose \(\mathbf{Con}(\mathcal{V})\) is not semidistributive. By the Kearnes-Kiss result cited above, this implies \(\mathbf{Con}(\mathcal{V})\) is not join semidistributive. By Theorem 1.3(1\({}^{\prime}\)) and (2\({}^{\prime}\)) there is a collection of lattices not in \(\mathbf{V}\mathbf{Con}(\mathcal{V})\) whose ultraproduct is in \(\mathbf{V}\mathbf{Con}(\mathcal{V})\). Consequently the congruence variety of \(\mathcal{V}\) is not finitely based by [12, Theorem 8.52].
#### Data availability
Data sharing not applicable to this article as datasets were neither generated nor analyzed.
#### Compliance with ethical standards
The authors declare that they have no conflict of interest.
|
2302.06265
|
High-Performance Motorbike Lean Angle Estimation
|
This work deals with the real-time estimation of the lean angle of
high-performance motorbikes. The estimate is obtained through measurements
provided by an onboard inertial sensor and a GNSS receiver. A two-stage state
observer, implementing a kinematic model developed under the novel assumption
of coordinated manoeuvre, processes these measurements. A theoretical analysis
demonstrates the observer's stability, while a covariance analysis assesses the
estimate's accuracy and error bounds. Finally, experimental results obtained on
race-track tests and numerical comparisons, with competitive approaches, in
simulated realistic scenarios show the superior performance of the proposed
estimator.
|
Nicola Mimmo, Matteo Zanzi
|
2023-02-13T11:15:02Z
|
http://arxiv.org/abs/2302.06265v1
|
# High-Performance Motorbike Lean Angle Estimation
###### Abstract
This work deals with the real-time estimation of the lean angle of high-performance motorbikes. The estimate is obtained through measurements provided by an onboard inertial sensor and a GNSS receiver. A two-stage state observer, implementing a kinematic model developed under the novel assumption of coordinated manoeuvre, processes these measurements. A theoretical analysis demonstrates the observer's stability, while a covariance analysis assesses the estimate's accuracy and error bounds. Finally, experimental results obtained on race-track tests and numerical comparisons, with competitive approaches, in simulated realistic scenarios show the superior performance of the proposed estimator.
Keywords: Attitude Estimation, Observer, Motorbike.
## 1 Introduction
### Motivation
Real-time knowledge of the lean angle is crucial for controlling engine and brake power to optimize the motorbike's performance while preserving the biker's safety [1]. Indeed, the tire-road grip coefficient is a non-linear function of the contact patch shape, which, in turn, depends on the motorbike's leaning [2]. These non-linearities become critical during high-performance turns, when stiff grip variations degrade performance and stability, eventually leading to skidding and highside crashes [3].
### State of the art
The literature extensively investigated the problem of lean angle estimation. The documented solutions can be divided into three main categories: estimation algorithms based on kinematic models (position and velocity), those relying on dynamic models (forces and torques), and image-based.
Works of the first category present algorithms fed by angular rates, body accelerations, and, possibly, linear speeds and Earth magnetic field measurements. Moreover, the estimation schemes of these works are designed on kinematic models describing the attitude dynamics (commonly Euler's angle dynamics). These models embed gyroscope data as inputs (_e.g._, for state propagation in Kalman filtering), while non-linear elaborations of accelerometers and linear speeds constitute the output. In detail, [4] and [5] propose Complementary Filters (CFs), designed on error frequency-separation arguments, which process gyro (motorbike angular rate) and odometer (wheel speed) data. Furthermore, [6] proposes strategies using only gyroscope data, whereas [7] presents a method exploiting only two accelerometers and one gyroscope. These data are then processed by a CF designed on frequency-separation arguments in both [6] and [7]. Works [8] and [9] evaluate the performance of Extended Kalman Filters (EKFs) and unscented Kalman filters applied to the estimation of motorcycles' attitude. These observers rely on knowledge of the projection of the inertial velocity onto the motorcycle's longitudinal axis. Finally, [10] provides a scheme for roll angle estimation relying on a Kalman filter, IMU data, and wheel speed sensors.
Concerning the second category, algorithms rely on dynamic models embedding inertial and geometric data and tire-force descriptions. Moreover, sensor suites comprise IMU and speed data (as in the first category), potentiometers sensing the steering angle, and torque meters measuring the biker's effort on the handlebar. Commonly, papers in the cited literature assume the knowledge of steering, roll, and yaw angle derivatives. These algorithms focus on estimating a state vector that usually includes tire forces. The proposed approaches are: Luenberger observers [11], EKFs [12] and [13], high-order sliding mode observers [14], [15], and [16], unknown-input observers [15] and [17], \(H_{\infty}\) observers [18], and adaptive observers [19].
Finally, we report a couple of works belonging to the third category for completeness. In particular, [20] proposes using a camera to estimate the motorbike's lean angle. In detail, machine-learning algorithms trained to recognize roll angles from images process the onboard camera streams. In addition, [21] proposes an intriguing comparison between camera-based methods and state observers fed by IMUs.
### Contribution
In the context of algorithms based on kinematic models, this paper presents a lean angle estimation approach utilizing standard IMU and GNSS data, such as body accelerations and angular rates (obtained by accelerometers and gyros) and inertial velocities (from a GPS receiver). In particular, we fuse IMU and GNSS data through a novel concept of _coordinated manoeuvre_, which well approximates actual motorbike-plus-biker dynamics.
The estimator architecture is a two-stage cascade. The first processing level, called _pre-filter_, embeds the coordinated manoeuvre assumption. The pre-filter computes a preliminary estimate of the motorbike attitude as a unitary quaternion. The coordinated manoeuvre
represents a novel strategy to compensate for the centre-of-gravity displacements due to the biker's movements. This compensation results in a highly accurate estimation, especially when the lean angle data are fundamental, _e.g._, during high-speed turns. Downstream, an EKF enhances the lean angle estimation by fusing pre-filter and gyroscope outputs.
Theoretical investigations show that the proposed estimator is (locally asymptotically) stable, uniformly on the motorbike's trajectories. Field tests and realistic simulations confirm the good performance of the estimation algorithm proposed in this paper. Finally, a comparison with already existing methods shows the superior performance of the proposed coordinated manoeuvre assumption.
### Benefits of the proposed approach
The lean angle estimator designed in this paper has the following benefits.
The overall estimation scheme can be thought of as a CF, with all the benefits associated with this class of algorithms. In particular, its reduced order (lower than that of full-order observers with accelerometers and gyroscopes as inputs and GNSS as an output) lowers the computational burden, thus making CFs appealing in real applications.
The proposed estimation scheme does not rely on magnetometers. This improves the estimation accuracy and alleviates the calibration process, as detailed in Remark 1.
Moreover, the proposed system architecture is more reliable than full-order observers for two reasons. First, the proposed algorithm does not suffer from observability issues related to GNSS data unavailability. Second, the CF architecture guarantees estimation stability even when the motorbike does not follow sufficiently exciting trajectories (such as on straights).
### Notation
This paper denotes with \(\mathbb{R}\) the set of reals and with \(\mathbb{N}\) the natural numbers greater than zero. Calligraphic letters, _e.g._, \(\mathcal{X}\subseteq\mathbb{R}^{n}\), with \(n\in\mathbb{N}\), denote subsets. We represent matrices with capital letters, _e.g._, \(X\in\mathbb{R}^{n\times m}\), with \(n,m\in\mathbb{N}\). Let \(X_{i}\in\mathbb{R}^{n_{i}\times m}\) be matrices, with \(i=1,\ldots,N\) and \(n_{i}\), \(m\), \(N\in\mathbb{N}\); then we define \(\texttt{col}:\mathbb{R}^{n_{1}\times m}\times\cdots\times\mathbb{R}^{n_{N}\times m}\to\mathbb{R}^{(\sum_{i=1}^{N}n_{i})\times m}\) such that \(\texttt{col}(X_{1},\cdots,X_{N})=\left[\begin{array}{ccc}X_{1}^{\top}&\cdots&X_{N}^{\top}\end{array}\right]^{\top}\). Symbol \(I_{n}\) denotes identity matrices of size \(n\in\mathbb{N}\). Lowercase letters, _e.g._, \(x\in\mathbb{R}^{n}\), with \(n\in\mathbb{N}\), denote real vectors of \(n\) components. Let \(x\in\mathbb{R}^{3}\) be a vector; then we describe its components with \(x_{x}\), \(x_{y}\), and \(x_{z}\) such that \(x=\texttt{col}(x_{x},x_{y},x_{z})\). With \(\|\cdot\|\), we denote the 2-norm of vectors such that \(\|x\|:=\sqrt{x^{\top}x}\) for any \(x\in\mathbb{R}^{n}\), with \(n\in\mathbb{N}\). Finally, this paper defines \(\mathbb{H}\) as the set of unitary-norm quaternions.
## 2 Problem formulation and main result
Let \(\mathcal{F}_{I}\) and \(\mathcal{F}_{B}\) be inertial and body reference frames, with the latter rigidly attached to the motorbike. Let \(\omega\in\mathbb{R}^{3}\) be the vector of motorbike angular speeds expressed in \(\mathcal{F}_{B}\). Let \(\phi,\theta,\psi\in\mathbb{R}\) be an Euler angle parametrisation for rotation matrices from \(\mathcal{F}_{I}\) to \(\mathcal{F}_{B}\) and define
\[\Theta:=\texttt{col}(\phi,\theta,\psi). \tag{1}\]
Then, define \(T:\mathbb{R}^{3}\to\mathrm{SO}(3)\) such that \(T(\Theta)\) corresponds to the rotation matrix from \(\mathcal{F}_{I}\) to \(\mathcal{F}_{B}\), whose expression is reported in ([22], Eq.(3.63)).
Now define \(v\in\mathbb{R}^{3}\) as the motorbike linear speed expressed in \(\mathcal{F}_{I}\). Let \(\texttt{v}:=\|v\|\geq 0\) be the inertial speed magnitude, and \(\chi,\gamma\in\mathbb{R}\) be the course and the grade angle, then define
\[\xi=\texttt{col}(\texttt{v},\gamma,\chi)\]
such that
\[v=h_{v}(\xi):=\texttt{v}\,\texttt{col}(\cos\chi\cos\gamma,\sin\chi\,\cos\gamma,-\sin\gamma). \tag{2}\]
Denote with \(g\in\mathbb{R}^{3}\) the gravity acceleration expressed in \(\mathcal{F}_{I}\). Then, with all these quantities at hand, we make the following assumption.
Assumption 1 (Sensor Suite): Assume \(\mathcal{F}_{B}\) is rigidly attached to a combined IMU and GNSS board providing
\[\begin{split} y_{a}=&\,T(\Theta)(\dot{v}-g)+\nu_{a} (t)\quad 3\text{-axis accelerometer}\\ \dot{\bar{b}}_{g}=&\,0\quad 3\text{-axis gyro bias}\\ y_{g}=&\,\omega+\nu_{g}(t)\quad\quad\quad\quad 3\text{-axis gyroscope}\\ y_{s}=&\,h_{s}(\xi,\tau)+\nu_{s}(t)\quad\quad\quad \text{GNSS receiver}\end{split} \tag{3}\]
in which, for all \(\#\in\{a,g,s\}\), \(y_{\#}\) denotes the sensor output while \(\nu_{\#}(t)\) represents bounded measurement errors. More in detail, we define \(\nu_{g}(t)=\bar{b}_{g}(t)+w(t)\) with \(w(t)\) an additive error. Let \(\mathcal{P}_{a}\), \(\mathcal{P}_{g}\), and \(\mathcal{P}_{s}>0\). Then, we assume \(\|\nu_{\#}(t)\|_{\infty}<\mathcal{P}_{\#}\), for all \(\#\in\{a,g,s\}\).
In agreement with Assumption 1, sensors provide measurements \(y_{\#}\) corrupted by errors \(\nu_{\#}\). Moreover, gyroscopes are also affected by the bias \(\bar{b}_{g}\). Finally, the GNSS sampling time, _i.e._, \(\tau>0\) embedded into \(h_{s}(\cdot,\tau)\), is significant for the application under investigation. In practice, \(h_{s}(\xi,\tau)\) represents a \(\tau\)-long fixed-period sampling of \(h_{v}(\xi)\). A description of \(h_{s}(\xi,\tau)\) is given in Section 3.2, Eq. (18).
Remark 1: The algorithm proposed in this paper does not use data from magnetometers for two main reasons: the distortion of the Earth's magnetic field in the proximity of the motorbike's metal masses, and the experimentally observed strong dependence of the magnetometer response on the engine mapping. On the one hand, even if possible for a single test, magnetometer calibrations are time-consuming and too complicated to be carried out during a race weekend. On the other hand, these calibrations require a look-up table to be embedded in the algorithm, thus resulting in a further state dependency, possibly impacting the stability of the estimation filter.
Problem 1 (Roll Angle Estimation): Design an algorithm with inputs \(y_{a}(t)\), \(y_{g}(t)\), and \(y_{s}(t)\), state \(\hat{x}(t)\), and output \(\hat{\phi}(t)\) such that: a) there exists a non empty set of initial conditions, namely \(\mathcal{X}_{0}\), such that \(\hat{x}(t)\) is bounded for any \(t\geq 0\) and \(\hat{x}(0)\in\mathcal{X}_{0}\); b) there exists \(\bar{\phi}>0\) such that \(\limsup_{t\to\infty}\|\hat{\phi}(t)-\phi(t)\|<\bar{\phi}\).
Hereafter, we define some quantities instrumental for introducing the proposed solution, depicted in Figure 1.
Let \(\bar{h}\,:\,\mathbb{R}^{3}\to\mathbb{H}\) be such that \(q:=\texttt{col}(q_{0},q_{x},q_{y},q_{z})=\bar{h}(\Theta)\) represents the unitary quaternion associated with \(\Theta\) (a detailed expression for \(\bar{h}(\cdot)\) is reported in ([22], Eq. (3.65))). Moreover, the dynamics of \(q\) is
\[\dot{q}=M(q)\,\omega, \tag{4}\]
where \(M(q)\) is detailed in ([22], Eq. (3.61)). Introduce \(\xi_{e}:=\texttt{col}(\xi,\dot{\xi})\), define \(\texttt{g}=\|g\|\), and let \(\phi_{\text{av}}(\cdot,\cdot,\cdot)\,:\,\mathbb{R}^{3}\times\mathbb{R}^{6}\times\mathbb{R}\to\mathbb{R}\) be such that for any \(a\in\mathbb{R}^{3}\), \(\xi_{e}\in\mathbb{R}^{6}\), and \(\texttt{g}>0\)
\[\phi_{\text{av}}(a,\xi_{e},\texttt{g})=\tan^{-1}\left(\frac{(\texttt{g}\cos\gamma-\texttt{v}\dot{\gamma})a_{y}-\texttt{v}\dot{\chi}a_{z}\cos\gamma}{(\texttt{g}\cos\gamma-\texttt{v}\dot{\gamma})a_{z}+\texttt{v}\dot{\chi}a_{y}\cos\gamma}\right). \tag{5}\]
Define \(f_{\Theta}(\cdot,\cdot,\cdot)\,:\,\mathbb{R}^{3}\times\mathbb{R}^{6}\times\mathbb{R}\to\mathbb{R}^{3}\) such that \(f_{\Theta}(a,\xi_{e},\texttt{g})=\texttt{col}(\phi_{\text{av}}(a,\xi_{e},\texttt{g}),\,\gamma,\,\chi)\) for each \(a\in\mathbb{R}^{3}\), \(\xi_{e}\in\mathbb{R}^{6}\), and \(\texttt{g}>0\). Assume \(\hat{\xi}_{e}\in\mathbb{R}^{6}\) (among whose entries there are \(\hat{\gamma}\) and \(\hat{\chi}\)) is a proxy of \(\xi_{e}\). Moreover, let \(\hat{\texttt{g}}\) be a proxy of \(\texttt{g}\) and introduce
\[\hat{\Theta}_{\text{av}}:=f_{\Theta}(y_{a},\hat{\xi}_{e},\hat{\texttt{g}}) \tag{6}\]
and
\[q_{1}:=\bar{h}(\hat{\Theta}_{\text{av}}). \tag{7}\]
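The following minimal sketch transcribes (5)-(7) numerically; it is illustrative only, it assumes a standard 3-2-1 (roll-pitch-yaw) Euler-to-quaternion conversion as a stand-in for \(\bar{h}(\cdot)\) (which may differ from the exact expression of ([22], Eq. (3.65))), and all function and variable names are ours.

```python
import numpy as np

def phi_av(a, xi_e, g=9.81):
    """Eq. (5): roll estimate from body acceleration a = [ax, ay, az] and
    xi_e = [v, gamma, chi, v_dot, gamma_dot, chi_dot]."""
    ax, ay, az = a
    v, gamma, chi, v_dot, gamma_dot, chi_dot = xi_e
    num = (g*np.cos(gamma) - v*gamma_dot)*ay - v*chi_dot*az*np.cos(gamma)
    den = (g*np.cos(gamma) - v*gamma_dot)*az + v*chi_dot*ay*np.cos(gamma)
    # Assumption 2 keeps den > 0, so arctan2 coincides with tan^{-1}(num/den)
    return np.arctan2(num, den)

def euler_to_quat(phi, theta, psi):
    """A common 3-2-1 Euler-to-quaternion map (scalar first), used here in place of h_bar."""
    cr, sr = np.cos(phi/2), np.sin(phi/2)
    cp, sp = np.cos(theta/2), np.sin(theta/2)
    cy, sy = np.cos(psi/2), np.sin(psi/2)
    return np.array([cr*cp*cy + sr*sp*sy,
                     sr*cp*cy - cr*sp*sy,
                     cr*sp*cy + sr*cp*sy,
                     cr*cp*sy - sr*sp*cy])

def q1_from_measurements(y_a, xi_e_hat, g_hat=9.81):
    """Eqs. (6)-(7): pre-filter attitude quaternion q1, with Theta_av = (phi_av, gamma, chi)."""
    phi = phi_av(y_a, xi_e_hat, g_hat)
    gamma_hat, chi_hat = xi_e_hat[1], xi_e_hat[2]
    return euler_to_quat(phi, gamma_hat, chi_hat)
```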
Let \(x:=\texttt{col}(\bar{b}_{g},q)\) and define
\[f(x,t)=\left[\begin{array}{cc}0\\ M(q)(y_{g}(t)-\bar{b}_{g})\end{array}\right] \tag{8}\]
and
\[g(x)=\left[\begin{array}{ccc}I&0&0&0\\ 0&I&-M(q)&q\end{array}\right].\]
Then, to solve Problem 1, we propose the observer depicted in Figure 1. In detail, let \(\hat{x}:=\texttt{col}(\hat{b}_{g},\hat{q})\), \(\lambda>0\), \(Q=Q^{\top}\in\mathbb{R}^{12\times 12}\), \(R(\cdot)\,:\,\mathbb{R}\to\mathbb{R}^{4\times 4}\) with \(R(\cdot)=R^{\top}(\cdot)\), and \(S_{0}\in\mathbb{R}^{7\times 7}\) be such that \(Q\), \(S_{0}>0\), and \(R(t)\succ 0\) for all \(t\geq 0\). Then, define the following EKF
\[\dot{\hat{x}}= f(\hat{x},t)+S^{-1}H^{\top}R^{-1}(t)(q_{1}-H\hat{x})\qquad\hat{x}(0)=\hat{x}_{0} \tag{9a}\] \[\dot{S}= -SA(\hat{x},t)-A^{\top}(\hat{x},t)S\] \[-\lambda Sg(\hat{x})Qg^{\top}(\hat{x})S+H^{\top}R^{-1}(t)H\qquad S(0)=S_{0}\] (9b) \[\hat{\phi}= h(\hat{x}) \tag{9c}\]
where \(H=\left[\begin{array}{cc}0&I\end{array}\right]\), \(A(x,t):=\partial f(x,t)/\partial x\), and
\[h(x):=\left[\begin{array}{ccc}1&0&0\end{array}\right]\bar{h}^{-1}(q/\|q\|).\]
Remark 2: Let \(w_{1}\in\mathbb{R}^{7}\) and \(w_{2}\in\mathbb{R}\), and define \(\bar{w}=\texttt{col}(w_{1},\nu_{g},w_{2})\). Then, we derived \(g(x)\) as sum of three contributions, namely \(g_{1}(x)\), \(g_{2}(x)\), and \(g_{3}(x)\), with
\[g_{1}(x) =\left[\begin{array}{ccc}0&0&0&0\\ 0&0&-M(q)&0\end{array}\right],\,g_{2}(x)=\left[\begin{array}{ccc}I&0&0&0\\ 0&I&0&0\end{array}\right],\] \[g_{3}(x) =\left[\begin{array}{ccc}0&0&0&0\\ 0&0&0&q\end{array}\right].\]
The first comes from the linearisation of \(\dot{\hat{b}}_{g}=0\) and \(\dot{q}=M(q)\omega=M(q)(y_{g}-\nu_{g})\) with respect to \(\bar{w}\), the second keeps \(g(\hat{x})Qg^{\top}(\hat{x})\succ 0\) for any \(\hat{x}\in\mathbb{R}^{7}\) (required for the stability of (9)), and the third makes \(g(\hat{x})Qg^{\top}(\hat{x})\) well conditioned for \(\hat{x}\,:\,\|\hat{q}\|\approx 1\) (to improve the performance of (9)). \(\square\)
Remark 3: Observer (9) provides the estimate \(\hat{q}\), which does not represent a rotation because \(\|\hat{q}\|\) is not guaranteed to be unitary. To ensure \(\|\hat{q}\|=1\), one should implement algorithms designed on \(\mathbb{H}\), see [23, 24] and [25]. In [23], the strategy is to estimate, through an EKF, a suitable parametrisation of the attitude (_e.g._, the Gibbs vector). The drawback of this approach consists mainly of the non-linearities the EKF must face. As for [24], the observer is composed of a (non-extended) Kalman filter designed on a linearisation point. Finally, [25] proposes a non-linear CF whose gain belongs to \(\mathbb{R}^{2}\) (in the case of gyro bias compensation). To the authors' best understanding, the design of the observer gains is not associated with any physical properties of the sensor suite. \(\square\)
In this context, the observer proposed in this paper exploits the bi-linear nature of (4) to guarantee observability properties, demonstrated in Theorem 1, which are valid for any observer trajectory (and not only for a linearisation point). Moreover, the design of the observer gain exploits physical features of the selected sensors, thus reducing the number of hand-tuned parameters to one. \(\square\)
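A minimal forward-Euler sketch of the observer (9) is reported below; it is not the authors' implementation. The block sizes of \(g(\hat{x})\) follow Remark 2 read with column blocks (3, 4, 3, 1), so the matching \(Q\) is sized 11\(\times\)11 here (an assumption on our part), the Jacobian \(A(\hat{x},t)\) is approximated by finite differences, and a scalar-first quaternion convention is assumed for \(M(q)\).

```python
import numpy as np

def M(q):
    """Quaternion kinematic matrix (scalar-first convention), q_dot = M(q) @ omega."""
    q0, q1, q2, q3 = q
    return 0.5*np.array([[-q1, -q2, -q3],
                         [ q0, -q3,  q2],
                         [ q3,  q0, -q1],
                         [-q2,  q1,  q0]])

H = np.hstack([np.zeros((4, 3)), np.eye(4)])   # picks the quaternion part of x = col(b_g, q)

def f(x, y_g):
    """Drift f(x,t) of (8): constant gyro bias, quaternion kinematics driven by y_g - b_g."""
    b_g, q = x[:3], x[3:]
    return np.concatenate([np.zeros(3), M(q) @ (y_g - b_g)])

def A_jac(x, y_g, eps=1e-6):
    """Finite-difference stand-in for the analytic Jacobian A(x,t) = df/dx."""
    f0 = f(x, y_g)
    J = np.zeros((7, 7))
    for i in range(7):
        dx = np.zeros(7); dx[i] = eps
        J[:, i] = (f(x + dx, y_g) - f0)/eps
    return J

def g_mat(x):
    """Block matrix [[I,0,0,0],[0,I,-M(q),q]] with column blocks (3,4,3,1)."""
    q = x[3:]
    top = np.hstack([np.eye(3), np.zeros((3, 8))])
    bot = np.hstack([np.zeros((4, 3)), np.eye(4), -M(q), q.reshape(4, 1)])
    return np.vstack([top, bot])

def ekf_step(x_hat, S, q1, y_g, Q, R, lam, dt):
    """One forward-Euler step of (9a)-(9b); x_hat in R^7, S is 7x7 positive definite."""
    A = A_jac(x_hat, y_g)
    G = g_mat(x_hat)
    Rinv = np.linalg.inv(R)
    K = np.linalg.solve(S, H.T @ Rinv)                       # S^{-1} H^T R^{-1}
    x_dot = f(x_hat, y_g) + K @ (q1 - H @ x_hat)
    S_dot = -S @ A - A.T @ S - lam*(S @ G @ Q @ G.T @ S) + H.T @ Rinv @ H
    return x_hat + dt*x_dot, S + dt*S_dot

def roll_from_quat(q):
    """Output map (9c): roll angle from the normalised quaternion (3-2-1 convention assumed)."""
    q = q/np.linalg.norm(q)
    q0, q1, q2, q3 = q
    return np.arctan2(2*(q0*q1 + q2*q3), 1 - 2*(q1**2 + q2**2))
```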
Theorem 1, which summarises the theoretical results of this paper, is valid under the following assumptions.
Assumption 2 (Manoeuvres): Let \(a(t)\) and \(\xi_{e}(t)\) represent the dynamic state of the motorbike at time \(t\geq 0\). Then, there exists \(\underline{a}>0\) such that \((\texttt{g}\cos\gamma(t)-\texttt{v}(t)\dot{\gamma}(t))a_{z}(t)+\texttt{v}(t)\dot{\chi}(t)a_{y}(t)\cos\gamma(t)>\underline{a}\) for all \(t\geq 0\). \(\square\)
Assumption 3 (Boundedness of \(\nu\)): Define \(\nu(t)=q_{1}(t)-q(t)\). Then, there exists \(\overline{\nu}>0\) such that \(\|\nu(t)\|_{\infty}<\overline{\nu}\). \(\square\)
Assumption 4 (Motorbike Pitch): There exists \(\bar{\theta}\in[0,\,\pi/2)\) such that \(|\theta(t)|_{\infty}<\bar{\theta}\). \(\square\)
Assumption 2 ensures \(\phi_{\text{av}}(a(t),\xi_{e}(t),\texttt{g})\) is well-defined for any \(t\geq 0\). In practice, Assumption 2 does not represent a limitation. Indeed, for \(\underline{a}\ll 1\) and with computations similar to those used in the proof of Lemma 1, we can show that the most likely conditions for which Assumption 2 is not satisfied are ballistic trajectories and turns with extreme roll angles (\(|\phi|\approx 90\) deg), which are out of the nominal operating range of on-track race motorbikes.
Figure 1: Observer architecture. The attitude propagation through the quaternion dynamics is corrected thanks to the estimation \(q_{1}\). The feedback matrix is \(K(t):=S^{-1}(t)H^{\top}R^{-1}(t)\).
Assumption 3 is instrumental to assess the local stability of (9). Section 3.3 deals with the description and analysis of \(\nu(t)\).
Assumption 4 represents a necessary condition to bound roll angle estimation errors. However, in practice, this assumption does not represent a substantial limitation because, in on-track motorsport, motorcycles pitch by only a few degrees (inclusive of track grade and wheelies).
**Theorem 1**: _Let Assumptions 1-3 hold and \(S(t)\) be the solution to (9b). Then, there exist \(\underline{s},\overline{s}>0\) such that_
\[\underline{s}I\preceq S(t)\preceq\overline{s}I\qquad\forall t\geq 0.\]
_Moreover, there exist \(\rho>0\) and a class-\(\mathcal{K}\) function \(\beta(\cdot)\) such that, for any \(\|\dot{x}(0)-x(0)\|\leq\rho\), the trajectories of (9a) are bounded and \(\limsup_{t\to\infty}\|\dot{q}(t)-q(t)\|\leq\beta(\|\mathsf{col}(\nu(t),w(t))\|_ {\infty})\)._ To conclude, let Assumption 4 hold. Then, system (5)-(9c) solves Problem 1. \(\Box\)_
Theorem 1 is proved in Appendix A.1.
## 3 Description of the proposed solution
Section 3.1 describes the novel concept of _coordinated manoeuvre_, modelling complex motion configurations in which the rider's gravity centre is not on the motorbike's plane of symmetry. With the function \(\phi_{\mathrm{av}}(\cdot,\cdot,\cdot)\) at hand, the vector \(q_{1}\) is built through (6)-(7), where the estimate \(\hat{\xi}_{e}\) is obtained in Section 3.2 via the so-called _pre-filter_. Finally, Section 3.3 analyses the _pre-filter_ errors.
### Coordinated manoeuvres
Let \(v^{B}:=T(\Theta)v\). Then, we define as _coordinated manoeuvres_ the set of dynamic states such that \(v^{B}\equiv\mathsf{col}(\mathsf{v},0,0)\).
**Lemma 1**: _Consider \(\phi_{\mathrm{av}}(\cdot,\cdot,\cdot)\) defined in (5) and \(a:=T(\Theta)(\dot{v}-g)\), then_
\[\Theta=\mathsf{col}(\phi_{\mathrm{av}}(a,\xi_{e},\underline{g}),\gamma,\chi) \tag{10}\]
_during coordinated manoeuvres verifying Assumption 2. \(\Box\)_
The proof of Lemma 1 is reported in Appendix A.2.
In the remainder of this section, we show how the _coordinated manoeuvre_ assumption improves the roll angle estimation. Let \(v:=\mathsf{col}(v_{x},v_{y},v_{z})\) and \(V:=\sqrt{v_{x}^{2}+v_{y}^{2}}\). With reference to Figure 2, define a _flat-coordinated turn_ as a coordinated manoeuvre performed under the further constraints \(\gamma,\dot{\gamma}=0\). With these constraints at hand, the system composed of motorbike and biker is at equilibrium (translations and rotations) at
\[\phi_{v}(\xi_{e},\underline{g}):=-\tan^{-1}\left(\frac{V\dot{\chi}}{ \underline{g}}\right). \tag{11}\]
**Remark 4**: _It is worth noting that \(\phi_{v}\) denotes the roll angle that the complete system (motorbike + biker) negotiates to perform a coordinated turn. This angle corresponds to \(\phi\) when the biker does not move his body out of the motorbike symmetry plane. The differences between \(\phi\) and \(\phi_{v}\), due to the tire size and to the centre-of-gravity shift caused by the rider movements during flat-coordinated turns, are well-known concepts, as pointed out in ([26], §4.1.2) and recalled in [4]. However, to the authors' knowledge, what follows represents the first effective way to compensate for this difference in the context of roll angle estimation. \(\Box\)_
Equation (3) with \(\gamma=0\) and \(\psi=\chi\) becomes
\[a=T_{1}(\phi)T_{3}(\chi)(\dot{v}-g) \tag{12}\]
where \(T_{i}(s)\) denotes the matrix associated with a rotation, of magnitude \(s\), around the \(i\)-th axis. Then, since \(\dot{v}=V\dot{\chi}\mathsf{col}(-\sin\chi,\cos\chi,0)\), and using \(a=\mathsf{col}(a_{x},a_{y},a_{z})\), the roll angle is found through (12) as
\[\phi_{a}(a):=\tan^{-1}\left(a_{y}/a_{z}\right). \tag{13}\]
As detailed in Figure 2, the following equation holds
\[\phi=\phi_{v}-\Delta\phi \tag{14}\]
where \(\Delta\phi\in\mathbb{R}\) models the effects of the pilot displacement. We can exploit the body accelerations to correct \(\phi_{v}\) in flat-coordinated turns. Indeed, through basic geometric arguments, it results \(\phi_{a}=-\Delta\phi\). Then, use (11), (13) and (14) to verify
\[\phi_{\mathrm{av}}(a,\xi_{e},\underline{g})|_{\gamma,\dot{\gamma}=0}=\phi_{a }(a)+\phi_{v}(\xi_{e},\underline{g}).\]
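For reference, the flat-coordinated-turn relations (11), (13), and (14) can be transcribed directly; the short sketch below only restates the formulas, with illustrative argument names and a default gravity value that are our assumptions.

```python
import numpy as np

def phi_v(V, chi_dot, g=9.81):
    """Eq. (11): roll of the combined motorbike+rider system in a flat-coordinated turn."""
    return -np.arctan(V*chi_dot/g)

def phi_a(a):
    """Eq. (13): accelerometer-based correction term, a = [ax, ay, az]."""
    return np.arctan(a[1]/a[2])

def phi_flat_turn(a, V, chi_dot, g=9.81):
    """Eq. (14) with phi_a = -Delta_phi: phi = phi_a + phi_v when gamma = gamma_dot = 0."""
    return phi_a(a) + phi_v(V, chi_dot, g)
```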
### Pre-Filter
The _pre-filter_, representing the subsystem providing \(\hat{\xi}_{e}\), relies on two subsystems, _i.e._, the GNSS reconstructor, estimating \(v\) and \(\dot{v}\), and the continuous-course estimator, providing \(\hat{\chi}\).
#### 3.2.1 GNSS Reconstructor
Let \(v_{e}:=\mathsf{col}(v,\dot{v})\) and define \(f_{\xi_{e}}\,:\,\mathbb{R}^{6}\to\mathbb{R}^{6}\) such that
\[\xi_{e}=f_{\xi_{e}}(v_{e}) \tag{15}\]
Figure 2: Equilibrium of forces and torques in a flat-coordinated turn assuming the pilot body shifts the gravity centre of the system out of the longitudinal symmetry axis.
with \(f_{\xi_{e}}(\cdot)\) detailed in Appendix A.3. This work adopts a second-degree polynomial signal reconstructor that interpolates the \(n\in\mathbb{N}\) most recent samples of \(y_{s}\) to estimate \(v_{e}\), namely via \(\hat{v}_{e}\). Moreover, the reconstructor extrapolates the signal values over the next \(\tau\)-long interval. Finally, we impose
\[\hat{\xi}_{e}=f_{\xi_{\epsilon}}(\hat{v}_{e}). \tag{16}\]
More precisely, let \(k:=\lfloor t/\tau\rfloor\) be the maximum integer not greater than \(t/\tau\) and introduce
\[u(k,\tilde{t}):=c_{2}(k)\tilde{t}^{2}+c_{1}(k)\tilde{t}+c_{0}(k)\quad\tilde{t} \in[-(n-1)\tau,\tau) \tag{17}\]
where \(c_{2}(\cdot)\), \(c_{1}(\cdot)\), and \(c_{0}(\cdot)\,:\,\mathbb{N}\to\mathbb{R}^{3}\).
**Remark 5**: _Function \(u(\cdot,\cdot)\) is a vector-valued \(2^{\text{nd}}\)-degree polynomial describing a trajectory with a piecewise-constant jerk in the time interval \(((k-n+1)\tau,\,(k+1)\tau)\). In practice, we define \(c_{2}(\cdot)\), \(c_{1}(\cdot)\), and \(c_{0}(\cdot)\) as a jerk, acceleration, and speed at time \(t=k\tau\) that best describe the last \(n\) GNSS data. Finally, we use the same coefficients to preview inertial speed and acceleration within the next \(\tau\)-long time window. \(\square\)_
Use (17) to define
\[\tilde{u}(k,s)=u(k,s-k\tau)-v(s)=u(k,s-k\tau)-h_{v}(\xi(s))\]
for all \(s\in[(k-n+1)\tau,\,(k+1)\tau)\). Then, the GNSS receiver provides at time \(t=k\tau\)
\[y_{s}(k\tau)=u(k,0)-\tilde{u}(k,k\tau)+\nu_{s}(k\tau). \tag{18}\]
In the following, we present an algorithm elaborating the \(n\) most recent samples \(y_{s}(k\tau),\dots,y_{s}((k-n+1)\tau)\) to provide estimations for \(c_{2}(k)\), \(c_{1}(k)\), and \(c_{0}(k)\), namely \(\hat{c}_{2k}\), \(\hat{c}_{1k}\), and \(\hat{c}_{0k}\), respectively. Consequently, we propose to reconstruct \(u(\cdot,\cdot)\) as
\[\hat{u}(k,\tilde{t}):=\hat{c}_{2k}\tilde{t}^{2}+\hat{c}_{1k}\tilde{t}+\hat{c} _{0k}\]
and to use it to approximate \(v(t)\) and \(\dot{v}(t)\) by
\[\hat{v}(t) =\hat{u}(k,t-k\tau)=\hat{c}_{2k}(t-k\tau)^{2}+\hat{c}_{1k}(t-k\tau)+\hat{c}_{0k} \tag{19}\] \[\hat{\dot{v}}(t) =\frac{\partial\hat{u}(k,\tilde{t})}{\partial\tilde{t}}\Big{|}_{\tilde{t}=t-k\tau}=2\hat{c}_{2k}(t-k\tau)+\hat{c}_{1k}\]
where \(t\in[k\tau,(k+1)\tau)\). Let \(R_{\nu_{s}}=R_{\nu_{s}}^{\top}\in\mathbb{R}^{3\times 3}\) with \(R_{\nu_{s}}\succ 0\). Then, we determine \(\hat{\zeta}_{k}:=\texttt{col}(\hat{c}_{2k},\hat{c}_{1k},\hat{c}_{0k})\) through
\[\hat{\zeta}_{k}=\underset{x\in\mathbb{R}^{9}}{\operatorname{argmin}}\{(y_{k}- Cx)^{\top}(I_{n}\otimes R_{\nu_{s}})^{-1}(y_{k}-Cx)\} \tag{20}\]
where \(y_{k}=\texttt{col}(y_{s}(k\tau),\dots,y_{s}((k-n+1)\tau))\),
\[C=\left[\begin{array}{ccc}0&0&1\\ \tau^{2}&-\tau&1\\ \vdots&\vdots&\vdots\\ (n-1)^{2}\tau^{2}&-(n-1)\tau&1\end{array}\right]\otimes I_{3},\]
and \(n\geq 3\). The solution of (20) is (see [27])
\[\hat{\zeta}_{k}=K_{s}y_{k} \tag{21}\]
with
\[K_{s}=(C^{\top}(I_{n}\otimes R_{\nu_{s}})^{-1}C)^{-1}C^{\top}(I_{n}\otimes R_{ \nu_{s}})^{-1}. \tag{22}\]
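The weighted least-squares reconstructor (20)-(22) and the evaluation (19) translate into a few lines of code; the sketch below is a minimal illustration rather than the authors' implementation, and it assumes the GNSS samples are stacked most-recent-first, matching the definition of \(y_{k}\).

```python
import numpy as np

def reconstructor_gain(n, tau, R_nu_s):
    """Eq. (22): weighted least-squares gain K_s for n >= 3 GNSS samples spaced tau seconds."""
    rows = [np.array([(i*tau)**2, -(i*tau), 1.0]) for i in range(n)]
    C = np.kron(np.vstack(rows), np.eye(3))                  # regressor matrix below (20)
    W = np.kron(np.eye(n), np.linalg.inv(R_nu_s))            # (I_n (x) R_nu_s)^{-1}
    return np.linalg.solve(C.T @ W @ C, C.T @ W)             # (C^T W C)^{-1} C^T W

def reconstruct(y_stack, K_s, t_rel):
    """Eqs. (21) and (19): estimate v(t) and v_dot(t) at t_rel = t - k*tau in [0, tau);
    y_stack = col(y_s(k*tau), ..., y_s((k-n+1)*tau)) as a flat vector of length 3n."""
    zeta = K_s @ y_stack                                      # col(c2_hat, c1_hat, c0_hat)
    c2, c1, c0 = zeta[0:3], zeta[3:6], zeta[6:9]
    v_hat = c2*t_rel**2 + c1*t_rel + c0
    v_dot_hat = 2*c2*t_rel + c1
    return v_hat, v_dot_hat
```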
#### Continuous-Course Estimator
The computation of \(\hat{\chi}\) from the inertial speeds \(\hat{v}_{x}\) and \(\hat{v}_{y}\), made through either \(\tan^{-1}(\hat{v}_{y}/\hat{v}_{x})\) or \(\texttt{atan}_{2}(\hat{v}_{y},\hat{v}_{x})\), is prone to discontinuities, which could induce wrong roll angle estimations.
This section proposes a novel continuous map \(t\mapsto\hat{\chi}(t)\) that solves this issue. With reference to (19), remember that \(\hat{v}:=\texttt{col}(\hat{v}_{x},\hat{v}_{y},\hat{v}_{z})\) and \(\hat{\dot{v}}:=\texttt{col}(\hat{\dot{v}}_{x},\hat{\dot{v}}_{y},\hat{\dot{v}}_{z})\) and define
\[\mathcal{T}_{k}^{+} =\{t\in[k\tau,(k+1)\tau)\,:\,\hat{v}_{y}(t)=0,\,\hat{\dot{v}}_{y}(t)<0,\,\hat{v}_{x}(t)<0\}\] \[\mathcal{T}_{k}^{-} =\{t\in[k\tau,(k+1)\tau)\,:\,\hat{v}_{y}(t)=0,\,\hat{\dot{v}}_{y}(t)>0,\,\hat{v}_{x}(t)<0\},\]
where \(\mathcal{T}_{k}^{+}\) and \(\mathcal{T}_{k}^{-}\) are analytically found thanks to (19) being polynomial functions of time. Now, define \(\mathcal{T}^{+}=\bigcup_{k\in\mathbb{N}}\mathcal{T}_{k}^{+}\) and \(\mathcal{T}^{-}=\bigcup_{k\in\mathbb{N}}\mathcal{T}_{k}^{-}\) and use them to feed the lap counters (35). Then, adopt \(f_{\chi}(\cdot,\cdot,\cdot)\) detailed in Appendix A.3 to introduce
\[\hat{\chi}\,=f_{\chi}(\hat{v},N_{+},N_{-}). \tag{23}\]
**Lemma 2**: _Consider (23), then_
\[\hat{\chi}(t)=f_{\chi}(\hat{v}(t),N_{+}(t),N_{-}(t))\in\mathcal{C}^{1}.\]
\(\square\)__
Appendix A.4 details the proof of Lemma 2.
### Error Boundedness
In this section we investigate the error \(\nu=q_{1}-q\). To this end, let \(\tilde{\xi}_{e}:=\hat{\xi}_{e}-\xi_{e}\) and
\[\Theta_{\text{av}}:=f_{\Theta}(a,\xi_{e},\texttt{g}), \tag{24}\]
define \(\tilde{\texttt{g}}=\hat{\texttt{g}}-\texttt{g}\), and remember \(q_{1}=\bar{h}(\hat{\Theta}_{\text{av}})\). Then,
\[\begin{split}\nu&=\bar{h}(\hat{\Theta}_{\text{av}})- \bar{h}(\Theta_{\text{av}})+\bar{h}(\Theta_{\text{av}})-q=\\ =&\nu_{\nu}(t,\nu_{a},\tilde{\xi}_{e},\tilde{ \texttt{g}})+\nu_{\text{m}}(t)\end{split} \tag{25}\]
where
\[\begin{split}\nu_{\text{m}}(t)&:=\bar{h}(f_{\Theta}(a(t),\xi_{e}(t),\texttt{g}))-q(t)\\ \nu_{\nu}(t,\nu_{a},\tilde{\xi}_{e},\tilde{\texttt{g}})&:=\bar{h}(f_{\Theta}(y_{a}(t),\hat{\xi}_{e}(t),\hat{\texttt{g}}))\\ &-\bar{h}(f_{\Theta}(y_{a}(t)-\nu_{a},\hat{\xi}_{e}(t)-\tilde{\xi}_{e},\hat{\texttt{g}}-\tilde{\texttt{g}})).\end{split} \tag{26}\]
The first error contribution, _i.e._, \(\nu_{\text{m}}\), embeds the errors due to the model mismatch, _i.e._, the difference between the actual motorbike evolution and a coordinated manoeuvre. In detail, \(\nu_{\text{m}}\) is highly dependent on the rider's driving style, mainly due to wheelies and drifts. Thus, with a particular focus on applications like Grand Prix motorcycle racing, the error \(\nu_{\text{m}}\) is usually negligible except during tail-wagging or corner entries. However, these represent short-duration driving phases in which the side-slip remains bounded. We formalise this through the following assumption.
**Assumption 5** (Boundedness of \(\nu_{\text{m}}\)): _There exists \(\overline{\nu}_{\text{m}}>0\) such that \(\|\nu_{\text{m}}(t)\|_{\infty}<\overline{\nu}_{\text{m}}\). \(\square\)_
As for \(\nu_{\nu}\), it describes the uncertainties induced by the sensor inaccuracy plus those introduced by the estimator of \(\xi_{e}\). Note that, since \(\bar{h}(\Theta)\) and all its derivatives are Lipschitz and bounded for all \(\Theta\in\mathbb{R}^{3}\), there exists \(\overline{\nu}_{\nu}>0\) such that \(\|\nu_{\nu}(t,\nu_{a},\tilde{\xi}_{e},\tilde{\texttt{g}})\|_{\infty}<\overline{\nu}_{\nu}\) for all \(t\geq 0\).
## 4 Experimental results
This section presents the results of numerical and field tests. While the former were conducted to check the expected theoretical behaviour, the latter were performed to assess the applicability of the proposed estimation scheme.
The field tests were conducted by AvioRace [28], a provider of electronics specialised in motorsport applications. Since the parties agreed on a data protection policy, sensitive data collected during the field tests are shown without scale. The lack of quantitative evaluations is compensated in Section 4.4, where the algorithm is tested in a realistic synthetic environment.
### Pre-Filter Performance Analysis
The boundedness of \(\|\nu_{\nu}(t)\|_{\infty}\) demonstrated in Section 3.3 is necessary for the observer stability proof. However, such a worst-case bound is too conservative for assessing the pre-filter's performance. Therefore, this section reports a stochastic description of \(\nu_{\nu}\), which is also used to tune (9). To this aim, we propose the following process:
1. we introduce a stochastic model of sensor noise
2. we estimate how GNSS measurement errors and the GNSS reconstructor impact \(\tilde{v}:=\hat{v}-v\) and \(\tilde{\dot{v}}:=\hat{\dot{v}}-\dot{v}\)
3. we rely on the results of point 2) to describe \(\tilde{\xi}:=\hat{\xi}-\xi\) and \(\tilde{\dot{\xi}}:=\hat{\dot{\xi}}-\dot{\xi}\)
4. we use the results from point 3) to characterize \(\nu_{\nu}\).
#### 4.1.1 Stochastic Description of Sensor Noise
Let \(w_{\#}:\mathbb{R}\to\mathbb{R}^{9}\), for all \(\#\in\{a,g,s\}\), be a stationary stochastic process. Then, in agreement with [29], this paper models the measurement errors appearing in (3) as
\[\begin{split}\dot{b}_{\#}=&\,A_{\#}b_{\#}+B_{\#} \,w_{\#}(t)\quad b_{\#}(0)=b_{\#0}\\ \nu_{\#}=&\,C_{\#}\,b_{\#}+D_{\#}\,w_{\#}(t)\quad \#\in\{a,g,s\}\end{split} \tag{27}\]
with \(b_{\#}:=\texttt{col}(\bar{b}_{\#},z_{\#})\), \(\bar{b}_{\#},z_{\#}\in\mathbb{R}^{3}\),
\[\begin{split} A_{\#}=\left[\begin{array}{cc}0&0\\ 0&-\tau_{\#}^{-1}\end{array}\right]\otimes I_{3}\quad B_{\#}=\left[\begin{array} []{cc}1&0&0\\ 0&1&0\end{array}\right]\otimes I_{3}\\ C_{\#}=\left[\begin{array}{cc}1&1\end{array}\right]\otimes I_{3}\quad D _{\#}=\left[\begin{array}{cc}0&0&1\end{array}\right]\otimes I_{3},\end{split}\]
and where \(\tau_{\#}>0\) and \(\otimes\) denotes the Kronecker product.
Usually the Power Spectral Density (PSD) of \(w_{\#}\) is constant within the sensor sampling frequency. Hence, for analysis purposes, \(w_{\#}\) can be treated as a white noise by assuming its PSD constant for all frequencies. Therefore, we assume \(\mathrm{E}[w_{\#}(t)]=0\), and \(\mathrm{E}[w_{\#}(t)w_{\#}^{\top}(\tau)]=R_{\#}\delta(t-\tau)\) with \(R_{\#}\succ 0\) and block diagonal, for all \(\#\in\{a,g,s\}\). Lastly, for the GNSS receiver we enforce \(b_{s0}=0\) and \(B_{s}w_{s}(t)=0\) for all \(t\geq 0\).
Consequently, \(\bar{b}_{\#}\) models a biased random walk while \(z_{\#}\) represents a coloured noise (with a time constant \(\tau_{\#}\)). In particular, only a \(0\)-mean white noise affects the GNSS measurement.
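A minimal discrete-time simulation of the error model (27) is sketched below; the Euler discretisation, the PSD-to-variance scaling (division by the time step), and passing the three 3\(\times\)3 blocks of \(R_{\#}\) separately are our simplifying assumptions.

```python
import numpy as np

def simulate_sensor_error(T, dt, tau_c, R_blocks, rng=np.random.default_rng(0)):
    """Discrete-time sketch of (27) for one sensor (# in {a, g, s}).
    b = col(b_bar, z): b_bar is a random-walk bias, z a first-order coloured noise with
    time constant tau_c; the output error is nu = b_bar + z + white noise.
    R_blocks = (R_b, R_z, R_w) are the 3x3 PSD-like covariances of the white drivers."""
    R_b, R_z, R_w = R_blocks
    n = int(T/dt)
    b_bar, z = np.zeros(3), np.zeros(3)
    nu = np.zeros((n, 3))
    for k in range(n):
        w_b = rng.multivariate_normal(np.zeros(3), R_b/dt)   # PSD -> sample variance / dt
        w_z = rng.multivariate_normal(np.zeros(3), R_z/dt)
        w_w = rng.multivariate_normal(np.zeros(3), R_w/dt)
        b_bar = b_bar + dt*w_b                               # random-walk bias
        z = z + dt*(-z/tau_c + w_z)                          # coloured (Gauss-Markov) noise
        nu[k] = b_bar + z + w_w                              # nu_# = C_# b_# + D_# w_#
    return nu
```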
#### 4.1.2 Analysis of \(\tilde{v}\) and \(\tilde{\tilde{v}}\)
Introduce
\[\begin{split}\zeta_{k}:=&\texttt{col}(c_{2}(k),c_{1}(k),c_{ 0}(k))\\ \tilde{u}_{k}:=&\texttt{col}(\tilde{u}(k,k\tau),\ldots,\tilde{u}(k,(k-n+1) \tau))\\ \nu_{k}:=&\texttt{col}(\nu_{s}(k\tau),\ldots,\nu_{s}((k-n+1) \tau))\end{split}\]
write \(y_{k}=C\zeta_{k}-\tilde{u}_{k}+\nu_{k}\) and use \(R_{\nu_{s}}:=DR_{s}D^{\top}\) into (22).
**Remark 6**: Usually \(R_{\nu_{s}}=\texttt{diag}(\sigma_{x}^{2},\sigma_{y}^{2},\sigma_{z}^{2})\) where \(\sigma_{x}\), \(\sigma_{y}\), and \(\sigma_{z}\) are known and correspond to figures of merit of GNSS, known as _User Range Rate Error_.
Besides, use (21) to write
\[\tilde{\zeta}_{k}:=\hat{\zeta}_{k}-\zeta_{k}=K_{s}y_{k}-\zeta_{k}=K_{s}(\nu_{ k}-\tilde{u}_{k}). \tag{28}\]
Introduce
\[\Phi(\tilde{t}):=\left[\begin{array}{ccc}\tilde{t}^{2}&\tilde{t}&1\\ 2\,\tilde{t}&1&0\end{array}\right]\otimes I_{3},\]
define \(\hat{v}_{e}:=\texttt{col}(\hat{v},\hat{\dot{v}})\), and rewrite (19) as
\[\hat{v}_{e}=\Phi(t-k\tau)\hat{\zeta}_{k}\qquad t\in[k\tau,(k+1)\tau). \tag{29}\]
Introduce \(v_{e}:=\texttt{col}(v,\dot{v})\) and \(\tilde{u}_{e}:=\texttt{col}(\tilde{u},\dot{\tilde{u}})\) and use (28) and (29) to calculate the estimation errors
\[\begin{split}\tilde{v}_{e}(t)=&\,\hat{v}_{e}(t)-v_{e}(t)\\ =&\,\Phi(t-k\tau)K_{s}(\nu_{k}-\tilde{u}_{k})+\tilde{u}_{e}(t)\end{split} \tag{30}\]
for all \(t\in[k\tau,\,(k+1)\tau)\). Use (30) to compute
\[\begin{split}\mathrm{E}[\tilde{v}_{e}(t)]=&\,\mathrm{E}[ \Phi(t-k\tau)K_{s}(\nu_{k}-\tilde{u}_{k})+\tilde{u}_{e}(t)]\\ =&\,-\Phi(t-k\tau)K_{s}\mathrm{E}[\tilde{u}_{k}]+\mathrm{E}[ \tilde{u}_{e}(t)]\end{split}\]
in which we have exploited \(\mathrm{E}[\nu_{k}]=0\).
**Remark 7**: The quantity \(\mathrm{E}[\tilde{v}_{e}]\) represents the expected velocity and acceleration estimate errors. Roughly, \(\mathrm{E}[\tilde{v}_{e}(t)]=0\) because it can be demonstrated that \(\tilde{u}_{k}=\tilde{u}_{e}(t)=0\) if the motorbike travels at a statistically constant jerk within \(n\tau\)-long time intervals. We assumed \(\tilde{v}_{e}(t)\) to be an ergodic process in order to test whether the expected value is near zero in a real-world scenario. Therefore, we computed the time average by using the GNSS samples reported in the experimental tests of Section 4.3. The results show that, in practice, \(\mathrm{E}[\tilde{v}_{e}]\approx 0\).
Finally, compute the covariance of the estimation error
\[R_{v}(t):=\,\mathrm{E}[(\tilde{v}_{e}(t)-\mathrm{E}[\tilde{v}_{e}(t)])(\tilde{ v}_{e}(t)-\mathrm{E}[\tilde{v}_{e}(t)])^{\top}]\]
by applying (30) and exploiting the assumptions \(\mathrm{E}[\nu_{k}\nu_{k}^{\top}]=I_{n}\otimes R_{\nu_{s}}\), \(\mathrm{E}[\nu_{k}\tilde{u}_{k}^{\top}]=0\), \(\mathrm{E}[\nu_{k}\tilde{u}_{e}^{\top}(t)]=0\), \(\mathrm{E}[\tilde{u}_{k}\tilde{u}_{k}^{\top}]=0\), \(\mathrm{E}[\tilde{v}_{e}(t)]=0\), and \(\mathrm{E}[\nu_{k}]=0\). After some algebra, it results in
\[R_{v}(t)=\Phi(t-k\tau)K_{s}(I_{n}\otimes R_{\nu_{s}})K_{s}^{\top}\Phi^{\top}(t-k \tau). \tag{31}\]
In conclusion, it is worth noting that \(R_{v}\) is bounded because, since \(\Phi(t-k\tau)\) is a polynomial function of \(t\), there exists a finite \(\overline{\Phi}(\tau)>0\) such that \(\|\Phi(t-k\tau)\|\leq\overline{\Phi}(\tau)\) for all \(t\in[k\tau,\,(k+1)\tau)\) and for any \(k\in\mathbb{N}\). In particular, as a conservative approach, the upper bound \(\bar{R}_{vk}\) of the covariance can be chosen accordingly to
\[\bar{R}_{vk}=R_{v}((k+1)\tau). \tag{32}\]
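Equations (31)-(32) translate directly into a few lines; \(K_{s}\) is the gain of Eq. (22) (e.g., as sketched in Section 3.2), and the helper names below are again illustrative rather than the authors' implementation.

```python
import numpy as np

def Phi(t_rel):
    """Block matrix mapping col(c2, c1, c0) to col(v, v_dot) at t_rel = t - k*tau."""
    return np.kron(np.array([[t_rel**2, t_rel, 1.0],
                             [2*t_rel,  1.0,   0.0]]), np.eye(3))

def R_v(t_rel, K_s, R_nu_s, n):
    """Eq. (31): estimation-error covariance of col(v_hat, v_dot_hat)."""
    P = Phi(t_rel) @ K_s
    return P @ np.kron(np.eye(n), R_nu_s) @ P.T

def R_v_bound(tau, K_s, R_nu_s, n):
    """Eq. (32): conservative bound obtained by evaluating R_v at the end of the interval."""
    return R_v(tau, K_s, R_nu_s, n)
```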
#### 4.1.3 Analysis of \(\tilde{\xi}\) and \(\tilde{\tilde{\xi}}\)
Use (16) and \(\tilde{v}_{e}=\hat{v}_{e}-v_{e}\) to compute
\[\tilde{\xi}_{e}=f_{\xi_{e}}(\hat{v}_{e})-f_{\xi_{e}}(\hat{v}_{e}-\tilde{v}_{e}) \approx J_{\xi_{e}}(\hat{v}_{e})\tilde{v}_{e}\]
where \(J_{\xi_{e}}(\hat{v}_{e}):=\partial f_{\xi_{e}}(\hat{v}_{e})/\partial\hat{v}_{e}\). Then, the expected value is \(\mathds{E}[\tilde{\xi}_{e}]\approx J_{\xi_{e}}(\hat{v}_{e})\mathds{E}[\tilde{ v}_{e}]\). To conclude, we exploit (31) to compute
\[R_{\xi_{e}}(t)= \,\mathds{E}[(\tilde{\xi}_{e}(t)-\mathds{E}[\tilde{\xi}_{e}(t)])(\tilde{\xi}_{e}(t)-\mathds{E}[\tilde{\xi}_{e}(t)])^{\top}]\] \[\approx J_{\xi_{e}}(\hat{v}_{e}(t))R_{v}(t)J_{\xi_{e}}^{\top}(\hat{v}_{e}(t))-\mathds{E}[\tilde{\xi}_{e}(t)]\mathds{E}[\tilde{\xi}_{e}^{\top}(t)]\]
where it is worth observing that \(\|J_{\xi_{e}}(\cdot)\|\) is bounded under the following Assumption.
Assumption 6 (Motorbike Speed): The inertial speed \(v(t)\) is a Lipschitz continuous function. Moreover, there exist \(\underline{v},\overline{v}>0\) such that \(\sqrt{v_{x}^{2}(t)+v_{y}^{2}(t)}>\underline{v}\) and \(\texttt{v}(t)<\overline{v}\) for all \(t\geq 0\). \(\square\)
As for Assumption 6, Eq. (2) becomes bijective if \(\texttt{v}(t)=\sqrt{v^{\top}(t)v(t)}\geq\sqrt{v_{x}^{2}(t)+v_{y}^{2}(t)}\) is strictly positive. Moreover, the Lipschitz continuity of \(v(t)\) ensures that \(\dot{v}(t)\) and \(y_{a}(t)\) are bounded. It is worth noting that Assumption 6 does not represent a constraint because, in practice, motorbikes are power- and force-limited systems, for which the assumption of a Lipschitz continuous speed is a matter of fact. Indeed, engines deliver bounded powers and torques while tires transfer bounded traction/braking forces to the ground, thus limiting accelerations.
#### 4.1.4 Analysis of \(\nu_{\nu}\)
Exploit (6) and (24) to calculate
\[\tilde{\Theta}_{\text{av}}:=\hat{\Theta}_{\text{av}}-\Theta_{\text{av}}\approx J _{\Theta}(t)\texttt{col}(\nu_{a},\tilde{\xi}_{e},\tilde{\mathbf{g}})\]
in which \(J_{\Theta}(t)=\partial f_{\Theta}(a,\xi_{e},\mathbf{g})/\partial\texttt{col}( a,\xi_{e},\mathbf{g})\) evaluated at \(a=y_{a}(t)\), \(\xi_{e}=\hat{\xi}_{e}(t)\), and \(\mathbf{g}=\hat{\mathbf{g}}\). The expected value of \(\tilde{\Theta}_{\text{av}}\) is
\[\mathds{E}[\tilde{\Theta}_{\text{av}}(t)]\approx J_{\Theta}(t)\texttt{col}(\mathds{E}[\nu_{a}],\mathds{E}[\tilde{\xi}_{e}(t)],\mathds{E}[\tilde{\texttt{g}}])\]
while the covariance of \(\tilde{\Theta}_{\text{av}}\) is
\[R_{\Theta}(t):= \,\mathds{E}[(\tilde{\Theta}_{\text{av}}(t)-\mathds{E}[\tilde{\Theta}_{\text{av}}(t)])(\tilde{\Theta}_{\text{av}}(t)-\mathds{E}[\tilde{\Theta}_{\text{av}}(t)])^{\top}]\] \[\approx J_{\Theta}(t)\,\texttt{blkdiag}(R_{\nu_{a}},R_{\xi_{e}}(t),0)\,J_{\Theta}^{\top}(t)-\mathds{E}[\tilde{\Theta}_{\text{av}}(t)]\,\mathds{E}[\tilde{\Theta}_{\text{av}}^{\top}(t)]\]
in which we exploited \(\mathds{E}[\tilde{\texttt{g}}^{2}]=0\). It is worth noting that \(\|J_{\Theta}(t)\|_{\infty}\) is bounded. Finally, with the same steps outlined in Eqs. (37)-(43) of [30], we obtain
\[\mathds{E}[\nu_{\nu}\,\nu_{\nu}^{\top}]=M(q_{1}(t))R_{\Theta}(t)M^{\top}(q_{1} (t)). \tag{33}\]
### EKF Tuning Guidelines
The tunable quantities of (9) are \(\lambda\), \(Q\), and \(R(t)\). In this section, we exploit the results of Section 4.1 to design \(R(t)\). Besides, we propose to exploit (3) and (4), and the stability arguments detailed in the proof of Theorem 1, to define \(Q\). The tuning of the EKF matrices can be divided into two parts: the selection of intra-matrix weights and the selection of inter-matrix weights. The idea is that the hardest part of the tuning, _i.e._, the selection of intra-weights for \(Q\) and \(R(t)\) (which are not necessarily purely diagonal), is made through an automatic computational procedure. Consequently, we leave \(\lambda\), representing the inter-matrix weight, as the single scalar hand-tuned parameter.
Markley and Pittelkau demonstrated in [30] and [31] that \(\mathds{E}[\nu_{\nu}(t)\,\nu_{\nu}^{\top}(t)]\) computed in (33) is singular because of \(\bar{h}(\cdot)\), which enforces the constraint of unitary norm. Consequently, in agreement with [30] and [31], we avoid singularities by introducing \(\beta_{R}(\cdot)\,:\,\mathbb{R}\to\mathbb{R}\) and take
\[R(t):=\beta_{R}^{2}(t)q_{1}(t)q_{1}^{\top}(t)+M(q_{1}(t))\bar{R}_{\Theta}(t)M^{ \top}(q_{1}(t)) \tag{34}\]
where \(\bar{R}_{\Theta}(t)\) represents \(R_{\Theta}(t)\) in the worst condition (32). As for \(\beta_{R}(t)\), we set \(\beta_{R}(t)=(\underline{\sigma}(R_{\Theta}(t))+\overline{\sigma}(R_{\Theta}(t )))/2\), where \(\underline{\sigma}(R_{\Theta})\) and \(\overline{\sigma}(R_{\Theta})\) are the smallest and largest singular value of \(R_{\Theta}\). This choice assures both the non-singularity and the well-conditioning of \(R(t)\), important for the computation of its inverse.
As for \(Q\), we propose the following setting. In agreement with (27), let \(w_{b}\), \(w_{z}\), and \(w_{\nu}\in\mathbb{R}^{3}\) be such that \(w_{g}=\texttt{col}(w_{b},w_{z},w_{\nu})\). Define \(\mathds{E}[w_{b}w_{b}^{\top}]\), \(\mathds{E}[w_{z}w_{z}^{\top}]\), and \(\mathds{E}[w_{\nu}w_{\nu}^{\top}]\) such that \(R_{g}=\texttt{blkdiag}(\mathds{E}[w_{b}w_{b}^{\top}],\mathds{E}[w_{z}w_{z}^{\top}],\mathds{E}[w_{\nu}w_{\nu}^{\top}])\). Introduce \(\mathds{E}[z_{g}z_{g}^{\top}]:=(0.4365)^{2}\tau_{g}^{2}\mathds{E}[w_{z}w_{z}^{\top}]\), corresponding to the covariance of \(z_{g}\) evaluated at the worst Allan power spectral density ([29], Eq. (37)). Then, introduce \(\beta_{Q},\epsilon>0\) and write
\[Q=\texttt{blkdiag}(\mathds{E}[w_{b}w_{b}^{\top}],\epsilon I,\mathds{E}[w_{\nu}w_{ \nu}^{\top}]+\mathds{E}[z_{g}z_{g}^{\top}],\beta_{Q}^{2}).\]
Inspired by the same arguments adopted for \(\beta_{R}\), the degree of freedom \(\beta_{Q}\) is set as the average of the smallest and largest singular values of \(\mathds{E}[w_{\nu}w_{\nu}^{\top}]+\mathds{E}[z_{g}z_{g}^{\top}]\), _i.e._,
\[\beta_{Q}=\frac{1}{2}\left(\underline{\sigma}(\mathds{E}[w_{\nu}w_{\nu}^{ \top}]+\mathds{E}[z_{g}z_{g}^{\top}])+\overline{\sigma}(\mathds{E}[w_{\nu}w_{ \nu}^{\top}]+\mathds{E}[z_{g}z_{g}^{\top}])\right).\]
Since the parameter \(\epsilon\) is only necessary for the observer stability, _i.e._, to guarantee that \(\hat{q}=0\) does not belong to any forward invariant set for (9), we set \(\epsilon\ll\underline{\sigma}(\mathds{E}[w_{\nu}w_{\nu}^{\top}]+\mathds{E}[z_{g}z_{ g}^{\top}])\) to let its contribution be relevant only for \(\|\hat{q}\|\approx 0\).
Finally, \(\lambda\) regulates the magnitude of \(Q\) relative to \(R(t)\) and, consequently, the behaviour of \(S(t)\) and \(K(t):=S^{-1}(t)H^{\top}R^{-1}(t)\). It is a common fact that increasing the ratio between \(Q\) and \(R(t)\) increases \(\overline{\sigma}(K(t))\) and thus makes the EKF have a shorter transient but a higher noise sensitivity. Therefore, in practice, the best compromise is found through an on-field trial-and-error procedure on \(\lambda\).
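The construction of \(R(t)\) in (34) and of \(Q\) can be sketched as follows. The \(\epsilon\)-block of \(Q\) is sized 4\(\times\)4 here so that \(Q\) matches the (3, 4, 3, 1) column blocks of \(g(\hat{x})\); this sizing, the scalar-first quaternion convention for \(M(q)\), and all names are our assumptions, not the authors' implementation.

```python
import numpy as np

def M(q):
    """Quaternion kinematic matrix (scalar-first), q_dot = M(q) @ omega."""
    q0, q1, q2, q3 = q
    return 0.5*np.array([[-q1, -q2, -q3],
                         [ q0, -q3,  q2],
                         [ q3,  q0, -q1],
                         [-q2,  q1,  q0]])

def block_diag(*blocks):
    """Small numpy-only block-diagonal helper (all blocks square)."""
    size = sum(b.shape[0] for b in blocks)
    out, i = np.zeros((size, size)), 0
    for b in blocks:
        n = b.shape[0]
        out[i:i+n, i:i+n] = b
        i += n
    return out

def R_t(q1, R_Theta_bar):
    """Eq. (34): beta_R is the mean of the extreme singular values of the worst-case R_Theta."""
    s = np.linalg.svd(R_Theta_bar, compute_uv=False)
    beta_R = 0.5*(s.min() + s.max())
    return beta_R**2*np.outer(q1, q1) + M(q1) @ R_Theta_bar @ M(q1).T

def Q_matrix(E_wb, E_wnu, E_zg, eps=1e-9):
    """Q = blkdiag(E[w_b w_b^T], eps*I, E[w_nu w_nu^T] + E[z_g z_g^T], beta_Q^2);
    the eps-block is 4x4 here to match the column blocks of g(x) (an assumption)."""
    core = E_wnu + E_zg
    s = np.linalg.svd(core, compute_uv=False)
    beta_Q = 0.5*(s.min() + s.max())
    return block_diag(E_wb, eps*np.eye(4), core, np.array([[beta_Q**2]]))
```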
### Field test
The field tests have been executed on a Kawasaki Ninja 400 driven by a professional rider at the Autodromo Nazionale dell'Umbria "Mario Umberto Borzacchini", see Figure 3. The motorbike was equipped with a sensor suite embedding a 9-DOF IMU and a GNSS receiver, nominally mounted with the motorbike axes aligned with the sensor suite symmetry axes, see Figure 3(b). The sensor suite is installed under the saddle, at the location the arrow displayed in Figure 3(a) is pointing to.
The GNSS data associated with the path illustrated in Figure 3 are reported in Figure 4(a), while Figure 4(b) magnifies the second lap (used for the assessment of the realism of the synthetic data produced with the simulator, see Section 4.4). As for the GNSS course angle, Figures 4(a) and 4(b) show the progressive course made incremental through the lap counter (35). For the presented test, the GNSS vertical speed was not available and the algorithm was evaluated with \(\gamma\equiv 0\).
The application of the GNSS reconstructor, described in Section 3.2, to the course angle of Figure 4(a) leads to the estimation shown in Figure 6. To appreciate the performance of the reconstructor (16) and (19), we compared the estimated \(\hat{\chi}\) with a batch crude numerical computation. The latter, based on the central finite-difference method of order 8, provides the estimation \(\hat{\chi}_{\mathrm{c}}\), see Figure 6.
The data produced by the accelerometers and gyroscopes are shown in Figures 7 and 8, both calibrated to compensate for the installation misalignment.
These data, together with \(\hat{\xi}_{e}\) provided by (16), (19), and (23), are exploited through (5) to provide \(\phi_{\mathrm{av}}\). The latter is then compared with the estimation of \(\phi\) elaborated by a proprietary algorithm, also using a 3-axis magnetometer, and considered as a reliable reference, see Figure 9. It is worth noting that the assumption of
Figure 4: (a) The Kawasaki Ninja 400 used for the field test. The yellow arrow points to the location at which the sensor suite is installed. (b) The sensor suite the motorbike was equipped with. It embeds a 9-DOF IMU and a GNSS receiver.
Figure 5: (a) GNSS speed profile of the field test. The experiment lasted 9 laps. The \(2^{\mathrm{nd}}\) lap is boxed and magnified in Figure 4(b). (b) Magnification of the \(2^{\mathrm{nd}}\) lap speed profile. A comparison with Figure 11 highlights the realism of the synthetic data produced through the simulator.
Figure 6: (a) GNSS course angle reconstruction. Comparison of the estimations obtained via a batch crude numerical computation (continuous line) and the estimate given by the reconstructor (dashed line). (b) Magnification of the \(2^{\mathrm{nd}}\) lap. The reconstructor (dashed line) well tracks the reference course angle derivative obtained via crude numerical batch computations (continuous line).
coordinated manoeuvre, on which the observer proposed in this paper is based, accurately models the actual dynamics of the motorbike+biker system. Indeed, \(\phi_{\rm av}\) is highly coherent with the reference \(\phi\) even in those circumstances in which the rider made the bike skid. For the test described in this section, we computed the mean and the standard deviation of the roll estimation error as key performance indices, with \(\mathrm{E}[\hat{\phi}(t)-\phi(t)]\approx 1.2\) deg and \(\sqrt{E[((\hat{\phi}(t)-\phi(t))-\mathrm{E}[\hat{\phi}(t)-\phi(t)])^{2}]} \approx 4.6\) deg.
### Simulations
The aim of this section is to compare, quantitatively, the performance of our algorithm with that of the most relevant algorithms found in the literature. In particular, since one of the fundamental elements of our work is the introduction of \(\phi_{\rm av}\) through the definition of the coordinated manoeuvres (see Section 3.1), we evaluated the algorithms proposed in [4, 21, 5] because, under the assumption of a flat-coordinated turn, they provide possible alternatives to \(\phi_{\rm av}\); a compact numerical transcription of these expressions is sketched after the list below. More in detail, let \(v_{x}^{B}\in\mathbb{R}\) be the projection of the inertial speed on the motorcycle x-axis. Then, we investigated
* [[4], Eq. (9)], which adopts the z-axis gyroscope measurement to compute \[\phi_{1}=-\tan^{-1}\left(\frac{v_{x}^{B}y_{gz}}{\mathsf{g}}\right)\]
* [[4], Eq. (16)], which exploits the y-axis gyroscope to elaborate \[\phi_{2}= -\mathtt{sign}(y_{gz})\cos^{-1}\left(\sqrt{1+\Phi^{2}}-\Phi\right)\] \[\Phi= \frac{v_{x}^{B}|y_{gy}|}{2\mathsf{g}}\]
* [[4], Eqs. (21), (22)], using both the y- and z-axis gyroscopes to determine \[\phi_{3}=-\tan^{-1}\left(\frac{v_{x}^{B}}{\mathsf{g}}\mathtt{sign}(y_{gz})\sqrt{y_{gy}^{2}+y_{gz}^{2}}\right),\]
* [[21], Eq. (7)], which, in implicit form, represents a heuristic improvement of \(\phi_{1}\) \[\phi_{4}\in\mathbb{R}\,:\,\tan(0.9\phi_{4})\cos(\phi_{4})=-\left(\frac{v_{x}^{B}y_{gz}}{\mathsf{g}}\right).\]
* [[5], Eqs. (54), (56)-(58)], which introduces a heuristic weight function to mix \(\phi_{1}\) and a proxy of \(\phi_{3}\) as \[\phi_{5}= W\phi_{1}-(1-W)\mathtt{sign}(y_{gz})\sin^{-1}\left(\frac{y_{gy}}{\sqrt{y_{gy}^{2}+y_{gz}^{2}}}\right)\] \[W= \,\exp(-25\phi_{1}^{2}).\]
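The benchmark formulas listed above can be transcribed as follows; \(y_{gy}\), \(y_{gz}\) denote the y- and z-axis gyroscope outputs and \(v_{x}^{B}\) the forward speed, the implicit \(\phi_{4}\) is solved by bisection on a bracket assumed to contain the solution in the nominal roll range, and the sketch is illustrative only.

```python
import numpy as np

G = 9.81  # gravity magnitude [m/s^2]

def phi_1(vxb, ygz):                       # [4], Eq. (9)
    return -np.arctan(vxb*ygz/G)

def phi_2(vxb, ygy, ygz):                  # [4], Eq. (16)
    Phi = vxb*abs(ygy)/(2*G)
    return -np.sign(ygz)*np.arccos(np.sqrt(1 + Phi**2) - Phi)

def phi_3(vxb, ygy, ygz):                  # [4], Eqs. (21)-(22)
    return -np.arctan(vxb/G*np.sign(ygz)*np.sqrt(ygy**2 + ygz**2))

def phi_4(vxb, ygz, lo=-1.3, hi=1.3):      # [21], Eq. (7), solved by bisection
    target = -vxb*ygz/G
    f = lambda p: np.tan(0.9*p)*np.cos(p) - target
    # assumes f changes sign on [lo, hi], which holds in the nominal roll range
    for _ in range(60):
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

def phi_5(vxb, ygy, ygz):                  # [5], Eqs. (54), (56)-(58)
    p1 = phi_1(vxb, ygz)
    W = np.exp(-25*p1**2)
    return W*p1 - (1 - W)*np.sign(ygz)*np.arcsin(ygy/np.sqrt(ygy**2 + ygz**2))
```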
To create a synthetic but realistic dataset, we modelled a track lap. More in detail, we added to the Imola circuit path (see Figure 10) a time law whose generation, carried out by exploiting [32], takes into account lateral and longitudinal maximum tire forces, wheelie conditions, engine power and efficiency, circuit slope, and aerodynamic drag.
Let \(L>0\) be the circuit length; then this procedure leads to the definition of the curvilinear speed and acceleration, namely \(ds/dt(\cdot),\,d^{2}s/dt^{2}(\cdot)\,:\,[0,\,L]\to\mathbb{R}\) (see Figure 11), and the heading and the slope \(\chi(\cdot),\gamma(\cdot)\,:\,[0,\,L]\to\mathbb{R}\). From these quantities, \(v(\cdot),a(\cdot),\,\dot{\chi}(\cdot),\,\dot{\gamma}(\cdot)\) are computed by standard geometric arguments.
As for the generation of the Euler angles and the body rotational speeds, in agreement with [[5], Eq. (32)], we assumed a linear law \(\Delta\phi=k\phi\) with \(k>0\) (tuned to keep
Figure 8: (a) Gyroscope outputs. The \(2^{\rm nd}\) lap is boxed and magnified in Figure 6(b). (b) Magnification of the \(2^{\rm nd}\) lap.
Figure 7: (a) Accelerometer outputs. The \(2^{\rm nd}\) lap is boxed and magnified in Figure 6(b). (b) Magnification of the \(2^{\rm nd}\) lap.
the pilot gravity centre above the road surface) and we computed \(\phi\,:\,[0,\,L]\rightarrow\mathbb{R}\) according to a coordinated turn, see Section 3.1. The motorbike lean angle obtained with this procedure is shown in Figure 12.
The body angular speeds \(\omega\,:\,[0,\,L]\rightarrow\mathbb{R}^{3}\) were obtained from \(\phi(\cdot),\gamma(\cdot),\chi(\cdot)\) by using standard derivative arguments and under the assumption of coordinated manoeuvres. To allow the reader to replicate the simulations detailed in this section, the parameters modelling the sensor suite are listed in Table 1. More in detail, in accordance with [33], the model (27) is completed by taking \(b_{\#0}=\texttt{col}(0,\sim\mathcal{N}(0,\sigma^{2}))\), and
\[w_{\#}=\left[\begin{array}{c}1\\ 1\\ 1\end{array}\right]\otimes\left[\begin{array}{c}\sim\mathcal{N}\left(0, \frac{2\log(2)}{\pi 0.4365^{2}}\frac{B^{2}}{\tau_{\#}^{2}}\right)\\ \sim\mathcal{N}\left(0,K^{2}\right)\\ \sim\mathcal{N}\left(0,N^{2}\right)\end{array}\right],\,\#\in\{a,g,s\}.\]
With reference to the roll angle profile of Figure 12, Figures 13(a)-(e) report the errors \(\tilde{\phi}_{i}:=\phi_{i}-\phi\), for \(i=1,\ldots,5\), while Figure 13(f) shows \(\tilde{\phi}_{\mathrm{av}}:=\phi_{\mathrm{av}}-\phi\). As can be seen, the estimation errors associated with the
\begin{table}
\begin{tabular}{|c|c||c|c|c|} \hline Symbol & Unit [Gyros, Acc.s, GNSS] & Gyros & Acc.s & GNSS \\ \hline \hline \(B\) & [rad/s, m/s\({}^{2}\), m/s] & 3.3e-3 & 2.0e-4 & 0 \\ \hline \(\tau_{\#}\) & s & 20 & 30 & 0 \\ \hline \(N\) & [rad/s, m/s\({}^{2}\), m/s] & 8.5e-3 & 3.3e-3 & 1.1e-2 (\(x,y\)) \\ \hline \(K\) & [rad/s, m/s\({}^{2}\), m/s\({}^{4}\)] & 0 & 3.3e-3 & 0 \\ \hline \(\sigma\) & [rad/s, m/s\({}^{2}\), m/s\({}^{2}\)] & 5.0e-2 & 1.0e-1 & 0 \\ \hline \(T_{s}\) & s & 1.0e-2 & 1.0e-2 & 1.0e-1 \\ \hline \end{tabular}
\end{table}
Table 1: List of parameters used for sensor noise generation.
Figure 11: The minimum laptime speed is generated accordingly to the road slope, the aerodynamic drag, the engine performance, and the tire maximum cohesion coefficient.
Figure 12: Motorbike lean angle generated accordingly with the simulation procedure described in Section 4.4.
Figure 9: (a) Roll angle estimation. (b) Magnification of the 2nd lap. The estimation \(\phi_{\mathrm{av}}\) generated accordingly to the assumption of coordinated macouvre is reported in grey. The dotted line represents the estimation \(\hat{\phi}\) provided by the observer. The continuous line denotes the reference angle \(\phi\). (c) Roll angle estimation error. (d) Magnification of the 2nd lap. The estimation is satisfactory even in those few moments (isolated picks) in which the assumption of coordinated manoeuvre is violated by a drifting condition.
algorithms proposed in the cited literature exhibit low-frequency error components which seem to be absent in \(\tilde{\phi}_{\text{av}}\). To confirm this visual intuition, Figure 14 displays the single-sided spectra of \(\tilde{\phi}_{i}\), for \(i=1,\ldots,5\), and \(\tilde{\phi}_{\text{av}}\).
## 5 Conclusions
The lean angle estimator proposed in this work has been formulated by implementing two key ideas: a two-stage observer structure and the coordinated manoeuvre assumption. The first stage elaborates GNSS and accelerometer data to estimate an attitude quaternion according to the assumption of coordinated manoeuvre. Then, the second stage, consisting of an EKF, integrates gyroscope data to improve the estimation accuracy during fast rolling manoeuvres. Compared to other lean angle estimation schemes, the coordinated manoeuvre assumption, particularly suitable for high-performance motorbikes, assures good estimation accuracy, as the plots in the experimental section show. Furthermore, theoretical proofs assure that the observer estimation error is uniformly bounded over all the admissible trajectories.
|
2301.09406
|
The Reasonable Effectiveness of Diverse Evaluation Data
|
In this paper, we present findings from a semi-experimental exploration of
rater diversity and its influence on safety annotations of conversations
generated by humans talking to a generative AI-chat bot. We find significant
differences in judgments produced by raters from different geographic regions
and annotation platforms, and correlate these perspectives with demographic
sub-groups. Our work helps define best practices in model development --
specifically human evaluation of generative models -- on the backdrop of
growing work on sociotechnical AI evaluations.
|
Lora Aroyo, Mark Diaz, Christopher Homan, Vinodkumar Prabhakaran, Alex Taylor, Ding Wang
|
2023-01-23T13:03:58Z
|
http://arxiv.org/abs/2301.09406v1
|
# The Reasonable Effectiveness of Diverse Evaluation Data
###### Abstract
In this paper, we present findings from a semi-experimental exploration of rater diversity and its influence on safety annotations of conversations generated by humans talking to a generative AI-chat bot. We find significant differences in judgments produced by raters from different geographic regions and annotation platforms, and correlate these perspectives with demographic sub-groups. Our work helps define best practices in model development, specifically human evaluation of generative models, on the backdrop of growing work on sociotechnical AI evaluations.
## 1 Introduction
In their 2009 paper "The Unreasonable Effectiveness of Data" [4], Alon Halevy, Peter Norvig, and Fernando Pereira urge ML researchers to "follow the data and see where it leads." Since then, we have amassed data at an unprecedented scale for the purpose of machine learning. Yet, data quality--including the quality of human-collected data--has been left behind.
While there is a substantial body of work focusing on the reliability of human raters when performing evaluations, there are few [2; 5; 3; 6], if any, studies investigating how the characteristics of rater pools impact ratings. That is, we know little about annotators' individual characteristics (such as nationality, gender, education, race) and how they might influence the way they label data. This matters because, in seeking to build fair and responsible AI systems, we should anticipate potential biases that may emerge as a result of differences across user populations, and evaluation data should represent a variety of populations in order to better reflect viewpoints among real-world stakeholders and build diversity-aware ground-truth datasets [1].
Responding to the _Data-Centric AI_ call to study impacts of data on AI systems [7], we present findings from a semi-experimental exploration of rater diversity and its influence on safety annotations of chat bot conversations. We report results from a large-scale rater diversity study performed on a sample of 990 conversations generated by humans conversing with a generative AI-chatbot. We collected safety labels from a pool of 96 raters (recruited from two rating platforms covering a range of socio-economic subgroups) using 24 safety questions. Rather than requesting the typical number of ratings (a single rating or three-to-five ratings per conversation), we collected 40 ratings per safety question and repeated the experiment after 4 months. We analyzed:
* the variance, disagreement and reliability within and across different demographic slices.
* the majority and minority perspectives on the data items.
* the consistency of raters across replications of the experiment.
The contribution and implications of our research are threefold. First, we propose a methodological approach for assessing the influences of rater diversity on data labels. Second, we demonstrate that raters' backgrounds do have an impact on the labels they provide. Third, we illustrate that there are discrepancies in rater consistency across two replications. As such, we contribute to two topics in this workshop: (1) repeatability of human evaluations and (2) methods for assessing the quality and the reliability of human evaluations.
## 2 Experimental Design for Diverse Human Evaluation
The overall research question guiding our study was: Do rater demographics help us improve reliability of annotations and ultimately provide guidance for collecting representative data for AI development and evaluation?
Data:The input data for this study was sampled from an 8K-conversation dataset (comprising 48K turns in total) generated by humans interacting with a generative AI-chatbot. In our study, the sample of 990 conversations was annotated for 24 safety characteristics (plus one general question about conversation understandability) by 40 unique raters per conversation, per safety characteristic. This annotation experiment was repeated with a different pool of 96 raters (including 19 overlapping raters in both replications) 4 months later. The total number of annotations gathered is close to 2 million (990K in each repetition).
Task:The safety evaluation task used a UI designed for the safety characteristics as defined in Appendix. Each rater was asked to rate the final chatbot response in each conversation as _Unsafe_, _Safe_ or _Unsure_. These answers were given for each of the 24 safety questions, which were organized in three groups (Q2-Q4 below), plus one question (Q1) checking the overall comprehensibility of the conversation.
* Q1: Evaluate the whole conversation
* Q2: Evaluate the last bot utterance for "Harmful Content"
* Q3: Evaluate the last bot utterance for "Content with Unfair Bias"
* Q4: Evaluate the last bot utterance for "Misinformation and Political References"
Data Collection:We collected the ratings in two phases with an interval of four months. In both phases, we recruited 96 unique raters from two rater pools. All raters performed the task independently and used the same annotation template. All raters were asked to complete an optional demographic survey (e.g. gender, ethnicity, education level, age group, and native language). All questions in the demographic survey gave raters the option to select "Prefer not to answer". We also collected data about the average annotation time per conversation and the total time each rater spent annotating.
Raters:In Phase 1, the breakdown of raters was: 71 in Pool 1 (42 India, 29 US), 25 in Pool 2 (12 India, 13 US). For Phase 2, the breakdown was: 72 in Pool 1 (40 India, 32 US), 24 in Pool 2 (12 India, 12 US). 19 of the raters participated in both phases (5 from Pool 1, 14 from Pool 2; the 5 raters from Pool 1 are all in US, 6 out of the 14 from Pool 2 are in India and 8 in US; 9 identify as female and 10 as male). In this paper, we report results from Phase 2; however, we compare the two phases to measure consistency for the raters who participated in both.
## 3 Results
We present four key high-level observations from this study that contribute to our understanding of reliability of human evaluations, and its relation to diversity among raters.
Unreliability of gold labels:The left side of Fig. 1 shows the difference between the number of raters saying _Unsafe_ vs. _Safe_ for each of the 990 conversations in our data. For around a quarter of the conversations, the number of _Unsafe_ and _Safe_ responses per conversation are quite similar, i.e.,
between 15-25 votes on either side. If only 3 to 5 annotators were to rate each item, as is common practice among researchers and practitioners building annotated datasets, this level of observed disagreement in these conversations may easily be lost. This suggests that majority-based (or even 'unanimous') gold labels may be unreliable for a significant portion of the data, if the replication per item is low. This is a critical issue, since many evaluation tasks, even related to sensitive topics such as online safety, use such majority-based gold annotations to measure rater and model performance.
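As a rough illustration of this margin analysis, the sketch below counts _Unsafe_ versus _Safe_ votes per conversation and flags items where the two sides are nearly balanced. The data layout (a long-format table with columns `conversation_id`, `rater_id`, `question_id`, `label`) and the contested-margin threshold are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch (hypothetical column names): per-conversation vote margin
# between "UNSAFE" and "SAFE" for one safety question, flagging near-ties.
import pandas as pd

def vote_margins(ratings: pd.DataFrame, question_id: str = "Q1") -> pd.DataFrame:
    sub = ratings[ratings["question_id"] == question_id]
    counts = (sub.groupby(["conversation_id", "label"])
                 .size()
                 .unstack(fill_value=0))
    counts["margin"] = counts.get("UNSAFE", 0) - counts.get("SAFE", 0)
    # With 40 raters per item, 15-25 votes on either side corresponds to |margin| <= 10;
    # with only 3-5 raters per item this level of disagreement would likely be invisible.
    counts["contested"] = counts["margin"].abs() <= 10
    return counts.sort_values("margin")
```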
Inadequate intra-rater consistency:We now test whether the 19 raters who were present in both phases were consistent in their ratings. For the subset of items each of them annotated in both phases, we measured the number of times they disagreed with themselves (i.e., between phases) at least once -- considering all 25 questions separately. The right side of Fig. 1 shows the histogram of disagreements for these 19 raters. Eleven of these raters disagreed with themselves at least ten times, and up to 3,199 times. This is another concerning finding that suggests there are extraneous factors that may significantly influence the consistency in raters' responses across different sittings at different points in time.
Disparate within-group coherence across subgroups:Despite the issues with consistency and reliability, we observed significant patterns in rater behaviour within and across the various subgroups we considered. For this analysis, we modeled each annotator's response to a conversation as a 72-dimensional _response vector_ that captures the one-hot encoding of the {UNSAFE, SAFE, UNSURE} answers for each of the 24 safety questions (Q2-Q4). This allows us to calculate the pair-wise distance between the response vectors of two raters as a metric for how strongly they disagreed with one another on any particular conversation prompt.
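A minimal sketch of this encoding and of the pairwise distance computation is given below; the label set and function names are illustrative, and the study's own tooling is not reproduced here.

```python
# Illustrative sketch: one-hot response vectors over {UNSAFE, SAFE, UNSURE}
# for the 24 safety questions (72 dimensions), and the average pairwise
# Hamming distance among the raters of a single conversation.
import numpy as np

OPTIONS = ["UNSAFE", "SAFE", "UNSURE"]

def response_vector(answers):
    """answers: one label per safety question (24 labels -> 72-dim binary vector)."""
    vec = np.zeros(len(answers) * len(OPTIONS), dtype=int)
    for q, label in enumerate(answers):
        vec[q * len(OPTIONS) + OPTIONS.index(label)] = 1
    return vec

def mean_pairwise_hamming(vectors):
    """Average Hamming distance over all unordered pairs of response vectors
    (e.g. restricted to the raters of one demographic subgroup)."""
    v = np.asarray(vectors)
    dists = [int(np.sum(v[i] != v[j]))
             for i in range(len(v)) for j in range(i + 1, len(v))]
    return float(np.mean(dists)) if dists else float("nan")
```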
Fig. 2 shows the average _hamming distance_ between all pairs of response vectors for each conversation, averaged across raters within different subgroups of raters. We observe that the average within-group rating distances vary substantially across groups.
Lower hamming distance between a subgroup and _All raters_ means that the subgroup is consistent within itself and different from all raters. The results show disparities in agreement along three demographic axes: between US and Indian raters, between Pool 1 and Pool 2, and between female and male raters. In particular, US male raters in Pool 1 behaved more similarly among themselves than any of the other groups studied.
Cross-group differences between subgroups:The above analysis provides only a partial picture, one that captures within-group distances but says nothing about whether the rating behaviors of a certain group of raters is more likely to be similar to others in the same subgroup than to those outside the subgroup. For instance, low within-group distance suggests that a particular subgroup has a coherent perspective on the task. If two different subgroups along a diversity axis (say, gender) exhibit such high within-group coherence, but also have low cross-group distance, it suggests that this particular diversity axis may not have any substantial influence in the context of this task. However,
Figure 1: **Left side: Conversations are arranged horizontally, ranked by the difference between _Unsafe_ and _Safe_ votes. The y-axis shows this difference. The red square points to roughly a quarter of the conversations with nearly equal numbers of _Unsafe_ and _Safe_ votes. Right side: Histogram of the number of times each of the 19 raters who rated items in both phases disagreed with themselves.**
if two groups have low within-group distance but also high cross-group distance, it suggests that the diversity axis is a substantial differentiator for the task.
Fig. 3 shows the within-group and cross-group distance along the locale (IN vs. US), pool (Pool 1 vs. Pool 2), and gender axes. The results show that US raters produced, on average, significantly more similar ratings with other US raters than with IN raters. In the case of gender, female raters produced ratings that are very similar to each other, and significantly dissimilar to the ratings produced by male raters. Moreover, there was not much variance in the average distance across different female rater pairs, whereas male raters exhibited high variance across pairs in how much they disagreed with one another. While we observe some difference between Pool 1 and Pool 2, those differences are not statistically significant.
## 4 Reflections
In this paper we are excited to share just a few of the high-level results from the presented work. These results offer a clear indication that raters' demographics and the pool from which raters have been recruited have an impact on labelling tasks. Because the analysis we have done thus far is relatively coarse-grained, we believe that slicing further into the ethnicity, native languages and age groups of the raters is likely to reveal further insights and provide additional evidence of systematic differences between different groupings of raters. We will be conducting this detailed analysis with the ethnicity, age group and native language data that accompanies our data corpus and reporting results in upcoming publications.
As we propose a methodology for assessing the influences of rater diversity on data labels, our future work will also focus on determining the optimal number of raters per conversation and to what extent the impacts of rater diversity can be captured in smaller numbers of raters. This will be done in order to improve dataset generation methods that aim to address rater diversity.
Figure 3: Average pairwise hamming distance across different locale, platform, gender slices. The number of pairs within each group is specified in parenthesis.
Figure 2: The hamming distance metric on gender, locale and pool slices
Finally, we recognize more work is needed to help distinguish _good_ from _bad_ disagreement. In our work, this could be done by correlating the temporal data with other behavioral traits in raters across the two replications. Ultimately, this would extend our methodology to include an approach for studying outliers and different annotation perspectives.
|
2301.11249
|
Forward electromagnetic induction modelling in a multilayered
half-space: An open-source software tool
|
Electromagnetic induction (EMI) techniques are widely used in geophysical
surveying. Their success is mainly due to their easy and fast data acquisition,
but the effectiveness of data inversion is strongly influenced by the quality
of sensed data, resulting from suiting the device configuration to the physical
features of the survey site. Forward modelling is an essential tool to optimize
this aspect and design a successful surveying campaign. In this paper, a new
software tool for forward EMI modelling is introduced. It extends and
complements an existing open-source package for EMI data inversion, and
includes an interactive graphical user interface. Its use is explained by a
theoretical introduction and demonstrated through a simulated case study. The
nonlinear data inversion issue is briefly discussed and the inversion module of
the package is extended by a new regularized minimal-norm algorithm.
|
Gian Piero Deidda, Patricia Díaz de Alba, Federica Pes, Giuseppe Rodriguez
|
2023-01-26T17:32:48Z
|
http://arxiv.org/abs/2301.11249v2
|
Forward electromagnetic induction modelling in a multilayered half-space: An open-source software tool
###### Abstract
Electromagnetic induction (EMI) techniques are widely used in geophysical surveying. Their success is mainly due to their easy and fast data acquisition, but the effectiveness of data inversion is strongly influenced by the quality of sensed data, resulting from suiting the device configuration to the physical features of the survey site. Forward modelling is an essential tool to optimize this aspect and design a successful surveying campaign. In this paper, a new software tool for forward EMI modelling is introduced. It extends and complements an existing open-source package for EMI data inversion, and includes an interactive graphical user interface. Its use is motivated by a theoretical introduction and demonstrated through a simulated case study. The nonlinear data inversion issue is briefly discussed and the inversion module of the package is extended by a new regularized minimal-norm algorithm.
Frequency domain electromagnetic method - FDEM; Electromagnetic induction - EMI; Nonlinear forward modelling; Nonlinear inversion; Sensitivity function; MATLAB Toolbox; Graphical User Interface; Near surface geophysics; Electric conductivity; Magnetic permeability.
## 1 Introduction
Electromagnetic induction (EMI) methods are proximal and remote sensing methods, among the most popular in near surface geophysics investigation. They have been successfully used, often in combination with other geophysical techniques, in many areas spanning from environmental and hydro-geophysical investigations [1; 2; 3; 4] to the characterization and monitoring of dismissed municipal and industrial solid waste landfills [5; 6; 7; 8], from the quantitative evaluation of soil salinity and its spatial distribution [9; 10; 11; 12; 13] to soil water content monitoring [14; 15; 16; 17; 18], from sedimentology and soil studies [19; 20; 21; 22] to archaeology [23; 24; 25; 26; 27], just to name a few.
EMI methods have been used primarily with the aim of estimating apparent electrical conductivity variability, often presented as maps, or to recover subsurface distributions of electrical conductivity, magnetic permeability [28; 29; 30], and, in some cases, the dielectric permittivity [31; 32], by the inversion of the EMI responses. To these ends, it is mandatory that the physical characteristics at the survey site are such that it is possible to establish a measurable electromagnetic induction phenomenon. Since every area has its own character, suitable or unsuitable to be investigated with a certain method, what works in some cases will not work everywhere. As Knapp and Steeples remark in [33], there are some areas where good data cannot be obtained, but there are also areas with ideal conditions to be successfully investigated; for the latter it might be stated that there are areas where bad data cannot be obtained. However, the same authors warn about the risk that, even in areas of good data, it is always possible to obtain bad or no data when data acquisition parameters are not effectively designed or are not designed at all. Therefore, the results of a geophysical survey, which for the present work refers to an EMI survey, primarily rely on field data quality which, in turn, strongly depends on the quality (accuracy, resolution, and sensitivity) and appropriateness of the measuring device, as well as on the way it is used. Then, an accurate interpretation of the results, expressed as maps of apparent conductivity (and/or relative magnetic permeability) or sections of true conductivity (and/or magnetic permeability) estimated by inversion, will allow the successful achievement of the survey's goals. Forward modelling can take all these aspects into account.
EMI forward modelling transforms a geological subsurface model, with its own geometry and characterized by a set of electromagnetic physical properties, into an instrumental response, which also depends on the characteristics of the measurement device (the inter-coil distance; the transmitter-receiver coil configuration; the frequency of the primary magnetic field) and on the relative position with respect to the ground it assumes during measurements (the height of the coils above the ground surface). Such modelling is essentially done after data acquisition not only to infer the properties of the ground model by inversion, as it is an indispensable part of the inverse problem (it links the device responses, i.e., the data space, with the subsurface electromagnetic properties, i.e., the model space), but also to facilitate the interpretation of the results, making a correlation between the observed electromagnetic responses and the expected geological models. Forward modelling should also be done before data acquisition to aid the planning of an acquisition campaign. Paraphrasing Knapp and Steeples [33], in survey and instrument design we need to start with an objective in mind, which means knowing what we wish to see. Then, we should answer the questions: what do we need to see it, and how can we get what we need to see it? That is, what characteristics (amplitude and phase) should the instrumental response have? How does it vary according to the frequency, conductivity and magnetic permeability of the ground, the distance between the coils, and the measurement dimension? What is the required depth of investigation? What kind of device should be used? A multi-coil instrument with different coil configurations or a multi-frequency instrument? What sensitivity should it have? Being able to run nonlinear forward modelling before data acquisition would allow one to address all these issues. In addition, forward modelling is also helpful in EMI mapping for device calibration and to free measured data (apparent electric conductivity) from the "bias" introduced by the nonlinear device response, the height of the instrument, and the topography [8].
Electromagnetic induction phenomena are 3D phenomena that require full 3D forward modelling and inversion [34]. However, the currently available measuring devices are not yet designed to explore and measure in 3D the secondary magnetic field produced by targets. This may be one of the reasons why 1D EMI modelling is still the most widely used, although the literature gives examples of 3D EMI modelling and inversions [35; 36; 37].
In this work, a new Matlab-based open-source EMI 1D modelling and inversion software, FDEMtools3, is introduced. FDEMtools3 comes with two graphical user interfaces (GUIs), FDEMforward and FDEMinversion. The latter controls an updated version of the inversion software package described in Deidda et al. (2020), while the former drives a new sub-package devoted to EMI 1D nonlinear forward modelling, which is the focus of the present paper. Such a forward modelling package has been built with the aim of providing a comprehensive tool helping to address all issues related to survey and instrument design, but also useful for an effective data inversion and a reliable data interpretation. FDEMforward is a user-friendly GUI, very well organized, and easy to access even for novice users. In addition, to make it comprehensible, as well as making the EMI method understandable to a non-specialist audience, this paper recalls the basics of electromagnetic induction and describes some mathematical aspects of the 1D forward modelling along with some key concepts of EMI methods, such as coil configurations, skin depth, induction number, sensitivity function, and depth of investigation. In this way, the present paper, together with the accompanying Matlab tool, may be viewed as a mini tutorial, ideal for teaching and training purposes. Finally, it is worth noting that the package can also be useful for advanced users since, being an open-source software, the code can be freely modified and new functionalities can be added to meet their needs.
The structure of the present paper is as follows. Section 2 is an overview of the basic EMI theory, which Appendices A and B complement by briefly reviewing Maxwell's equations and describing, step by step, the electromagnetic mutual induction processes involved. Section 3 presents the FDEMtools3 package as well as its GUIs, describing the installation process and how to use the software, by means of some numerical examples shown in Section 4. Finally, Section 5 summarizes the content of the paper.
## 2 EMI theory overview
### Basics of electromagnetic induction
Electromagnetic induction phenomena, mostly governed by Faraday's and Ampere-Maxwell's laws, underpin the working principle of geophysical electromagnetic induction methods (Figure 1). In their simplest form, they involve the mutual induction among three coils as shown in Figure 1b (see Appendix B for a step-by-step explanation of this mutual induction process with some mathematical details). Two of them, named transmitter (Tx) and receiver (Rx) and usually laid out on the ground or in the air above it, are an integral part
of the sensor devices; they are real coils of metal wire, commonly considered as pure inductive circuits due to the negligible electrical resistance of the coil-winding. The third coil is an imaginary coil representing a sub-surface conductive magnetic body; it is assumed as an \(RL\) circuit to consider both resistive and inductive properties of the body.
An alternating current (\(I_T\)) passed through the loop coil Tx (Figure 1a) generates an alternating magnetic field (the primary magnetic field, \(H_P\)) around the loop, in-phase with the current and with the same rate of change (Figure 1c), according to Ampere-Maxwell's law. The primary magnetic field, spreading out below the ground surface, induces conduction currents and magnetization (currents of magnetization) in the conductive magnetic body. In fact, the alternating magnetic field generates a changing magnetic flux through the conductive body, which, according to Faraday's law, induces a voltage (\(\mathcal{E}\)) in the body, driving the so-called eddy currents (\(I_{\mathrm{eddy}}\)) (Figure 1a). This voltage has the same frequency as the primary magnetic field, and it is phase shifted by \(90^{\circ}\) (Figures 2b and 2c) with respect to it, due to the time derivative of Faraday's law (Eq. 6). The electrical conductivity and magnetic permeability of the body might cause an additional phase shift \(\alpha\) (Figures 3c and 3d). Assuming the conductive body as an \(RL\) circuit (Figure 1b), its magnitude is (Eq. 10)
\[\alpha=\arctan\Big(\frac{\omega L}{R}\Big)=\arctan(\beta), \tag{1}\]
where \(\omega\) is the angular frequency of the current in the transmitter coil (the operating radial frequency), resistance \(R\) and inductance \(L\) account for the electrical resistivity, the magnetic permeability, and the geometry of the buried body, and \(\beta\) is the response parameter, sometimes called induction number. When the ground is perfectly conducting, the further phase shift amounts to \(90^{\circ}\), while in the case of a perfectly resistive ground, eddy currents do not suffer a further lag. Due to Ampere-Maxwell's law, the time-varying eddy currents have a magnetic field associated with them, the secondary magnetic field (\(H\!s\)) (Figure 1a), which lags the primary magnetic field by \(90^{\circ}\) plus \(\alpha\) degrees, as shown in Figures 3c and 3d. Finally, the receiver coil (Rx), placed at the ground surface or in the air above it, senses both primary and secondary magnetic fields, measuring the voltage they induce in the coil according to Faraday's law. Hence, EMI devices are designed to record a complex-value electromagnetic response (Eq. 19), usually separated into its real and imaginary parts (Eqs. 10 and 11), which are also called In-phase (\(P\)) and Quadrature (\(Q\)) components, respectively, according to the phase-shift of the secondary magnetic field with respect to the primary magnetic field (Figure 4c).
More generally, the electromagnetic response (i.e., \(P\) and \(Q\) components) is a complicated nonlinear function of many parameters, such as the electrical conductivity and the magnetic permeability of the ground, and the technical specifications of the measuring device (the transmitter-receiver coil separation, as well as their relative orientation and dimension; the height of the coils above the ground surface, and the frequency of oscillation of the primary magnetic field).
### 1D forward modelling
Figure 1: (a) Schematic full process of electromagnetic induction (modified from [38]); blue and red lines depict imaginary force lines of the primary and secondary magnetic fields, respectively. (b) Equivalent single-loop coupled \(LR\) circuits (modified from [38]); \(L_T\) and \(L_R\) are the self-inductance of coils Tx and Rx, respectively, while \(M_{ij}\), with \(i,j=T,R,S\), denotes the mutual inductance of any two of the coils.
#### 2.2.1 1D layered ground model and loop-loop configurations of measuring devices
The forward modelling used to calculate the nonlinear EM response of a layered half-space for dipole source excitation is well known [39; 40]. It is based on Maxwell's equations (Appendix A), suitably simplified thanks to the cylindrical symmetry of the problem, since the magnetic field sensed by the receiver coil is independent of the rotation of the instrument around the vertical axis. The soil is assumed to have a finite number of horizontal and homogeneous layers below the ground surface, \(z_{1}=0\) m (Figure 2). Each horizontal layer, of thickness \(d_{k}\), ranges from depth \(z_{k}\) to \(z_{k+1}\) (\(k=1,\ldots,n-1\)) and is characterized by an electrical conductivity \(\sigma_{k}\) and a magnetic permeability \(\mu_{k}\). The deepest layer, starting at \(z_{n}\), is a half-space with electrical conductivity \(\sigma_{n}\) and magnetic permeability \(\mu_{n}\). In the free air, above the ground surface, the conductivity is zero while the magnetic permeability is \(\mu_{0}=4\pi\cdot 10^{-7}\) H/m.
Modern EMI measuring devices, which are designed to collect multiple depth responses, can be grouped into multi-receiver coil systems and multi-frequency systems. The former ones are endowed with multiple receiver (Rx) coils spaced at fixed distances from the transmitter (Tx) coil (Figure 2(a)), which usually operates at a fixed frequency; the latter ones work using multiple frequencies simultaneously, usually with a fixed transmitter-receiver geometry. In addition, devices of both groups can operate at different heights above ground level, as illustrated in Figure 2(a). Finally, all devices have two or more coil configurations, the most used of which are shown in Figure 2(b). Table 1 lists the specifications of some commercially available devices.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Manufacturer** & **Device** & **Configuration** & **Frequency (kHz)** & **Coil spacing (m)** & **Measurement** \\ \hline
\multirow{5}{*}{GF Instruments} & CMD Mini-Explorer & HCP & 30 & 0.32, 0.71, 1.18 & Q (mS/m), P (ppt) \\
 & CMD Mini-Explorer & VCP & 30 & 0.32, 0.71, 1.18 & Q (mS/m), P (ppt) \\
 & CMD Explorer & HCP & 10 & 1.48, 2.82, 4.49 & Q (mS/m), P (ppt) \\
 & CMD DUO & HCP & 0.925 & 10, 20, 40 & Q (mS/m), P (ppt) \\
 & CMD DUO & VCP & 0.925 & 10, 20, 40 & Q (mS/m), P (ppt) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Specifications of some commercial measuring EMI devices.
Figure 2: (a) Schematic representation of the subsoil discretization and parametrization along with a typical measuring situation with multiple transmitter-receiver separations and/or at different heights above ground level. (b) Common loop-loop configurations used in EMI devices: horizontal coplanar position (HCP) or vertical magnetic dipole (V); vertical co-planar position (VCP) or horizontal magnetic dipole (H); loops perpendicular to each other (PERP) or magnetic dipoles perpendicular to each other.
#### 2.2.2 Skin depth and induction number
An alternating current flowing in a conductor tends to distribute itself in such a way that the current density is highest near the surface of the conductor and decreases with greater depths in it. Likewise, an alternating electromagnetic field tends to concentrate near the conductor surface. In electromagnetic theory, this phenomenon is known as skin effect, whose size is quantified by the skin depth, also called depth of penetration.
In terms of the complex wavenumber (Appendix A.1.), the skin depth in a homogeneous earth with electrical conductivity \(\sigma\) and magnetic permeability \(\mu\) is defined as
\[\delta=\sqrt{\frac{2}{\omega\mu\sigma}}, \tag{2}\]
which represents the exponential decay of the EM-field amplitude with depth. At depth \(\delta\) the EM-field amplitude has dropped by a factor of 1/e (e is Euler's number) with respect to its value at the surface. For an \(n\)-layer model, the depth of penetration of the EM fields measured at the surface is computed iteratively, through the recursive formula for the EM-response function \(C_{j}\) [41]
\[C_{j}=\frac{1}{k_{j}}\,\frac{k_{j}C_{j+1}+\tanh(k_{j}d_{j})}{1+k_{j}C_{j+1}\tanh(k_{j}d_{j})}, \tag{3}\]
where \(j=n-1,n-2,\ldots,1\), \(d_{j}\) is the thickness of the \(j\)th layer, \(k_{j}=\sqrt{i\omega\mu_{j}\sigma_{j}}\) is the complex wavenumber in the \(j\)th layer, and \(C_{n}=1/k_{n}\). Thus, the skin depth for a layered earth is
\[\delta=\sqrt{2}\,|C_{1}|. \tag{4}\]
This is the recursive algorithm implemented in FDEMtools3.
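For illustration, the recursion of Eqs. (2)-(4) can be transcribed almost literally; the sketch below is not taken from FDEMtools3 and the layer values are purely illustrative. It builds \(C_1\) bottom-up from the half-space value and returns the layered skin depth.

```python
# Illustrative sketch of Eqs. (2)-(4): layered-earth skin depth via the
# recursive EM-response function C_j, evaluated bottom-up from C_n = 1/k_n.
import numpy as np

def skin_depth_layered(omega, sigma, mu, d):
    """omega: angular frequency (rad/s); sigma, mu: per-layer conductivity (S/m)
    and permeability (H/m); d: layer thicknesses (m), one fewer than layers."""
    k = np.sqrt(1j * omega * np.asarray(mu) * np.asarray(sigma))  # complex wavenumbers k_j
    C = 1.0 / k[-1]                                               # bottom half-space
    for j in range(len(d) - 1, -1, -1):                           # j = n-1, ..., 1
        t = np.tanh(k[j] * d[j])
        C = (1.0 / k[j]) * (k[j] * C + t) / (1.0 + k[j] * C * t)
    return np.sqrt(2.0) * abs(C)                                  # Eq. (4)

mu0 = 4e-7 * np.pi
# Single half-space check: recovers Eq. (2), delta = sqrt(2/(omega*mu*sigma)).
print(skin_depth_layered(2 * np.pi * 9e3, [0.1], [mu0], []))
```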
The induction number is another key quantity in EMI theory and practice. According to its definition in Eq. (1), it depends not only on the electromagnetic properties of the conductive body but also on its geometry, even if it is difficult to evaluate except under particular conditions. Based on the work of Grant and West [42], Ward [43] showed that the induction number depends on a linear dimension of the whole system (conductive body plus coils). In particular, for a pair of coils over a homogeneous half-space, it was shown that the induction number can be defined in terms of the skin depth \(\delta\) as
\[\beta=\frac{l}{\delta}, \tag{5}\]
where \(l\) becomes the transmitting-receiving coil separation \(\rho\) or the height \(h\) of the coils above the ground, when \(\rho\gg h\) or \(\rho\ll h\), respectively.
#### 2.2.3 Nonlinear forward modelling
Consider an EMI measuring device operating at angular frequency \(\omega\), with coils separated by a distance \(\rho\) and located at height \(h\) above a 1D layered earth, like the one in Figure 2a. Assuming the configurations of
the transmitting and receiving coil pair shown in Figure 2b, it would record the electromagnetic responses, defined as the ratio of the secondary (\(H_S\)) to the primary (\(H_P\)) EM field, given by
\[\begin{cases}M^{\mathit{HCP}}(\mathbf{\sigma},\mathbf{\mu};h,\omega,\rho)=-\rho^{3}\int_{0}^{\infty}e^{-2\lambda h}\lambda^{2}R_{\omega,0}(\lambda)J_{0}(\rho\lambda)\,d\lambda\\ M^{\mathit{VCP}}(\mathbf{\sigma},\mathbf{\mu};h,\omega,\rho)=-\rho^{2}\int_{0}^{\infty}e^{-2\lambda h}\lambda R_{\omega,0}(\lambda)J_{1}(\rho\lambda)\,d\lambda\\ M^{\mathit{PERP}}(\mathbf{\sigma},\mathbf{\mu};h,\omega,\rho)=-\rho^{2}\int_{0}^{\infty}e^{-2\lambda h}\lambda^{2}R_{\omega,0}(\lambda)J_{1}(\rho\lambda)\,d\lambda\end{cases}, \tag{6}\]
where \(\mathbf{\sigma}=[\sigma_{1},\ldots,\sigma_{n}]^{T}\) and \(\mathbf{\mu}=[\mu_{1},\ldots,\mu_{n}]^{T}\) represent the conductivity and the magnetic permeability vectors, respectively, \(\lambda\) is an integration variable representing the depth below the ground, normalized by the inter-coil distance \(\rho\), \(J_{0}\) and \(J_{1}\) are Bessel functions of the first kind of zeroth and first orders, respectively, and \(R_{\omega,0}(\lambda)\) is the response kernel, also called reflection factor. The kernel \(R_{\omega,0}(\lambda)\), which is a complex-valued function of the parameters that describe the layered subsurface (conductivity, magnetic permeability, and layer thickness) besides the frequency \(\omega\) and \(\lambda\), can be written as
\[R_{\omega,0}(\lambda)=\frac{N_{0}(\lambda)-Y_{1}(\lambda)}{N_{0}(\lambda)+Y_{1}(\lambda)} \tag{7}\]
where \(N_{0}(\lambda)=\lambda/(i\omega\mu_{0})\) is the intrinsic admittance of the free space, \(Y_{1}(\lambda)\) is the surface admittance, \(i\) is the imaginary unit, \(\omega\) is the angular frequency, and \(\mu_{0}\) is the magnetic permeability of the free space. Setting \(Y_{n}(\lambda)=N_{n}(\lambda),\ Y_{1}(\lambda)\) can be obtained using Wait's recursive formula
\[Y_{k}=N_{k}\,\frac{Y_{k+1}+N_{k}\tanh(d_{k}u_{k})}{N_{k}+Y_{k+1}\tanh(d_{k}u_{k})},\quad k=n-1,\ldots,1, \tag{8}\]
where \(d_{k}\) is the thickness of the \(k\)th layer and
\[N_{k}=\frac{u_{k}(\lambda)}{i\omega\mu_{k}}, \tag{9}\]
is the intrinsic admittance of the \(k\)th layer, with
\[u_{k}(\lambda)=\sqrt{\lambda^{2}+i\sigma_{k}\mu_{k}\omega}. \tag{10}\]
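To make the structure of Eqs. (6)-(10) explicit, the sketch below assembles the reflection factor via the admittance recursion and evaluates the HCP response with a crude truncated trapezoidal quadrature. It is an illustrative transcription only (FDEMtools3 and similar codes use much more accurate integration schemes), and the model values are arbitrary.

```python
# Illustrative sketch of Eqs. (6)-(10): HCP response of a layered half-space.
import numpy as np
from scipy.special import j0

MU0 = 4e-7 * np.pi

def reflection_factor(lam, omega, sigma, mu, d):
    """Response kernel R_{omega,0}(lambda) from Wait's admittance recursion (8)-(10)."""
    sigma, mu = np.asarray(sigma), np.asarray(mu)
    u = np.sqrt(lam**2 + 1j * omega * mu[:, None] * sigma[:, None])   # Eq. (10)
    N = u / (1j * omega * mu[:, None])                                # Eq. (9)
    Y = N[-1]                                                         # Y_n = N_n
    for k in range(len(d) - 1, -1, -1):                               # Eq. (8)
        t = np.tanh(d[k] * u[k])
        Y = N[k] * (Y + N[k] * t) / (N[k] + Y * t)
    N0 = lam / (1j * omega * MU0)
    return (N0 - Y) / (N0 + Y)                                        # Eq. (7)

def m_hcp(sigma, mu, d, h, omega, rho, lam_max=20.0, npts=4000):
    """Crudely truncated version of the first integral in Eq. (6)."""
    lam = np.linspace(1e-6, lam_max / rho, npts)
    R = reflection_factor(lam, omega, sigma, mu, d)
    integrand = np.exp(-2 * lam * h) * lam**2 * R * j0(rho * lam)
    return -rho**3 * np.trapz(integrand, lam)

# Arbitrary three-layer example (two layers over a half-space), HCP pair at 0.9 m height.
print(m_hcp([0.1, 0.001, 0.01], [MU0, 1.01 * MU0, 1.005 * MU0],
            d=[1.5, 1.0], h=0.9, omega=2 * np.pi * 9e3, rho=1.0))
```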
#### 2.2.4 Linear approximation of the forward modelling
As a special case we recall here, for completeness, the linear case. We do this not only because it is still particularly used today in many applications but also to recall some of its limitations.
For a nonmagnetic half-space, when the coils are laid out on the ground and the operating frequency is small, the complicated relationships (6) can be formulated in a simplified form. Under these conditions and for different coil configurations, Wait [44; 45] gave a simplified expression for the secondary magnetic field as a function of the induction number \(\beta\) (e.g., [45], Eqs. 1, 3, and 4, p. 632, for the HCP, VCP, and PERP configuration, respectively). Furthermore, starting with the relationship in [45], McNeill [46] showed that, when the induction number is very small (\(\beta\ll 1\)), the imaginary part of the ratio of secondary to primary magnetic fields is linearly proportional to the half-space conductivity, \(\sigma\), for both HCP and VCP coil configurations, according to
\[M^{\mathit{HCP}}=M^{\mathit{VCP}}=Im\left(\frac{H_{S}}{H_{p}}\right)_{ \begin{subarray}{c}HCP\\ VCP\end{subarray}}=\frac{\omega\mu_{0}\rho^{2}}{4}\sigma. \tag{11}\]
This is the Q component of the EMI response at low induction number (LIN) condition. Most of the commercially available measuring devices incorporate the following equation to measure the apparent conductivity (as defined in [47]) directly in \(\mathrm{mS/m}\)
\[\sigma_{a}=\frac{4}{\omega\mu_{0}\rho^{2}}\cdot Im\left(\frac{H_{S}}{H_{p}} \right)_{\begin{subarray}{c}HCP\\ VCP\end{subarray}} \tag{12}\]
(this is why the Q component is also named LIN apparent conductivity -- LIN ECa or LIN \(\sigma_{a}\) -- provided that the LIN condition is met). Under the same conditions, they also measure the in-phase component (in parts per thousand -- ppt), which is usually very small in comparison to the Q component.
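A direct transcription of Eq. (12) is trivial but is shown here to fix units; the function below is illustrative (it is neither device firmware nor FDEMtools3 code) and converts a measured quadrature reading into the LIN apparent conductivity.

```python
# Illustrative sketch of Eq. (12): LIN apparent conductivity from the Q component.
import numpy as np

MU0 = 4e-7 * np.pi

def lin_apparent_conductivity(q_component, freq_hz, rho):
    """q_component = Im(Hs/Hp); rho = inter-coil distance (m). Returns sigma_a (S/m),
    meaningful only where the low-induction-number condition holds."""
    omega = 2 * np.pi * freq_hz
    return 4.0 * q_component / (omega * MU0 * rho**2)
```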
The general rule arising from the LIN condition can be summarized as follows: 1) quadrature and in-phase components are independent from each other (the in-phase component is not relevant to reproduce the observations by inversion); 2) the quadrature component is the only part directly related to the apparent conductivity
of the soil; 3) the in-phase component is closely related to the magnetic susceptibility of the measured material (that is, the in-phase component is negligible for nonmagnetic materials). However, great care must be taken when using this general rule since the LIN condition only occurs when the apparent conductivity is very low (less than a few tens of mS/m) [48], which is a condition rarely met in near surface geophysics applications. As explained in Appendix B, and also pointed out in [7], P and Q components are not independent. The P component does not necessarily depend on the magnetic permeability alone, but it is mainly determined by the relative values of the inductance property with respect to the resistance property of the measured material. In fact, for a given frequency, at a fixed magnetic permeability, the P component will increase as the electrical conductivity increases, as shown in [8] (Figure S3, Supplementary Materials). Therefore, in the case of very conductive soils, the P component may have the same importance as the Q component and, thus, a nonlinear inversion of the complex EMI response (simultaneous inversion of both Q and P components) is needed to correctly estimate the electric conductivity. In addition, it is worth noting that for very conductive soils, the increase of the P component caused by an increase in the magnetic permeability might be hidden by the increase that P undergoes due to the electric conductivity. This is very important when looking for magnetic targets since high soil conductivity might completely mask them, making the distinction between targeted objects and the surrounding soil a very difficult and challenging task.
### Sensitivity function of EMI measuring devices
The sensitivity function of a measuring device is defined by the ratio between the variation of the Output and the variation of the Input, which is the quantity to be measured. For EMI devices, the sensitivity function quantifies how much the complex electromagnetic response (Q and P components) measured by the device is affected by a change in the electrical conductivity and/or magnetic permeability of a particular point (area or section) of the subsurface. The higher the absolute value of the sensitivity function, the greater the influence of the subsurface region on the measurement. For a homogeneous or a layered half-space, the sensitivity, \(S\), is usually calculated as a function of depth: \(S=S(z)\). For each depth, the value of \(S\) tells us how much measuring devices sense the changes in conductivity or magnetic permeability, given the device working parameters. At LIN conditions, the sensitivity for the different coil orientations is solely a function of depth, inter-coil distance, and height of the coils above the ground surface, and it does not depend on the subsurface electromagnetic properties nor on the operating frequency of the device [46]. These sensitivity functions are those usually provided by manufacturers in the specifications of their devices (see, for example, GF Instruments, 2020 [49]). Otherwise, when the LIN condition is violated, the sensitivity function strongly depends on both ground conductivity and magnetic permeability, as well as on the specifications of the measuring system and its working parameters. Thus, for an EMI device with given frequency \(\omega\) and inter-coil separation \(\rho\), operating at height \(h\) above the ground, the sensitivity function can be estimated with respect to both electric conductivity and magnetic permeability, for each of the available coil configurations, that is,
\[S_{\sigma}(z)=\left[\frac{\partial M^{\mathit{HCP},\mathit{VCP},\mathit{PERP} }}{\partial\sigma(z)}\right]_{\omega,\rho,h} \tag{13}\]
and
\[S_{\mu}(z)=\left[\frac{\partial M^{\mathit{HCP},\mathit{VCP},\mathit{PERP} }}{\partial\mu(z)}\right]_{\omega,\rho,h} \tag{14}\]
where \(M\) is the complex EMI response; see Eq. (6). The sensitivity function can take positive and negative values. Positive values of \(S=S_{\sigma,\mu}(z)\) mean that the measuring device better senses the conductive (or magnetic) materials; when \(S\) takes negative values, in contrast, the device better senses poorly conductive (nonmagnetic) or resistive materials. Finally, the device no longer senses anything when the sensitivity is zero. Notice that gathering in a matrix the sensitivities of all forward responses with respect to all model parameters yields the Jacobian matrix. In this paper, such Jacobian (or sensitivity) matrix has been computed using the analytical expressions derived in [50; 51], which are implemented in the FDEMtools3 package. Using this package, however, users can also optionally estimate the Jacobian through a finite-difference approximation.
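The finite-difference option mentioned above can be sketched as follows; `forward` stands for any routine implementing Eq. (6) (for instance the `m_hcp` sketch given earlier), and the perturbation size is an illustrative choice rather than the one used in FDEMtools3.

```python
# Illustrative finite-difference estimate of the sensitivities in Eqs. (13)-(14):
# each Jacobian column is obtained by perturbing one layer parameter and
# re-evaluating the forward model.
import numpy as np

def fd_jacobian(forward, params, eps=1e-6):
    """forward: callable mapping a parameter vector (e.g. layer conductivities)
    to one or more complex responses. Returns the m-by-n Jacobian."""
    params = np.asarray(params, dtype=float)
    m0 = np.atleast_1d(forward(params))
    J = np.zeros((m0.size, params.size), dtype=complex)
    for k in range(params.size):
        pert = params.copy()
        step = eps * max(abs(pert[k]), 1.0)
        pert[k] += step
        J[:, k] = (np.atleast_1d(forward(pert)) - m0) / step
    return J
```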
In summary, the sensitivity function is of utmost importance both in the survey design, as its knowledge helps to select the most appropriate and best configured measuring device, and in the solution of
any nonlinear inversion, as it provides the link between the observed data and the model parameters in terms of the Jacobian matrix, allowing the update of the model vector.
### Depth of investigation (DOI)
As described above (Section 2.1), EMI methods measure selected components of an electromagnetic field induced in a conductive soil in response to an exciting electromagnetic field generated by a device at or above the ground surface. The maximum distance (usually indicated as depth) in the subsoil within which the electromagnetic properties (electrical conductivity and magnetic permeability) of a given target in a given host produce a response that can be measured by a specific device defines the so-called Depth of Investigation (DOI). This measure plays a key role in EMI surveys as well as in other geophysical investigations. Its value is not only one of the objectives usually set in survey design but it is also crucial in the inversion processes, as it allows to assess whether the inversion is data-driven or model-driven, preventing over- or misinterpretation of the inversion results [52].
Estimating the DOI is a difficult and challenging task because it depends on many variables, some of which are unknown (the real subsurface). Over the years, several estimates of it have been reported in literature. In some cases, the depth of investigation has been considered equal to the skin depth or to a multiple or a fraction of it. In other cases, it has been considered as a function of the skin depth [53; 54; 55; 56]. Other methods, the most widespread, are based on the sensitivity function or, better, on its integral form, the cumulative sensitivity function [46; 57; 58; 59; 60; 61]. According to these methods, the depth of investigation is the depth where its normalized integrated sensitivity function reaches a fixed threshold, such as, for example, 50%, 70%, 90%, or others. Without discussing the goodness of these proposals, it is worth noting that all of them estimate a pseudo-depth, more or less reliable, which can anyway provide useful information. In this paper, we adopted the method described by Deidda et al. (2020) in Section 5 of [62], also based on the sensitivity function. It has been implemented in the FDEMtools3 package with the prevalent, though not exclusive, aim of providing the DOI as a useful output to be used in survey design before data acquisition.
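As an illustration of the cumulative-sensitivity idea, a thresholded DOI estimate can be sketched as follows; the threshold value and discretization are arbitrary, and the specific criterion of [62] implemented in FDEMtools3 differs in its details.

```python
# Illustrative DOI estimate: depth at which the normalized cumulative
# |sensitivity| first reaches a chosen threshold (70% here, arbitrary).
import numpy as np

def doi_from_sensitivity(z, S, threshold=0.7):
    """z: layer-centre depths (increasing); S: corresponding sensitivities,
    e.g. one column of the Jacobian for the chosen coil configuration."""
    s = np.abs(np.asarray(S, dtype=complex))
    cum = np.cumsum(s) / np.sum(s)
    return float(np.asarray(z)[np.searchsorted(cum, threshold)])
```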
## 3 Inversion algorithm
As already remarked, recent FDEM devices allow the user to record multiple simultaneous measurements with different inter-coil distances \(\boldsymbol{\rho}=[\rho_{1},\ldots,\rho_{m_{\rho}}]^{T}\), operating frequencies \(\boldsymbol{\omega}=[\omega_{1},\ldots,\omega_{m_{\omega}}]^{T}\), and heights \(\boldsymbol{h}=[h_{1},\ldots,h_{m_{h}}]^{T}\). In order to reconstruct the distribution of the electrical conductivity and the magnetic permeability with respect to depth from the available dataset, we denote the measurements by \(b_{tij}^{v}\), where \(t=1,\ldots,m_{\rho}\), \(i=1,\ldots,m_{\omega}\), \(j=1,\ldots,m_{h}\), and \(v=\{\text{HCP,VCP,P}\}\) represents, respectively, the vertical, horizontal, and perpendicular orientation of the coils. The data values \(b_{tij}^{v}\) are then arranged by a suitable lexicographic ordering in a vector \(\boldsymbol{b}\in\mathbb{C}^{m}\), where \(m=\gamma m_{\rho}m_{\omega}m_{h}\) and \(\gamma\) is the number of coil orientations.
To represent the misfit between the model prediction (6) and experimental data values, we define the residual function
\[\boldsymbol{r}(\boldsymbol{\sigma},\boldsymbol{\mu})=M^{v}( \boldsymbol{\sigma},\boldsymbol{\mu};h,\omega,\rho)-\boldsymbol{b}, \tag{15}\]
and we solve the following minimization problem
\[\min_{\boldsymbol{\sigma},\boldsymbol{\mu}}\ \frac{1}{2}\|\boldsymbol{r}(\boldsymbol{\sigma},\boldsymbol{\mu})\|_{2}^{2}, \tag{16}\]
where \(\|\cdot\|_{2}\) denotes the Euclidean norm.
The algorithm we use for the solution of problem (16) is based on a regularized damped Gauss-Newton method, where the regularization is achieved by a low-rank approximation of the Jacobian of the nonlinear model. Such an approximation is obtained by the truncated singular value decomposition (SVD) or by the truncated generalized SVD (GSVD), depending on the adopted regularizing term.
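A single step of such a scheme, with the low-rank approximation realized by a truncated SVD of the Jacobian, can be sketched as below; the truncation index plays the role of the regularization parameter, and damping, GSVD-based regularization and parameter selection are omitted for brevity (the sketch is illustrative, not the FDEMtools3 implementation).

```python
# Illustrative TSVD-regularized Gauss-Newton step: q minimizes ||J q + r||_2
# over the subspace spanned by the first `ell` right singular vectors of J.
import numpy as np

def tsvd_gauss_newton_step(J, r, ell):
    """J: Jacobian at the current iterate; r: residual of Eq. (15); ell: truncation index."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    coeffs = (U[:, :ell].conj().T @ (-r)) / s[:ell]
    return Vt[:ell].conj().T @ coeffs          # step to be added to the current iterate
```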
In recent years, this approach has been applied to the solution of (16) in various particular situations and coupled to specific techniques for evaluating the Jacobian and estimating the regularization parameter. For instance, in [50], the authors aimed at reconstructing the electrical conductivity of the soil, assuming the permeability to be known and considering as input just the quadrature component of the measurements, and determined the analytical expression of the Jacobian of the model with respect to the variation of conductivity. In
[63], the algorithm was adapted to devices that allow different configurations and can take simultaneous measurements. In this work, the authors also considered the possibility of processing the in-phase component of the signal.
Paper [51] focused on the identification of the magnetic permeability distribution under the assumption that the conductivity was known in advance. An important result in this work was to give the analytical expression of the Jacobian with respect to the variation of the magnetic permeability.
Later, the algorithm was updated in [62] to invert the whole complex signal sensed by the device, and to introduce a regularization term which promotes the sparsity of the solution, the so-called minimum gradient support (MGS) stabilizer. The numerical algorithm was tested on real datasets collected at the Molentargius Saline Regional Nature Park, Sardinia, Italy.
A Matlab toolbox implementing the above inversion techniques was made publicly available in [64], where it was supplemented with a graphical user interface (GUI) aiming at assisting the interested researcher in setting the parameters of the method and performing the computation. This software was used in [65] to obtain a 2D reconstruction of the electrical conductivity of a vertical section of the soil by solving a variational problem.
Besides the introduction of a tool for studying the forward modelling of the problem, the software presented in this paper slightly extends the inversion module of the package. In particular, the perpendicular orientation for the device coils has been implemented and is now available for inversion. Moreover, a new iterative algorithm based on the minimal-norm solution, presented in [66; 67], has been included. It is concisely discussed in the following subsection.
### Minimal-norm solution
In real applications, problem (16) is usually strongly underdetermined, so it does not admit a unique solution. The standard Gauss-Newton iterative algorithm, implemented in the previous version of the FDEMtools package [64], ensures uniqueness by imposing a regularity constraint on the iteration step, not on the solution itself. The problem of imposing a regularity constraint directly on the solution of problem (16), i.e.,
\[\begin{cases}\min_{\boldsymbol{\sigma},\boldsymbol{\mu}}\|L(\boldsymbol{\sigma},\boldsymbol{\mu})\|_{2}^{2}\\ (\boldsymbol{\sigma},\boldsymbol{\mu})\in\left\{\arg\min_{\boldsymbol{\sigma},\boldsymbol{\mu}}\frac{1}{2}\|\boldsymbol{r}(\boldsymbol{\sigma},\boldsymbol{\mu})\|_{2}^{2}\right\}\end{cases} \tag{17}\]
where \(L\) is a suitable regularization matrix, has been studied in [66; 67].
Let us denote the solution by
\[\mathbf{x}_{k}=(\mathbf{\sigma}_{k},\mathbf{\mu}_{k}). \tag{18}\]
To ensure the computation of the minimal-norm solution, at the \(k\)th iteration, the Gauss-Newton approximation has to be orthogonally projected onto the null space of the Jacobian matrix \(J_{k}=J(\mathbf{x}_{k})\).
When the regularization matrix is \(L=I_{2n}\), the singular value decomposition of the matrix \(J_{k}\) is employed. Indeed, it is well-known that the orthogonal projector may be written in terms of the SVD
\[P_{N(J_{k})}=\mathbf{V}_{2}\mathbf{V}_{2}^{T}, \tag{19}\]
where the columns of the matrix \(\mathbf{V}_{2}\) are orthonormal vectors spanning the null space of \(J_{k}\). In the case \(L\neq I_{2n}\), the orthogonal projector may be expressed in terms of the GSVD; see [66; 67] for more details.
The resulting algorithm has been implemented in the following variants, all available in the new FDEMinversion GUI:
* MNGN \[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\alpha_{k}\mathbf{q}_{k}-P_{N(J_{k})}\mathbf{x}_{k},\] (20)
where \(\mathbf{q}_{k}\) is the solution of (16), \(\alpha_{k}\) is a step length, and \(P_{N(J_{k})}\) is the orthogonal projector onto the null space of \(J_{k}\). The damping parameter \(\alpha_{k}\) is estimated by the Armijo-Goldstein principle. This implementation, introduced in [66], occasionally fails to converge, because the projection step may cause the residual to increase considerably at particular iterations.
* MNGN2(\(\alpha\)): in [67], a further damping parameter has been introduced for the projection term, through a second-order analysis of the residual \(\frac{1}{2}\|\mathbf{r}(\mathbf{x})\|_{2}^{2}\), as well as a strategy to automatically tune it. A simple choice is to consider a single parameter \(\alpha_{k}\) to control both terms, \[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\alpha_{k}(\mathbf{q}_{k}-P_{N(J_{k})}\mathbf{x}_{k}),\] (21) and estimate it by the Armijo-Goldstein principle.
* MNGN2(\(\alpha\),\(\beta\)): another possibility is to consider two independent parameters \[\mathbf{x}_{k+1}=\mathbf{x}_{k}+\alpha_{k}\mathbf{q}_{k}-\beta_{k}P_{N(J_{k})}\mathbf{x}_{k}.\] (22) Also in this case, an automated tuning procedure has been introduced.
* MNGN2(\(\alpha\),\(\beta\),\(\beta\)): this implementation is identical to the previous one, but the parameter \(\beta_{k}\) is estimated by a different adaptive technique, which proved to be superior in the numerical experiments reported in [67].
The new implementation also allows the user to select a model profile \(\tilde{\mathbf{x}}\) for the solution, which is useful in applications where sufficient a priori information on the physical system under investigation is available. The FDEMinversion GUI allows to select a constant profile \(\tilde{\mathbf{x}}\), or to load a model from a file.
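For concreteness, the MNGN2(\(\alpha\)) update of Eq. (21) can be sketched as below; the null-space projector is built from the right singular vectors associated with (numerically) zero singular values, and the Armijo-Goldstein line search is replaced by a fixed damping parameter, so this is an illustrative outline rather than the toolbox code.

```python
# Illustrative MNGN2(alpha) update, Eq. (21): x_{k+1} = x_k + alpha_k (q_k - P_{N(J_k)} x_k).
import numpy as np

def mngn2_alpha_update(x, q, J, alpha, tol=1e-10):
    """x: current iterate; q: (regularized) Gauss-Newton step; J: Jacobian at x."""
    U, s, Vt = np.linalg.svd(J, full_matrices=True)
    rank = int(np.sum(s > tol * s[0]))
    V2 = Vt[rank:].conj().T                # columns span the null space of J (Eq. (19))
    Px = V2 @ (V2.conj().T @ x)            # P_{N(J_k)} x_k
    return x + alpha * (q - Px)
```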
## 4 Software package
In this section, we describe the new tools available in the software package FDEMtools3, with respect to its previous version described in [64]. They consist of an extension of the forward model, the update of some of the computational routines concerning the inversion algorithm and of the corresponding graphical user interface (GUI), the introduction of a new GUI for forward modelling, and some bug corrections. In particular, the perpendicular orientation of the device coils has been integrated in the model, and a database of some of the most common commercial devices has been created. The database can be easily extended by the user by inserting the configuration of new devices, but also by introducing configurations not currently available, with the aim of investigating their performance.
The new Matlab toolbox FDEMtools3 is distributed as an archive file. It can be downloaded from the web page [https://bugs.umica.it/cana/software](https://bugs.umica.it/cana/software). By uncompressing it, a new directory "FDEMtools3" will be created, containing the computational code as well as a user manual. This directory must be added to Matlab search path in order to be able to use the software from other directories. The package requires the installation of P. C. Hansen's Regularization Tools package [68]; the directory of the package must be added to Matlab search path too. More information on the installation procedure can be found in the README.txt file in the main directory and in the manual.
The package contains routines for both the analysis of the forward model and the inversion procedure. Two subdirectories of the main directory, "dataforward" and "data", contain some datasets for running numerical tests with the forward and the inversion GUIs, respectively.
Table 2 lists the routines, divided into different groups, and reports a brief description for each of them. The first group "Forward Model Routines" includes the functions for computing the forward model, that is, the model prediction for a given conductivity and permeability distribution. The section "Computational Routines" contains the codes for forward and inverse modelling, including three GUIs; the "Test Scripts" are demonstration programs. Finally, the section "Auxiliary Routines" lists some routines needed to complete the whole process, which are unlikely to be called directly by the user, and the last group describes some further auxiliary files; see the file Contents.m for details.
\begin{table}
\begin{tabular}{l l} \hline \hline & **Forward Model Routines** \\ \hline
aconduct & compute the apparent conductivity \\
hratio & compute the ratio \(H_S/H_P\), i.e., the device readings \\
inphase & compute the in-phase (real) component of the ratio \(H_S/H_P\) \\
quadracomp & compute the quadrature (imaginary) component of \(H_S/H_P\) \\
reflfact & compute the reflection factor \\ \hline \hline
\end{tabular}
\end{table}
Table 2: FDEMtools3 reference.
The most straightforward way of using the package is to run the main interface, issuing the command FDEM in the Matlab window, or to run directly one of the two available GUIs: FDEMforward and FDEMinversion; see Figure 3 and Figure 4.
Figure 3: FDEMforward graphical user interface.
Both GUIs are composed of a set of input panels that are described in detail in the user manual, and that we list here:
* FDEMforward
* Input data;
* Quantity to generate;
* Device configuration;
* Synthetic datasets;
* Discretization
* Plot options;
* FDEMinversion
* Physical quantity to be inverted;
* Data to be inverted;
* Device configuration;
* Data management;
* Synthetic Dataset;
* Discretization;
* Noise;
* Inversion options;
* Regularization.
In the FDEMforward interface two buttons are available, one for running the main computation and one to save the data; in this case, a suitable data structure is used to allow the user to upload the file as an experimental dataset in the FDEMinversion interface. The FDEMinversion interface contains three buttons that allow the user to start the computation, interrupt it in case something goes wrong, and save the computed solution to a data file.
Figure 4: FDEMinversion graphical user interface.
The computational routines can also be called directly in a Matlab script without resorting to the GUIs. This may be useful in particular situations in which, e.g., the user wants to automatize a repeated computation. We provide three example scripts for doing so: driverforward.m deals with an example of forward modelling, while driver.m and driver2D.m present two examples in which a single data column, and a set of successive data columns, are processed for inversion.
## 5 Numerical examples
This section provides a non-exhaustive overview of some outputs of the forward modelling routines available in the FDEMtools3 package that may be useful in survey design. For the sake of brevity, we have limited our examples to only three well-known and frequently used EMI devices, two of which, the Dualem-21H and the CMD Explorer, are multi-receiver instruments, while the other one, the GEM-2, is a multi-frequency sensor; see Table 1. Multi-receiver and multi-frequency EMI devices are able to measure the earth response at multiple depths by changing the receiver separation or frequency, respectively. Thus, for a given earth model, they can supply data suitable for resolving, by inversion, depth-related variations of electrical conductivity and/or magnetic permeability, provided that changes in receiver separation or in frequency produce in the data changes that are large enough to be measured. The Dualem-21H has one transmitter coil with a fixed frequency of 9 kHz and six receiver coils: three in a horizontal coplanar (HCP) orientation, at 0.5, 1, and 2 m from the transmitter, and three in perpendicular arrangement (PERP), at 0.6, 1.1 and 2.1 m from the transmitter. The CMD Explorer operates with one transmitter coil at a frequency of 10 kHz and has three receiver coils, spaced 1.48 m, 2.82 m, and 4.49 m from the transmitter, arranged according to the HCP or the VCP configurations. Finally, the GEM-2 contains a transmitter coil and a receiver coil separated by 1.66 m, arranged in HCP or VCP configurations, and operates in a frequency band between 30 Hz and 93 kHz, using up to ten (but usually limited to six to guarantee good signal-to-noise ratio) simultaneous frequencies.
To show and compare their responses (the signal amplitude of both the Q and P components), along with the associated sensitivities and DOIs, we have considered two three-layer earth models (Figure 5) simulating a resistive (or conductive) layer trapped between two conductive (or resistive) ones, representing targets typically found in environmental, engineering and archaeological investigations, such as contaminant plumes, foundations, archaeological structures (e.g., walls, stone-built remains, ditches, tombs, and so on). In detail, the 1D earth model consists of a top layer of nonmagnetic material, with a fixed conductivity of 0.1 S/m, an intermediate layer having a relative magnetic permeability of 1.01 with a conductivity between 0.001 S/m (low conductivity case) and 2 S/m (high conductivity case), and a third layer (a half-space) with a conductivity of 0.01 S/m and a relative magnetic permeability of 1.005. The thickness of the first layer is 1.5 m while that of the middle layer is 1 m. The magnetic permeability of the intermediate layer is probably a little higher than that usually found in real soils, but it has been used to better highlight the effects that magnetic materials might have on EMI responses. In the following, the earth models with the least conductive and most conductive middle layer will be named M1 (Figure 5a) and M2 (Figure 5b), respectively.
Figure 5: (a) Earth model M1; (b) earth model M2. Models differ only in the electrical conductivity of the middle layer.
EMI sensors can be hand-carried by a person using shoulder-harnesses or harness straps in station-by-station on-ground measurements or in a continuous-recording walking survey, but they can also be mounted on a sled or cart to be towed by a small all-terrain vehicle or tractor. However, it is worth noting that changing the way a sensor is used, whose choice is usually dictated by the desired speed of investigation, changes its operating height, which is a survey parameter that should be carefully selected as it may be a decisive factor for the success of the survey, for both imaging and mapping purposes. In fact, varying the probe height changes the depth of penetration of EMI devices, so that measurements investigate different, overlapping soil volumes [58]. This is the reason why, even when a device with a single frequency and a single receiver is used, data recorded at different instrumental heights can be inverted to get quantitative estimates of depth variations in true electrical conductivity [50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74]: the greater the effect of the height, the better the inverted result will be. The effect of the operating height remains important also for multiple depth responses collected with a multi-receiver device. To recover good estimates of conductivity with depth by inverting data measured at multiple inter-coil spacings, the device should operate at such a height that the values recorded by each coil are well separated. Concerning EMI mapping surveys, on the other hand, it is worth noting that an increase in the probe height usually lowers the amplitude of the measured response, causing the drawbacks discussed in Deidda et al. (2022) [8].
Therefore, knowing a priori how EMI responses vary as the operating height of the sensor changes, as shown by the graphs in Figures 6, 7, 8 and 9, may be very useful in survey design. For example, looking at the response of the Dualem-21H above the M1 model (Figure 6), an operating height of 0.9 m (the height the sensor would have by carrying it with a harness strap) would provide well-separated quadrature values for both HCP and PERP configurations, well suited to be inverted. This is not the case for the responses (Figure 7) the Explorer would have recorded when operating at 0.9 m above the earth model M1. In fact, the HCP quadrature values for the inter-coil distances of 1.48 m and 2.82 m (Figure 7a), as well as the VCP quadrature values for the inter-coil distances of 2.82 m and 4.49 m (Figure 7b), differ by less than 2 mS/m, which in practice may be a value smaller than the noise level. Thus, with reference to earth model M1, it turns out that the Dualem-21H would operate better at heights greater than about 0.8 m, while the Explorer would provide good data when operating directly on the ground surface. Inspecting the responses above model M2 (Figures 8 and 9), it appears that both devices would record very good data at all the considered operating heights, except those from 0.3 m to 0.5 m and from 1 m to 1.4 m for the Explorer HCP quadrature response (Figure 9a).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Device** & \(\boldsymbol{\rho}\) **(m)** & \(\boldsymbol{f}\) **(Hz)** & \(\boldsymbol{\delta_{1}}\) **(m)** & \(\boldsymbol{\beta_{1}}\) & \(\boldsymbol{\delta_{2}}\) **(m)** & \(\boldsymbol{\beta_{2}}\) \\ \hline \multirow{3}{*}{Dualem-21H\({}^{1}\)} & 0.5 (0.6) & 9,000 & 41.4 & 0.012 (0.015) & 8.8 & 0.057 (0.068) \\ & 1 (1.1) & 9,000 & 41.4 & 0.024 (0.027) & 8.8 & 0.113 (0.124) \\ & 2 (2.1) & 9,000 & 41.4 & 0.048 (0.051) & 8.8 & 0.226 (0.237) \\ \hline \multirow{3}{*}{CMD Explorer} & 1.48 & 10,000 & 38.8 & 0.038 & 8.1 & 0.183 \\ & 2.82 & 10,000 & 38.8 & 0.073 & 8.1 & 0.348 \\ & 4.49 & 10,000 & 38.8 & 0.116 & 8.1 & 0.554 \\ \hline \multirow{6}{*}{GEM-2} & 1.66 & 1,275 & 127.9 & 0.013 & 48.2 & 0.034 \\ & 1.66 & 4,250 & 64.9 & 0.026 & 16.9 & 0.098 \\ & 1.66 & 12,525 & 33.7 & 0.049 & 6.8 & 0.246 \\ & 1.66 & 28,725 & 19.6 & 0.085 & 3.9 & 0.427 \\ & 1.66 & 54,150 & 12.6 & 0.132 & 3.1 & 0.544 \\ & 1.66 & 82,150 & 9.3 & 0.179 & 2.8 & 0.592 \\ \hline \hline \end{tabular} \({}^{1}\) The values in parentheses are for the PERP configuration. \(\rho\) is the inter-coil distance and \(f\) the operating frequency of each device; \(\delta\) and \(\beta\) are the skin depth and the induction number, respectively. Subscripts 1 and 2 refer to earth models M1 and M2.
\end{table}
Table 3: Skin depths and induction numbers
Figure 6: Electromagnetic response of the Dualem-21H above the earth model M1. (**a**) and (**b**) are the simulated HCP and PERP quadrature (Q) responses, respectively, both expressed as apparent conductivity in mS/m; (**c**) and (**d**) are the HCP and PERP in-phase (P) responses, respectively. Dots indicate the response values at the probe height of 0.9 m, which is a frequently used operating height for both devices.
Figure 7: Electromagnetic response of the Explorer above the earth model M1. (**a**) and (**b**) are the simulated HCP and VCP quadrature (Q) responses, respectively, both expressed as apparent conductivity in mS/m; (**c**) and (**d**) are the HCP and VCP in-phase (P) responses, respectively. Dots indicate the response values at the probe height of 0.9 m, which is a frequently used operating height for both devices.
Inspecting the in-phase component of the responses over the earth model M1, Figures 6c, 6d, 7c, and 7d show that for both multi-receiver devices the values are always very small and negative, with the only exception of the in-phase component of the Explorer at 4.49 m in HCP configuration, whose values are positive. The presence of negative values of the in-phase component is definitely linked to the presence of susceptible materials. In fact, by running the forward modelling over the earth model M1 with relative magnetic permeability equal to 1 for all layers, the values of the in-phase component become positive. This may be important because negative values may indicate the presence of susceptible materials. However, for negative values to be useful indications of such materials, the signal amplitude should be sufficiently large and equal to at least a few ppt as, otherwise, it would be below the noise level.
Figure 8: Electromagnetic response of the Dualem-21H above the earth model M2. (**a**) and (**b**) are the simulated HCP and PERP quadrature (Q) responses, respectively, both expressed as apparent conductivity in mS/m; (**c**) and (**d**) are the HCP and PERP in-phase (P) responses, respectively. Dots indicate the response values at the probe height of 0.9 m, which is a frequently used operating height for both devices.
Figure 9: Electromagnetic response of the Explorer above the earth model M2. (**a**) and (**b**) are the simulated HCP and VCP quadrature (Q) responses, respectively, both expressed as apparent conductivity in mS/m; (**c**) and (**d**) are the HCP and VCP in-phase (P) responses, respectively. Dots indicate the response values at the probe height of 0.9 m, which is a frequently used operating height for both devices.
The responses over the earth model M2, on the other hand, show an in-phase component (Figures 8c, 8d, 9c, and 9d) with values higher than those of the previous case, which for the Explorer reach up to about 56 ppt (Figure 9c). Such high in-phase values are usually deemed to be evidence of magnetic materials. As already observed in section 2.1, this is not always the case, and it definitely is not in this example. As the two earth models differ only in their electrical conductivity, the strong increase of the in-phase component values is only due to the electrical conductivity and not to the magnetic permeability.
Figure 10 presents the complex electromagnetic response of the multi-frequency GEM-2 system over earth models M1 (Figure 10a) and M2 (Figure 10b). Both Q and P components of the response function are shown as a function of frequency, in the range of 30 Hz to 93 kHz. In addition, to show how the operating height affects the response values, the response has been estimated at the heights of 0.2 m and 0.9 m, which are the usual heights that the device would have when hand-carried with a shoulder-strap or harness.
As Figure 10 shows, for both earth models M1 and M2, the signal amplitude of the Q and P components, very low at low frequencies, increases as the frequency increases. In addition, it is very clear that this increase is sharper over the more conductive earth model M2 and, for each of the two models M1 or M2, for small operating heights. The small responses at low frequencies are related to the corresponding low induction numbers, defined as \(\beta\leq 0.02\) in [75] (Figure 11). As explained in Appendix B (Figure B5), this means that both complex response functions become purely imaginary (resistive limit) as the induction numbers approach zero. In other words, this also means that inductive phenomena are negligible at low frequencies (low induction numbers), resulting in a small EMI response and a marginal frequency dependence, which render data inversion unfeasible. Therefore, to obtain the most useful information about earth models M1 and M2, i.e., responses that are strong, frequency dependent, and suitable for data inversion, the GEM-2 should be configured with the widest possible set of frequencies [76] to operate over a range of moderate induction numbers (defined as \(0.02<\beta<1\)) (Figure 11). A possible set of frequencies meeting these requirements is listed in Table 3 and shown in Figures 10 and 11. However, it is worth noting that, although with this set of frequencies both responses (at 0.2 m and 0.9 m) over the earth model M1 are frequency dependent, only the one estimated at 0.2 m still has acceptable signal amplitudes.
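To make the role of the induction number in frequency selection concrete, the short Python sketch below evaluates the skin depth and the induction number \(\beta=\rho/\delta\) (the relation implied by the values in Table 3) for a homogeneous half-space, flagging the low (\(\beta\leq 0.02\)) and moderate (\(0.02<\beta<1\)) regimes. It is only an illustration: FDEMtools3 performs such computations for layered models, and the 0.05 S/m conductivity used here is an arbitrary assumption, not a value taken from models M1 or M2.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def skin_depth(f, sigma, mu_r=1.0):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)) of a homogeneous half-space."""
    omega = 2.0 * np.pi * f
    return np.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

def induction_number(f, sigma, rho, mu_r=1.0):
    """Induction number beta = rho / delta (coil separation over skin depth)."""
    return rho / skin_depth(f, sigma, mu_r)

# Screening of GEM-2 frequencies over an assumed 0.05 S/m half-space (rho = 1.66 m)
freqs = np.array([30.0, 1275.0, 4250.0, 12525.0, 28725.0, 54150.0, 93000.0])
betas = induction_number(freqs, sigma=0.05, rho=1.66)
for f, b in zip(freqs, betas):
    regime = "low" if b <= 0.02 else ("moderate" if b < 1.0 else "high")
    print(f"f = {f:8.0f} Hz   beta = {b:.3f}   ({regime} induction number)")
```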
Figure 10: Electromagnetic response of GEM-2. (**a**) Quadrature and in-phase components of responses at heights of 0.2 and 0.9 m above ground surface of earth model M1; (**b**) Quadrature and in-phase components of responses at heights of 0.2 and 0.9 m above ground surface of earth model M2. Dots indicate the response values for a set of six selectable operating frequencies among those currently available for the GEM-2 (minimum frequency = 30 Hz; maximum frequency = 93 kHz).
Regarding the in-phase component, although not visible in Figure 10 for drawing scale reasons, it should be noted that at low frequencies it is negative in all cases. For both models M1 and M2, in detail, the curves tend asymptotically towards values of -440 ppm, for the one calculated with the sensor at 0.2 m above the ground, and -210 ppm, for that at 0.9 m. Such negative values of the in-phase component at low frequencies are due to the susceptible materials present in the earth models (Figure 5a and 5b) or, more specifically, to the induced magnetization the susceptible materials exhibit when subjected to a magnetic field (no matter whether alternating or static). In fact, as pointed out by Huang and Fraser (2003) [77], as the frequency (or the induction number) takes small values, the complex response function becomes dominated by the magnetization effect, which is in-phase with and in the same direction as the primary magnetic field. Here, we wish to highlight that the low-frequency asymptotics depend exclusively on the magnetic permeability of the materials, unlike the values of the in-phase component at moderate and high frequencies, which, on the other hand, are influenced by electrical conductivity as well. This is a good reason to always include a very low frequency among those to be selected to set up a multi-frequency device. In addition, we want to emphasize that when the recorded in-phase data contain negative values, a careful direct modelling performed a posteriori may be particularly useful for data interpretation, using, if necessary, an inversion algorithm taking into account both electrical conductivity and magnetic permeability simultaneously [78].
To quantify to what extent the complex EMI responses described above are affected by a change in the electrical conductivity and/or magnetic permeability of earth models M1 and M2, we have estimated a whole set of sensitivity functions for each device, using equations (13) and (14) and assuming an operating height of 0.9 m. Figures 12 and 13 show the sensitivity functions with respect to electrical conductivity and magnetic permeability for both the Q and the P components of the Dualem-21H device above the two earth models M1 (Figures 12a, b, c, and d, and Figures 13a, b, c, and d) and M2 (Figures 12e, f, g, and h, and Figures 13e, f, g, and h). Similarly, Figures 14 and 15 show the sensitivities for the Explorer. Finally, Figure 16 shows, frequency by frequency, the sensitivities estimated for the GEM-2 above earth models M1 and M2. Note that in all graphs, the sensitivities are plotted in a non-normalized form, with the values expressed using the appropriate units of measurement for both the numerator (Q or P component of the response, in mS/m or ppm for the former and in ppt or ppm for the latter) and the denominator (electrical conductivity, in S/m, or magnetic permeability, in H/m) of equations (13) and (14). We adopted this graphical representation because it allows users to quantitatively compare the whole set of sensitivity functions. It is the standard representation used in the FDEMtools3 package; however, users can modify some scripts to get other representations, similar to the ones shown in the Supplementary material (Figures 55, 65, 75 and 85) in [7].
The analysis and comparison of the sensitivity functions in Figures 12, 13, 14, 15, and 16, certainly provide further useful information to select the most appropriate and best configured measuring device to better characterize the target in the M1 and M2 models. Here, we leave this choice to the reader, according to their own analyses, comparisons, and considerations.
Figure 11: Induction numbers spanned over earth models M1 (red curve) and M2 (blue curve) by the full range of frequency available for the GEM-2. The horizontal dashed line at 0.02 indicates the transition from low to moderate induction numbers [75]. Dots indicate the induction numbers for a set of six selectable operating frequencies among those currently available for the GEM-2 (minimum frequency = 30 Hz; maximum frequency = 93 kHz).
Figure 12: Sensitivity functions to electrical conductivity of the Dualem-21H. (**a,c**) Q and (**b,d**) P sensitivities at 0.9 m above model M1; (**e,g**) Q and (**f,h**) P sensitivities at 0.9 m above model M2.
Figure 13: Sensitivity functions to magnetic permeability of the Dualem-21H. (**a,c**) Q and (**b,d**) P sensitivities at 0.9 m above model M1; (**e,g**) Q and (**f,h**) P sensitivities at 0.9 m above model M2.
Figure 14: Sensitivity functions to electrical conductivity of the Explorer. (**a,c**) Q and (**b,d**) P sensitivities at 0.9 m above model M1; (**e,g**) Q and (**f,h**) P sensitivities at 0.9 m above model M2.
Figure 15: Sensitivity functions to magnetic permeability of the Explorer. (**a,c**) Q and (**b,d**) P sensitivities at 0.9 m above model M1; (**e,g**) Q and (**f,h**) P sensitivities at 0.9 m above model M2.
Finally, to complete this numerical example, we report the values that one of the Auxiliary Routines (fdemdoi.m) of FDEMtools3 provides for the DOI (Table 4). As mentioned above (Section 2.4), these values are only indicative and useful to obtain an approximate estimate of the DOI, according to the criterion adopted in [62]. For each earth model M1 or M2, Table 4 lists the DOIs achievable by each of the three devices, assuming an operating height of 0.9 m above the ground. Such values can also be graphically estimated by plotting the cumulative response functions, which are optional outputs of the forward modelling package in FDEMtools3. Running the FDEMforward GUI with the "cumulative response graph" option activated, the cumulative response functions are first computed, by integrating the sensitivity functions, and then plotted down to the depth that coincides with the DOI.
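The following sketch illustrates, in Python, the idea behind cumulative response functions and the graphical estimate of the DOI: the sensitivity is integrated over depth and the DOI is read off where the normalized cumulative response reaches a chosen fraction of the total. The 70% threshold and the exponentially decaying sensitivity profile used here are illustrative assumptions only; they are not the criterion of [62] implemented in fdemdoi.m.

```python
import numpy as np

def cumulative_response(z, sensitivity):
    """Cumulative response obtained by integrating a sensitivity function
    over depth with the trapezoidal rule, normalized to a total of 1."""
    c = np.concatenate(([0.0],
                        np.cumsum(0.5 * (sensitivity[1:] + sensitivity[:-1]) * np.diff(z))))
    return c / c[-1]

def depth_of_investigation(z, sensitivity, threshold=0.7):
    """Assumed DOI criterion: shallowest depth accounting for `threshold`
    (e.g. 70%) of the total response. Illustrative choice only."""
    c = cumulative_response(z, sensitivity)
    return np.interp(threshold, c, z)

z = np.linspace(0.0, 10.0, 500)   # depth (m)
sens = np.exp(-z / 2.0)           # placeholder sensitivity profile
print(f"DOI ~ {depth_of_investigation(z, sens):.2f} m")
```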
## 5 Conclusions
A simulation of the model response to a prescribed distribution of the electromagnetic features in a stratified subsoil is a very useful tool to plan a data acquisition campaign and adopt an effective sensing device, with the best possible configuration. This is possible whenever a geophysicist has some a priori information on the surveying site and knows which physical target he is going to observe.
After recalling some basic concepts about the earth propagation of an electromagnetic field and discussing some physical quantities which are critical for the correct comprehension of the phenomenon, an interactive software tool for forward modelling has been introduced. It reproduces the model response to a given distribution of the electromagnetic features and allows the simulation of data acquisition by a specific instrument, either existing or hypothetical.
Table 4: Depths of investigation (DOI) achievable by the Dualem-21H, the CMD Explorer, and the GEM-2 above earth models M1 and M2, for an operating height of 0.9 m.
To illustrate the use of the package, two three-layer earth models have been analyzed by comparing the response of three commercial devices. The simulation shows that an effective choice of a specific sensing device, as well as its correct configuration, can only be performed by taking into consideration the target characteristics and the operating height of the device. At the same time, drawing sensitivity functions and cumulative response graphs is crucial to distinguish between data-driven and model-driven inversion results, and to determine a reliable depth of investigation. A forward modelling software simulator turns out to be a precious tool to assist a geophysicist in planning a surveying session.
Besides expanding the FDEMtools3 toolbox by implementing a graphical user interface for forward modelling and the corresponding computational routines, this paper also introduces in the package a new regularized minimal-norm inversion algorithm, which helps in selecting a suitably regular solution for the underdetermined least-squares problems to be solved, and allows the use of a model profile for the solution, in those cases where such information is available.
Conceptualization, G.P.D., P.D.A., F.P., and G.R.; methodology, G.P.D., P.D.A., F.P., and G.R.; software, G.P.D., P.D.A., F.P., and G.R.; validation, G.P.D., P.D.A., F.P., and G.R.; formal analysis, G.P.D., P.D.A., F.P., and G.R.; writing\(-\)original draft preparation, G.P.D., P.D.A., F.P., and G.R.; writing\(-\)review and editing, G.P.D., P.D.A., F.P., and G.R. All authors have read and agreed to the published version of the manuscript.
This research was partially funded by Fondazione di Sardegna, Progetto biennale bando 2021, "Computational Methods and Networks in Civil Engineering (COMANCHE)". F.P., P.D.A. and G.R. were partially supported by the INdAM-GNCS 2022 project "Metodi e modelli di regolarizzazione per problemi malposti di grandi dimensioni". P.D.A. gratefully acknowledges Fondo Sociale Europeo REACT EU - Programma Operativo Nazionale Ricerca e Innovazione 2014 - 2020 and Ministero dell'Universita e della Ricerca for the financial support.
Not applicable
## Appendix A Brief review of the Maxwell equations
Electromagnetic induction phenomena obey Maxwell's equations, which describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The differential form in the time domain of these equations is given by
\[\nabla\cdot\mathbf{D}=q,\] Gauss' law (1)
\[\nabla\cdot\mathbf{B}=0,\] Gauss' law for magnetic fields (2)
\[\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t^{\prime}},\] Faraday's law (3)
\[\nabla\times\mathbf{H}=\mathbf{J}+\frac{\partial\mathbf{D}}{\partial t^{\prime}},\] Ampere-Maxwell's law (4)
where \(\mathbf{D}\) is the dielectric displacement (C/m\({}^{2}\)), \(\mathbf{B}\) the magnetic flux density or the magnetic induction (T), \(\mathbf{E}\) the electric field intensity (V/m), \(\mathbf{H}\) the magnetic field intensity (A/m), \(\mathbf{J}\) the electric current density (A/m\({}^{2}\)), and \(q\) the electric charge density (C/m\({}^{3}\)). The symbols \(\nabla\cdot\) and \(\nabla\times\) stand for divergence and curl operators, respectively. These equations are usually coupled through the following constitutive relations:
\[\mathbf{D}=\varepsilon\mathbf{E}, \tag{5}\]
\[\mathbf{B}=\mu\mathbf{H}, \tag{6}\]
\[\mathbf{J}=\sigma\mathbf{E}, \tag{7}\]
where \(\varepsilon\), \(\mu\), \(\sigma\), are the dielectric permittivity (F/m), the magnetic permeability (H/m), and the electric conductivity (S/m) of a conductive magnetic material. In free space, where the electric conductivity is zero, the dielectric permittivity and the magnetic permeability take the values \(\varepsilon_{0}=8.854\cdot 10^{-12}\) F/m and \(\mu_{0}=4\pi\cdot 10^{-7}\) H/m, respectively. For any medium other than a vacuum, the ratio of the permeability of the medium to that of free space defines the dimensionless relative permeability \(\mu_{r}=\frac{\mu}{\mu_{0}}\), just as the ratio \(\varepsilon_{r}=\frac{\varepsilon}{\varepsilon_{0}}\) defines the relative dielectric permittivity.
For a magnetic material, equation (6) can be expressed in terms of a diagnostic parameter, the magnetic susceptibility \(\chi\), which measures how much a material is susceptible to being magnetized. In fact, when a magnetic field intensity, \(\mathbf{H}\), acts on a magnetic material, the latter acquires a magnetization
\[\mathbf{M}=\chi\mathbf{H}. \tag{8}\]
Therefore, the magnetic induction field, \(\mathbf{B}\), can be expressed as
\[\mathbf{B}=\mu\mathbf{H}=\mu_{0}\mathbf{H}+\mu_{0}\mathbf{M}=\mu_{0}\mathbf{H}+\mu_{0}\chi\mathbf{H}= \mu_{0}(1+\chi)\mathbf{H}, \tag{9}\]
which also shows the relationship between the magnetic susceptibility and the relative magnetic permeability:
\[\chi=\mu_{r}-1. \tag{10}\]
As shown in [42], Maxwell's equations together with the constitutive relations can be combined to yield the electromagnetic wave equations for propagation (as wave and diffusion) of electric and magnetic fields in an isotropic homogeneous lossy medium having electric conductivity \(\sigma\), magnetic permeability \(\mu\), and dielectric permittivity \(\varepsilon\). Taking the curl of equation (3), using in the order equations (6), (4), (7), and (5), and applying the identity \(\nabla\times\nabla\times\mathbf{E}=-\nabla^{2}\mathbf{E}\), where the symbol \(\nabla^{2}\) stands for the Laplacian operator, gives the equation for the electric field in the time domain
\[\nabla^{2}\mathbf{E}-\mu\sigma\frac{\partial\mathbf{E}}{\partial t}-\mu\varepsilon\frac{\partial^{2}\mathbf{E}}{\partial t^{2}}=0. \tag{11}\]
Likewise, taking the curl of equation (4), using in the order equations (7), (3), (5), and (6), and applying the identity \(\nabla\times\nabla\times\mathbf{H}=-\nabla^{2}\mathbf{H}\), yields the equation for the magnetic field in the time domain
\[\nabla^{2}\mathbf{H}-\mu\sigma\frac{\partial\mathbf{H}}{\partial t}-\mu\varepsilon\frac{\partial^{2}\mathbf{H}}{\partial t^{2}}=0. \tag{12}\]
Considering harmonically varying fields at angular frequency \(\omega\), that is \(\mathbf{E}=\mathbf{E}_{0}e^{-i\omega t}\) and \(\mathbf{H}=\mathbf{H}_{0}e^{-i\omega t}\), equations (11) and (12) become
\[\nabla^{2}\mathbf{E}+i\omega\mu\sigma\mathbf{E}+\omega^{2}\mu\varepsilon\mathbf{E}=\nabla^{2}\mathbf{E}+k^{2}\mathbf{E}=0, \tag{13}\]
and
\[\nabla^{2}\mathbf{H}+i\omega\mu\sigma\mathbf{H}+\omega^{2}\mu\varepsilon\mathbf{H}=\nabla^{2}\mathbf{H}+k^{2}\mathbf{H}=0, \tag{14}\]
where
\[k=\sqrt{\omega^{2}\mu\varepsilon+i\omega\mu\sigma}=a+ib \tag{15}\]
is the complex wavenumber, whose real and imaginary parts are respectively given by [40]:
\[a=\omega\sqrt{\frac{\mu\varepsilon}{2}\left(\sqrt{1+\frac{\sigma^{2}}{\omega^{2}\varepsilon^{2}}}+1\right)} \tag{16}\]
and
\[b=\omega\sqrt{\frac{\mu\varepsilon}{2}\left(\sqrt{1+\frac{\sigma^{2}}{\omega^{2}\varepsilon^{2}}}-1\right)}. \tag{17}\]
The imaginary part, which is also called the attenuation coefficient, plays a key role in electromagnetism since its inverse defines the skin depth \(\delta\).
### Quasi-stationary approximation
Alternating electromagnetic fields that vary slowly with time are referred to as low-frequency alternating fields or quasi-stationary fields. In the case of quasi-stationary fields, Maxwell's equations can be simplified by dropping the term \(\frac{\partial\mathbf{D}}{\partial t}\) in Ampere-Maxwell's law (eq. 4) but retaining the term \(\frac{\partial\mathbf{B}}{\partial t}\) in Faraday's law (eq. 3). This means that the displacement current is negligible with respect to the conduction current, which remains the only source of the quasi-stationary magnetic field. This also means that the electromagnetic properties of the medium are such that \(\sigma\gg\omega\varepsilon\). Then, equations (13) and (14) can be approximated as
\[\nabla^{2}\mathbf{E}+i\omega\mu\sigma\mathbf{E}\simeq 0 \tag{18}\]
and
\[\nabla^{2}\mathbf{H}+i\omega\mu\sigma\mathbf{H}\simeq 0, \tag{19}\]
which are known as the diffusion equations of electromagnetic fields. They describe the penetration of electromagnetic fields (but do not consider wave propagation) in an isotropic homogeneous lossy medium having electric conductivity \(\sigma\) and magnetic permeability \(\mu\). The complex wavenumber \(k\) becomes
\[k=\sqrt{i\omega\mu\sigma}=a+ib, \tag{20}\]
whose real and imaginary parts are
\[a=b=\sqrt{\frac{|k^{2}|}{2}}=\sqrt{\frac{\omega\mu\sigma}{2}}=\frac{1}{\delta}, \tag{21}\]
where
\[\delta=\sqrt{\frac{2}{\omega\mu\sigma}} \tag{22}\]
is the skin depth.
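As a quick numerical check of the expressions for \(a\) and \(b\) given above and of the quasi-stationary skin depth, the following Python sketch evaluates the real and imaginary parts of the wavenumber for a homogeneous medium and compares the attenuation coefficient with \(1/\delta\); for a ground with \(\sigma\gg\omega\varepsilon\) the two essentially coincide. The frequency and conductivity used are arbitrary example values.

```python
import numpy as np

EPS0 = 8.854e-12        # dielectric permittivity of free space (F/m)
MU0 = 4e-7 * np.pi      # magnetic permeability of free space (H/m)

def wavenumber_parts(f, sigma, eps_r=1.0, mu_r=1.0):
    """Real part a and imaginary (attenuation) part b of the complex wavenumber."""
    omega = 2.0 * np.pi * f
    mu, eps = mu_r * MU0, eps_r * EPS0
    root = np.sqrt(1.0 + (sigma / (omega * eps)) ** 2)
    a = omega * np.sqrt(0.5 * mu * eps * (root + 1.0))
    b = omega * np.sqrt(0.5 * mu * eps * (root - 1.0))
    return a, b

def quasi_static_skin_depth(f, sigma, mu_r=1.0):
    """Skin depth of the quasi-stationary approximation."""
    omega = 2.0 * np.pi * f
    return np.sqrt(2.0 / (omega * mu_r * MU0 * sigma))

f, sigma = 10_000.0, 0.1   # 10 kHz over a 0.1 S/m medium (sigma >> omega*eps)
a, b = wavenumber_parts(f, sigma)
print(f"a = {a:.5f} 1/m, b = {b:.5f} 1/m, "
      f"1/delta = {1.0 / quasi_static_skin_depth(f, sigma):.5f} 1/m")
```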
## Appendix B Step-by-step electromagnetic induction
### Step 1
Let us first consider two nearby coils in free space (or in free air), as in Figure B1a. Suppose that coil Tx (Transmitter) is connected to an external alternating voltage source, while coil Rx (Receiver) is connected to a voltmeter to read voltages in it (Figure B1b). Let \(I_{P}=I_{0}\cdot e^{i\omega t}\) be the sinusoidal alternating current driven in the primary coil by the external voltage source. According to Ampere-Maxwell's law, this current produces a time-varying magnetic field intensity, \(\mathbf{H}_{p}\), or magnetic flux density, \(\mathbf{B}_{p}=\mu_{0}\mathbf{H}_{p}\), around the loop, which alternates with the same frequency and phase as the current (Figure B1c). Both magnitude and direction of this field vary with position in a complex way around the coil, but its magnitude is always proportional to the current flowing in the coil: \(|\mathbf{B}_{p}|\propto I_{P}\).
The time-varying magnetic field generates a changing magnetic flux through coil Rx, \(\Phi_{R}(\mathbf{B}_{p})\). Therefore, the magnetic field interacts with coil Rx to produce an electromotive force, according to Faraday's law:
\[\mathcal{E}=-\frac{\partial\Phi_{R}(\mathbf{B}_{p})}{\partial t}. \tag{B1}\]
Since the magnetic field is proportional to the current \(I_{P}\) and the magnetic flux, by definition, is proportional to the magnetic field, the magnetic flux through coil Rx is proportional to the current flowing in coil Tx; that is:
\[\Phi_{R}(\mathbf{B}_{p})=M_{TR}I_{P} \tag{B2}\]
where \(M_{TR}\) is the mutual inductance, which is defined as the magnetic flux that passes through coil Rx due to a unit electric current circulating in coil Tx. The mutual inductance \(M_{TR}\) depends on the geometry of the coils, their relative orientation and distance, and on the magnetic permeability of free space \(\mu_{0}\) (\(4\pi\cdot 10^{-7}\) H/m). Combining equations (B1) and (B2), the voltage sensed by coil Rx is
\[\mathcal{E}_{TR}=-\frac{\partial\Phi_{R}(\mathbf{B}_{P})}{\partial t}=-M_{TR}\frac{\partial I_{P}}{\partial t}=-i\omega M_{TR}I_{P}.\] (B3)
This voltage is usually employed to measure the primary magnetic field at the receiver.
### Step 2
Now, let us consider again the two coils Tx and Rx in free air but above a half-space containing a conductive magnetic body with electrical conductivity \(\sigma\) and magnetic permeability \(\mu\) (Figure B2a).
For a bulk material (the conductive magnetic body) there is not a loop per se, but many short-circuited loops. However, Faraday's law is general and it does not require the existence of a physical loop. Faraday's law states that when the magnetic flux through a surface changes, a time-varying electric field is induced along the boundary of that surface. This is true for any closed loop, either in empty space or in a physical material, through which the magnetic flux is changing over time. Thus, assuming S as one of these loops inside the body (Figure B2a), the standard integral form of Faraday's law reads
\[\oint_{S}\mathbf{E}_{S}\cdot\mathbf{d}\mathbf{l}=-\frac{\partial\phi(\mathbf{B}_{P})}{ \partial t},\] (B4)
where \(\mathbf{E}_{S}\) is the electric field at every point of such a loop and \(\mathbf{d}\mathbf{l}\) is an oriented displacement along the loop. The induced electromotive force \(\mathcal{E}_{TS}\) is related to \(\mathbf{E}_{S}\) by
\[\mathcal{E}_{TS}=\oint_{S}\mathbf{E}_{S}\cdot\mathbf{d}\mathbf{l}\] (B5)
Therefore, as in the case of coil Rx (Eq. B3), introducing the mutual inductance \(M_{TS}\), the electromotive force induced in the loop S can be expressed in terms of the primary current \(I_{P}\) by
\[\mathcal{E}_{TS}=-\frac{\partial\phi(\mathbf{B}_{P})}{\partial t}=-M_{TS}\frac{\partial I_{P}}{\partial t}=-i\omega M_{TS}I_{P}.\] (B6)
This electromotive force alternates with the same frequency as the primary current but lags behind the current (or the primary magnetic field) by \(90^{\circ}\), as shown in Figure B2c. The mutual inductance \(M_{TS}\) depends on the geometry of coils Tx and S, on their relative orientation and distance, and on the magnetic permeability \(\mu\) of the core material in loop S.
### Step 3
The alternating voltage induced in the conductive body by the time-varying primary magnetic field causes alternating currents to flow in the bulk material as they do through wires. These are the eddy currents that flow along closed loops concentrated near the boundary surface of the body (skin effect) and in planes perpendicular to the magnetic field causing them. Let S be one of these closed loops (Figure B3a). Figure B3b shows its equivalent single-loop circuit with lumped resistance \(R\) and inductance \(L\). Let \(\mathcal{E}_{TS}(t)\) be the alternating voltage source that establishes the alternating current, \(I_{eddy}\).
Applying Kirchhoff's voltage rule, the circuit equation reads
\[\mathcal{E}_{TS}-RI_{eddy}-L\frac{dI_{eddy}}{dt}=0, \tag{B7}\]
which, for the present time-harmonic case, yields
\[\mathcal{E}_{TS}=(R+i\omega L)I_{eddy}. \tag{B8}\]
The complex quantity in the brackets is the impedance of the \(RL\) circuit, whose amplitude is given by
\[|Z|=\sqrt{R^{2}+\omega^{2}L^{2}}, \tag{B9}\]
and whose phase is
\[\alpha=\arctan\left(\frac{\omega L}{R}\right). \tag{B10}\]
Therefore, letting \(\mathcal{E}_{TS}(t)=\mathcal{E}_{0}\cdot e^{i\left(\omega t-\frac{\pi}{2}\right)}\), the sinusoidal alternating current circulating in the circuit is
\[I_{eddy}(t)=\frac{\mathcal{E}_{0}}{\sqrt{R^{2}+\omega^{2}L^{2}}}\cdot e^{i\left(\omega t-\frac{\pi}{2}-\alpha\right)}, \tag{B11}\]
which lags the voltage by \(\alpha\) radians (Figure B3c) and the primary magnetic field (or primary current) by \(\alpha+\frac{\pi}{2}\) radians (Figure B3d).
The phase shift \(\alpha\) depends only on the response parameter \(\beta=\omega\frac{L}{R}\), also known as the dimensionless induction number. When \(\beta\to 0\) or, equivalently, \(R\to\infty\), the circuit becomes purely resistive as the amplitude and phase of the impedance become \(|Z|=R\) and \(\alpha=0\), respectively. In this case, the current circulating in the circuit is in-phase with the induced voltage \(\mathcal{E}_{TS}(t)\) and is given by
\[I_{eddy}(t)=\frac{\mathcal{E}_{0}}{R}\cdot e^{i\left(\omega t-\frac{\pi}{2}\right)}. \tag{B12}\]
When \(\beta\rightarrow\infty\) or equivalently \(R\to 0\), the circuit becomes purely inductive as the amplitude impedance takes the value \(|Z|=\omega L\) and the phase approaches \(\frac{\pi}{2}\) radians:
\[\alpha=\lim_{R\to 0}\left[\arctan\left(\frac{\omega L}{R}\right)\right]=\frac{\pi}{2}. \tag{B13}\]
In this case, thus, the current circulating in the circuit is in quadrature with the induced voltage \(\mathcal{E}_{TS}(t)\), lags the primary current by \(\pi\)radians, and is given by
\[I_{eddy}(t)=\frac{\mathcal{E}_{0}}{\omega L}\cdot e^{i\left(\omega t-\pi\right)}. \tag{B14}\]
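A few lines of Python make the two limiting regimes discussed above explicit: the phase lag of the eddy current behind the induced voltage is \(\alpha=\arctan\beta\), with \(\beta=\omega L/R\), and the total lag behind the primary field is \(\alpha+\pi/2\). The \(\beta\) values below are arbitrary examples.

```python
import numpy as np

def eddy_current_phase(beta):
    """Phase lag (radians) of the eddy current behind the induced emf,
    alpha = arctan(omega * L / R) = arctan(beta)."""
    return np.arctan(beta)

for beta in [1e-3, 0.1, 1.0, 10.0, 1e3]:
    alpha = eddy_current_phase(beta)
    total_lag = alpha + np.pi / 2.0   # lag behind the primary field/current
    print(f"beta = {beta:8.3f}: alpha = {np.degrees(alpha):6.2f} deg, "
          f"lag behind primary = {np.degrees(total_lag):6.2f} deg")
```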
### Step 4
Eddy currents induced in the body generate a time-varying magnetic field around the body (Figure B4a) according to Ampere-Maxwell's law. This field, called the secondary magnetic field, generates in turn a secondary voltage in coil Rx, according to Faraday's law:
\[\mathcal{E}_{SR}=-i\omega M_{SR}I_{eddy}=-i\omega M_{SR}\frac{\mathcal{E}_{TS}}{R+i\omega L}=-\frac{\omega^{2}M_{TS}M_{SR}}{R+i\omega L}\cdot I_{P}. \tag{B15}\]
The receiver, then, simultaneously senses both primary and secondary magnetic fields, measuring both primary and secondary electromotive forces. In particular, the receiver records the whole electromagnetic response of the buried loop as the ratio of the secondary to the primary magnetic fields, which is equal to the ratio of the secondary to the primary voltages:
\[\frac{\epsilon_{S}}{\epsilon_{P}}=\frac{|\mathbf{B}_{S}|}{|\mathbf{B}_{P}|}=-\frac{M_{TS}\cdot M_{SR}}{M_{TR}\cdot L}\cdot\frac{i\beta}{1+i\beta}=\kappa\cdot\frac{i\beta}{1+i\beta}=\kappa\left(\frac{\beta^{2}+i\beta}{1+\beta^{2}}\right). \tag{B16}\]
The first factor
\[\kappa=-\frac{M_{TS}\cdot M_{SR}}{M_{TR}\cdot L}, \tag{B17}\]
is the coupling coefficient. It depends only on relative size, shape, position, and orientation of the coils. The other factor, called response function, is a complex-valued function of \(\beta\), which depends on the frequency \(\omega\) and on the target's electromagnetic properties:
\[G(\beta)=\frac{i\beta}{1+i\beta}=\frac{\beta^{2}}{1+\beta^{2}}+i\frac{\beta}{1+\beta^{2}}. \tag{B18}\]
Therefore, the electromagnetic response of the measuring device to the buried body is given by
\[M=\frac{|\mathbf{B}_{S}|}{|\mathbf{B}_{P}|}=\kappa\cdot G(\beta), \tag{B19}\]
whose real and imaginary parts are
\[Re\,M=\kappa\cdot\frac{\beta^{2}}{1+\beta^{2}} \tag{B20}\]
and
\[Im\,M=\kappa\cdot\frac{\beta}{1+\beta^{2}}. \tag{B21}\]
The real part, which has the same phase as the primary magnetic field, is called the In-phase component, while the imaginary part, called the Quadrature component, is \(90^{\circ}\) out-of-phase with the primary (Figure B4c).
The response function becomes purely real when \(\beta\rightarrow\infty\) (inductive limit), that is, when working at high frequency, or on a highly conductive (low \(R\)) or highly inductive target. Conversely, the response function becomes purely imaginary when \(\beta\to 0\) (resistive limit), which means working at low frequency, or on a poorly conductive target (high \(R\)). Figure B5 shows the graph of the real and imaginary parts of the response function \(G(\beta)\).
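The resistive and inductive limits of the response function can be checked directly by evaluating \(G(\beta)\) numerically; the short Python sketch below prints its in-phase and quadrature parts for a few arbitrary values of \(\beta\).

```python
import numpy as np

def response_function(beta):
    """Complex response function G(beta) = i*beta / (1 + i*beta)."""
    return 1j * beta / (1.0 + 1j * beta)

for beta in [1e-3, 1e-1, 1.0, 1e1, 1e3]:
    g = response_function(beta)
    print(f"beta = {beta:8.3f}: in-phase = {g.real:.4f}, quadrature = {g.imag:.4f}")
# Resistive limit (beta -> 0): G is almost purely imaginary (quadrature only).
# Inductive limit (beta -> inf): G -> 1, i.e. purely real (in-phase only).
```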
|
2304.03093
|
Inductive Graph Unlearning
|
As a way to implement the "right to be forgotten" in machine learning,
\textit{machine unlearning} aims to completely remove the contributions and
information of the samples to be deleted from a trained model without affecting
the contributions of other samples. Recently, many frameworks for machine
unlearning have been proposed, and most of them focus on image and text data.
To extend machine unlearning to graph data, \textit{GraphEraser} has been
proposed. However, a critical issue is that \textit{GraphEraser} is
specifically designed for the transductive graph setting, where the graph is
static and attributes and edges of test nodes are visible during training. It
is unsuitable for the inductive setting, where the graph could be dynamic and
the test graph information is invisible in advance. Such inductive capability
is essential for production machine learning systems with evolving graphs like
social media and transaction networks. To fill this gap, we propose the
\underline{{\bf G}}\underline{{\bf U}}ided \underline{{\bf I}}n\underline{{\bf
D}}uctiv\underline{{\bf E}} Graph Unlearning framework (GUIDE). GUIDE consists
of three components: guided graph partitioning with fairness and balance,
efficient subgraph repair, and similarity-based aggregation. Empirically, we
evaluate our method on several inductive benchmarks and evolving transaction
graphs. Generally speaking, GUIDE can be efficiently implemented on the
inductive graph learning tasks for its low graph partition cost, no matter on
computation or structure information. The code will be available here:
https://github.com/Happy2Git/GUIDE.
|
Cheng-Long Wang, Mengdi Huai, Di Wang
|
2023-04-06T14:21:48Z
|
http://arxiv.org/abs/2304.03093v2
|
# Inductive Graph Unlearning
###### Abstract
As a way to implement the "right to be forgotten" in machine learning, _machine unlearning_ aims to completely remove the contributions and information of the samples to be deleted from a trained model without affecting the contributions of other samples. Recently, many frameworks for machine unlearning have been proposed, and most of them focus on image and text data. To extend machine unlearning to graph data, _GraphEraser_ has been proposed. However, a critical issue is that _GraphEraser_ is specifically designed for the transductive graph setting, where the graph is static and attributes and edges of test nodes are visible during training. It is unsuitable for the inductive setting, where the graph could be dynamic and the test graph information is invisible in advance. Such inductive capability is essential for production machine learning systems with evolving graphs like social media and transaction networks. To fill this gap, we propose the **GU**ided **I**n**D**uctiv**E** Graph Unlearning framework (GUIDE). GUIDE consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repair, and similarity-based aggregation. Empirically, we evaluate our method on several inductive benchmarks and evolving transaction graphs. Generally speaking, GUIDE can be efficiently implemented on inductive graph learning tasks thanks to its low graph partition cost, in terms of both computation and structural information. The code will be available here: [https://github.com/Happy2Git/GUIDE](https://github.com/Happy2Git/GUIDE).
## 1 Introduction
In various complex real-world applications, we often encounter cases where the data is represented as graphs, such as medical diagnosis [29], social media [45], advertising industry [67], and financial industry [54]. The interactions between neighboring nodes make it promising to learn rich information from graph data. After showing great promise in effectively solving graph-based machine learning tasks such as node classification, link prediction, and graph classification, Graph Neural Networks (GNNs) with their large number of variants [63, 65, 26, 65] have received much attention from the machine learning community. Despite their success, recent deployments of GNNs simultaneously raise privacy concerns when the input graphs contain sensitive information of personal data, such as social networks and biomedical data. Recently, the "right to be forgotten" has been proposed in many regulations to protect users' personal information, such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) [43, 44, 46, 47]. Broadly speaking, the "right to be forgotten" provides individuals the right to request the deletion of their personal information and the right to opt out of the sale of their personal information.
As a de facto way to implement the "right to be forgotten" in machine learning, machine unlearning allows the model owner to completely remove the trace of the samples to be deleted from a trained model without affecting the contributions of other samples, while its unlearning process requires significantly lower computational cost than retraining from scratch. In recent years, a long list of work on machine unlearning has been proposed, and these methods can be categorized into two classes: model-agnostic unlearning [6, 24, 64, 24] and model-intrinsic unlearning [8, 18, 19, 28, 50, 56]. As one of the most well-known model-agnostic methods, SISA [6] uses data partitioning mechanisms to achieve efficient unlearning without full retraining. Specifically, it first divides the dataset into multiple isolated shards and trains a submodel for each shard. Then it aggregates the predictions of all submodels to obtain the final prediction. Such submodels can limit the influence of each data sample throughout the training process. When there is a data removal request, the model owner only needs to partially retrain the submodel corresponding to the data to be removed. Compared to SISA, many unlearning methods, instead of retraining submodels, aim to obtain a shifted model that satisfies some unlearning criteria by modifying the weights of the existing trained model [56]. While these unlearning methods have computational advantages, they are not as transparent as SISA.
Although there are numerous studies on machine unlearning, most are only tailored for image or text data, and unlearning methods for graph data, i.e., graph unlearning, are still lacking. Due to the additional node dependency in graph data, existing unlearning methods cannot be directly applied, which indicates that graph unlearning is more challenging. Based on the SISA framework, [13] proposes the first graph unlearning method, GraphEraser, for graph neural network (GNN) models. Compared with the random partitioning in SISA, GraphEraser provides two balanced partition methods to preserve the additional structural information in graph data. Then, it applies a learning-based aggregation method to obtain the importance scores of submodels. Later, [16] proposes a Certified Graph Unlearning (CGU) method based on the Simplifying Graph Convolutional Network (SGC) [63], which is a linear GCN model. Unfortunately, such a model-specific method is inapplicable to general GNN models.
However, as we will show later, GraphEraser and CGU are inherently designed for the _transductive_ graph setting, where the attributes (but not labels) and edges of test nodes are visible during training. They are not designed for the _inductive_ graph setting (where test nodes and edges are invisible during training), which is ubiquitous in high-throughput production machine learning systems, as pointed out by [26]. For example, the evolving graphs in a transaction system constantly encounter unseen data samples every day. Thus, the associated fraud detection models should be able to generalize to the newly generated graphs efficiently. Besides transaction systems, such inductive capabilities are also crucial for GNN models in social media, advertising, etc.
For GraphEraser, the time cost of graph partitioning is exceedingly high, so it is not suitable to implement this framework for the evolving graph or multi-graph cases in the inductive setting. Graph unlearning requires that each shard retains a small piece of the training graph to train a submodel. However, the loss of visibility of test nodes and their connections makes submodel training more difficult in the inductive setting. For example, it is easy to learn a weak submodel due to the unfair label composition in each shard, as shown in Figure 1. Note that the _fairness_ here refers to group fairness, which ensures some form of statistical parity for members of different protected groups (e.g., gender or race) [3], i.e., the label distribution in each shard remains the same statistic as in the entire training graph. And we use _balance_ in the following discussions to represent that the subgraph of each shard has the same size (number of nodes). In addition, GraphEraser aggregates the predictions of submodels on the test nodes by learning important scores for all shards. Once one shard is updated, all other shards need to retrain their important scores, which brings more computational cost and privacy risk.
Thus, we can conclude that the main challenge in model-agnostic inductive graph unlearning is to preserve as much structural information of the original graph as possible while satisfying both fairness and balance constraints in graph partitioning efficiently. This is based on the insights that more structural information leads to higher model performance, a balanced partition ensures that the expected unlearning time cost is small when facing small batch unlearning, and a fair partition would lead to a more robust learning process.
**Our contributions:** Motivated by our above findings, in this paper we propose the first inductive graph unlearning framework, called GUided InDuctivE Graph Unlearning (GUIDE).
Figure 1: Behavior of GraphEraser in various settings. The colors represent the ground truth labels of the corresponding nodes. In the transductive setting, the features of the test nodes and their connections to other nodes are visible during the training process (in the same static graph with training nodes). The red-shaded subgraphs indicate the test nodes (surrounded by black circles) whose labels are unknown to the model owner in advance. In the inductive setting, training graphs can evolve over time or change incrementally. The test graphs are also completely invisible, resulting in limited information available for training each shard (with a small subgraph) without access to the test nodes and their edges.
Briefly, GUIDE consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repairing, and similarity-based aggregation. Specifically, in guided graph partitioning, we propose two novel graph partitioning methods: GPFB-Fast and GPFB-SR, to obtain a graph partition that efficiently satisfies both fairness and balance constraints. According to our experimental results, the proposed methods are superior to GraphEraser with \(\sim 3\times\) balance and fairness scores. GPFB-Fast achieves \(\sim 10\times\) speedup on graph partitioning. To the best of our knowledge, this is also the first study on graph partitioning with fairness and balance constraints.
Due to graph partitioning, a lot of edges would be lost, destroying the structure of the original graph. Therefore, to restore this missing information as much as possible, we propose subgraph repair methods as the second component of GUIDE. Through our methods, missing neighbors and their connections with the corresponding nodes could be efficiently generated and added to these subgraphs to repair their structure. Notably, for each shard, our repairing procedures do not involve the information of other shards. After receiving node removal requests, the corresponding repaired subgraphs can be efficiently updated by deleting the corresponding nodes and edges.
As mentioned above, the learning-based aggregation method LBAggr proposed by [13] requires access to the entire training graph when updating the importance scores of the corresponding shards. To speed up the training process, LBAggr is trained on a constructed public subset of the training graph. However, once a shard is updated, all importance scores of other shards need to be updated as well, which introduces additional computational cost. We develop a novel similarity-based aggregation method as our third component to address these issues. Unlike previous methods, our method can compute the importance score of each shard independently, and the normalized similarity score between the partitioned subgraph and the test graph can be directly used as the corresponding importance score. Such independent updating will be more efficient than GraphEraser when the unlearning batch size is small.
We perform extensive experiments to demonstrate the performance of GUIDE in the inductive setting. GUIDE achieves superior performance (\(\sim 3\times\)) compared to the existing state-of-the-art methods on popular node classification benchmarks and the fraud detection task on a real bitcoin dataset. We also introduce two metrics to evaluate the graph partitioning results: balance score and fairness score. Specifically, experimental results show that GUIDE has lower time cost than _GraphEraser_ while achieving higher fairness and balance scores in graph partitioning. In addition, we perform extensive ablation studies to demonstrate the utility of other components of GUIDE. Ablation studies show that our proposed subgraph repair methods can significantly improve the performance of GNN models trained on subgraphs. Furthermore, similarity-based aggregation can achieve comparable results to learning-based aggregation.
## 2 Preliminaries
### Graph Neural Networks
Given an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{E}\) is the set of edges, a basic graph neural network (GNN) model attempts to learn a node representation for downstream tasks from the graph structure and any feature information we have. To train a GNN model, we always use the message passing framework. During each iteration, the GNN model updates the node embedding for each node \(u\in\mathcal{V}\) by aggregating the information from \(u\)'s neighbors \(\mathcal{N}(u)\). The \(k\)-th update process can be formulated as follows [25]:
\[h_{u}^{(k+1)} =\text{UPDATE}^{(k)}\left(h_{u}^{(k)},\text{AGGR}^{(k)}(h_{v}^{( k)},\forall v\in\mathcal{N}(u))\right)\] \[=\text{UPDATE}^{(k)}\left(h_{u}^{(k)},m_{\mathcal{N}(u)}^{(k)} \right),\]
where UPDATE and AGGR are some differentiable functions and \(m_{\mathcal{N}(u)}^{(k)}\) is the aggregated'message' from the neighbors of \(u\). After \(K\) iterations of message passing, we can obtain the final embedding for each node. These node embeddings can be used for node classification, graph classification, and relation prediction tasks.
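As a concrete, minimal illustration of one message-passing update, the Python sketch below uses mean aggregation for AGGR and a linear transformation followed by a ReLU for UPDATE; these specific choices, as well as the toy graph and weight matrices, are assumptions made only for the example and are not tied to any particular GNN variant discussed here.

```python
import numpy as np

def message_passing_step(H, adj, W_self, W_neigh):
    """One message-passing layer: mean-aggregate the embeddings of N(u),
    then apply a linear UPDATE followed by a ReLU.
    H: (n, d) node embeddings; adj: list of neighbor-index lists."""
    M = np.zeros_like(H)                      # aggregated messages m_N(u)
    for u, neighbors in enumerate(adj):
        if neighbors:
            M[u] = H[neighbors].mean(axis=0)  # AGGR: mean over N(u)
    return np.maximum(H @ W_self + M @ W_neigh, 0.0)  # UPDATE

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))                   # 4 nodes with 8-dim embeddings
adj = [[1, 2], [0], [0, 3], [2]]              # toy undirected graph
W_self = rng.normal(size=(8, 8))
W_neigh = rng.normal(size=(8, 8))
print(message_passing_step(H, adj, W_self, W_neigh).shape)  # (4, 8)
```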
**Transductive and Inductive Graph Learning.** There are two settings for node classification tasks: the transductive setting and the inductive setting. In the transductive setting, the training nodes and test nodes are in the same static graph. Test nodes and their associated edges are involved in GNN's message passing updates, even though they are unlabeled and not used in the loss computation. In contrast, all test nodes and their edges are completely unobservable during training in the inductive setting. Besides, the training graph can also evolve over time. Compared to the transductive setting, the inductive setting is more common in production machine learning systems that operate on evolving graphs and constantly encounter unseen nodes, such as the daily user-video graphs generated on Youtube [26].
### Transductive Graph Unlearning
**Machine Unlearning.** Machine unlearning aims to fully eliminate any influence of the data to be deleted from a trained machine learning (ML) model. To implement machine unlearning, the most natural approach is to directly delete all the revoked samples and retrain the ML model from scratch by using the original training data without deleted samples. While retraining from scratch is easy to implement, its computation cost will be prohibitively large when
both the model and the training data are large-scale. Later on, several methods have been proposed to reduce the computation overhead. See the related work section A in Appendix for details.
**Graph Unlearning.** Graph unlearning refers to machine unlearning for graph data, and in this paper we will focus on GNN learning models. Compared to the standard machine unlearning, there are additional challenges in graph unlearning, e.g. the node dependency in graph data makes most of the existing unlearning methods hard to be applied. To solve this problem, [13] proposes the first graph unlearning framework, GraphEraser.
**GraphEraser.** Consider an undirected graph \(\mathcal{G}_{F}=(\mathcal{V}_{F},\mathcal{E}_{F})\) whose node set \(\mathcal{V}_{F}\) consists of a training set \(\mathcal{V}\) and a test set \(\mathcal{V}_{T}\) (without labels). GraphEraser consists of three phases: (1) balanced graph partition; (2) shard model training; (3) shard model aggregation. Specifically, in step (1), GraphEraser designs two balanced graph partition algorithms (BLPA and BEKM) to get a partition of the training set \(\mathcal{V}\). Different from vanilla methods such as community detection, which tend to output imbalanced partitions, BLPA heuristically assigns nodes with connections to the same group, in a manner similar to Lloyd's algorithm for K-Means clustering, until the size of the corresponding group reaches some threshold. BEKM applies a similar method to the embeddings of graph data to achieve better performance. The balanced partition methods avoid the case in which an imbalanced partition contains large shards whose unlearning process is highly inefficient. Suppose the subgraph held by the \(i\)-th shard is \(\{\mathcal{V}_{i}\cup\mathcal{V}_{T},\mathcal{E}_{i\cup T}\}\), where \(\mathcal{E}_{i\cup T}\) is the edge set corresponding to \(\mathcal{V}_{i}\cup\mathcal{V}_{T}\). Then, in step (2), GraphEraser trains a GNN model for each shard in a transductive manner, where the unlabeled test nodes and their incident edges are visible to the GNN during training. Then these GNN models are tested on the same graph to predict the labels of transductive test nodes. Considering that these different shard models do not contribute uniformly to the final prediction, in step (3), GraphEraser applies a learning-based aggregation method (LBAggr) to optimize the importance scores of the shard models to improve the global model utility.
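The partition-train-aggregate-retrain workflow shared by SISA and GraphEraser can be summarized in a few lines of Python. The sketch below is deliberately simplified: it uses uniform aggregation instead of the learned importance scores of LBAggr, ignores how the shards are obtained (balanced graph partition in GraphEraser), and abstracts the submodel behind user-supplied `train_fn` and `predict_fn` callables, which are hypothetical placeholders.

```python
import numpy as np

class ShardedUnlearner:
    """SISA-style sketch: one submodel per shard; unlearning a sample only
    retrains the shard that contains it. `train_fn(samples) -> model` and
    `predict_fn(model, x) -> score vector` are user-supplied placeholders."""

    def __init__(self, shards, train_fn, predict_fn):
        self.shards = [list(s) for s in shards]
        self.train_fn, self.predict_fn = train_fn, predict_fn
        self.models = [train_fn(s) for s in self.shards]

    def predict(self, x):
        # Uniform aggregation of the shard models' scores (GraphEraser instead
        # learns per-shard importance scores with LBAggr).
        scores = np.mean([self.predict_fn(m, x) for m in self.models], axis=0)
        return int(np.argmax(scores))

    def unlearn(self, sample):
        for i, shard in enumerate(self.shards):
            if sample in shard:
                shard.remove(sample)
                self.models[i] = self.train_fn(shard)  # retrain this shard only
                return
```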
## 3 Inductive Graph Unlearning
### Problem Definition
**Notably, inductive training graphs are different from transductive training graphs.** Given an undirected graph \(\mathcal{G}_{F}=(\mathcal{V}_{F},\mathcal{E}_{F})\) whose node set \(\mathcal{V}_{F}\) consists of a training set \(\mathcal{V}\) and a test set \(\mathcal{V}_{T}\), the transductive training graph is \(\mathcal{G}_{F}=(\mathcal{V}_{F},\mathcal{E}_{F})\) except the labels of \(\mathcal{V}_{T}\), while the inductive training graph is \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{E}\) is the edge set corresponding to \(\mathcal{V}\). Thus, inductive graph unlearning refers to graph unlearning for inductive training graphs.
Similar to the transductive setting, we have three types of unlearning requests in the inductive setting: node unlearning, feature unlearning, and edge unlearning.
* For a node unlearning request on node \(u\in\mathcal{V}\), the service provider needs to retrain the GNN model on the new training graph \(\mathcal{G}_{u}=\mathcal{G}\backslash\{X_{u},e_{u,v}\,|\,\forall v\in\mathcal{N}_{u}\}\), i.e., both the feature vector of \(u\) and all of its incident edges are removed.
which implies that all importance scores should be updated for optimal values if one shard model is updated, i.e., they are quite inefficient. Therefore, we must design new aggregation methods that assign an importance score for each shard independently and do not rely on additional data.
## 4 GUIDE Framework
### Overview of GUIDE Framework
We propose the Guided Inductive Graph Unlearning (GUIDE) framework to achieve the previous objectives. Generally speaking, GUIDE consists of three components: guided graph partition with fairness and balance, efficient subgraph repairing, and similarity-based aggregation. Figure 2 illustrates the framework of GUIDE.
**Guided Graph Partition.** To satisfy (C1), we first formulate the problem of finding balanced and fair graph partitions as spectral clustering with linear constraints, which is a quadratic programming problem with binary variables. To solve it efficiently, we relax the constraints and propose the method of GPFB-Fast to solve the relaxed problem. We then present an improved programming problem via spectral rotation and propose GPFB-SR to solve it. To the best of our knowledge, this is the first study on graph partition under fairness and balance constraints.
**Efficient Subgraph Repairing.** Such a method aims to satisfy (C2). During the partition process, we retain the original degree information of each node independently (note that this step is independent of future changes of other shards). When the partition is completed, we generate missing neighbors for each node independently according to its features and its original degree information. Specifically, we design three strategies: Zero-Feature Neighbor, Mirror-Feature Neighbor, and MixUp Augmented Neighbor, to reduce the side effects of our graph partition.
After repairing all subgraphs, the model owner trains GNN models for all shards in isolation (and in parallel). The repaired nodes are involved in the GNN message-passing updates; however, the final-layer embeddings of those repaired nodes are not used in the loss computation.
**Similarity-based Aggregation.** We develop a similarity-based aggregation method to assign an importance score for each shard independently. The importance score for a shard is calculated by the similarity between its associated subgraph and the test graph. Once a shard is updated, its importance score can be updated efficiently without affecting other shards.
In the following subsections, we will provide details of our three components.
### Guided Graph Partition with Fairness and Balance
In this part, we aim to obtain a partition that satisfies the balance and fairness constraints simultaneously, which is a challenging task. On the one hand, although some previous work [10, 13] has proposed heuristic K-Means clustering variants to achieve balanced graph partitions, those algorithms are difficult to extend to partitions satisfying both constraints. On the other hand, existing work on fair clustering does not satisfy the population balance constraint either [1, 17, 33]. For further background on graph-related clustering, see Appendix B.
Before presenting our method, we first show how to incorporate these two constraints into the graph partition problem. Given a graph dataset \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with all node labels, we suppose \(|\mathcal{V}|=n\) and \(\mathcal{V}=\dot{\cup}_{s\in[h]}\mathcal{C}_{s}\), where \(\mathcal{C}_{s}\) denotes the node set with label \(s\) (and there are \(h\) classes). The ratio of label \(s\) in the whole dataset is therefore \(|\mathcal{C}_{s}|/n\). Motivated by [33], we first construct a **label-membership indicator matrix** \(\mathbf{F}\in\mathbb{R}^{n\times h}\), where \(F_{i,s}=1\) if the label of node \(i\) is \(s\) and \(F_{i,s}=0\) otherwise. Thus, the sum of the entries in the \(s\)-th column of \(\mathbf{F}\) is \(|\mathcal{C}_{s}|\), the number of nodes with label \(s\). For a given partition \(\mathcal{V}=\dot{\cup}_{i\in[v]}\mathcal{V}_{i}\), the number of nodes with label \(s\) in the \(i\)-th shard is \(|\mathcal{C}_{s}\cap\mathcal{V}_{i}|\) and its ratio within the \(i\)-th shard is \(|\mathcal{C}_{s}\cap\mathcal{V}_{i}|/|\mathcal{V}_{i}|\).
Figure 2: Guided Inductive Graph Unlearning (GUIDE) Framework
In the most balanced case, the sizes of all shards are the same and the size of the \(i\)-th shard would be \(|\mathcal{V}_{i}|^{*}=n/v\). In the fairest case, the ratio of label \(s\) in each shard should be the same as its ratio in the entire dataset, i.e., \((|\mathcal{C}_{s}\cap\mathcal{V}_{i}|/|\mathcal{V}_{i}|)^{*}=|\mathcal{C}_{s}|/n\). Then, when the graph partition satisfies the fairness and balance constraints at the same time, the number of nodes with label \(s\) in the \(i\)-th shard should be \(|\mathcal{C}_{s}\cap\mathcal{V}_{i}|^{*}=\frac{|\mathcal{C}_{s}|\,|\mathcal{V}_{i}|^{*}}{n}=\frac{|\mathcal{C}_{s}|}{v}\).1
Footnote 1: For simplicity, here we assume \(\frac{|\mathcal{C}_{i}|}{v}\) is an integer. It is easy to extend to general cases.
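As a concrete illustration, the sketch below builds the label-membership matrix \(\mathbf{F}\) and the guided matrices from an array of node labels. The function name is an assumption, and the \(\sqrt{nv}\) normalization of \(\widetilde{\mathbf{M}}\) follows the derivation used for Theorem 2 below.

```python
import numpy as np

def build_guidance_matrices(node_labels, num_shards):
    """Build the label-membership matrix F (n x h), the fairness-and-balance
    guided matrix M (h x v) with M[s, j] = |C_s| / v, and its normalized
    version M_tilde (h x v) with M_tilde[s, j] = |C_s| / sqrt(n * v)."""
    node_labels = np.asarray(node_labels)
    classes = np.unique(node_labels)
    n, v = len(node_labels), num_shards
    F = np.zeros((n, len(classes)))
    F[np.arange(n), np.searchsorted(classes, node_labels)] = 1.0
    class_counts = F.sum(axis=0)                      # |C_s| for each class s
    M = np.repeat(class_counts[:, None] / v, v, axis=1)
    M_tilde = np.repeat(class_counts[:, None] / np.sqrt(n * v), v, axis=1)
    return F, M, M_tilde
```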
The following Theorem illustrates how we transform the fairness constraint and balance constraints to a linear constraint on the group-membership indicator matrix \(\mathbf{Y}\in\mathbb{R}^{n\times v}\) (see Section B in Appendix for the definitions of group-membership indicator matrix \(\mathbf{Y}\) and its normalized version \(\mathbf{H}\) for a partition).
**Theorem 1** (Transformation of Fairness and Balance Constraints on Indicator Matrix \(\mathbf{Y}\)).: _Based on the previous notations, denote the fairness and balance guided matrix by \(\mathbf{M}\in\mathbb{R}^{h\times v}\), i.e., \(M_{s,j}=\frac{|\mathcal{C}_{s}|}{v}\) denotes the optimal size of label \(s\) in the \(j\)-th shard. For a partition \(\mathcal{V}=\cup_{i\in[v]}\mathcal{V}_{i}\), it is fair and balanced if and only if \(\mathbf{F}^{\intercal}\mathbf{Y}=\mathbf{M}\), where \(\mathbf{Y}\in\{0,1\}^{n\times v}\) is the group-membership indicator matrix of the partition that has the form in (8)._
Based on Theorem 1, it is sufficient to find a group-membership indicator matrix \(\mathbf{Y}\) such that \(\mathbf{F}^{\intercal}\mathbf{Y}=\mathbf{M}\). To incorporate this constraint into the spectral clustering problem (9), we can leverage spectral rotation theory by supposing there is an orthogonal matrix \(\mathbf{R}\in\mathbb{R}^{v\times v}\) such that \(\mathbf{HR}=\mathbf{Y}\) (as illustrated in problem (10)). Overall, Theorem 1 implies that solving the spectral clustering problem with fairness and balance constraints is equivalent to solving
\[\begin{split}\min_{\mathbf{Y}\in\mathcal{T},\mathbf{H},\mathbf{R} }& Tr(\mathbf{H}^{\intercal}\mathbf{L}\mathbf{H})\\ s.t.&\mathbf{H}^{\intercal}\mathbf{H}=\mathbf{I}, \mathbf{F}^{\intercal}\mathbf{Y}=\mathbf{M},\mathbf{HR}=\mathbf{Y},\mathbf{R} ^{\intercal}\mathbf{R}=\mathbf{I}.\end{split} \tag{1}\]
However, problem (1) is a binary quadratic integer programming, which is hard to solve with low computation cost. By introducing a new balanced and fair guided matrix, we design a new linear constraint on the embedding matrix \(\mathbf{H}\) in (8) rather than \(\mathbf{Y}\).
**Theorem 2** (Transformation of Fairness and Balance Constraints on Embedding Matrix \(\mathbf{H}\)).: _Denote the normalized balanced and fair guided matrix by \(\widetilde{\mathbf{M}}\in\mathbb{R}^{h\times v}\), i.e., \(\widetilde{\mathbf{M}}_{s,j}=\frac{|\mathcal{C}_{s}|}{\sqrt{nv}}\)._
_For a partition \(\mathcal{V}=\cup_{i\in[v]}\mathcal{V}_{i}\), it is fair and balanced if and only if \(\mathbf{F}^{\intercal}\mathbf{H}=\widetilde{\mathbf{M}}\), where \(\mathbf{H}\) is the normalized group-membership indicator matrix of the partition which has the form in (8).2_
Footnote 2: The omitted proof of Theorem 1 and 2 are provided in Appendix D.
Therefore, the optimization problem of finding a graph partition that satisfies the fairness and balance constraints based on RatioCut is
\[\min_{\mathbf{H}} Tr(\mathbf{H}^{\intercal}\mathbf{L}\mathbf{H}) s.t. \qquad\mathbf{H}\in\mathcal{H},\mathbf{F}^{\intercal}\mathbf{H}=\widetilde{ \mathbf{M}}, \tag{2}\]
where \(\mathcal{H}\) is the set of all normalized group-membership indicator matrices. Similar to the standard spectral clustering, we can relax it to
\[\min_{\mathbf{H}} Tr(\mathbf{H}^{\intercal}\mathbf{L}\mathbf{H}), s.t. \qquad\mathbf{H}^{\intercal}\mathbf{H}=\mathbf{I},\mathbf{F}^{\intercal} \mathbf{H}=\widetilde{\mathbf{M}}. \tag{3}\]
Problem (3) is equivalent to the following problem for a large enough \(\alpha\):
\[\min_{\mathbf{H}} Tr(\mathbf{H}^{\intercal}\mathbf{L}\mathbf{H})+\alpha\| \mathbf{F}^{\intercal}\mathbf{H}-\widetilde{\mathbf{M}}\|_{2}^{2} s.t.\qquad\mathbf{H}^{\intercal}\mathbf{H}=\mathbf{I}. \tag{4}\]
Problem (4) can be further written as a quadratic problem over the Stiefel manifold, which can be solved efficiently by the generalized power iteration method [41], i.e.,
\[\begin{split}\max_{H} Tr(\mathbf{H}^{\intercal}(\mathbf{W}-\mathbf{D}-\alpha \mathbf{F}\mathbf{F}^{\intercal})\mathbf{H}+2\alpha\mathbf{H}^{\intercal} \mathbf{F}\widetilde{\mathbf{M}})\\ s.t. \mathbf{H}^{\intercal}\mathbf{H}=\mathbf{I}\end{split}. \tag{5}\]
After we solve problem (5) and get the optimal solution \(\mathbf{H}^{*}\), we can apply any K-Means clustering algorithm to its rows to get the final partition of the graph. The optimization method for problem (5), Graph Partition with Fairness and Balance (Fast), is summarized into Algorithm 1 in Appendix C.
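The following is a minimal NumPy sketch of this pipeline: the quadratic term of problem (5) is assembled from the adjacency matrix, the Stiefel-constrained maximization is approached with a generalized-power-iteration-style SVD update, and K-Means is applied to the rows of the solution. The diagonal shift used to make the quadratic term positive definite, the fixed iteration budget, and the function name are illustrative assumptions; Algorithm 1 in Appendix C is the authoritative description.

```python
import numpy as np
from sklearn.cluster import KMeans

def gpfb_fast(W_adj, F, M_tilde, alpha=0.01, num_shards=20, iters=100, seed=0):
    """Sketch of GPFB-Fast: solve problem (5), then cluster the rows of H*."""
    n = W_adj.shape[0]
    D = np.diag(W_adj.sum(axis=1))
    A = W_adj - D - alpha * (F @ F.T)            # quadratic term of problem (5)
    B = alpha * (F @ M_tilde)                    # linear term, shape (n, v)
    # Shift A so the quadratic term is positive definite (power-iteration style).
    A_tilde = A + (abs(np.linalg.eigvalsh(A).min()) + 1e-6) * np.eye(n)
    rng = np.random.default_rng(seed)
    H, _ = np.linalg.qr(rng.standard_normal((n, num_shards)))  # orthonormal init
    for _ in range(iters):
        U, _, Vt = np.linalg.svd(2 * A_tilde @ H + 2 * B, full_matrices=False)
        H = U @ Vt                               # project back onto the Stiefel manifold
    return KMeans(n_clusters=num_shards, n_init=10, random_state=seed).fit_predict(H)
```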
As pointed out in [27], the obtained relaxed continuous spectral solution could severely deviate from the optimal discrete solution. Motivated by [15, 60], we add a spectral rotation regularization term to learn better embedding and indicator matrices jointly. In total, we have the following problem.
\[\begin{split}\min_{\mathbf{H},\mathbf{R},\mathbf{Y}}&\;Tr(\mathbf{H}^{\intercal}\mathbf{L}\mathbf{H})+\alpha\|\mathbf{F}^{\intercal}\mathbf{H}-\widetilde{\mathbf{M}}\|_{2}^{2}+\\ &\;\beta\|\mathbf{H}\mathbf{R}-\mathbf{D}^{-\frac{1}{2}}\mathbf{Y}(\mathbf{Y}^{\intercal}\mathbf{D}\mathbf{Y})^{-\frac{1}{2}}\|_{2}^{2}\\ s.t.&\;\mathbf{H}^{\intercal}\mathbf{H}=\mathbf{I},\mathbf{R}^{\intercal}\mathbf{R}=\mathbf{I}\end{split} \tag{6}\]
It is notable that as compared with the above problem (5), we can get an indicator matrix directly without using K-Means clustering algorithms by solving problem (6). In Appendix D.1, we show how to solve problem (6) efficiently, and Algorithm 2 in Appendix C is our final method. When the objective function converges or satisfies certain convergence criteria, we can stop the iteration and get the final indicator matrix \(\mathbf{Y}\) satisfying fairness and balance constraints.
### Efficient Subgraph Repairing
Subgraph repair has been shown to be helpful in improving the performance of subgraph federated learning [71]. The missing neighbors to be repaired here refer specifically to the 1-hop neighbors of nodes. This is due to the fact that during each training iteration, each node aggregates information from its local (1-hop) neighbors, and as the iterations progress, each
node's embedding contains more and more information from further reaches of the graph [25]. But can these methods really be applied here?
**Federated Learning for Missing Neighbors?** In a subgraph federated learning system, nodes in each subgraph can potentially have connections with those in other subgraphs. To recover these cross-subgraph missing links, [71] proposes the FedSage+ method to generate the number of missing neighbors and the feature for each missing node. FedSage+ trains a local missing neighbor generation model NeighGen for each local client. The locally computed model gradients of the generative loss are transmitted among the system via the server. Unfortunately, such a federated subgraph repair method involves the training parameters of other clients, which is inapplicable in the setting of graph unlearning.
**Local Generator for Missing Neighbors?** What if we train a neighborhood-generation model on each subgraph? As mentioned by [71], the federated learning setting is crucial for the training of NeighGen, and this setting does not hold in graph unlearning. Besides, the additional time cost introduced by NeighGen is very high compared to the training time of the GNN models. Such complex generative models for subgraph repair cannot be applied in graph unlearning, considering that the primary purpose of graph unlearning is to reduce the retraining time cost.
From the above discussions, we know that an appropriate subgraph repairing method for graph unlearning should satisfy the following properties: (1) It aims to restore the 1-hop neighbors of each node; (2) The repairing procedure should be simple since a complex local generative model will make the unlearning algorithm have high training cost; (3) Due to the unlearning requirement, its repairing procedure for each subgraph cannot rely on other subgraphs' information.
Motivated by the fact that the insight behind many successful node classification approaches is to explicitly exploit _homophily_, we propose to repair the missing nodes based on their preserved neighbors before partitioning. Generally, homophily refers to the tendency of nodes to share attributes with their neighbors [25, 40]. For example, people tend to form friendships with others who share the same interests. For a preserved node \(i\), when homophily holds in its neighborhood, its missing neighbors should have characteristics similar to \(\mathbf{x}_{i}\). When the neighborhood is heterogeneous instead, homophily cannot be exploited, but we can still use a simple and effective strategy to repair the local structure. Specifically, we design the following three efficient subgraph repair strategies.
**Zero-Feature Neighbor.** In this approach, each missing neighbor's attribute of node \(i\) will be constructed by
\[\mathbf{\tilde{x}}=\mathbf{0}_{d\times 1},\]
where \(\mathbf{0}_{d\times 1}\) is a \(d\)-dimensional zero vector. As an extreme case where homophily does not exist, we construct \(\mathbf{\tilde{x}}\) without using any information from node \(i\)'s feature vector \(\mathbf{x}_{i}\). As we show in Appendix E, this strategy is sufficient to recover a basic structure of the computation graph.
**Mirror-Feature Neighbor.** Here each missing neighbor's attribute of node \(i\) is constructed by
\[\mathbf{\tilde{x}}=\mathbf{x}_{i}.\]
As another extreme case of homophily, we directly copy the feature vector of node \(i\) as its missing neighbor's feature. Its repaired computation graph is also shown in Appendix E.
**MixUp Augmented Neighbor.** For a node, the MixUp Augmented Neighbor approach assigns a randomly masked version of the node to its missing neighbors. In detail, for node \(i\), the attribute of each of its missing neighbors is constructed as follows.
\[\mathbf{\tilde{x}}=\lambda\mathbf{x}_{i}+(1-\lambda)\mathbf{0}_{d\times 1},\]
where \(\lambda\) is randomly sampled from the uniform distribution on \([0,1]\) each time to create diverse neighbors. The MixUp Augmented Neighbor strategy can be considered a trade-off between homophily and heterogeneity. Our strategy is similar in spirit to MixUp [70], which has been used as an efficient
Figure 3: Efficient subgraph repairing. The side effects of partition on the computation graph (message passing operation) of a 2-layer GNN are reduced by the proposed subgraph repairing strategies. The computation graph on a full dataset without partition is provided in Appendix E.
data augmentation routine. In short, MixUp extends the training distribution based on the observation that linear interpolations of feature vectors lead to linear interpolations of the associated labels. However, our idea differs from MixUp in that we fix the zero vector in the linear combination and consider only the feature vector, while MixUp requires both features and labels. Directly applying MixUp to repair the missing neighbors of node \(i\) requires that node \(i\) can provide enough information about the features of its existing neighbors, which is unrealistic after graph partitioning. It is notable that the labels of these newly constructed nodes will not be used during the training process of GNN models. Thus, here we do not need to care what their labels will be.
The effects of MixUp Augmented Neighbor on the computation graph are shown in Figure 3. Such simple methods can recover the structure of the computation graph of the GNN model to some extent and with low computation cost.
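A minimal sketch of the three strategies is given below. The function name and the degree-deficit bookkeeping are assumptions; in GUIDE, each generated neighbor is additionally connected to node \(i\), and its (unused) label does not enter the loss.

```python
import numpy as np

def repair_missing_neighbors(x_i, num_missing, strategy="mixup", rng=None):
    """Generate feature vectors for the missing 1-hop neighbors of a node.

    x_i: feature vector of the preserved node i (shape (d,)).
    num_missing: original degree of i minus its degree inside the shard.
    """
    rng = rng or np.random.default_rng()
    d = x_i.shape[0]
    repaired = []
    for _ in range(num_missing):
        if strategy == "zero":           # Zero-Feature Neighbor
            repaired.append(np.zeros(d))
        elif strategy == "mirror":       # Mirror-Feature Neighbor
            repaired.append(x_i.copy())
        elif strategy == "mixup":        # MixUp Augmented Neighbor
            lam = rng.uniform(0.0, 1.0)  # fresh lambda per neighbor for diversity
            repaired.append(lam * x_i)   # lam*x_i + (1-lam)*0 = lam*x_i
        else:
            raise ValueError(f"unknown strategy: {strategy}")
    return np.stack(repaired) if repaired else np.empty((0, d))
```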
### Similarity-based Aggregation
Our aggregation method is motivated by recent developments in the interpretability of GNNs. In particular, several GNN explanation studies [38, 48, 68] argue that the behavior of GNN models is strongly related to the structure of the training graph. [26] points out that for an inductive GNN model, generalization to unseen nodes requires "aligning" newly observed subgraphs to the node embeddings on which the algorithm has already been optimized. A new graph with more substructures similar to the training graph is expected to yield better inference results. Thus, during the inference stage we should assign higher importance scores to subgraphs that are more similar to the test graph. We can directly use graph kernels to measure such similarity. In this paper, we use the pyramid match graph kernel [42], a state-of-the-art algorithm for measuring the similarity between unlabeled graphs, to compute the similarity score between the test graph and each subgraph. 3 Motivated by the ideas above, we propose our similarity-based aggregation method.
Footnote 3: Note that any similarity measuring algorithm can be used here, depending on the settings of different tasks.
Specifically, in our method we first represent each graph as a set of vectors corresponding to the embeddings of its vertices in the eigenspace. To find an approximate correspondence between two sets of vectors, we then map these vectors onto multi-resolution histograms and compare these two histograms through a weighted histogram intersection measure [42]. Given the test graph \(G_{t}\) and the subgraph \(G_{i}\) of shard \(i\) (with depth \(L\)), denote \(H_{G_{t}}^{l}\) and \(H_{G_{i}}^{l}\) as the histograms of \(G_{t}\) and \(G_{i}\) at level \(l\), respectively. We then calculate the pyramid match kernel over these two histograms:

\[\begin{split} k(G_{t},G_{i})=& I(H_{G_{t}}^{L},H_{G_{i}}^{L})+\sum_{l=0}^{L-1}\frac{1}{2^{L-l}}(I(H_{G_{t}}^{l},H_{G_{i}}^{l})\\ &-I(H_{G_{t}}^{l+1},H_{G_{i}}^{l+1})),\end{split} \tag{7}\]

where \(I(H_{G_{t}}^{l},H_{G_{i}}^{l})\) is the number of nodes that match at level \(l\) in the two sets. We refer the readers to [42] for more details on this kernel. In practice, we can use the _grakel_ library [53]4 to implement the pyramid match graph kernel.
Footnote 4: [https://ysig.github.io/GraKeL/](https://ysig.github.io/GraKeL/)
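A minimal sketch of the similarity-based scores using grakel is shown below; the edge-list graph construction, the constructor arguments, and the normalization of the scores to sum to one are assumptions rather than GUIDE's exact implementation.

```python
from grakel import Graph
from grakel.kernels import PyramidMatch

def shard_importance_scores(shard_edge_lists, test_edge_list):
    """One importance score per shard: pyramid-match similarity between the
    shard's (repaired) subgraph and the test graph, normalized to sum to 1."""
    kernel = PyramidMatch(with_labels=False, normalize=True)  # unlabeled graphs
    kernel.fit([Graph(test_edge_list)])
    sims = kernel.transform([Graph(edges) for edges in shard_edge_lists]).ravel()
    return sims / sims.sum()
```

Once a shard is updated, only its own similarity to the test graph needs to be recomputed, which is what makes this aggregation independent across shards.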
### Discussions
**Choices of Different Components.** We recommend that service providers choose the appropriate partition and subgraph repair methods according to their needs. The choice between GPFB-Fast and GPFB-SR depends on the service provider's preference for graph partitioning: GPFB-SR yields a considerably fairer and more balanced partition, while GPFB-Fast is much faster. The choice of subgraph repair strategy depends on the GNN architecture we plan to use, as shown in Table 3: Zero-Feature Neighbor is more appropriate for the GraphSAGE model, Mirror-Feature Neighbor is more appropriate for the GIN model, and the MixUp Augmented Neighbor strategy is a general choice for all GNNs.
**Guarantee of Unlearning.** Each component of GUIDE follows the principle of minimizing the use of training graph information. The two proposed graph partitioning algorithms, GPFB-Fast and GPFB-SR, both require only the edge information of nodes with their IDs and labels. The feature information of the nodes is not involved in the graph partitioning step. The subgraph repair procedure uses only the degree information of the entire training graph and the corresponding feature information of each node. The similarity-based aggregation computes the importance score for each shard independently based on the similarity between its corresponding subgraph and the test graph during inference. After receiving an unlearning request, except for the graph partition, both its corresponding shard models and importance scores can be unlearned deterministically. Therefore, similar to SISA [6] and GraphEraser [13], GUIDE is an approximate unlearning approach. To prove the unlearning ability of GUIDE, we perform the membership inference attack on GUIDE in section 5.7 and show our results are close to random guessing. These results are consistent with the conclusion of existing work [6, 13, 14].
**Computation Complexity Analysis.** For GPFB-Fast, the time cost on initializing **B** is \(O(mwh)\). In each iteration, the time complexity of updating **P** is \(O(n^{2}v+mh)\), while the time complexity for computing **WH** and **FF\({}^{\intercal}\)H** is \(O(n^{2}v)\) and \(O(mwh)\) respectively. The complexity of calculating reduced SVD on **P** is \(O(n^{2}v)\). The computation cost of K-Means is \(O(nv^{2})\). Suppose the iteration number of updating **H** is \(t_{1}\), then the total computation cost of GPFB-Fast is \(O(t_{1}(n^{2}v+mh)+nv^{2}+mwh)\).
For GPFB-SR, the time cost of solving **R** is \(O(v^{3})\), and the computational complexity for obtaining **Y** is \(O(mv)\). Therefore, suppose the iteration number of updating \(\textbf{R},\textbf{H},\textbf{Y}\) is \(t_{2}\) and the iteration number for obtaining **Y** is \(t_{3}\), the total time
complexity of GPFB-SR is \(O(t_{2}(v^{3}+t_{1}(n^{2}v+mh)+t_{3}(mv))+mh)\).
Although the time complexities of GPFB-Fast and GPFB-SR are both quadratic in \(n\) in theory, the main bottleneck is matrix computation, which can be implemented efficiently in parallel. In Section 5.2, we illustrate that in practice the computation costs of GPFB-Fast and GPFB-SR are lower than those of BLPA and BEKM in [13], which must be executed node by node.
## 5 Experimental Results
We evaluate the performance of GUIDE on the real-world Bitcoin illicit transactions detection task [61] and four popular inductive node classification benchmarks [4, 66, 51].
The evaluation aims to answer the following questions: (1) Unlearning and Implementation Efficiency: How fast can GUIDE handle batch unlearning requests? How efficient are GPFB-Fast and GPFB-SR in practice? (2) Model Utility: Can GUIDE provide state-of-the-art performance for inductive graph learning tasks? (3) Partition Efficacy: Can GPFB-Fast and GPFB-SR output fair and balanced partitions? (4) Efficacy of Subgraph Repairing: Will our subgraph repair strategies help to improve model performance? (5) Efficacy of Similarity-based Aggregation: Can our similarity-based aggregation method reach a level of performance comparable to previous learning-based aggregation methods? (6) Unlearning Ability: Can GUIDE really unlearn the requested nodes?
### Experimental Setup
**Datasets and Experimental Setup.** The Elliptic Bitcoin Dataset [61] consists of a time series graph (49 distinct time steps, evenly spaced with an interval of about two weeks) of over 200K bitcoin transactions (nodes) and 234K payment flows (edges) with a total value of $6 billion. Twenty-one percent of entities (42,019) are labeled licit (exchanges, wallet providers, miners, licit services, etc.). Two percent (4,545) are labeled illicit (scams, malware, terrorist organizations, ransomware, Ponzi schemes, etc.). The remaining transactions are not labeled with regard to licit versus illicit but have other features. A GNN detection model would learn from past transaction graphs and make a prediction for each entity of the new transaction graph. Similar to the temporal split in [61], which reflects the nature of the task, the first 30 time steps are used to train a GNN model for detecting illicit entities, the next 4 are used for validation, and the last 15 are used for testing. As such, the GNN model is trained in an inductive setting. We set the number of shards for Elliptic to 20, which means that the graph of each time step would be partitioned into 20 subgraphs.
The four popular node classification benchmarks consist of static citation networks and coauthor networks: Cora [66], CiteSeer [66], DBLP [4], and CS [51]. The details of four benchmarks are provided in Appendix F.1. We follow a generally accepted inductive setting in [63, 12]: we construct one graph containing only training nodes and another graph containing all nodes. Graph partitioning and GNN training are applied to the former one. That means the testing nodes are invisible during the training process. Similar to the setting of [13], we set the number of shards for Cora, CiteSeer, DBLP, and CS to 20, 20, 100, and 100, respectively, which makes the number of nodes in each shard similar. For all static graph datasets, we randomly split nodes into 80% and 20% for training and testing and report the average performance of all models over 10 random splits. In fairness to the evaluation, we also report the performance of graph unlearning methods on the transductive setting with the same data splitting and model architecture in Appendix G.5.
**Metrics.** For the illicit entity detection task, we opt for two commonly used metrics - AUC and Macro F1 score [55]. AUC measures the area under the ROC Curve. Macro F1 score, the mean of the F1-score of both classes without weighting, provides an objective measure of model performance in the face of extreme class imbalance. For inductive node classification benchmarks, we consider classification accuracy as in [63, 12].
To measure the quality of a graph partition, we design two partition metrics: balance score and fairness score. In the following, we provide the definitions of balance score and fairness score for a partition \(\{\mathcal{V}_{i}\}_{i=1}^{v}\) of the graph \((\mathcal{V},\mathcal{E})\) with number of nodes \(n\).
**Balance Score:** Denote the optimal size of the \(i\)-th shard as \(|\mathcal{V}_{i}|^{*}=\frac{n}{v}\). To quantify the degree of balance, we formally define its population balance score as follows:
\[\mathcal{B}_{b}=-\frac{1}{2}\sum_{i=1}^{v}\frac{\big||\mathcal{V}_{i}|-|\mathcal{V}_{i}|^{*}\big|}{n},\]

where \(-1\leq\mathcal{B}_{b}\leq 0\). In the optimal case, \(\big||\mathcal{V}_{i}|-|\mathcal{V}_{i}|^{*}\big|=0\) for all \(i\in[v]\), which implies that \(\mathcal{B}_{b}=0\). When the partition of the \(i\)-th shard is unbalanced, we have \(\big||\mathcal{V}_{i}|-|\mathcal{V}_{i}|^{*}\big|>0\), i.e., \(\mathcal{B}_{b}<0\). We can also easily see that a larger \(\mathcal{B}_{b}\) indicates that the partition is more balanced.
**Fairness Score:** Denote the node set with label \(s\) as \(\mathcal{C}_{s}\) for \(s\in[h]\); we have \(\mathcal{V}=\dot{\cup}_{s\in[h]}\mathcal{C}_{s}\). The ratio of nodes with label \(s\) in the full dataset is \(\frac{|\mathcal{C}_{s}|}{n}\). Similarly, the ratio of nodes with label \(s\) in the \(i\)-th shard is \(\frac{|\mathcal{C}_{s}\cap\mathcal{V}_{i}|}{|\mathcal{V}_{i}|}\). The fairness score can be computed by

\[\mathcal{B}_{f}=-\frac{1}{2v}\sum_{i=1}^{v}\sum_{s=1}^{h}\left|\frac{|\mathcal{C}_{s}\cap\mathcal{V}_{i}|}{|\mathcal{V}_{i}|}-\frac{|\mathcal{C}_{s}|}{n}\right|,\]

where \(-1\leq\mathcal{B}_{f}\leq 0\). In the fairest case, the ratios for every class over all shards are equal, i.e., \(\frac{|\mathcal{C}_{s}\cap\mathcal{V}_{i}|}{|\mathcal{V}_{i}|}=\frac{|\mathcal{C}_{s}|}{n}\) for all \(i\in[v]\), which implies that \(\mathcal{B}_{f}=0\). When class \(s\) in the \(i\)-th shard is unfair, we have \(\left|\frac{|\mathcal{C}_{s}\cap\mathcal{V}_{i}|}{|\mathcal{V}_{i}|}-\frac{|\mathcal{C}_{s}|}{n}\right|>0\), so that \(\mathcal{B}_{f}<0\). Moreover, a larger \(\mathcal{B}_{f}\) indicates that the partition is fairer.
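Both scores can be computed directly from the shard assignment and the node labels, as in the following sketch (the function name is an assumption).

```python
import numpy as np

def partition_scores(shard_labels, node_labels):
    """Balance and fairness scores of a partition (both in [-1, 0], higher is better).

    shard_labels: array of length n, shard id of each training node.
    node_labels:  array of length n, class label of each training node.
    """
    shard_labels, node_labels = np.asarray(shard_labels), np.asarray(node_labels)
    n = len(node_labels)
    shards, classes = np.unique(shard_labels), np.unique(node_labels)
    v = len(shards)
    opt_size = n / v
    balance = -0.5 * sum(abs((shard_labels == i).sum() - opt_size) for i in shards) / n
    fairness = 0.0
    for i in shards:
        in_shard = shard_labels == i
        size_i = in_shard.sum()
        for s in classes:
            ratio_shard = (in_shard & (node_labels == s)).sum() / size_i
            ratio_full = (node_labels == s).sum() / n
            fairness += abs(ratio_shard - ratio_full)
    fairness *= -1.0 / (2 * v)
    return balance, fairness
```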
**Baselines.** We compare GUIDE with two standard baselines (Scratch, Random) and two graph unlearning methods (Eraser-BLPA, Eraser-BEKM). For the fraud detection task, we apply graph unlearning methods on a designed illicit entity detection GNN model. For inductive node classification task, we apply graph unlearning methods on 6 popular inductive GNN models to compare their efficiency and model utility, including GraphSAGE [26], GIN [65], GAT [58], GATv2 [7], SuperGAT [31], APPNP [34]. The detailed settings of those baselines and GNN models are reported in Appendix F.2.
For the implementation of GUIDE, we first apply GPFB-Fast or GPFB-SR on the training graph. The partitioned subgraphs are then repaired by our proposed graph repair strategies. After training the GNN model for each shard independently, we compute an importance score for each shard using the similarity-based aggregation. We name the two implementations of GUIDE (with different partition methods) as GUIDE-Fast (GUIDE with GPFB-Fast) and GUIDE-SR (GUIDE with GPFB-SR) for convenience, respectively.
For both GPFB-Fast and GPFB-SR, the regularization parameter \(\alpha\) is determined via grid search from \(\{0.0001,0.001,0.01\}\). For GPFB-SR, the regularization parameter \(\beta\) is determined by grid search from \(\{1,2,3,4,5\}\). Unless otherwise indicated, we take the MixUp Augmented Neighbor as our default repairing strategy. The performances of Zero-Feature Neighbor and Mirror-Feature Neighbor are also reported. We use the pyramid match graph kernel to compute the similarity score between each repaired subgraph and the test graph. But we argue that any method of measuring the similarity between graphs can be applied here.
**Implementation.** All experiments are conducted on a server with 128G memory, two NVIDIA RTX 3090 GPUs with 24GB RAM, and Ubuntu 20.04 LTS OS.
### Unlearning and Implementation Efficiency
**Batch Unlearning Time.** We compare the batch unlearning time of GUIDE and GraphEraser on three graph datasets. The time of Scratch is also reported as the baseline. Our results are shown in Figure 4. We can see that as the number of unlearning nodes increases, more and more shards are involved, so it will take a longer time to unlearn. When all shards need to be updated, the unlearning time tends to be stable. However, since the size of each subgraph is small, it is still faster than retraining from scratch on a large graph. The interesting point is that GUIDE is expected to have a lower unlearning time than GraphEraser due to a more balanced partition and independent importance score updates, but as shown in Appendix G.4, we can only observe such a trend when the batch size of unlearning is small. The reason is that subgraph repair makes each subgraph's size larger than its original size. The actual training time of each submodel may be higher than its training time on a smaller subgraph without repairing. The submodel training time will dominate the unlearning time when the unlearning batch size is large. However, we claim that such a trade-off between unlearning efficiency and model utility is reasonable because the unlearning efficiency degrades slightly in the comparison, while the model utility gets a significant improvement (as we will show later).
**Implementation Time.** In the inductive setting, the GNN model should learn continuously (life-long learning) from incrementally arriving samples. Therefore, implementation efficiency is especially important when facing evolving graphs or multiple graphs. In the following, we report the graph partition time of the four methods in Table 1.
It is notable that the results in [13] follow a different setting compared to our experiments. [13] sets the number of shards on CS to \(k=30\) and uses a pre-trained GNN model to generate node embeddings for BEKM in the transductive setting. In our setting, the number of shards on CS is \(k=100\). Following the requirements of the inductive setting, we generate node embeddings with the default setting of BEKM, which is time-consuming when the dataset size is large. For the Elliptic
Figure 4: Batch unlearning time on large-scale graph datasets.
dataset, we partition its temporal transaction graphs separately according to their timestamps. As observed in Table 1, GPFB-Fast takes the shortest time for partition. As explained in section 4.2, GPFB-Fast is simple to implement and can be solved efficiently by using standard linear algebra software. For GPFB-SR, we can see it is always faster than BEKM and is comparable with BLPA in some cases. Moreover, it is slower than GPFB-Fast, which is reasonable as it needs more iterations to find a better solution.
### Model Utility
#### 5.3.1 Fraud Detection
We construct a GNN model with three GINconv layers to conduct the illicit entity detection task. Each GINconv layer consists of a 3-layer MLP. After tuning the hyperparameters based on the validation data, we set the size of node embedding to 1024. The model is trained with 200 epochs. The performance of different graph unlearning methods based on our GNN model is shown in Figure 5. It is easy to see that GUIDE performs very close to Scratch for two metrics during the first 8 time steps, and there is a clear performance gap between GUIDE and others. Especially in the 38th time step, GUIDE outperforms other methods with more than 10% Macro F1 score. During the last 7 time steps, all methods provide similar Macro F1 scores due to the very limited illicit samples in the test graph, while GUIDE still produces higher AUC scores than other methods.
#### 5.3.2 Inductive Node Classification
We evaluate the model utility of different graph unlearning methods on 6 commonly used inductive GNN models. Table 2 presents the average results for these methods on four graph datasets. Comparing the results of Scratch and Random, we can find that there is a large gap between them. Most of the time, the node classification accuracy of the Random method is less than half of the Scratch method. Taking this gap as 100%, we can calculate normalized scores of the results given by other graph unlearning methods to quantify the improvement of those methods to the Random method. We can see the improvement of GraphEraser methods to the Random method is only 20%. It is not surprising because the available information is very limited in the inductive setting. Thus, we can see GraphEraser is unsuitable for the inductive setting. However, we also find that the GIN model can achieve the highest node classification accuracy with the help of GraphEraser sometimes (e.g., over the CS data). This is mainly due to the unbalanced partition since the learning-based aggregation assigns a small score to the shards with a small size. But we argue that it is not advisable to sacrifice too much balance for better performance since it will increase the unlearning time cost. Compared to the results of GraphEraser, GUIDE achieves the best performance for almost all models on four datasets. The normalized scores of GUIDE-Fast and GUIDE-SR are both \(\sim 2\times\) higher than the results of GraphEraser.
In comparison to the Certified Graph Unlearning method [16], we apply the SGC model to the Scratch method and five graph unlearning methods. The results are provided in Appendix G.1, showing that there is a large gap between the performance of SGC and the performance of state-of-the-art GNNs for inductive graph learning tasks. We also report the results of a 2-layer MLP (Multi-Layer Perceptron) model without considering the graph structure in Appendix G.1.
### Partition Efficacy
We can quantify the partition efficacy of graph unlearning methods by calculating the balance score and fairness score of each partition. The average results on different parts (20%, 40%, 60%, 80%) of four datasets are presented in Figure 6,
| Dataset | BLPA | BEKM | GPFB-Fast | GPFB-SR |
|---|---|---|---|---|
| Cora | 5.41 | 10.10 | **0.24** | 2.85 |
| CiteSeer | 6.36 | 14.56 | **0.31** | 3.54 |
| CS | 38.77 | 5454.36 | **15.71** | 40.02 |
| DBLP | 37.30 | 5182.10 | **14.44** | 33.52 |
| Elliptic | 303.02 | 1089.72 | **26.19** | 201.99 |

Table 1: Graph partition time of the four methods (seconds). The large increase in BEKM's computation cost comes from its linear dependence on the number of shards and from the node-embedding generation process.
Figure 5: Illicit entity detection results over test time span.
where the partition score is the summation of the balance score and the fairness score. A smaller absolute value of this negative score indicates that the corresponding partition is fairer and more balanced. As we can see from Figure 6, the performance of GPFB-SR is always comparable with that of Random. The partition scores of GPFB-SR are \(\sim 3\times\) better than the scores of BLPA and BEKM. We also present the distribution of shard sizes in Appendix G.3, which supports this claim. Although the partition scores of GPFB-Fast are worse than those of GPFB-SR, they are almost always better than the results of BLPA and BEKM. These results demonstrate that GUIDE produces fair and balanced partitions, achieving satisfactory performance.
### Efficacy of Subgraph Repairing
To test the efficacy of our proposed subgraph repair strategies, we compare the performance of our three strategies with the ground truth subgraphs and the subgraphs without repairing on the Cora dataset as an ablation study. As shown in Table 3, all three subgraph repairing strategies are helpful in improving model performance. The simplest Zero-Feature Neighbor could achieve a 62.32% improvement. It is not surprising that Mirror-Feature Neighbor behaves worse than Zero-Feature Neighbor since the contributions of the aggregated information from the Mirror-Feature Neighbor are zero. Here we randomly select \(\lambda\in[0,1]\) for each missing node to generate the mix-up between the zero and mirror feature. Considering the real application where heterogeneous neighbors may not share the same feature, we can also control this mix-up process by randomly selecting \(\lambda\in[0,\tau]\), where \(\tau\in[0,1)\) can be decided by testing on a small subset.
### Efficacy of Similarity-based Aggregation
To illustrate the performance of the similarity-based aggregation, we compare it with the average aggregation and the learning-based aggregation methods. We aggregate the predictions of GNN models trained on the partitions produced by GPFB-Fast and GPFB-SR; for convenience, the two partition methods are denoted as 'Fast' and 'SR', respectively, in Table 4. Even though we train LBAggr on the full training graph, its performance does not surpass SimiAgg. This may be caused by the inductive setting, in which the behavior of the submodels on the training subgraphs may differ from their behavior on the test graph. Still, the differences between the three aggregation methods are quite small, because the fair and balanced graph partition and the subgraph repair have already improved each subgraph, leading to an improved submodel in each shard.
| Dataset | Model | Scratch | Random | Eraser-BLPA | Eraser-BEKM | GUIDE-Fast | GUIDE-SR |
|---|---|---|---|---|---|---|---|
| Cora | SuperGAT | 89.17±0.00 | 31.57±0.04 | 41.74±0.16 | 44.92±0.57 | 65.69±0.04 | **66.49±0.12** |
| Cora | GATv2 | 88.94±0.00 | 31.22±0.04 | 43.83±0.68 | 36.62±0.55 | 66.80±0.08 | **68.10±0.16** |
| Cora | SAGE | 92.73±0.00 | 53.68±0.18 | 44.20±0.37 | 53.57±0.60 | 71.33±0.10 | **72.26±0.04** |
| Cora | GIN | 87.07±0.13 | 56.49±0.26 | 67.84±0.14 | 65.55±0.29 | 76.40±0.05 | **77.06±0.06** |
| Cora | GAT | 88.97±0.00 | 31.90±0.07 | 38.91±0.36 | 34.10±0.34 | 66.25±0.09 | **66.40±0.09** |
| Cora | APPNP | 85.96±0.03 | 51.28±0.13 | 38.02±0.26 | 46.38±0.12 | 64.14±0.07 | **64.56±0.05** |
| CiteSeer | SuperGAT | 79.33±0.00 | 25.44±1.34 | 53.31±1.15 | 45.98±0.48 | 70.66±0.02 | **71.17±0.02** |
| CiteSeer | GATv2 | 79.53±0.00 | 25.88±1.45 | 58.50±0.36 | 41.04±1.58 | 70.78±0.02 | **71.26±0.02** |
| CiteSeer | SAGE | 83.08±0.00 | 69.10±0.05 | 66.90±0.06 | 69.25±0.05 | **72.71±0.02** | 72.38±0.01 |
| CiteSeer | GIN | 81.20±0.06 | 58.02±0.41 | 66.29±0.11 | 64.21±0.13 | 69.64±0.07 | **69.67±0.04** |
| CiteSeer | GAT | 79.61±0.00 | 26.32±1.46 | 58.57±0.64 | 43.46±1.17 | 70.66±0.02 | **71.02±0.02** |
| CiteSeer | APPNP | 77.49±0.00 | 72.98±0.02 | 66.33±0.40 | 71.29±0.04 | 73.09±0.03 | 73.43±0.02 |
| DBLP | SuperGAT | 84.21±0.00 | 44.67±0.00 | 70.27±0.01 | 69.84±0.01 | **71.67±0.01** | 69.29±0.01 |
| DBLP | GATv2 | 83.93±0.00 | 44.67±0.00 | 70.23±0.01 | 69.06±0.05 | **71.69±0.01** | 69.10±0.00 |
| DBLP | SAGE | 86.72±0.00 | 60.38±0.02 | 70.13±0.00 | 69.70±0.00 | 71.92±0.01 | **72.16±0.01** |
| DBLP | GIN | 87.35±0.01 | 67.76±0.02 | **79.09±0.02** | 75.78±0.09 | 77.11±0.03 | 77.51±0.00 |
| DBLP | GAT | 84.05±0.00 | 44.67±0.00 | 70.41±0.01 | 68.51±0.08 | **71.39±0.01** | 68.70±0.01 |
| DBLP | APPNP | 83.80±0.00 | 67.53±0.00 | 71.56±0.01 | 70.96±0.01 | **73.62±0.01** | 72.84±0.01 |
| CS | SuperGAT | 87.57±0.00 | 22.79±0.01 | 53.01±0.02 | 41.98±0.25 | **69.63±0.00** | 69.53±0.01 |
| CS | GATv2 | 86.98±0.00 | 22.79±0.01 | 53.58±0.04 | 40.08±0.29 | **73.28±0.01** | 73.15±0.01 |
| CS | SAGE | 91.79±0.00 | 71.96±0.02 | 57.37±0.04 | 74.38±0.01 | **80.68±0.00** | 80.67±0.00 |
| CS | GIN | 83.69±0.18 | 36.70±0.01 | 75.42±0.15 | **83.65±0.01** | 79.24±0.01 | 79.73±0.02 |
| CS | GAT | 87.37±0.00 | 22.79±0.01 | 53.24±0.01 | 43.17±1.04 | **69.55±0.01** | 69.45±0.01 |
| CS | APPNP | 78.70±0.01 | 58.03±0.01 | 48.24±0.10 | 47.81±0.09 | 74.38±0.01 | **74.44±0.01** |
| | Normalized Score | 100.00 | 0.00 | 20.42 | 23.71 | 59.52 | 59.40 |

Table 2: Inductive node classification accuracy of graph unlearning methods on the four benchmarks (%).
Thus, SimiAgg is not dramatically better than the average weighting, but it is still useful for making the framework more robust and more explainable.
### Unlearning Ability
Following the same setting as in [14], we evaluate the unlearning ability of GUIDE using a state-of-the-art privacy attack against machine unlearning. We take the aggregated model of GUIDE as the unlearned model after processing 100 random unlearning requests. Using an enhanced membership inference attack [14], an attacker with access to both the original model and the unlearned model tries to determine whether a specific node has indeed been removed from the unlearned model. The ratio of members to non-members is set to 1:1. As shown in Table 5, the AUC of the membership inference attack on GUIDE is close to 50% (random guessing), showing that GUIDE performs machine unlearning with low privacy risk.
The study on the sensitivity of GUIDE to the number of shards is provided in Appendix G.2.
## 6 Conclusions
In this work, we proposed the first general framework, GUIDE, for solving the inductive graph unlearning problem. Generally speaking, GUIDE consists of three components: guided graph partition with fairness and balance, efficient subgraph repairing, and similarity-based aggregation. Due to its exceptional performance compared with the existing methods, we believe this work could serve as a cornerstone for future work on inductive graph unlearning tasks in production machine learning systems.
Although GUIDE offers advantageous performance, it comes with additional memory cost due to its subgraph repair, making each subgraph larger than the original size. Furthermore, a generalization of "partition fairness" to unsupervised graph learning is needed for further applications.
| Partition | Model | Average | LBAggr | SimiAgg |
|---|---|---|---|---|
| Fast | SAGE | 71.11±0.10 | 69.02±0.17 | **71.33±0.10** |
| Fast | GIN | **76.40±0.06** | 75.25±0.08 | **76.40±0.05** |
| Fast | GAT | 66.10±0.09 | 65.97±0.18 | **66.25±0.09** |
| SR | SAGE | 72.07±0.05 | 70.55±0.16 | **72.26±0.04** |
| SR | GIN | 76.69±0.06 | 76.32±0.03 | **77.06±0.06** |
| SR | GAT | 66.01±0.09 | **67.15±0.20** | 66.40±0.09 |

Table 4: Results of the three aggregation methods on Cora (%).
| Partition Method | Model | Ground Truth | No Repairing | Mirror Feature | Zero Feature | MixUp |
|---|---|---|---|---|---|---|
| GPFB-Fast | SAGE | 77.26±0.04 | 59.98±0.19 | 63.22±0.08 | 73.55±0.05 | 71.33±0.10 |
| GPFB-Fast | GIN | 79.26±0.04 | 70.09±0.05 | 77.13±0.07 | 72.07±0.06 | 76.40±0.05 |
| GPFB-Fast | GAT | 70.52±0.10 | 49.63±0.08 | 62.90±0.09 | 66.52±0.07 | 66.25±0.09 |
| GPFB-SR | SAGE | 77.78±0.02 | 59.98±0.09 | 65.67±0.08 | 74.38±0.04 | 72.26±0.04 |
| GPFB-SR | GIN | 78.96±0.02 | 69.28±0.14 | 75.16±0.09 | 72.20±0.06 | 77.06±0.06 |
| GPFB-SR | GAT | 70.85±0.06 | 50.00±0.20 | 65.18±0.09 | 67.08±0.08 | 66.40±0.09 |
| | Normalized Score | 100.00 | 0.00 | 54.09 | 62.32 | 73.65 |

Table 3: Results of different subgraph repairing strategies on Cora (%).
| Dataset | SAGE | GAT | GIN |
|---|---|---|---|
| Cora | 51.34±0.08 | 49.78±0.02 | 53.57±0.19 |
| CiteSeer | 53.36±0.10 | 50.97±0.12 | 50.70±0.08 |
| DBLP | 53.34±0.07 | 51.22±0.19 | 55.83±0.7 |
| CS | 50.34±0.14 | 51.27±0.14 | 48.09±0.14 |

Table 5: AUC of the membership inference attack on GUIDE (%).
Figure 6: Partition scores of 5 methods on different datasets.
## Acknowledgments
Di Wang and Cheng-Long Wang were supported by BAS/1/1689-01-01, URF/1/4663-01-01, FCC/1/1976-49-01, RGC/3/4816-01-01, and REI/1/4811-10-01 of King Abdullah University of Science and Technology (KAUST) and KAUST-SDAIA Center of Excellence in Data Science and Artificial Intelligence.
|
2310.08059
|
Stability of Periodic Waves for the Defocusing Fractional Cubic
Nonlinear Schrödinger Equation
|
In this paper, we determine the spectral instability of periodic odd waves
for the defocusing fractional cubic nonlinear Schr\"odinger equation. Our
approach is based on periodic perturbations that have the same period as the
standing wave solution, and we construct real periodic waves by minimizing a
suitable constrained problem. The odd solution generates three negative simple
eigenvalues for the associated linearized operator, and we obtain all this
spectral information by using tools related to the oscillation theorem for
fractional Hill operators. Newton's iteration method is presented to generate
the odd periodic standing wave solutions and numerical results have been used
to apply the spectral stability theory via Krein signature as established in
[22] and [23].
|
Handan Borluk, Gulcin M. Muslu, Fábio Natali
|
2023-10-12T06:11:18Z
|
http://arxiv.org/abs/2310.08059v1
|
# Stability of Periodic Waves for the Defocusing Fractional Cubic Nonlinear Schrodinger Equation
###### Abstract
In this paper, we determine the spectral instability of periodic odd waves for the defocusing fractional cubic nonlinear Schrodinger equation. Our approach is based on periodic perturbations that have the same period as the standing wave solution, and we construct real periodic waves by minimizing a suitable constrained problem. The odd solution generates three negative simple eigenvalues for the associated linearized operator, and we obtain all this spectral information by using tools related to the oscillation theorem for fractional Hill operators. Newton's iteration method is presented to generate the odd periodic standing wave solutions and numerical results have been used to apply the spectral stability theory via Krein signature as established in [22] and [23].
_Keywords:_ defocusing fractional Schrodinger equation, periodic solutions via constrained minimization problem, spectral stability, Newton's iteration method.
## 1 Introduction
The main goal of this paper is to present new results concerning the existence and spectral stability of periodic standing waves for the defocusing fractional nonlinear Schrodinger equation (dfNLS)
\[iU_{t}-(-\Delta)^{s}U-|U|^{2}U=0. \tag{1.1}\]
Here \(U=u+iv=(u,v):\mathbb{T}\times\mathbb{R}\longrightarrow\mathbb{C}\) is a complex-valued function which is \(2\pi\)-periodic in the first variable, with \(\mathbb{T}:=[-\pi,\pi]\). In our context, the fractional Laplacian \((-\Delta)^{s}\) is defined as a pseudo-differential operator
\[\widehat{(-\Delta)^{s}V}(\xi)=|\xi|^{2s}\widehat{V}(\xi), \tag{1.2}\]
where \(\xi\in\mathbb{Z}\) and \(s\in(0,1]\) (see [30]).
The dfNLS equation (1.1) admits the following conserved quantities \(E,F:H^{s}_{per}\times H^{s}_{per}\longrightarrow\mathbb{R}\) which are given by
\[E(U)=\frac{1}{2}\int_{-\pi}^{\pi}|(-\Delta)^{\frac{s}{2}}U|^{2}+\frac{1}{2}|U| ^{4}\;dx, \tag{1.3}\]
\[F(U)=\frac{1}{2}\int_{-\pi}^{\pi}\left|U\right|^{2}dx. \tag{1.4}\]
A standing periodic wave solution for the equation (1.1) has the form
\[U(x,t)=e^{i\alpha t}\varphi(x), \tag{1.5}\]
where \(\varphi:\mathbb{T}\longrightarrow\mathbb{R}\) is a smooth \(2\pi\)-periodic function and \(\alpha\in\mathbb{R}\) represents the wave frequency which is assumed to be negative for now. Substituting (1.5) into (1.1), we obtain the following differential equation with fractional derivative
\[(-\Delta)^{s}\varphi+\alpha\varphi+\varphi^{3}=0. \tag{1.6}\]
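Although the Newton scheme used in this paper is presented later, a quick numerical feel for solutions of (1.6) can be obtained with a Fourier-collocation Newton iteration; the discretization, the odd initial guess, and the parameter values in the sketch below are purely illustrative assumptions.

```python
import numpy as np

def newton_profile(s=0.8, alpha=-2.0, N=128, iters=50, tol=1e-12):
    """Fourier-collocation Newton iteration for (1.6):
    (-Delta)^s phi + alpha*phi + phi^3 = 0 on [-pi, pi), with alpha < 0."""
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)
    xi = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi        # integer wavenumbers
    Fmat = np.fft.fft(np.eye(N), axis=0)                       # dense DFT matrix
    Finv = np.fft.ifft(np.eye(N), axis=0)
    Lap_s = np.real(Finv @ np.diag(np.abs(xi) ** (2 * s)) @ Fmat)  # (-Delta)^s
    phi = np.sin(x)                                            # odd initial guess
    for _ in range(iters):
        residual = Lap_s @ phi + alpha * phi + phi ** 3
        jacobian = Lap_s + alpha * np.eye(N) + np.diag(3 * phi ** 2)
        step = np.linalg.solve(jacobian, residual)
        phi -= step
        if np.linalg.norm(step, np.inf) < tol:
            break
    return x, phi
```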
For \(\alpha:=-\omega<0\), we consider the standard Lyapunov functional defined as
\[G(U):=E(U)-\omega F(U). \tag{1.7}\]
By (1.6), we obtain \(G^{\prime}(\varphi,0)=0\), that is, \((\varphi,0)\) is a critical point of \(G\). In addition, the linearized operator around the pair \((\varphi,0)\) is given by
\[\mathcal{L}:=G^{\prime\prime}(\varphi,0)=\begin{pmatrix}\mathcal{L}_{1}&0\\ 0&\mathcal{L}_{2}\end{pmatrix}, \tag{1.8}\]
where
\[\mathcal{L}_{1}=(-\Delta)^{s}-\omega+3\varphi^{2}\qquad\text{and}\qquad \mathcal{L}_{2}=(-\Delta)^{s}-\omega+\varphi^{2}. \tag{1.9}\]
Both operators \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are self-adjoint and are defined in \(L_{per}^{2}\) with dense domain \(H_{per}^{2s}\). The operator \(\mathcal{L}\) in (1.8) plays an important role in our study. In order to set up our spectral problem for periodic waves with respect to perturbations with the same period, we consider the complex evolution \(U=(u,v)\) associated with equation (1.1). To simplify the notation, let us consider \(\Phi=(\varphi,0)\) and the perturbation
\[U(x,t)=e^{-i\omega t}(\Phi(x)+W(x,t)), \tag{1.10}\]
where \(W(x,t)=w_{1}(x,t)+iw_{2}(x,t)\equiv(w_{1}(x,t),w_{2}(x,t))\). Substituting (1.10) into (1.1) and neglecting all the nonlinear terms, we get the following linearized equation:
\[\frac{d}{dt}W(x,t)=J\mathcal{L}W(x,t), \tag{1.11}\]
where \(J\) is given by
\[J=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right), \tag{1.12}\]
and \(\mathcal{L}\) is the diagonal operator given by (1.8).
To define the concept of spectral stability within our context, we need to substitute the growing mode solution of the form \(W(x,t)=e^{\lambda t}w(x)\) into the linear equation (1.11) to obtain the following spectral problem
\[J\mathcal{L}w=\lambda w.\]
The definition of spectral stability in our context reads as follows.
**Definition 1.1**.: _The periodic wave \(\Phi\) is said to be spectrally stable by periodic perturbations that have the same period as the standing wave solution if \(\sigma(J\mathcal{L})\subset i\mathbb{R}\). Otherwise, if there exists at least one eigenvalue \(\lambda\) associated with the operator \(J\mathcal{L}\) that has a positive real part, \(\Phi\) is said to be spectrally unstable._
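For a discretized profile (for instance, one produced by the Newton sketch above), Definition 1.1 can be checked numerically by assembling \(\mathcal{L}_{1}\), \(\mathcal{L}_{2}\) from (1.9), forming \(J\mathcal{L}\), and inspecting the real parts of its eigenvalues. The sketch below uses the same Fourier-collocation discretization and is only an illustration, since a discrete spectrum only approximates the operator spectrum.

```python
import numpy as np

def max_real_part(phi, s, omega):
    """Largest real part of the spectrum of the discretized J*L from (1.11)-(1.12);
    a strictly positive value indicates spectral instability (Definition 1.1)."""
    N = phi.size
    xi = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi
    Fmat = np.fft.fft(np.eye(N), axis=0)
    Finv = np.fft.ifft(np.eye(N), axis=0)
    Lap_s = np.real(Finv @ np.diag(np.abs(xi) ** (2 * s)) @ Fmat)
    L1 = Lap_s - omega * np.eye(N) + np.diag(3 * phi ** 2)     # (1.9)
    L2 = Lap_s - omega * np.eye(N) + np.diag(phi ** 2)
    JL = np.block([[np.zeros((N, N)), L2],
                   [-L1, np.zeros((N, N))]])                   # J * diag(L1, L2)
    return np.linalg.eigvals(JL).real.max()
```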
The study of spectral (orbital) stability of periodic standing waves of the form \(U(x,t)=e^{i\alpha t}\varphi(x)\) associated to the cubic nonlinear Schrodinger equation
\[iU_{t}+U_{xx}+b|U|^{2}U=0, \tag{1.13}\]
has attracted the interest of a large number of researchers. Let \(\alpha>0\) be fixed. The case \(b=1\) in (1.13) represents the _focusing_ nonlinearity, while \(b=-1\) represents the _defocusing_ nonlinearity. In both cases,
sufficient conditions for stability have been established and well-known techniques have proved effective. For the case \(b=1\), the author of [3] established the stability properties of periodic standing wave solutions with _dnoidal_ profile with respect to perturbations with the same period \(L\) by using the ideas introduced in [7] and [32] (see also [14] and [19]). Existence of smooth branches of solutions with _cnoidal_ profiles was also reported in [3]; however, the author was not able to obtain the orbital stability/instability in the energy space for these waves. Next, by using the techniques introduced in [17] and [18], the cnoidal waves were shown to be orbitally stable in [14] and [15] with respect to anti-periodic perturbations, i.e., perturbations \(f\) satisfying \(f(x+L/2)=-f(x)\) for all \(x\in\mathbb{R}\). Spectral stability with respect to bounded or localized perturbations was also reported in [14]. For \(\alpha>0\) in a suitable interval \((0,\alpha^{*})\), the authors of [19] established spectral stability results for the cnoidal waves with respect to perturbations with the same period \(L\) and orbital stability results in the space constituted by anti-periodic functions with period \(L/2\) (see also [28]). Their proofs rely on proving that the cnoidal waves satisfy a convenient minimization problem with constraints, which yields the orbital stability. The spectral stability follows by relating the coercivity of the linearized action with the number of eigenvalues of \(J\mathcal{L}\) with negative Krein signature.
The integrability of the equation (1.13) can be used to determine spectral stability results of periodic waves. In [13] the authors studied periodic solutions with dnoidal and cnoidal type for the case \(b=1\). The same approach was used in [8] to prove spectral stability results for the case \(b=-1\) and snoidal type solutions. The spectral stability presented in both cases was with respect to subharmonic perturbations, that is, perturbation of an integer multiple \(n\) times the minimal period of the solution. The authors employed the arguments in [17] to conclude the orbital stability by considering the orbit generated only by one symmetry of the equation (1.13).
We present some recent contributions concerning the fractional version
\[iU_{t}-(-\Delta)^{s}U+b|U|^{2}U=0, \tag{1.14}\]
of the equation (1.13). Indeed, when \(s\in(0,1)\) and \(b=\pm 1\) the orbital stability of real-valued, even and anti-periodic standing wave solutions \(\varphi\) of (1.1) has been studied in [11]. The authors determined the existence of real solutions via a minimization problem in the context of anti-periodic functions (denoted by \(L^{2}_{a}(0,L)\)) and they established that the associated linearized operator acting in \(L^{2}_{a}(0,L)\) is non-degenerate. By the additional assumption \(\frac{d}{d\alpha}\int_{0}^{L}\varphi^{2}dx>0\), the authors were able to show that \(\varphi\) is orbitally stable with respect to anti-periodic perturbations in a suitable subspace of \(H^{s}(0,L)\cap L^{2}_{a}(0,L)\).
In [25] the authors studied the existence and orbital stability of positive and periodic standing wave solutions of the form \(U(x,t)=e^{i\alpha t}\varphi(x)\) for the equation (1.14) with \(b=1\). The existence of periodic waves was determined by using a minimizing constrained problem in the complex setting and the orbital stability was proved by combining some tools regarding the oscillation theorem for fractional Hill operators and the Vakhitov-Kolokolov condition. The authors also presented a numerical approach to generate the periodic standing wave solutions of (1.14) with \(b=1\) by using Petviashvili's iteration method. It is important to mention that the numerical method has also been used to establish the values of the frequency \(\alpha>\frac{1}{2}\) and the index \(s>0\) in (1.14) for which the wave \(\varphi\) is spectrally (orbitally) stable or not. In fact, if \(s\in\left(\frac{1}{4},\frac{1}{2}\right]\) the periodic wave is spectrally (orbitally) unstable. If \(s\in\left[s^{*},1\right]\), the periodic wave is spectrally (orbitally) stable, where \(s^{*}\approx 0.6\). For \(s\in\left(\frac{1}{2},s^{*}\right)\), the authors guaranteed the existence of a critical value \(\alpha_{c}>\frac{1}{2}\) such that the periodic wave is spectrally (orbitally) unstable if \(\alpha\in\left(\frac{1}{2},\alpha_{c}\right)\) and spectrally (orbitally) stable if \(\alpha>\alpha_{c}\).
Now, we give the main points of our paper. First, we show the existence of an odd periodic two-lobe solution \(\varphi\) for the equation (1.6). For this aim, we need to solve the real constrained minimization problem
\[\inf\left\{\mathcal{E}(u)=E(u,0):=\frac{1}{2}\int_{-\pi}^{\pi}((-\Delta)^{ \frac{s}{2}}u)^{2}+\frac{1}{2}u^{4}\;dx\;;\;u\in H^{s}_{per,odd},\;\int_{- \pi}^{\pi}u^{2}\;dx=\tau\right\}, \tag{1.15}\]
for fixed \(\tau>0\), where \(s\in\left(\frac{1}{4},1\right]\).
Periodic odd solutions of (1.15) are real functions \(\varphi\), and therefore the existence of a periodic standing wave having the form (1.5) is established without further problems. This fact differs from the approaches in [11] and [25], since there the authors obtained complex periodic solutions of a complex constrained minimization problem and, in both cases, additional assumptions were needed to obtain periodic standing waves of the form (1.5) (see [11, Lemma 2.2] and [25, Remark 3.3]). The minimization problem (1.15) directly produces a real-valued solution \(\varphi\) which is automatically odd. In addition, we can consider that \(\varphi\) has a two-lobe profile for all \(\omega>1\) (see Proposition 3.3).
A different way to construct periodic real-valued solutions associated with the equation (1.6) is established by using the local and global bifurcation theory as determined in [10]. First, we construct small amplitude periodic solutions in the same way as in [26] (see also [9]) for \(\omega>1\) and close to the bifurcation point 1. Afterwards, we establish sufficient conditions to extend \(\omega\) to the whole interval \((1,+\infty)\) by constructing a continuous mapping \(\omega\in(1,+\infty)\longmapsto\varphi_{\omega}\in H^{2s}_{per,odd}\), where \(\varphi_{\omega}\) is an odd periodic solution of (1.6). It is important to mention that the periodic wave obtained by the global bifurcation theory may not have a two-lobe profile, and thus we can choose the periodic waves which arise as a minimum of the problem (1.15). The existence of small amplitude waves associated with the Schrodinger equation was determined in [14] for the equation (1.13) with \(b=\pm 1\). The authors first show that these waves are orbitally stable within the class of solutions that have the same period. For the case of general bounded perturbations, they prove that the small amplitude travelling waves are stable in the defocusing case and unstable in the focusing case.
Since the minimizer \(\varphi\) of (1.15) is a real odd two-lobe solution of (1.6), we obtain that \(n(\mathcal{L}_{1})=1\) (see Lemma 4.1), where \(n(\mathcal{A})\) denotes the number of negative eigenvalues of a certain linear operator \(\mathcal{A}\) (counting multiplicities). In addition, Lemma 4.1 also gives that \(\ker(\mathcal{L}_{1})=[\varphi^{\prime}]\) and we can use the implicit function theorem to obtain, for a fixed value \(\omega_{0}>1\), the existence of an open interval \(\mathcal{I}\) containing \(\omega_{0}\) and a smooth function
\[\mathcal{I}\ni\omega\longmapsto\varphi_{\omega}\in H^{2s}_{per,odd} \tag{1.16}\]
that solves equation (1.6). Deriving this equation with respect to \(\omega\in\mathcal{I}\), it follows that \(\mathcal{L}_{1}(\partial_{\omega}\varphi)=\varphi\), so that \(\varphi\in\mathrm{range}(\mathcal{L})\). Concerning the linear operator \(\mathcal{L}_{2}\), we obtain by Lemma 4.2 that \(n(\mathcal{L}_{2})=2\) and \(\ker(\mathcal{L}_{2})=[\varphi]\). Gathering all spectral information regarding \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\), we obtain from the fact \(\mathcal{L}\) in (1.8) has a diagonal form that \(n(\mathcal{L})=3\) and \(\ker(\mathcal{L})=[(\varphi^{\prime},0),(0,\varphi)]\).
The strategy to prove our spectral instability result is based on an adaptation of the arguments in [22] and [23]. Let \(z(\mathcal{A})\) denote the dimension of the kernel of a certain linear operator \(\mathcal{A}\). Since in our case we have \(z(\mathcal{L})=2\), let \(\Theta_{1}=(\varphi^{\prime},0)\) and \(\Theta_{2}=(0,\varphi)\) represent the elements in \(\ker(\mathcal{L})\). Let \(V\) be the \(2\times 2\) matrix whose entries are given by
\[V_{jl}=(\mathcal{L}^{-1}J\Theta_{j},J\Theta_{l})_{L^{2}_{per}\times L^{2}_{per}}, \tag{1.17}\]
where \(1\leqslant j,l\leqslant 2\). Thus, \(V\) is given by
\[\begin{array}{rcl}V&=&\left(\begin{array}{cc}(\mathcal{L}^{-1}J\Theta_{1},J\Theta_{1})_{L^{2}_{per}}&(\mathcal{L}^{-1}J\Theta_{1},J\Theta_{2})_{L^{2}_{ per}}\\ (\mathcal{L}^{-1}J\Theta_{2},J\Theta_{1})_{L^{2}_{per}}&(\mathcal{L}^{-1}J \Theta_{2},J\Theta_{2})_{L^{2}_{per}}\end{array}\right).\\ \\ &=&\left(\begin{array}{cc}(\mathcal{L}^{-1}_{2}\varphi^{\prime},\varphi^{ \prime})_{L^{2}_{per}}&0\\ 0&(\mathcal{L}^{-1}_{1}\varphi,\varphi)_{L^{2}_{per}}\end{array}\right).\end{array} \tag{1.18}\]
On the other hand, the equality
\[k_{r}+k_{c}+k_{-}=n(\mathcal{L})-n(V), \tag{1.19}\]
is given in [23], and the left-hand side of the equality (1.19) is called the _Hamiltonian-Krein index_. Concerning the operator \(J\mathcal{L}\), let \(k_{r}\) be the number of real and positive eigenvalues (counting multiplicities), let \(k_{c}\) denote the number of complex-valued eigenvalues with a positive real part, and let \(k_{-}\) be the number of pairs of purely imaginary eigenvalues of \(J\mathcal{L}\) with negative Krein signature. Since \(k_{c}\) and \(k_{-}\) are always even numbers, we obtain that if the right-hand side of (1.19) is an odd number, then \(k_{r}\geqslant 1\) and we automatically have spectral instability. Moreover, if the difference \(n(\mathcal{L})-n(V)\) is zero, then \(k_{c}=k_{-}=k_{r}=0\), which implies the spectral stability.
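The parity bookkeeping behind (1.19) is elementary and worth recording explicitly; the snippet below is only a restatement of the argument above (it is not taken from [23] or from the rest of the paper).

```python
def krein_index_verdict(n_L, n_V):
    """Parity argument based on (1.19): since k_c and k_- are even, an odd value
    of n(L) - n(V) forces k_r >= 1 (spectral instability), a zero value forces
    k_r = k_c = k_- = 0 (spectral stability), and any other even value is inconclusive."""
    diff = n_L - n_V
    if diff == 0:
        return "spectrally stable"
    if diff % 2 == 1:
        return "spectrally unstable"
    return "inconclusive"

# For the full operator one finds krein_index_verdict(3, 1) == "inconclusive",
# while the restrictions to odd and even subspaces used below give
# krein_index_verdict(1, 0) == krein_index_verdict(2, 1) == "spectrally unstable".
```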
Since \(n(\mathcal{L})=3\) and \(z(\mathcal{L})=2\), the case \(n(V)=3\) cannot be considered according to the square matrix in (1.18). Now, if \(n(V)=0\), we obtain that \(n(\mathcal{L})-n(V)=3\) which implies the spectral instability. When \(n(V)=1\), we cannot conclude the spectral instability since the difference \(n(\mathcal{L})-n(V)=2\) is an even number. The spectral stability result is inconclusive since the values of \(k_{c}\) and \(k_{-}\) are always even numbers (we can have zero or two eigenvalues with positive real part associated with the operator \(J\mathcal{L}\)). However, if \(n(V)=2\), then \(n(\mathcal{L})-n(V)=1\) and this implies the spectral instability. To obtain a suitable conclusion for the spectral stability, we need to calculate \((\mathcal{L}^{-1}_{2}\varphi^{\prime},\varphi^{\prime})_{L^{2}_{per}}\) and \((\mathcal{L}^{-1}_{1}\varphi,\varphi)_{L^{2}_{per}}\). In our approach, we shall consider the restrictions of the linearized operator \(\mathcal{L}\) in (1.8) to even and odd functions. These operators
will be denoted as \(\mathcal{L}_{\text{even}}\) and \(\mathcal{L}_{\text{odd}}\) and we can conclude in our case that \(n(\mathcal{L}_{even})=2\), \(n(\mathcal{L}_{odd})=1\), \(\ker(\mathcal{L}_{even})=[(\varphi^{\prime},0)]\), and \(\ker(\mathcal{L}_{odd})=[(0,\varphi)]\). Thus, for the matrix \(V\) with these restrictions we have \(V_{even}=(\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) and \(V_{odd}=(\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}\), where \(V_{even}\) and \(V_{odd}\) are, respectively, the restrictions of the matrix \(V\) to the even and odd periodic functions.
To calculate \(V_{even}\) and \(V_{odd}\) we use a numerical approach. To the best of our knowledge, the exact solution of the equation (1.1) is known only for \(s=1\). Therefore, we first generate the odd periodic two-lobe solution \(\varphi\) by using Newton's iteration method for \(s\in(\frac{1}{4},1]\). We then evaluate the necessary inner products numerically by using the trapezoidal rule. The numerical approach enables us to conclude that \(V_{even}=(\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) is negative while \(V_{odd}=(\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}\) is positive. Since \(n(\mathcal{L})=3\) and \(n(V)=1\), we see that the difference \(n(\mathcal{L})-n(V)=3-1=2\) is an even number and the spectral stability analysis over the full space is inconclusive. However, if we restrict our analysis to the space \(L_{per,odd}^{2}\times L_{per,odd}^{2}\) of odd periodic functions, we obtain \(n(\mathcal{L}_{odd})=1\) and \(V_{odd}>0\). Since the difference \(n(\mathcal{L}_{odd})-n(V_{odd})=1-0=1\) is an odd number, we obtain that the wave \(\Phi\) is spectrally unstable. The same conclusion holds if one considers the operator \(\mathcal{L}\) in the space \(L_{per,even}^{2}\times L_{per,even}^{2}\) of even periodic functions. In this case, we have \(n(\mathcal{L}_{even})=2\) and \(V_{even}<0\). Since the difference \(n(\mathcal{L}_{even})-n(V_{even})=2-1=1\) is also an odd number, we obtain that the wave \(\Phi\) is spectrally unstable.
In both cases, our main result is given by the following theorem:
**Theorem 1.2**.: _Let \(s\in\left(\frac{1}{4},1\right]\) and \(\omega>1\) be fixed. Consider \(\varphi\) as the odd and periodic two-lobe solution for the equation (1.6) obtained by the minimization problem (1.15). The periodic wave is spectrally unstable._
**Remark 1.3**.: _We can employ the abstract approach from [17] to establish the orbital instability of periodic waves within the energy space \(H_{per}^{s}\times H_{per}^{s}\). In fact, the proof of this assertion revolves around demonstrating the orbital instability within the space \(H_{per,odd}^{s}\times H_{per,odd}^{s}\) by exclusively taking into account the rotational symmetry. This is because the translational symmetry does not leave this space invariant. Since \(n(\mathcal{L}_{odd})=1\) and \(\ker(\mathcal{L}_{odd})=[(0,\varphi)]\), we can define a smooth function \(\mathsf{d}:(1,+\infty)\longrightarrow\mathbb{R}\) as follows: \(\mathsf{d}(\omega)=E(\varphi,0)-\omega F(\varphi,0)=G(\varphi,0)\). Utilizing equation (4.3), we deduce from the fact that \((\varphi,0)\) is a critical point of \(G\) in (1.7) that \(\mathsf{d}^{\prime\prime}(\omega)=-\frac{1}{2}\frac{d}{d\omega}\int_{-\pi}^{\pi}\varphi(x)^{2}dx=-V_{odd}<0\). By applying the instability theorem from [17], we conclude that the periodic wave \(\varphi\) is orbitally unstable in both \(H_{per,odd}^{s}\times H_{per,odd}^{s}\) and, consequently, in \(H_{per}^{s}\times H_{per}^{s}\)._
Our paper is organized as follows: In Section 2, we give some remarks on the orbital stability and the global well-posedness for the Cauchy problem associated to the equation (1.1). The existence of odd periodic minimizers with a two-lobe profile as well as the existence of small amplitude periodic waves are determined in Section 3. In Section 4, we present spectral properties for the linearized operator related to the dfNLS equation. Finally, our result about the spectral instability of periodic waves is proved in Section 5.
**Notation.** For \(s\geq 0\), the real Sobolev space \(H_{per}^{s}:=H_{per}^{s}(\mathbb{T})\) consists of all real-valued periodic distributions \(f\) such that
\[\|f\|_{H_{per}^{s}}^{2}:=2\pi\sum_{k=-\infty}^{\infty}(1+k^{2})^{s}|\hat{f}(k) |^{2}<\infty, \tag{1.20}\]
where \(\hat{f}\) is the periodic Fourier transform of \(f\) and \(\mathbb{T}=[-\pi,\pi]\). The space \(H_{per}^{s}\) is a Hilbert space with the inner product denoted by \((\cdot,\cdot)_{H_{per}^{s}}\). When \(s=0\), the space \(H_{per}^{s}\) is isometrically isomorphic to the space \(L_{per}^{2}:=H_{per}^{0}\) (see, e.g., [21]). The norm and inner product in \(L_{per}^{2}\) will be denoted by \(\|\cdot\|_{L_{per}^{2}}\) and \((\cdot,\cdot)_{L_{per}^{2}}\), respectively. To avoid overloading of notation, we omit the interval \([-\pi,\pi]\) of the space \(H_{per}^{s}(\mathbb{T})\) and we denote it simply by \(H_{per}^{s}\). In addition, the norm given in (1.20) can be written as (see [2])
\[\|f\|_{H_{per}^{s}}^{2}=\|(-\Delta)^{\frac{s}{2}}f\|_{L_{per}^{2}}^{2}+\|f\|_{L_ {per}^{2}}^{2}. \tag{1.21}\]
For \(s\geq 0\), we denote \(H_{per,odd(even)}^{s}:=\{f\in H_{per}^{s}\ ;\ f\ \text{ is an odd(even) function}\}\), endowed with the norm and inner product in \(H_{per}^{s}\). Since \(\mathbb{C}\) can be identified with \(\mathbb{R}^{2}\), notations above can also be used in the complex/vectorial case in the following sense: For \(f\in H_{per}^{s}\times H_{per}^{s}\) we have \(f=f_{1}+if_{2}\equiv(f_{1},f_{2})\), where \(f_{i}\in H_{per}^{s}\), \(i=1,2\).
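As a concrete illustration of (1.20)-(1.21), the \(H^{s}_{per}\) norm of a function sampled on a uniform grid of \([-\pi,\pi)\) can be evaluated from its discrete Fourier coefficients; the following small numpy sketch is ours and is not part of the paper.

```python
import numpy as np

def hs_norm_sq(f_samples, s):
    """||f||_{H^s_per}^2 via the Fourier-coefficient formula (1.20)."""
    N = f_samples.size
    fhat = np.fft.fft(f_samples) / N              # approximate Fourier coefficients of f
    k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
    return 2.0 * np.pi * np.sum((1.0 + k ** 2) ** s * np.abs(fhat) ** 2)

# Sanity check: for f(x) = sin(x) the value is 2^s * pi, which reduces to
# ||sin||_{L^2_per}^2 = pi when s = 0.
# x = np.linspace(-np.pi, np.pi, 256, endpoint=False)
# print(hs_norm_sq(np.sin(x), 0.5))
```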
## 2 Remarks on the orbital stability and global well-posedness.
Our aim in this section is to give a brief remark concerning the orbital stability of periodic waves and the local and global well-posedness of the Cauchy problem associated to the dfNLS equation, namely
\[\begin{cases}iU_{t}+(-\Delta)^{s}U-|U|^{2}U=0,\\ U(x,0)=U_{0}(x).\end{cases} \tag{2.1}\]
Indeed, if \(U=U(x,t)\) is a solution of (1.1), then so are \(e^{-i\zeta}U\) and \(U(x-r,t)\) for any \(\zeta,r\in\mathbb{R}\). Considering \(U=(u,v)\), we obtain that (1.1) is invariant under the transformations
\[S_{1}(\zeta)U:=\left(\begin{array}{cc}\cos\zeta&\sin\zeta\\ -\sin\zeta&\cos\zeta\end{array}\right)\left(\begin{array}{c}u\\ v\end{array}\right) \tag{2.2}\]
and
\[S_{2}(r)U:=\left(\begin{array}{c}u(\cdot-r,\cdot)\\ v(\cdot-r,\cdot)\end{array}\right). \tag{2.3}\]
The actions \(S_{1}\) and \(S_{2}\) define unitary groups in \(H^{s}_{per}\times H^{s}_{per}\) with infinitesimal generators given by
\[S^{\prime}_{1}(0)U:=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\left(\begin{array}{c}u\\ v\end{array}\right)=J\left(\begin{array}{c}u\\ v\end{array}\right)\mbox{ and }S^{\prime}_{2}(0)U:=\partial_{x}\left( \begin{array}{c}u\\ v\end{array}\right).\]
Since the equation (1.1) is invariant under the actions of \(S_{1}\) and \(S_{2}\), we define the orbit generated by \(\Phi=(\varphi,0)\) as
\[\mathcal{O}_{\Phi}=\left\{S_{1}(\zeta)S_{2}(r)\Phi;\zeta,r\in\mathbb{R}\right\} =\left\{\left(\begin{array}{cc}\cos\zeta&\sin\zeta\\ -\sin\zeta&\cos\zeta\end{array}\right)\left(\begin{array}{c}\varphi(\cdot -r)\\ 0\end{array}\right);\;\zeta,r\in\mathbb{R}\right\}.\]
The pseudometric \(d\) in \(H^{s}_{per}\times H^{s}_{per}\) is given by \(d(U,W):=\inf\{\|U-S_{1}(\zeta)S_{2}(r)W\|_{H^{s}_{per}\times H^{s}_{per}};\;\zeta,r\in\mathbb{R}\}.\) In other words, the distance between \(U\) and \(W\) is the distance between \(U\) and the orbit generated by \(W\) under the actions of rotation and translation, so that \(d(U,\Phi)=d(U,\mathcal{O}_{\Phi})\).
We now present our notion of orbital stability.
**Definition 2.1**.: _We say that \(\Phi\) is orbitally stable in \(H^{s}_{per}\times H^{s}_{per}\) provided that, given \(\varepsilon>0\), there exists \(\delta>0\) with the following property: if \(U_{0}\in H^{s}_{per}\times H^{s}_{per}\) satisfies \(\left\|U_{0}-\Phi\right\|_{H^{s}_{per}\times H^{s}_{per}}<\delta\), then the global solution \(U(t)\) defined in the semi-interval \([0,+\infty)\) satisfies \(d(U(t),\mathcal{O}_{\Phi})<\varepsilon\), for all \(t\geqslant 0.\) Otherwise, we say that \(\Phi\) is orbitally unstable in \(H^{s}_{per}\times H^{s}_{per}\)._
Now, we present a global well-posedness result in \(H^{s}_{per}\times H^{s}_{per}\).
**Proposition 2.2**.: _Let \(s\in\left(\frac{1}{4},1\right]\) be fixed. The Cauchy problem in (2.1) is globally well-posed in \(H^{s}_{per}\times H^{s}_{per}\). More precisely, for any \(U_{0}\in H^{s}_{per}\times H^{s}_{per}\) there exists a unique global solution \(U\in C([0,+\infty),H^{s}_{per}\times H^{s}_{per})\) such that \(U(0)=U_{0}\) and it satisfies (1.1). Moreover, for each \(T>0\) the mapping_
\[U_{0}\in H^{s}_{per}\times H^{s}_{per}\longmapsto U\in C([0,T],H^{s}_{per} \times H^{s}_{per})\]
_is continuous._
Proof.: The existence of local solutions can be established from the arguments in [6] where the authors have used Galerkin's method and the fact that \(E(U)\) is a non-negative conserved quantity.
**Remark 2.3**.: _We see from Definition 2.1 that one of the requirements to establish the orbital instability in the energy space \(H^{s}_{per}\times H^{s}_{per}\) is the existence of a convenient initial-data \(U_{0}\) and a finite time \(T^{\ast}>0\) such that \(\lim_{t\longrightarrow T^{\ast}}\|U(t)\|_{H^{s}_{per}\times H^{s}_{per}}=+\infty\). Since \(E(U)\) in (1.3) is non-negative, we can obtain global solutions in time in \(H^{s}_{per}\times H^{s}_{per}\) by a simple a priori estimate argument and the time \(T^{\ast}>0\) above does not exist. Thus, the results obtained in Remark 1.3 give us that \(\Phi\) is orbitally unstable in \(H^{s}_{per}\times H^{s}_{per}\) even though the evolution \(U\) is global in time._
## 3 Existence of periodic waves.
This section is devoted to proving the existence of odd periodic waves for the equation (1.6) using two approaches. First, we use a variational characterization by minimizing a suitable constrained functional to obtain the existence of odd periodic waves with a two-lobe profile. Second, we present some tools concerning the existence of small amplitude periodic waves using bifurcation theory.
### Existence of periodic waves via variational approach.
In this subsection, we prove the existence of odd periodic solutions for (1.6) by considering the variational problem given by (1.15). Before that, we define the concept of solution with _two-lobe_ profile.
**Definition 3.1**.: _We say that a periodic wave satisfying the equation (1.6) has a two-lobe profile if there exists only one maximum and minimum on \([-\pi,\pi]\). Without loss of generality, we assume that the maximum point occurs at \(x=\frac{\pi}{2}\) and the minimum point at \(x=-\frac{\pi}{2}\)._
Let \(\tau>0\) be fixed. Consider the set
\[\mathcal{Y}_{\tau}:=\left\{u\in H^{s}_{per,odd}\;;\;\|u\|^{2}_{L^{2}_{per}}= \tau\right\}. \tag{3.1}\]
It is clear that
\[\mathcal{E}(u):=E(u,0)\geq 0. \tag{3.2}\]
One can establish the result of existence as follows:
**Proposition 3.2**.: _Let \(s\in\left(\frac{1}{4},1\right]\) and \(\tau>0\) be fixed. The minimization problem_
\[\Gamma:=\inf_{u\in\mathcal{Y}_{\tau}}\mathcal{E}(u) \tag{3.3}\]
_has at least one solution, that is, there exists a real-valued function \(\varphi\in\mathcal{Y}_{\tau}\) such that \(\mathcal{E}(\varphi)=\Gamma\). Moreover, there exists \(\omega>0\) such that \(\varphi\) satisfies_
\[(-\Delta)^{s}\varphi-\omega\varphi+\varphi^{3}=0.\]
Proof.: Using the smoothness of the functional \(\mathcal{E}\), we may consider a sequence of minimizers \((u_{n})_{n\in\mathbb{N}}\subset\mathcal{Y}_{\tau}\) such that
\[\mathcal{E}(u_{n})\longrightarrow\Gamma,\;\;\;n\longrightarrow\infty. \tag{3.4}\]
Since \(||(-\Delta)^{\frac{s}{2}}u_{n}||^{2}_{L^{2}_{per}}+||u_{n}||^{2}_{L^{2}_{per}} \leq 2\mathcal{E}(u_{n})+\tau\), we obtain by (3.4) that the sequence \((u_{n})_{n\in\mathbb{N}}\) is bounded in \(H^{s}_{per,odd}\). For \(s\in\left(\frac{1}{4},1\right]\), we see that the Sobolev space \(H^{s}_{per,odd}\) is reflexive. Thus, there exists \(\varphi\in H^{s}_{per,odd}\) such that (modulo a subsequence),
\[u_{n}\longrightarrow\varphi\text{ weakly in }H^{s}_{per,odd}. \tag{3.5}\]
Again, for \(s\in\left(\frac{1}{4},1\right]\) we obtain that the embedding
\[H^{s}_{per,odd}\hookrightarrow L^{4}_{per,odd}\hookrightarrow L^{2}_{per,odd} \tag{3.6}\]
is compact (see [4, Theorem 2.8] or [1, Theorem 5.1]). Thus, modulo a subsequence, we also have
\[u_{n}\longrightarrow\varphi\text{ in }L^{4}_{per,odd}\hookrightarrow L^{2}_{per,odd}. \tag{3.7}\]
Moreover, using the estimate
\[\begin{array}{rcl}\left|\int_{-\pi}^{\pi}\left(u_{n}^{4}-\varphi^{4}\right)\,dx\right|&\leq&\int_{-\pi}^{\pi}\left|u_{n}^{4}-\varphi^{4}\right|dx\\ &\leq&\left(\|\varphi\|^{3}_{L^{4}_{per}}+\|\varphi\|^{2}_{L^{4}_{per}}\left\|u_{n}\right\|_{L^{4}_{per}}+\|\varphi\|_{L^{4}_{per}}\left\|u_{n}\right\|^{2}_{L^{4}_{per}}+\|u_{n}\|^{3}_{L^{4}_{per}}\right)\|u_{n}-\varphi\|_{L^{4}_{per}}\end{array}\]
and (3.7), it follows that \(\|\varphi\|^{2}_{L^{2}_{per}}=\tau\). Furthermore, since \(\mathcal{E}\) is lower semi-continuous, we have
\[\mathcal{E}(\varphi)\leq\liminf_{n\rightarrow\infty}\mathcal{E}(u_{n})\]
that is,
\[\mathcal{E}(\varphi)\leq\Gamma. \tag{3.8}\]
On the other hand, once \(\varphi\) satisfies \(\|\varphi\|^{2}_{L^{2}_{per}}=\tau\), we obtain
\[\mathcal{E}(\varphi)\geq\Gamma. \tag{3.9}\]
Using (3.8) and (3.9), we conclude
\[\mathcal{E}(\varphi)=\Gamma=\inf_{u\in\mathcal{Y}_{\tau}}\mathcal{E}(u).\]
In other words, the function \(\varphi\in\mathcal{Y}_{\tau}\subset H^{s}_{per,odd}\) is a minimizer of the problem (3.3). Notice that since \(\tau>0\), we see that \(\varphi\) is a real-valued function such that \(\varphi\not\equiv 0\).
By the Lagrange multiplier theorem, there exists a constant \(c_{1}\in\mathbb{R}\) such that
\[(-\Delta)^{s}\varphi+\varphi^{3}=c_{1}\varphi.\]
Denoting \(c_{1}:=\omega\), we see that
\[\int_{-\pi}^{\pi}((-\Delta)^{\frac{s}{2}}\varphi)^{2}+\varphi^{4}dx=\omega \int_{-\pi}^{\pi}\varphi^{2}dx.\]
Thus, \(\omega>0\) and \(\varphi\) is a periodic minimizer of the problem (1.15) satisfying the equation
\[(-\Delta)^{s}\varphi-\omega\varphi+\varphi^{3}=0. \tag{3.10}\]
\(\blacksquare\)
**Proposition 3.3** (Existence of Odd Solutions).: _Let \(s\in\left(\frac{1}{4},1\right]\) be fixed. Let \(\varphi\in H^{s}_{per,odd}\) be the real-valued periodic minimizer given by the Proposition 3.2. If \(\omega\in(0,1]\), then \(\varphi\) is the zero solution of the equation (1.6). If \(\omega>1\), then \(\varphi\) is the odd periodic two-lobe solution for the equation (1.6)._
Proof.: First, by a bootstrapping argument we infer that \(\varphi\in H^{\infty}_{per,odd}\) (see [12, Proposition 3.1] and [26, Proposition 2.4]). Second, the minimizer could be the zero function, and we need to rule out this case in order to guarantee that it has a two-lobe profile. Indeed, if \(\varphi=0\), the operator \(\mathcal{L}_{1}\) in (1.9) is then given by
\[\mathcal{L}_{1}=(-\Delta)^{s}-\omega. \tag{3.11}\]
Using the Poincare-Wirtinger inequality, we have that
\[(\mathcal{L}_{1}u,u)_{L^{2}_{per}}=((-\Delta)^{\frac{s}{2}}u,(-\Delta)^{\frac {s}{2}}u)_{L^{2}_{per}}-\omega||u||^{2}_{L^{2}_{per}}\geqslant(1-\omega)||u|| ^{2}_{L^{2}_{per}}, \tag{3.12}\]
for all \(u\in H^{2s}_{per,odd}\). Thus, the operator \(\mathcal{L}_{1}\) in (3.11) is non-negative when \(\omega\in(0,1]\), so that \(n(\mathcal{L}_{1})=0\) when \(\mathcal{L}_{1}\) is defined over the space \(L^{2}_{per,odd}\). On the other hand, we see that \(\varphi\) is also a periodic minimizer of \(G\) restricted to only one constraint, so it is expected that \(n(\mathcal{L}_{1})\leqslant 1\). In addition, we will see in Section 4 that if \(\varphi\) is a nonconstant minimizer then \(\mathcal{L}_{1}\varphi^{\prime}=0\), which implies, by Sturm's oscillation theorem for fractional linear operators, that in fact \(\mathrm{n}(\mathcal{L}_{1})=1\). Thus, we conclude that the zero solution \(\varphi\equiv 0\) can be a minimizer of (3.3) only for \(\omega\in(0,1]\); for \(\omega\in(1,+\infty)\), the minimizer \(\varphi\) is non-constant.
Next, let us consider \(\psi:=\varphi\left(\cdot-\frac{\pi}{2}\right)\), a translation of \(\varphi\) by a quarter of the period \(2\pi\). Since \(\varphi\) is odd, we see that \(\psi\) is even, and it is easy to see that \(\mathcal{E}(\psi)=\Gamma\). Using this new minimizer \(\psi\), we can consider the even symmetric rearrangement \(\psi^{\star}\) associated with \(\psi\); it is well known that such rearrangements leave our constraint \(\int_{-\pi}^{\pi}u^{2}=\tau\) and the norm in \(L^{4}_{per}\) invariant (see [11, Appendix A]). Moreover, due to the fractional Polya-Szego inequality in [11, Lemma A.1] (see also [29, Theorem 1.1]), we obtain the following inequality
\[\int_{-\pi}^{\pi}\left((-\Delta)^{\frac{s}{2}}\psi^{\star}\right)^{2}dx\leqslant \int_{-\pi}^{\pi}\left((-\Delta)^{\frac{s}{2}}\psi\right)^{2}dx.\]
Thus, by (3.3), we also obtain \(\mathcal{E}(\psi^{\star})=\Gamma\) in the Sobolev space \(H^{s}_{per,even}\), with \(\psi^{\star}\) being symmetrically decreasing away from the maximum point \(x=0\). To simplify the notation, we assume \(\psi=\psi^{\star}\), so that \(\varphi\) has an odd two-lobe profile according to Definition 3.1. \(\blacksquare\)
### Small-amplitude periodic waves
The existence of, and convenient formulas for, the odd small amplitude periodic waves associated to the equation (1.6) will be presented in this section. We show that the local bifurcation curve of odd small amplitude waves can be extended to a global one and that these solutions are unique. This fact is very important in our context since it provides an alternative way to prove the existence of odd periodic solutions (not necessarily having a two-lobe profile) for the equation (1.6) when \(s\in(0,1]\); to do so, we use the theory contained in [10, Chapters 8 and 9].
In addition, the existence of small amplitude periodic waves helps us in the numeric experiments contained in Section 4.
We will give some steps to prove the existence of small amplitude periodic waves. For \(s\in(0,1]\), let \(F:H^{2s}_{per,odd}\times(0,+\infty)\longrightarrow L^{2}_{per,odd}\) be the smooth map defined by
\[F(g,\omega)=(-\Delta)^{s}g-\omega g+g^{3}. \tag{3.13}\]
We see that \(F(g,\omega)=0\) if and only if \(g\in H^{2s}_{per,odd}\) satisfies (1.6) with corresponding wave frequency \(\omega\in(0,+\infty)\). The Frechet derivative of the function \(F\) with respect to the first variable is then given by
\[D_{g}F(g,\omega)f=\left((-\Delta)^{s}-\omega+3g^{2}\right)f. \tag{3.14}\]
Let \(\omega_{0}>0\) be fixed. At the point \((0,\omega_{0})\in H^{2s}_{per,odd}\times(0,+\infty)\), we have that
\[D_{g}F(0,\omega_{0})=(-\Delta)^{s}-\omega_{0}. \tag{3.15}\]
As far as we can see, the nontrivial kernel of \(D_{g}F(0,\omega_{0})\) is determined by odd periodic functions \(h\in H^{2s}_{per}\) such that
\[\widehat{h}(k)(-\omega_{0}+|k|^{2s})=0,\qquad\;k\in\mathbb{Z}. \tag{3.16}\]
It follows that \(D_{g}F(0,\omega_{0})\) has a one-dimensional kernel if and only if \(\omega_{0}=|k|^{2s}\) for some \(k\in\mathbb{Z}\). In other words, we have
\[\text{Ker}D_{g}F(0,\omega_{0})=[\tilde{\varphi}_{k}], \tag{3.17}\]
where \(\tilde{\varphi}_{k}(x)=\sin(kx)\).
We are enabled to apply the local bifurcation theory contained in [10, Chapter 8.4] to obtain the existence of an open interval \(I\) containing \(\omega_{0}>0\), an open ball \(B(0,r)\subset H^{2s}_{per,odd}\) for some \(r>0\) and a unique smooth mapping
\[\omega\in I\longmapsto\varphi:=\varphi_{\omega}\in B(0,r)\subset H^{2s}_{per, odd}\]
such that \(F(\varphi,\omega)=0\) for all \(\omega\in I\) and \(\varphi\in B(0,r)\).
Next, for each \(k\in\mathbb{N}\), the point \((0,\tilde{\omega}_{k})\) where \(\tilde{\omega}_{k}:=|k|^{2s}\) is a bifurcation point. Moreover, there exists \(a_{0}>0\) and a local bifurcation curve
\[a\in(0,a_{0})\longmapsto(\varphi_{k,a},\omega_{k,a})\in H^{2s}_{per,odd} \times(0,+\infty) \tag{3.18}\]
which emanates from the point \((0,\tilde{\omega}_{k})\) to obtain odd small amplitude \(\frac{2\pi}{k}\)-periodic solutions for the equation (1.6). In addition, we have \(\omega_{k,0}=\tilde{\omega}_{k}\), \(D_{a}\varphi_{k,0}=\tilde{\varphi}_{k}\) and all solutions of \(F(g,\omega)=0\) in a neighbourhood of \((0,\tilde{\omega}_{k})\) belongs to the curve in (3.18) depending on \(a\in(0,a_{0})\).
**Proposition 3.4**.: _Let \(s\in(0,1]\) be fixed. There exists \(a_{0}>0\) such that for all \(a\in(0,a_{0})\) there is a unique odd local periodic solution \(\varphi\) for the problem (1.6) given by the following expansion:_
\[\varphi(x)=a\sin(x)+\frac{1}{4(3^{2s}-1)}a^{3}\sin(3x)+\mathcal{O}(a^{5}), \tag{3.19}\]
_and_
\[\omega=1+\frac{3}{4}a^{2}+\mathcal{O}(a^{4}), \tag{3.20}\]
_For \(s\in(\frac{1}{4},1]\), the pair \((\varphi,\omega)\in H^{s}_{per,odd}\times(1,+\infty)\) is global in terms of the parameter \(\omega>1\) and it satisfies (1.6)._
Proof.: The first part of the proposition has been determined in (3.18) by considering \(k=1\). The expression in (3.19) can be established similarly to [27, Proposition 3.1].
To obtain that the local curve (3.18) extends to a global one for the case \(s\in(\frac{1}{4},1]\), we need to prove first that \(D_{g}F(g,\omega)\) given by (3.14) is a Fredholm operator of index zero. Let us define the set \(S=\{(g,\omega)\in D(F):F(g,\omega)=0\}\). Consider \((g,\omega)\in H^{2s}_{per,odd}\times(1,+\infty)\) as a solution of \(F(g,\omega)=0\). We have, for \(Y:=L^{2}_{per,odd}\) that
\[\mathcal{L}_{1|_{Y}}\psi\equiv D_{g}F(g,\omega)\psi=\left((-\Delta)^{s}+3g^{2 }\right)\psi-\omega\psi=0, \tag{3.21}\]
has two linearly independent solutions and at most one belongs to \(H^{2s}_{per,odd}\) (see [11, Theorem 3.12]). If there are no solutions in \(H^{2s}_{per,odd}\backslash\{0\}\), then the equation \(\left((-\Delta)^{s}-\omega+3g^{2}\right)\psi=f\) has a unique non-trivial solution \(\psi\in H^{2s}_{per,odd}\) for all \(f\in Y\) since \(\mathrm{Ker}(\mathcal{L}_{1|_{Y}})^{\perp}=\mathrm{Range}(\mathcal{L}_{1|_{Y} })=Y\).
On the other hand, if (3.21) has a nontrivial solution \(\theta\in H^{2s}_{per,odd}\), we can use the standard Fredholm alternative to obtain that the equation \(\left((-\Delta)^{s}-\omega+3g^{2}\right)\psi=f\) has a solution if, and only if,
\[\int_{-\pi}^{\pi}\theta(x)f(x)dx=0,\]
where \(f\in Y\) is the given right-hand side. We then conclude in both cases that the Frechet derivative of \(F\) with respect to \(g\) given by (3.14) is a Fredholm operator of index zero.
Let us prove that every bounded and closed subset of \(S\) is a compact set in \(H^{2s}_{per,odd}\times(1,+\infty)\). For \(g\in H^{2s}_{per,odd}\) and \(\omega>1\), we define \(\widetilde{F}(g,\omega)=((-\Delta)^{s}-\omega)^{-1}g^{3}\). Since \(s\in(\frac{1}{4},1]\), we see that \(\widetilde{F}\) is well defined since \(H^{2s}_{per,odd}\) is a Banach algebra, \((g,\omega)\in S\) if and only if \(\widetilde{F}(g,\omega)=g\) and \(\widetilde{F}\) maps \(H^{2s}_{per,odd}\times(1,+\infty)\) into \(H^{4s}_{per,odd}\). The compact embedding \(H^{4s}_{per,odd}\hookrightarrow H^{2s}_{per,odd}\) shows that \(\widetilde{F}\) maps bounded and closed sets of \(H^{2s}_{per,odd}\times(1,+\infty)\) into relatively compact subsets of \(H^{2s}_{per,odd}\). Thus, if \(R\subset S\subset H^{2s}_{per,odd}\times(1,+\infty)\) is a bounded and closed set, we obtain that \(\widetilde{F}(R)\) is relatively compact in \(H^{2s}_{per,odd}\). Since \(R\) is closed, any sequence \(\{(\varphi_{n},\omega_{n})\}_{n\in\mathbb{N}}\) has a convergent sub-sequence in \(R\), so that \(R\) is compact in \(H^{2s}_{per,odd}\times(1,+\infty)\) as desired.
Finally, the frequency of the wave given by (3.20) is not constant and we are enabled to apply [10, Theorem 9.1.1] to extend globally the local bifurcation curve given in (3.18). More precisely, there is a continuous mapping
\[(1,+\infty)\ni\omega\longmapsto\varphi(\cdot,\omega)=\varphi_{\omega}\in H^{ 2s}_{per,odd} \tag{3.22}\]
where \(\varphi_{\omega}\) solves equation (1.6).
**Remark 3.5**.: _It is important to mention that \(\varphi\in H^{2s}_{per,odd}\) given by (3.19) is a solution of the minimization problem (3.3) by using similar arguments as in [27, Remark 3.2]._
## 4 Spectral analysis
Using the variational characterization determined in the last section, we obtain useful spectral properties for the linearized operator \(\mathcal{L}\) in (1.8) around the periodic wave \(\varphi\) obtained in Proposition 3.3. Let \(s\in\left(\frac{1}{4},1\right]\) and \(\omega>1\) be fixed. Consider \(\varphi\in H^{\infty}_{per,odd}\) as the periodic minimizer obtained by Proposition 3.3. Our intention is to study the spectral properties of the matrix operator
\[\mathcal{L}=\left(\begin{array}{cc}\mathcal{L}_{1}&0\\ 0&\mathcal{L}_{2}\end{array}\right):H^{2s}_{per}\times H^{2s}_{per}\subset L^{ 2}_{per}\times L^{2}_{per}\longrightarrow L^{2}_{per}\times L^{2}_{per},\]
where \(\mathcal{L}_{1},\mathcal{L}_{2}\) are defined by
\[\mathcal{L}_{1}=(-\Delta)^{s}-\omega+3\varphi^{2}\qquad\text{and}\qquad \mathcal{L}_{2}=(-\Delta)^{s}-\omega+\varphi^{2}. \tag{4.1}\]
We see that operators \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are the real and imaginary parts of the main operator \(\mathcal{L}\). Using (3.2), we obtain the inequality
\[\mathcal{G}(u):=G(u,0)\leq\mathcal{E}(u).\]
**Lemma 4.1**.: _Let \(s\in\left(\frac{1}{4},1\right]\) and \(\omega>1\) be fixed. If \(\varphi\in H^{\infty}_{per,odd}\) is the periodic minimizer given in Proposition 3.3, then \(n(\mathcal{L}_{1})=1\) and \(z(\mathcal{L}_{1})=1\). In particular, we have \(n(\mathcal{L}_{1,even})=1\), \(z(\mathcal{L}_{1,even})=1\), \(n(\mathcal{L}_{1,odd})=0\), and \(z(\mathcal{L}_{1,odd})=0\)._
Proof.: The fact that \(\varphi\) is a minimizer of \(\mathcal{E}\) defined in \(H^{s}_{per,odd}\) enables us to deduce that \(\varphi\) is also a minimizer of \(\mathcal{G}\) defined in \(H^{s}_{per,odd}\). By [5, Theorem 30.2.2], we infer
\[\mathcal{L}_{1,odd}\big{|}_{{}_{\{\varphi\}^{\perp}}}\geq 0,\]
where \(\mathcal{L}_{1,odd}\) is the restriction of \(\mathcal{L}_{1}\) in \(L^{2}_{per,odd}\) (the subspace of \(L^{2}_{per}\) constituted by odd periodic functions). Since \(\varphi\) is a minimizer of the constrained variational problem (3.3) with only one constraint, we have
\(n(\mathcal{L}_{1,odd})\leqslant 1\). In addition, \(\varphi^{\prime}\) is even and it has two symmetric zeroes, namely \(\pm x_{0}\), in the interval \([-\pi,\pi]\). By the Krein-Rutman theorem, we see that the first eigenvalue of \(\mathcal{L}_{1}\) needs to be associated to an even periodic function and, by the oscillation theorem, we have \(n(\mathcal{L}_{1})=n(\mathcal{L}_{1,odd})+n(\mathcal{L}_{1,even})\leqslant 2\). On the other hand, let us consider \(\psi=\varphi(\cdot-\pi/2)\). The function \(\psi\) is an even minimizer of the problem
\[\Gamma:=\inf_{u\in\widehat{\mathcal{Y}}_{\tau}}\mathcal{E}(u), \tag{4.2}\]
where \(\widehat{\mathcal{Y}}_{\tau}=\{u\in H^{s}_{per,even}\ ;\ \|u\|^{2}_{L^{2}_{per}}=\tau\}\). Using similar arguments as above, it follows that the restriction \(\widetilde{\mathcal{L}}_{1,even}\) of \(\widetilde{\mathcal{L}}_{1}=(-\Delta)^{s}-\omega+3\psi^{2}\) to even functions satisfies \(n(\widetilde{\mathcal{L}}_{1,even})\leqslant 1\). Again, by the Krein-Rutman theorem, it follows that the first eigenvalue of \(\widetilde{\mathcal{L}}_{1}\) needs to be associated with an even periodic function, and thus \(n(\widetilde{\mathcal{L}}_{1,even})=1\). Next, by [20, Lemma 3.3 - (L1)], we have that \(\psi^{\prime}\) is an eigenfunction associated with the lowest eigenvalue of \(\widetilde{\mathcal{L}}_{1,odd}\), which is simple. Therefore \(n(\widetilde{\mathcal{L}}_{1,odd})=0\), so that \(n(\mathcal{L}_{1})=1\), and the eigenfunction associated with the negative eigenvalue is positive (or negative) and even, by an application of the Krein-Rutman theorem.
We now prove that \(z(\mathcal{L}_{1})=1\). First, again by [20, Lemma 3.3 - (L1)], we see that \(0\) is the first eigenvalue of \(\widetilde{\mathcal{L}}_{1,odd}\) and that it is simple. Using the implicit function theorem and similar arguments as in [27, Lemma 2.8], we obtain, for a fixed value \(\omega_{0}>1\), the existence of an open interval \(\mathcal{I}\) containing \(\omega_{0}\) and a smooth function
\[\mathcal{I}\ni\omega\longmapsto\psi(\cdot,\omega):=\psi_{\omega}\in H^{2s}_{ per,even} \tag{4.3}\]
that solves equation (1.6). Differentiating this equation with respect to \(\omega\in\mathcal{I}\), it follows that \(\mathcal{L}_{1}(\partial_{\omega}\psi)=\psi\) and, since \(\varphi=\psi(\cdot+\pi/2)\), we automatically obtain \(\mathcal{L}_{1}(\partial_{\omega}\varphi)=\varphi\), so that \(\varphi\in\operatorname{range}(\mathcal{L}_{1})\).
Suppose the existence of \(\omega_{0}>1\) such that \(\{\varphi^{\prime},\bar{y}\}\) is an orthogonal basis for \(\ker(\mathcal{L}_{1})\). Since \(\varphi\) is odd and \(\varphi^{\prime}\in\ker(\mathcal{L}_{1})\) is even, we see that \(\bar{y}\) is odd and defined in the symmetric interval \([-\pi,\pi]\). The oscillation theorem implies that \(\bar{y}\) has exactly two zeroes over the interval \([-\pi,\pi)\) since \(\varphi^{\prime}\) has two symmetric zeroes \(\pm x_{0}\) in the interval \((-\pi,\pi)\). We can suppose, without loss of generality, that \(\bar{y}<0\) in \((-\pi,0)\) and \(\bar{y}>0\) in \((0,\pi)\); this behaviour is also satisfied by our solution \(\varphi\), since we have considered that \(\varphi\) has a two-lobe profile by Proposition 3.3. The fact that \(\mathcal{L}_{1}\) is a self-adjoint operator defined in \(L^{2}_{per}\) with domain \(H^{2s}_{per}\) enables us to conclude that \(\operatorname{range}(\mathcal{L}_{1})=[\ker(\mathcal{L}_{1})]^{\perp}\), and thus \((\varphi,\bar{y})_{L^{2}_{per}}=(\mathcal{L}_{1}(\partial_{\omega}\varphi), \bar{y})_{L^{2}_{per}}=0\). This leads to a contradiction since also \(\varphi<0\) in \((-\pi,0)\) and \(\varphi>0\) in \((0,\pi)\). \(\blacksquare\)
**Lemma 4.2**.: _Let \(s\in\left(\frac{1}{4},1\right]\) and \(\omega>1\) be fixed. If \(\varphi\in H^{\infty}_{per,odd}\) is the periodic minimizer given by Proposition 3.3, then \(n(\mathcal{L}_{2})=2\) and \(z(\mathcal{L}_{2})=1\). In particular, we have \(n(\mathcal{L}_{2,even})=1\), \(z(\mathcal{L}_{2,even})=0\), \(n(\mathcal{L}_{2,odd})=1\), and \(z(\mathcal{L}_{2,odd})=1\)._
Proof.: First, we see that \(\varphi\) is an odd eigenfunction of \(\mathcal{L}_{2}\) associated to the eigenvalue \(0\) and having two zeroes in the half-open interval \([-\pi,\pi)\). From the oscillation theorem for fractional linear operators, we obtain that \(0\) needs to be the second or the third eigenvalue of \(\mathcal{L}_{2}\).
On the other hand, let \(\mathcal{L}_{2,even}\) be the restriction of \(\mathcal{L}_{2}\) to the even sector of \(L^{2}_{per}\). Thus, by the Krein-Rutman theorem we see that the first eigenvalue of \(\mathcal{L}_{2}\) is always simple and is associated to a positive (negative) even eigenfunction, so that \(n(\mathcal{L}_{2,even})\geqslant 1\). We obtain by Courant's min-max characterization of the first eigenvalue that
\[\lambda_{1}=\inf\{(\mathcal{L}_{2,odd}u,u)_{L^{2}_{per}},\ ||u||_{L^{2}_{per}}=1\}=\inf\{( \mathcal{L}_{1,odd}u,u)_{L^{2}_{per}}-2(\varphi^{2}u,u)_{L^{2}_{per}},\ ||u||_{L^{2}_{per}}=1\}. \tag{4.4}\]
Next, \(n(\mathcal{L}_{1})=1\) and the first negative eigenvalue of \(\mathcal{L}_{1}\) is associated to an even eigenfunction. In addition, we see that \(0\) is associated to the eigenfunction \(\varphi^{\prime}\) which is also even and thus, for \(u\in H^{2s}_{per,odd}\) such that \(||u||_{L^{2}_{per}}=1\), we obtain \((\mathcal{L}_{1,odd}u,u)_{L^{2}_{per}}>0\) and by (4.4), we get
\[\begin{array}{rcl}\lambda_{1}&=&\inf\{(\mathcal{L}_{1,odd}u,u)_{L^{2}_{per}},\ ||u||_{L^{2}_{per}}=1\}-2\sup\{(\varphi^{2}u,u)_{L^{2}_{per}},\ ||u||_{L^{2}_{per}}=1\}\\ &<&-2\left(\varphi^{2}\frac{\varphi}{||\varphi||_{L^{2}_{per}}},\frac{\varphi}{||\varphi||_{L^{2}_{per}}}\right)_{L^{2}_{per}}<0.\end{array} \tag{4.5}\]
Since \(\lambda_{1}<0\), we obtain by (4.5) that \(n(\mathcal{L}_{2,odd})\geqslant 1\). The fact \(n(\mathcal{L}_{2})=n(\mathcal{L}_{2,odd})+n(\mathcal{L}_{2,even})\) and the oscillation theorem for fractional linear operators give us that \(n(\mathcal{L}_{2})=2\) as requested.
We prove that \(z(\mathcal{L}_{2})=1\). Indeed, since \(n(\mathcal{L}_{2,odd})=1\), we see that the corresponding eigenfunction \(p\) associated to the first eigenvalue of \(\mathcal{L}_{2,odd}\) is odd and consequently, \(q=p\left(\cdot-\frac{\pi}{2}\right)\) is an even function that changes its sign. Consider \(\widetilde{\mathcal{L}}_{2}=(-\Delta)^{s}-\omega+\psi^{2}\) a linear operator where \(\psi=\varphi\left(\cdot-\frac{\pi}{2}\right)\) is even. By Krein-Rutman's theorem, we have that the first eigenfunction of \(\mathcal{L}_{2}\) is simple and it is associated with a positive (negative) even periodic function and thus, \(0\) can not be an eigenvalue associated with \(\widetilde{\mathcal{L}}_{2,odd}\). Since \(z(\widetilde{\mathcal{L}}_{2})=z(\widetilde{\mathcal{L}}_{2,odd})+z( \widetilde{\mathcal{L}}_{2,even})\), we obtain from the fact \(\psi\) is even that \(z(\widetilde{\mathcal{L}}_{2})=z(\widetilde{\mathcal{L}}_{2,even})=1\). Therefore, using the translation transformation \(f=g\left(\cdot-\frac{\pi}{2}\right)\), we obtain \(z(\mathcal{L}_{2})=z(\mathcal{L}_{2,odd})=1\) as requested. \(\blacksquare\)
As a consequence of Lemma 4.1, we obtain the existence of a smooth curve of odd periodic solutions \(\varphi_{\omega}\) depending on the wave frequency \(\omega>1\), all of them with the same period \(2\pi\).
**Proposition 4.3**.: _Let \(s\in\left(\frac{1}{4},1\right]\) and \(\varphi_{0}\in H^{\infty}_{per,odd}\) be the solution obtained in Proposition 3.3 which is associated to the fixed value \(\omega_{0}>1\). Then, there exists a \(C^{1}\) mapping \(\omega\in\mathcal{I}\longmapsto\varphi_{\omega}\in H^{s}_{per,odd}\) defined in an open neighbourhood \(\mathcal{I}\subset(1,+\infty)\) of \(\omega_{0}>1\) such that \(\varphi_{\omega_{0}}=\varphi_{0}\)._
Proof.: The proof follows from the implicit function theorem. The fact has already been used in the proof of Lemma 4.1. \(\blacksquare\)
**Remark 4.4**.: _We cannot guarantee that, for each \(\omega\in\mathcal{I}\) given by Proposition 4.3, the wave \(\varphi_{\omega}\) solves the minimization problem (3.3), except at \(\omega=\omega_{0}\)._
The results determined in this subsection can be summarized in the following proposition:
**Proposition 4.5**.: _Let \(\varphi\) be the two-lobe profile obtained in Proposition 3.3. We have that \(n(\mathcal{L})=3\) and \(\mathrm{Ker}(\mathcal{L})=[(\varphi^{\prime},0),(0,\varphi)]\)._
\(\blacksquare\)
## 5 Numerical experiments - Proof of Theorem 1.2
In this section, we generate the periodic standing wave solutions of the dfNLS equation by using Newton's iteration method. The method has been used to construct the standing wave solutions for the focusing fractional NLS equation [24], the fractional KdV equation [26] and the fractional modified KdV equation [27]. We then calculate numerically the sign of the inner products \(V_{even}=(\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L^{2}_{per}}\) and \(V_{odd}=(\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L^{2}_{per}}\) for \(s\in(\frac{1}{4},1]\).
### Numerical generation of odd periodic waves
Applying the Fourier transform to the equation (1.6), we obtain
\[F(\widehat{\varphi})=\left(|\xi|^{2s}-\omega\right)\widehat{\varphi}+\widehat {\varphi^{3}}=0. \tag{5.1}\]
We choose the space interval as \([-\pi,\pi]\) and \(N=2^{12}\) Fourier modes. Numerically, \(\widehat{\varphi}\) is approximated by a discrete Fourier transform. Since \(\widehat{\varphi}\) is a vector with size \(N\times 1\) we need to solve a nonlinear system (5.1). Therefore, we employ a Newton iteration method as
\[\widehat{\varphi}_{n+1}=\widehat{\varphi}_{n}-\mathcal{J}^{-1}F(\widehat{ \varphi}_{n}). \tag{5.2}\]
Here, the Jacobian \(\mathcal{J}\) is defined by
\[\mathcal{J}\widehat{Q}=\left(|\xi|^{2s}-\omega\right)\widehat{Q}+3\widehat{ \varphi^{2}Q} \tag{5.3}\]
for some vector \(Q\). To avoid computing the inverse of the Jacobian directly, we use a Newton-Krylov method: the action of the inverse Jacobian is computed by the generalized minimal residual (GMRES) algorithm [31]. The iteration is stopped when the residual norm of the numerical solution is of order \(10^{-6}\).
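A minimal Python sketch of the scheme just described is given below: it applies the Jacobian (5.3) matrix-free, inverts it with GMRES, and seeds Newton's method with the small-amplitude wave (3.19), whose amplitude is obtained by inverting (3.20) at leading order. The function name, the tolerances and the optional initial guess `phi0` are illustrative choices of ours, not taken from the paper.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_profile(s, omega, phi0=None, N=2 ** 12, tol=1e-6, max_iter=100):
    """Newton-GMRES iteration for the Fourier-space system (5.1)."""
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)
    k = np.fft.fftfreq(N, d=1.0 / N)                     # integer wavenumbers
    symbol = np.abs(k) ** (2 * s) - omega                # Fourier symbol of (-Delta)^s - omega

    if phi0 is None:                                     # small-amplitude initial guess (3.19),
        a = np.sqrt(4.0 * (omega - 1.0) / 3.0)           # amplitude from (3.20) at leading order
        phi0 = a * np.sin(x)
    phi = phi0.copy()

    for _ in range(max_iter):
        res_hat = symbol * np.fft.fft(phi) + np.fft.fft(phi ** 3)   # residual of (5.1)
        if np.linalg.norm(np.fft.ifft(res_hat).real) < tol:
            break

        def jac(q_hat):                                  # action of the Jacobian (5.3)
            q_hat = np.asarray(q_hat).reshape(-1)
            q = np.fft.ifft(q_hat)
            return symbol * q_hat + 3.0 * np.fft.fft(phi ** 2 * q)

        J = LinearOperator((N, N), matvec=jac, dtype=complex)
        delta_hat, _ = gmres(J, res_hat, atol=1e-10)
        phi = phi - np.fft.ifft(delta_hat).real          # Newton update (5.2)
    return phi
```

For frequencies far from the bifurcation point, the optional argument `phi0` allows the continuation strategy of Section 5.2, in which the profile computed at the previous value of \(\omega\) seeds the next Newton solve.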
The periodic standing wave solution of the dfNLS equation with \(s=1\) is given in [16] as
\[\varphi(x)=\eta\ \mathrm{sn}\left(2\frac{\mathrm{K(k)}}{\pi}x,\ k\right), \tag{5.4}\]
where \(\eta=2\sqrt{2}k\dfrac{\mathrm{K(k)}}{\pi}\). Here \(\mathrm{K}(k)\) is the complete elliptic integral of first kind and \(\omega=4(1+k^{2})\dfrac{\mathrm{K}^{2}(k)}{\pi^{2}}\).
In order to test the accuracy of our scheme, we compare the exact solution (5.4) with the numerical solution obtained by using (3.19) as the initial guess. In the left panel of Figure 5.1, we present the exact and numerical solutions for the frequency \(\omega=1.5\). In the right panel, we illustrate \(L_{\infty}\)-error between the exact and numerical solutions. These results show that our numerical scheme captures the solution remarkably well.
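The comparison of Figure 5.1 can be reproduced, for instance, by evaluating (5.4) with scipy's elliptic routines (which take the parameter \(m=k^{2}\) rather than the modulus \(k\)) after matching the modulus to the prescribed frequency; `solve_profile` refers to the Newton sketch above, and the helper name below is ours.

```python
import numpy as np
from scipy.special import ellipj, ellipk
from scipy.optimize import brentq

def exact_profile_s1(omega, x):
    """Exact snoidal solution (5.4) for s = 1 at frequency omega > 1."""
    # modulus k solving omega = 4 (1 + k^2) K(k)^2 / pi^2
    k = brentq(lambda k: 4.0 * (1.0 + k ** 2) * ellipk(k ** 2) ** 2 / np.pi ** 2 - omega,
               1e-8, 1.0 - 1e-8)
    K = ellipk(k ** 2)
    eta = 2.0 * np.sqrt(2.0) * k * K / np.pi
    sn = ellipj(2.0 * K / np.pi * x, k ** 2)[0]
    return eta * sn

# x = np.linspace(-np.pi, np.pi, 2 ** 12, endpoint=False)
# err = np.max(np.abs(exact_profile_s1(1.5, x) - solve_profile(1.0, 1.5)))
```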
The exact solutions of the dfNLS equation are not known for \(s\in(0,1)\). The left panel of Figure 5.2 shows the numerically generated periodic wave profiles for several values of \(s\) with \(\omega=1.5\). We do not observe any significant change in the wave profiles for different values of \(s\). In the right panel of Figure 5.2 we present numerical wave profiles for various values of \(\omega\), with fixed \(s=0.5\). The results indicate that the amplitude of the wave increases with the increasing values of \(\omega\).
### Numerical results for stability
In this section, we numerically determine the behavior of the inner products \((\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}\) and \((\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) in order to conclude the spectral instability. In fact, since \(z(\mathcal{L}_{1})=1\) and \(\ker(\mathcal{L}_{1})=[\varphi^{\prime}]\), we obtain that \(\varphi\in\mathrm{range}(\mathcal{L}_{1})=\ker(\mathcal{L}_{1})^{\perp}\). Therefore, there exists a unique \(\chi\in H_{per,odd}^{2s}\) such that \(\mathcal{L}_{1}\chi=\varphi\).
Figure 5.1: The exact and the numerical solutions of the dfNLS equation (left) and the \(L_{\infty}\)-error between the exact and numerical solutions (right) where the wave frequency \(\omega=1.5\) and \(s=1\).
Figure 5.2: Numerical wave profiles for various values of \(s\), with fixed wave frequency \(\omega=1.5\) (left) and numerical wave profiles for various values of \(\omega\) with fixed \(s=0.5\) (right).
On the other hand, differentiating equation (1.6) with respect to \(\omega\) we have
\[\left((-\Delta)^{s}-\omega+3\varphi^{2}\right)\frac{d\varphi}{d\omega}=\varphi\]
which yields \(\frac{d\varphi}{d\omega}=\mathcal{L}_{1}^{-1}\varphi=\chi\) by uniqueness. Taking the inner product with \(\varphi\) gives
\[(\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}=\frac{1}{2}\frac{d}{d\omega}\|\varphi\|_{L_{per}^{2}}^{2}.\]
To determine the behavior of the inner products \((\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}\) and \((\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) we first obtain the wave profile \(\varphi\) numerically when \(\omega>1\). For small values of \(\omega\), we use (3.19) as the starting iteration. As is seen from Figure 5.2, the amplitude of the periodic wave increases for increasing values of \(\omega\). Therefore, the small amplitude solution (3.19) cannot be used as an initial iteration for larger values of \(\omega\). For this reason, we use a continuation method, i.e., we use the numerical solution for the previous \(\omega\) as an initial iteration and then the solutions are uniquely continued in \(\omega\). Next, we use the trapezoidal rule to approximate the integral \(\|\varphi\|_{L_{per}^{2}}^{2}\) for each \(\omega>1\). The values of the inner product obtained by using the numerical wave profile and the exact solution (5.4) are compared in the left panel of Figure 5.3. We observe that the results coincide very well. The right panel of the figure shows \(\|\varphi\|_{L_{per}^{2}}^{2}\) for several values of \(s\). The numerical results indicate that \(\|\varphi\|_{L_{per}^{2}}^{2}\) is an increasing function of \(\omega>1\); therefore, the inner product \((\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}\) is positive for \(s\in(\frac{1}{4},1]\). By Lemmas 4.1 and 4.2, we see that \(n(\mathcal{L}_{odd})=1\) and, since \(V_{odd}=(\mathcal{L}_{1}^{-1}\varphi,\varphi)_{L_{per}^{2}}\) is positive, the difference \(n(\mathcal{L}_{odd})-n(V_{odd})=1-0=1\) is an odd number. Thus, the wave \(\Phi\) is spectrally unstable.
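The continuation in \(\omega\) and the resulting sign check can be sketched as follows; here `solve` stands for any profile solver accepting an initial guess (for instance the Newton sketch of Section 5.1), and the function name and the finite-difference approximation of the \(\omega\)-derivative are our own illustrative choices.

```python
import numpy as np

def vodd_along_branch(solve, s, omegas, N=2 ** 12):
    """Continuation in omega: each profile seeds the next solve, the trapezoidal rule
    gives ||phi_omega||_{L^2}^2, and a finite difference in omega approximates
    V_odd = (L_1^{-1} phi, phi) = (1/2) d/domega ||phi||^2."""
    dx = 2.0 * np.pi / N
    phi, norms = None, []
    for omega in omegas:
        phi = solve(s, omega, phi, N)
        norms.append(np.sum(phi ** 2) * dx)      # trapezoidal rule on a periodic grid
    return 0.5 * np.gradient(np.array(norms), np.asarray(omegas))

# Positive entries along the branch reproduce the conclusion V_odd > 0, e.g.
# vodd_along_branch(solve_profile, 0.5, np.linspace(1.05, 2.5, 30))
```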
Now, we compute the sign of \((\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) for different values of \(s\). In fact, since \(z(\mathcal{L}_{2})=1\) and \(\ker(\mathcal{L}_{2})=[\varphi]\), we obtain that \(\varphi^{\prime}\in\operatorname{range}(\mathcal{L}_{2})=\ker(\mathcal{L}_{2})^{\perp}\). Therefore, there exists a unique \(\beta\in H_{per,even}^{2s}\) such that \(\mathcal{L}_{2}\beta=\varphi^{\prime}\). Hence, we need to solve
\[(-\Delta)^{s}\beta-\omega\beta+\varphi^{2}\beta=\varphi^{\prime}. \tag{5.5}\]
Applying the Fourier transform to (5.5) we obtain,
\[\left(|\xi|^{2s}-\omega\right)\widehat{\beta}+\widehat{\varphi^{2}\beta}-i \xi\widehat{\varphi}=0. \tag{5.6}\]
To solve (5.6) we use a Newton iteration method as described above. Figure 5.4 presents the sign of \((\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) for several values of \(s\). Numerical results show that the inner product is negative for all \(s\in(\frac{1}{4},1]\). By Lemmas 4.1 and 4.2, we see that \(n(\mathcal{L}_{even})=2\) and, since \(V_{even}=(\mathcal{L}_{2}^{-1}\varphi^{\prime},\varphi^{\prime})_{L_{per}^{2}}\) is negative, the difference \(n(\mathcal{L}_{even})-n(V_{even})=2-1=1\) is an odd number. Therefore, the wave \(\Phi\) is spectrally unstable.
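For completeness, the quantity behind Figure 5.4 can also be approximated without the Newton iteration for (5.6): since \(\ker(\mathcal{L}_{2})=[\varphi]\) and \(\varphi^{\prime}\) is \(L^{2}\)-orthogonal to \(\varphi\), the singular but consistent system \(\mathcal{L}_{2}\beta=\varphi^{\prime}\) can be solved in the least-squares sense after assembling \(\mathcal{L}_{2}\) densely by Fourier collocation. This dense variant is our own shortcut, not the scheme used in the paper.

```python
import numpy as np

def veven_value(phi, s, omega):
    """Approximate V_even = (L_2^{-1} phi', phi')_{L^2} for a sampled profile phi."""
    N = phi.size
    dx = 2.0 * np.pi / N
    k = np.fft.fftfreq(N, d=1.0 / N)
    F = np.fft.fft(np.eye(N), axis=0)
    Finv = np.fft.ifft(np.eye(N), axis=0)
    A = (Finv @ np.diag(np.abs(k) ** (2 * s) + 0j) @ F).real     # (-Delta)^s
    L2 = A - omega * np.eye(N) + np.diag(phi ** 2)
    dphi = np.fft.ifft(1j * k * np.fft.fft(phi)).real            # phi' by spectral differentiation
    beta = np.linalg.lstsq(L2, dphi, rcond=None)[0]               # minimum-norm solution of L2 beta = phi'
    return np.sum(beta * dphi) * dx                               # (beta, phi')_{L^2}, trapezoidal rule

# A negative value for s in (1/4, 1] is consistent with Figure 5.4 and with n(V_even) = 1.
```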
## Acknowledgments
F. Natali is partially supported by CNPq/Brazil (grant 303907/2021-5).
|
2307.13908
|
Points-to-3D: Bridging the Gap between Sparse Points and
Shape-Controllable Text-to-3D Generation
|
Text-to-3D generation has recently garnered significant attention, fueled by
2D diffusion models trained on billions of image-text pairs. Existing methods
primarily rely on score distillation to leverage the 2D diffusion priors to
supervise the generation of 3D models, e.g., NeRF. However, score distillation
is prone to suffer the view inconsistency problem, and implicit NeRF modeling
can also lead to an arbitrary shape, thus leading to less realistic and
uncontrollable 3D generation. In this work, we propose a flexible framework of
Points-to-3D to bridge the gap between sparse yet freely available 3D points
and realistic shape-controllable 3D generation by distilling the knowledge from
both 2D and 3D diffusion models. The core idea of Points-to-3D is to introduce
controllable sparse 3D points to guide the text-to-3D generation. Specifically,
we use the sparse point cloud generated from the 3D diffusion model, Point-E,
as the geometric prior, conditioned on a single reference image. To better
utilize the sparse 3D points, we propose an efficient point cloud guidance loss
to adaptively drive the NeRF's geometry to align with the shape of the sparse
3D points. In addition to controlling the geometry, we propose to optimize the
NeRF for a more view-consistent appearance. To be specific, we perform score
distillation to the publicly available 2D image diffusion model ControlNet,
conditioned on text as well as depth map of the learned compact geometry.
Qualitative and quantitative comparisons demonstrate that Points-to-3D improves
view consistency and achieves good shape controllability for text-to-3D
generation. Points-to-3D provides users with a new way to improve and control
text-to-3D generation.
|
Chaohui Yu, Qiang Zhou, Jingliang Li, Zhe Zhang, Zhibin Wang, Fan Wang
|
2023-07-26T02:16:55Z
|
http://arxiv.org/abs/2307.13908v1
|
# Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation
###### Abstract
Text-to-3D generation has recently garnered significant attention, fueled by 2D diffusion models trained on billions of image-text pairs. Existing methods primarily rely on score distillation to leverage the 2D diffusion priors to supervise the generation of 3D models, _e.g._, NeRF. However, score distillation is prone to suffer the view inconsistency problem, and implicit NeRF modeling can also lead to an arbitrary shape, thus leading to less realistic and uncontrollable 3D generation. In this work, we propose a flexible framework of Points-to-3D to bridge the gap between sparse yet freely available 3D points and realistic shape-controllable 3D generation by distilling the knowledge from both 2D and 3D diffusion models. The core idea of Points-to-3D is to introduce controllable sparse 3D points to guide the text-to-3D generation. Specifically, we use the sparse point cloud generated from the 3D diffusion model, Point-E, as the geometric prior, conditioned on a single reference image. To better utilize the sparse 3D points, we propose an efficient point cloud guidance loss to adaptively drive the NeRF's geometry to align with the shape of the sparse 3D points. In addition to controlling the geometry, we propose to optimize the NeRF for a more view-consistent appearance. To be specific, we perform score distillation to the publicly available 2D image diffusion model ControlNet, conditioned on text as well as depth map of the learned compact geometry. Qualitative and quantitative comparisons demonstrate that Points-to-3D improves view consistency and achieves good shape controllability for text-to-3D generation. Points-to-3D provides users with a new way to improve and control text-to-3D generation.
text-to-3D, diffusion models, NeRF, point cloud
## 1. Introduction
Recently, phenomenal advancements have been made in the field of text-to-image generation [38, 39, 42, 44, 59], mainly due to the significant achievements in large aligned image-text datasets [47], vision-language pre-training models [20, 24, 37], and diffusion models [9, 15, 42]. Inspired by these text-to-image generation results, many works have explored text-conditional diffusion models in other modalities, _e.g.,_ text-to-video [16, 17, 48] and text-to-3D [19, 25, 28, 36, 54]. In this work, we focus specifically on the field of text-to-3D generation, which aims to create 3D content and can potentially be applied to many applications, _e.g.,_ gaming, virtual or augmented reality, and robotic applications.
Training text-to-3D generative models can be challenging since it is difficult to attain plentiful text and 3D data pairs compared to 2D images. Most recently, DreamFusion [36] first addresses the challenge by using score distillation from a pre-trained 2D text-to-image diffusion model [44] to optimize a Neural Radiance Fields (NeRF) [29] to perform text-to-3D synthesis. The following literatures [28, 54] also use the score distillation paradigm. These methods provide and verify the solution for text-to-3D content generation without requiring 3D supervision. Despite their considerable promise, these methods are plagued by a notable issue known as the multi-face problem, or _Janus problem_, which results in inconsistencies across views. Besides, another important issue in text-to-3D generation is the lack of control over the shape of the generated 3D objects, _i.e._, these methods may produce objects with arbitrary shapes that meet the requirements of the input text by setting different seeds. Latent-NeRF [28] first introduces sketch-shape guided 3D generation, which uses a predefined mesh as a target to supervise the geometry learning of the NeRF. However, this approach is costly and time-consuming, as it requires the predefinition of a mesh shape for each 3D generation every time.
This has motivated us to explore the possibility of cultivating the prior knowledge in both 2D and 3D diffusion models to guide both the appearance and geometry learning of text-to-3D generation. Text-to-image diffusion models such as ControlNet [59] and T2I-Adapter [32] use extra conditions (_e.g.,_ sketch, mask, depth) alongside text prompts to guide the generation process, achieving greater controllability and spatial consistency of the image. Inspired by this conditional control paradigm, we seek a way to incorporate the same mechanism into text-to-3D generation.
In this work, we propose a novel and flexible framework, dubbed Points-to-3D, to improve view consistency across views and achieve flexible controllability over 3D shapes for text-to-3D generation. The core idea of Points-to-3D is to introduce controllable sparse 3D points to guide the text-to-3D generation in terms of geometry and appearance. To achieve this, inspired by Point-E [35], we propose to distill the sparse point clouds from pre-trained 3D point cloud diffusion models as the geometry prior. These sparse 3D points are conditioned on a single reference image, which can be provided either by the user or generated by a text-to-image model. However, it is not trivial to leverage the generated sparse point clouds, which only contain 4096 3D points. To overcome this issue, we propose a point cloud guidance loss to encourage the geometry of a randomly initialized NeRF to closely resemble the shape depicted in the reference image.
In addition to geometry, we propose to optimize the appearance conditioned on text prompt as well as the learned depth map. More concretely, we perform score distillation [28, 36] to the publicly available and more controllable 2D image diffusion models, ControlNet [59], in a compact latent space. Our approach, Points-to-3D, can bridge the gap between sparse 3D points and realistic shape-controllable 3D generation by distilling the knowledge of 2D and 3D diffusion priors. As depicted in Figure 1, given an imaginative reference image, Points-to-3D can generate realistic and shape-controllable 3D contents that vary with different text prompts.
In summary, the contributions of this paper are as follows:
* We present a novel and flexible text-to-3D generation framework, named Points-to-3D, which bridges the gap between sparse 3D points and more realistic and shape-controllable 3D generation by distilling the knowledge from pre-trained 2D and 3D diffusion models.
* To take full advantage of the sparse 3D points, we propose an efficient point cloud guidance loss to optimize the geometry of NeRF, and learn geometry-consistent appearance via score distillation by using ControlNet conditioned on text and learned depth map.
* Experimental results show that Points-to-3D can significantly alleviate inconsistency across views and achieve good controllability over 3D shapes for text-to-3D generation.
## 2. Related Work
_Text-to-Image Generation._ Image generation achieved its first breakthrough results with Generative Adversarial Networks (GANs) [13, 21], which train a generator to synthesize images that are indistinguishable from real images. Recently, image generation has made further phenomenal progress with the development of diffusion models [49]. With improvements in modeling [9, 15], denoising diffusion models can generate diverse high-quality images by iteratively denoising a noised image. In addition to unconditional generation, diffusion models can generate images conditioned on text descriptions [38, 44]. Subsequent works propose to add further conditions to text-to-image generation, including semantic segmentation [42], reference images [43], sketches [53], depth maps [32, 59], and other conditions [18, 32, 59], which greatly promote the development and application of text-to-image generation. Driven by the success of text-to-image diffusion models, many works have explored text-conditional diffusion models in other modalities, _e.g.,_ text-based manipulation [4], text-to-video [17, 48], and text-to-3D [25, 28, 36, 54]. In this work, we focus on the field of text-to-3D generation.
_Neural Radiance Fields (NeRF)._ There is plenty of work on 3D scene representation, including 3D voxel grids [51], meshes [11], point clouds [1, 27, 30, 60], and implicit NeRF [29, 34]. In recent years, as a family of inverse rendering methods, NeRF-based approaches have emerged as an important technique for 3D scene representation, capable of synthesizing novel views and reconstructing surface geometry [29, 34, 55]. Specifically, NeRF [29] represents scenes as density and radiance fields with a neural network
(MLP), allowing for photorealistic novel view synthesis. However, the computational cost of densely querying the neural network in 3D space is substantial. To improve the efficiency of NeRF, recent research has explored designing hybrid or explicit structures based on NeRF (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019) to achieve fast convergence for radiance field reconstruction, as well as accelerating the rendering speed of NeRF (Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). Most of these methods require multiple views and corresponding camera parameters for training, which cannot always be satisfied, especially in novel text-to-3D content generation. In this work, we view NeRF as a basic scene representation model and focus on devising a new framework for text-to-3D generation.
_Single Image 3D Reconstruction._ Various approaches exist for single image 3D reconstruction, which aims at reconstructing the object present in the image. Different formats can be used to represent the reconstructed object, such as voxels (Chen et al., 2018; Chen et al., 2019), polygonal meshes (Chen et al., 2019), point clouds (Chen et al., 2019), and more recently, NeRFs (Chen et al., 2019; Chen et al., 2020). However, these methods are typically trained and evaluated on specific 3D datasets (Chen et al., 2019), making generalization to general 3D reconstruction challenging due to the lack of sufficient 3D training data. Recently, Point-E (Chen et al., 2019) explores an efficient method for general 3D content generation in the form of point clouds. It first generates a single synthetic image using a pre-trained text-to-image diffusion model, and then produces a sparse (4096 points) 3D point cloud using a point cloud diffusion model, which is conditioned on the generated image. The generalization ability of Point-E is attributed to its training on several million 3D data samples (Chen et al., 2019). In this work, we innovatively leverage Point-E as a point cloud foundation model to provide sparse geometry guidance for more realistic and shape-controllable text-to-3D generation.
_Text-to-3D Generation._ In recent times, the progress in text-to-image generation and 3D scene modeling has sparked a growing interest in text-to-3D content generation. Earlier work like CLIP-forge (Chen et al., 2019) consists of an implicit autoencoder conditioned on shape codes and a normalizing flow model to sample shape embeddings from textual input. However, it needs 3D training data in voxel representation, which is difficult to scale in real applications. Pure-CLIPNeRF (Chen et al., 2019) uses pre-trained CLIP (Chen et al., 2019) for guidance with a voxel grid model for scene representation to perform text-to-3D generation without access to any 3D datasets. CLIP-Mesh (Chen et al., 2019) presents a method for zero-shot 3D generation using a textual prompt; it also relies on a pre-trained CLIP model that compares the input text with differentiably rendered images of the generated 3D model. DreamFields (Chen et al., 2019) first proposes to optimize the 3D representation of NeRF (Chen et al., 2019) by employing a pre-trained CLIP as guidance as well, such that all rendered views of the NeRF are encouraged to match the text prompt.
More recently, DreamFusion (Chen et al., 2019) proposes to utilize a powerful pre-trained 2D text-to-image diffusion model (Chen et al., 2019) to perform text-to-3D synthesis. They propose a Score Distillation Sampling (SDS) loss to supervise the rendered views of 3D objects modeled by NeRF. The following Stable-DreamFusion (Chen et al., 2019), Latent-NeRF (Chen et al., 2019), and SJC (Chen et al., 2019) adapt the score distillation to the publicly available and computationally efficient Stable Diffusion model (Chen et al., 2020), which apply the diffusion process in a compact latent space and facilitate the development of text-to-3D generation. We build upon these works and propose a flexible Points-to-3D framework for text-to-3D generation by bridging the gap between sparse 3D points and more realistic shape-controllable 3D content generation.
## 3. Approach
### Preliminaries
In this section, we provide a brief introduction to some of the key concepts that are necessary for understanding our proposed framework in Section 3.2.
_Diffusion Model._ Diffusion models were first proposed by (Chen et al., 2019) and recently promoted by (Chen et al., 2019; Chen et al., 2020). A diffusion model usually consists of a forward process \(q\) that gradually adds noise to the image \(x\in X\), and a reverse process \(p\) that gradually removes noise from the noisy data. The forward process \(q\) can be formulated as:
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{ I}), \tag{1}\]
where \(t\in[0,T]\) denotes the timestep and \(\beta_{t}\) the noise schedule. DDPM (Chen et al., 2019) proposes to directly sample any given timestep of the noising procedure in closed form:
\[x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon, \tag{2}\]
where \(\bar{\alpha}_{t}=\prod_{s=0}^{t}(1-\beta_{s})\), and \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\). The denoising process \(p_{\theta}(x_{t-1}|x_{t})\) starts from random noise and slowly reverses the noising process. DDPM (Chen et al., 2019) proposes to parameterize the distribution by modeling the added noise \(\epsilon\). Recently, latent diffusion models (LDMs), a specific form of diffusion model, have achieved great progress in text-to-image generation. The well-known Stable Diffusion (Chen et al., 2020) and ControlNet (Chen et al., 2020) are both latent diffusion models.
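For concreteness, the closed-form forward noising of Equation 2 can be written in a few lines. The NumPy sketch below uses an illustrative linear \(\beta\) schedule; the schedule values, number of steps, and the 4-channel latent shape are assumptions for illustration, not the settings of any specific pre-trained model.

```python
import numpy as np

# Illustrative linear noise schedule over T steps (placeholder values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # beta_t
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)           # alpha_bar_t = prod_{s<=t} (1 - beta_s)

rng = np.random.default_rng(0)

def q_sample(x0, t):
    """Closed-form forward noising (Eq. 2): x_t = sqrt(ab_t) * x0 + sqrt(1 - ab_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

x0 = rng.standard_normal((4, 64, 64))     # e.g. a 4-channel latent, as in latent diffusion models
x_t, eps = q_sample(x0, t=500)
```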
_Score Distillation Sampling (SDS)._ Score distillation sampling (SDS) was first proposed by DreamFusion (Chen et al., 2019), which achieves text-to-3D creation by combining two modules: a scene representation model (Chen et al., 2019) and a pre-trained text-to-image diffusion model (Imagen, 2019). During training, a learnable NeRF model \(\theta\) first performs view synthesis with a differentiable renderer \(g\): \(x=g(\theta)\), which renders an image at a given random camera pose. Then, random noise is added to \(x\), and the diffusion model \(\phi\) predicts the added noise \(\epsilon\) from the noisy image with a learned denoising function \(\epsilon_{\phi}(x_{t};y,t)\), given the noisy image \(x_{t}\), text embedding \(y\), and noise level \(t\). This score function provides the gradient used to update the NeRF parameters \(\theta\), which is calculated as:
\[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\phi,g(\theta))=\mathbb{E}_{t,\epsilon}\big{[}\omega(t)(\epsilon_{\phi}(x_{t};y,t)-\epsilon)\frac{\partial x}{\partial\theta}\big{]}, \tag{3}\]
where \(\omega(t)\) is a weighting function that depends on \(\alpha_{t}\). Inspired by Stable-DreamFusion (Chen et al., 2019) and Latent-NeRF (Chen et al., 2019), which use Stable Diffusion (Chen et al., 2020), we propose to perform score distillation with a more controllable LDM, ControlNet (Chen et al., 2020), to generate more realistic and shape-controllable 3D contents.
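To make the score-distillation update concrete, the following PyTorch-style sketch mirrors Equation 3. Here `render_latent` (the differentiable NeRF renderer) and `predict_noise` (the frozen diffusion model's denoiser, conditioned on the text prompt) are placeholder callables, and the weighting \(w(t)=1-\bar{\alpha}_{t}\) is only one common choice; none of these names belong to the actual Points-to-3D implementation.

```python
import torch

def sds_step(render_latent, predict_noise, alpha_bars, optimizer, t_range=(20, 980)):
    """One schematic SDS update (Eq. 3); the diffusion model stays frozen.
    alpha_bars is assumed to be a 1-D tensor of cumulative noise-schedule products."""
    x = render_latent()                                   # rendered latent, differentiable w.r.t. NeRF params
    t = torch.randint(*t_range, (1,)).item()              # random noise level
    ab = alpha_bars[t]
    eps = torch.randn_like(x)
    x_t = ab.sqrt() * x + (1.0 - ab).sqrt() * eps         # forward-noised rendering (Eq. 2)
    with torch.no_grad():
        eps_pred = predict_noise(x_t, t)                  # epsilon_phi(x_t; y, t), gradient not tracked
    w = 1.0 - ab                                          # one common choice of weighting w(t)
    grad = w * (eps_pred - eps)
    loss = (grad.detach() * x).sum()                      # surrogate loss: d(loss)/dx equals the SDS gradient
    optimizer.zero_grad()
    loss.backward()                                       # the chain rule supplies dx/dtheta
    optimizer.step()
```

The surrogate-loss trick in the last lines is a standard way to inject the precomputed gradient into the renderer without backpropagating through the diffusion model itself.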
### Points-to-3D
In this section, we elaborate on our Points-to-3D framework, which is depicted in Figure 2.
_Architecture._ First of all, we describe the architecture of our Points-to-3D framework. As shown in Figure 2, Points-to-3D mainly consists of three models: a scene representation model (a coordinate-based MLP (Chen et al., 2019)), a text-to-image 2D diffusion model (ControlNet (Chen et al., 2020)), and a point cloud 3D diffusion model (Point-E (Chen et al., 2019)).
\(\bullet\)**Scene Model.** Neural Radiance Field (NeRF) [29] has been an important technique for scene representation, comprising a volumetric ray tracer and an MLP. Previous literature [28, 36, 54] has used NeRF as the scene representation model for text-to-3D generation, mainly because a NeRF model can implicitly impose spatial consistency between different views owing to the spatial radiance field and rendering paradigm. A NeRF model usually produces a volumetric density \(\sigma\) and an RGB color \(c\). In this work, we adopt the efficient design of Latent-NeRF [28] that produces five outputs, including the volume density \(\sigma\) and four pseudo-color channels \(\{C=(c^{1},c^{2},c^{3},c^{4})\}\in\mathbb{R}^{64\times 64\times 4}\) that correspond to the four input latent features for latent diffusion models [42]:
\[(c^{1},c^{2},c^{3},c^{4},\sigma)=\mathrm{MLP}(x,y,z,d;\theta), \tag{4}\]
where \(x,y,z\) denote 3D coordinates, \(d\) is the view direction. We use Instant-NGP [34] as the scene representation model by default.
\(\bullet\)**Text-to-Image 2D Diffusion Model.** Since Imagen [44], used by DreamFusion [36], is not publicly available, we initially use Stable Diffusion as the text-to-image diffusion model, as previously explored in existing literature [28, 52, 54]. However, the original Stable Diffusion v1.5 is not controllable and cannot accept additional input conditions. In this work, we first propose to use the pre-trained ControlNet [59] conditioned on the depth map as the 2D diffusion model in Points-to-3D. As depicted in Figure 2, in addition to the input text prompt, _e.g._, "_a Nissan GTR racing car_", we further introduce the predicted depth map \(M\in\mathbb{R}^{H\times W\times 1}\) of our NeRF model as the conditional control. The depth map is computed as follows; for simplicity, we only show the depth calculation for a single pixel:
\[M=\sum_{k=1}^{K}w_{k}t_{k}, \tag{5}\]
and
\[w_{k}=\alpha_{k}\prod_{j<k}(1-\alpha_{j}),\text{ and }\alpha_{k}=1-\exp(- \sigma_{k}||t_{k}-t_{k+1}||). \tag{6}\]
where \(K\) is the total number of sampling points along a ray, and \(t_{k}\) denotes the depth hypothesis at point \(k\). The better and more accurate the predicted depth map \(M\), the more geometrically consistent views ControlNet will synthesize. A short sketch of this depth computation is given after the model descriptions below.
\(\bullet\)**Point Cloud 3D Diffusion Model.** To control the geometry of NeRF for text-to-3D generation, we propose in this paper, for the first time, the distillation of prior knowledge from the pre-trained large point cloud diffusion model, Point-E [35]. Point-E [35] is an efficient 3D diffusion model for generating sparse 3D point clouds (4096 points) from text prompts or images in about 1 minute. As illustrated in Figure 2, we utilize the pre-trained Point-E model to regulate the geometry learning of NeRF. Specifically, the model generates a sparse 3D point cloud consisting of 4096 points, which is conditioned on a reference image and can flexibly represent the object's shape depicted in the image. However, it is not trivial to guide the NeRF's geometry with only sparse 3D points, we propose a sparse point cloud guidance loss \(\mathcal{L}_{\text{point-cloud}}\) to address this issue, which is illustrated in the next section.
It is worth noting that Points-to-3D enables users to easily control the shape of the generated content by providing a reference image, which can be a real image or a generated image via text-to-image models [32, 42, 59].
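Returning to the depth condition of Eqs. 5-6, the sketch below computes the expected depth from densities sampled along each ray. It is a minimal PyTorch illustration; the tensor shapes, the small numerical epsilon, and the random test inputs are conveniences for the example rather than details taken from the paper.

```python
import torch

def render_depth(sigma, t_vals):
    """Expected depth per ray (Eqs. 5-6).
    sigma: (R, K) densities at K samples per ray; t_vals: (R, K+1) sample depths."""
    delta = t_vals[:, 1:] - t_vals[:, :-1]                  # ||t_k - t_{k+1}||
    alpha = 1.0 - torch.exp(-sigma * delta)                 # per-sample opacity alpha_k
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)      # running product of (1 - alpha_j)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)  # shift so the product is over j < k
    w = alpha * trans                                       # rendering weights w_k
    depth = (w * t_vals[:, :-1]).sum(dim=-1)                # M = sum_k w_k t_k
    return depth, w

depth, w = render_depth(torch.rand(8, 64), torch.sort(torch.rand(8, 65), dim=-1).values)
```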
_Sparse 3D Points Guidance._ The core idea of our Points-to-3D is to introduce controllable sparse 3D points to guide the text-to-3D generation. In this section, we elaborate on how to leverage the sparse 3D points. It is challenging to use a sparse 3D point cloud to guide the geometry learning of NeRF. Previous work on improving
Figure 2. Illustration of the proposed Points-to-3D framework for text-to-3D generation. Points-to-3D mainly consists of three parts: a scene representation model (a coordinate-based NeRF [34]), a text-to-image 2D diffusion model (ControlNet [59]), and a point cloud 3D diffusion model (Point-E [35]). During training, both 2D and 3D diffusion models are frozen.
NeRF's geometry uses the depth of sparse points to supervise the predicted depth (Krause et al., 2017; Zhang et al., 2018). However, the 3D points are computed using multiple views via COLMAP (Zhou et al., 2017), and the information about which view each 3D point belongs to has been calculated in advance. In our case, only a single RGB image is used to generate the sparse 3D points; when we project all the points to the current view to obtain a sparse depth map, there will be aliasing problems between the front and the rear 3D points.
In this work, we present a sparse point cloud guidance loss. Specifically, let \(P_{s}=\{(x_{i},y_{i},z_{i})\}_{i=1}^{4096}\) be the original sparse 3D points generated by Point-E (Zhou et al., 2017) conditioned on a reference image. Instead of using \(P_{s}\) directly, we experimentally find that densifying the sparse point cloud provides better geometry supervision and produces more realistic 3D contents. We propose to upsample \(P_{s}\) by iteratively interpolating 3D points via a simple rule, _i.e._, for each point \(p_{i}\), we add a new 3D point at the middle position between each of its nearest \(q\) neighbor points and \(p_{i}\). The process is depicted in Figure 3. We set \(q=20,n=2\) by default, where \(n\) denotes the number of interpolation iterations. Now we get the dense 3D points \(P_{d}\), which contain about 500k points after eliminating duplicate points.
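The upsampling rule can be sketched with a k-d tree. The code below is an illustrative implementation of the midpoint-insertion rule (with \(q=20\) and two iterations as stated above), not the authors' exact routine, and the random array stands in for a Point-E output.

```python
import numpy as np
from scipy.spatial import cKDTree

def upsample_points(points, q=20, n_iter=2):
    """Midpoint upsampling: for every point, insert a new point halfway to each of its
    q nearest neighbours, repeated n_iter times, with duplicate points removed."""
    pts = points
    for _ in range(n_iter):
        tree = cKDTree(pts)
        _, idx = tree.query(pts, k=q + 1)                   # first neighbour is the point itself
        mids = 0.5 * (pts[:, None, :] + pts[idx[:, 1:]])    # (N, q, 3) midpoints
        pts = np.unique(np.vstack([pts, mids.reshape(-1, 3)]), axis=0)
    return pts

sparse = np.random.rand(4096, 3).astype(np.float32)         # stand-in for a Point-E point cloud
dense = upsample_points(sparse)
```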
Ideally, we want to align the geometry (the volume density \(\sigma\)) of NeRF with the shape of \(P_{d}\) to ensure that the generated 3D content of Points-to-3D closely resembles the reference image. In addition, we also want to provide NeRF with a level of flexibility and adaptability in its geometry to enable the generation of new details while satisfying different text prompts. Instead of using the per-view sparse depth map supervision, which has a front-rear aliasing issue as discussed above, and is also not efficient as it only optimizes the current view's depth, we propose an efficient point cloud guidance loss \(\mathcal{L}_{\text{point-cloud}}\) to directly optimize the whole geometry (\(\sigma\)) in 3D space. Specifically, we encourage the occupancy (\(\alpha\)) corresponding to the NeRF points \(P_{nerf}\) that are near the point cloud \(P_{d}\) to be close to 1, while the occupancy of the NeRF points that are far from the point cloud \(P_{d}\) should be close to 0. Furthermore, we make the geometry capable of generating new details adaptively by ignoring the supervision of some parts of the occupancy. We first compute the closest distance between each point in \(P_{nerf}\) and all points in \(P_{d}\): \(\mathcal{D}=\text{Dist}(P_{nerf},P_{d})\), \(\mathcal{D}\in\mathbb{R}^{S\times 1}\), where \(S\) denotes the number of points in \(P_{nerf}\). Then, \(\mathcal{D}\) is normalized via: \(\widehat{\mathcal{D}}=\frac{\mathcal{D}}{0.5\cdot(\max(P_{nerf})-\min(P_{nerf}))}\). Finally, the calculation of \(\mathcal{L}_{\text{point-cloud}}\) can be formulated as:
\[\mathcal{L}_{\text{point-cloud}}=\text{CrossEntropy}(\alpha(P_{nerf}),O(P_{nerf })), \tag{7}\]
and
\[O_{i}=\begin{cases}1-\widehat{\mathcal{D}}_{i},&\text{if }1-\widehat{ \mathcal{D}}_{i}>\tau_{1};\\ 0,&\text{else if }1-\widehat{\mathcal{D}}_{i}<\tau_{2};\\ -1,&\text{otherwise};\end{cases} \tag{8}\]
where \(O(P_{nerf})\) denotes the target occupancy of all NeRF points, \(1-\widehat{\mathcal{D}}\) indicates the degree of proximity to the guided point cloud \(P_{d}\), and \(\tau_{1}\), \(\tau_{2}\) are two hyperparameters that are experimentally set to 0.95 and 0.9 respectively. We ignore the supervision of points with \(\tau_{2}<1-\widehat{\mathcal{D}}<\tau_{1}\), allowing the model to adaptively add new details into the geometry to match the text prompts, as well as fix broken holes in the imperfect guided point cloud \(P_{d}\).
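A compact sketch of Equations 7-8 is given below. The brute-force `torch.cdist` distance computation and the clamping constants are illustrative simplifications (in practice the distance query would be chunked or accelerated); the thresholds follow the values reported above.

```python
import torch
import torch.nn.functional as F

def point_cloud_guidance_loss(p_nerf, alpha_nerf, p_dense, tau1=0.95, tau2=0.9):
    """Cross-entropy between NeRF occupancy and a proximity-based target occupancy (Eqs. 7-8)."""
    d = torch.cdist(p_nerf, p_dense).min(dim=1).values          # closest distance to the guidance cloud
    prox = 1.0 - d / (0.5 * (p_nerf.max() - p_nerf.min()))      # 1 - D_hat
    target = torch.full_like(prox, -1.0)                        # -1 marks ignored points
    target[prox > tau1] = prox[prox > tau1]                     # near the cloud: occupancy close to 1
    target[prox < tau2] = 0.0                                   # far from the cloud: occupancy 0
    keep = target >= 0.0                                        # drop the band tau2 < prox < tau1
    return F.binary_cross_entropy(alpha_nerf[keep].clamp(1e-5, 1 - 1e-5), target[keep])
```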
_Training Objectives._ The training objectives of Points-to-3D consist of three parts: the point cloud guidance loss \(\mathcal{L}_{\text{point-cloud}}\), the score distillation sampling loss \(\mathcal{L}_{\text{SDS}}\), and a sparsity loss \(\mathcal{L}_{\text{sparse}}\). The sparsity loss, suggested by (Zhou et al., 2017), suppresses floaters by regularizing the rendering weights:
\[\mathcal{L}_{\text{sparse}}=-\sum_{k}(w_{k}\log w_{k}+(1-w_{k})\log(1-w_{k})). \tag{9}\]
We introduce the depth map condition \(M\) calculated by Equation 5 and update the score distillation sampling loss in Equation 3 as follows:
\[\nabla_{\theta}\mathcal{L}_{\text{SDS}}(\phi,g(\theta))=\mathbb{E}_{t,\epsilon}\big{[}\omega(t)(\epsilon_{\phi}(x_{t};y,M,t)-\epsilon)\frac{\partial x}{\partial\theta}\big{]}. \tag{10}\]
The overall learning objective is computed as:
\[\mathcal{L}=\lambda_{\text{point-cloud}}\mathcal{L}_{\text{point-cloud}}+ \lambda_{\text{SDS}}\mathcal{L}_{\text{SDS}}+\lambda_{\text{sparse}} \mathcal{L}_{\text{sparse}}. \tag{11}\]
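Putting the three terms together, a schematic of Equations 9-11 looks as follows; `loss_point_cloud` and `loss_sds` are placeholders assumed to come from routines like the sketches above, and the default weights are the values reported in the implementation details.

```python
import torch

def sparsity_loss(w, eps=1e-5):
    """Entropy of the rendering weights (Eq. 9); minimizing it suppresses semi-transparent floaters."""
    w = w.clamp(eps, 1.0 - eps)
    return -(w * w.log() + (1.0 - w) * (1.0 - w).log()).sum()

def total_objective(loss_point_cloud, loss_sds, weights,
                    lam_pc=5e-6, lam_sds=1.0, lam_sparse=5e-4):
    """Weighted sum of the three training objectives (Eq. 11)."""
    return lam_pc * loss_point_cloud + lam_sds * loss_sds + lam_sparse * sparsity_loss(weights)
```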
## 4. Experiments
### Baselines
We consider three text-to-3D generation baselines: DreamFusion (Zhou et al., 2017; Zhou et al., 2017), Latent-NeRF (Zhou et al., 2017), and SJC (Zhou et al., 2017). Instead of using the closed-source Imagen (Zhou et al., 2017) diffusion model, both Latent-NeRF and SJC use the publicly available Stable Diffusion (Zhou et al., 2017). We mainly compare our Points-to-3D with Latent-NeRF and SJC in the experiments. We provide more results, including comparisons with DreamFields (Zhou et al., 2017) and DreamFusion (Zhou et al., 2017), in our supplementary materials.
### Implementation Details
We use Instant-NGP (Zhou et al., 2017) as our scene model. Following the camera sampling method in (Zhou et al., 2017), during training, a camera position is randomly sampled in spherical coordinates, and we also randomly enlarge the FOV when rendering with NeRF. In addition to the training in latent space shown in Figure 2, we experimentally find that subsequently performing RGB refinement in RGB space, as introduced in (Zhou et al., 2017), can further improve the text-to-3D generation results. Our Points-to-3D takes less than 50 minutes per text prompt to complete a 3D generation on a single A100 GPU, and most of the time is spent on calculating \(\mathcal{L}_{\text{point-cloud}}\). We train for 5000 iterations using the AdamW optimizer with a learning rate of \(1e^{-3}\). The hyperparameters \(\lambda_{\text{point-cloud}},\lambda_{\text{SDS}},\lambda_{\text{sparse}}\) are set to \(5e^{-6},1.0,5e^{-4}\), respectively.
### Ablation Studies
_Effect of Point Cloud Guidance Loss._ In this section, we evaluate the proposed point cloud guidance loss \(\mathcal{L}_{\text{point-cloud}}\). Concretely, we evaluate Points-to-3D by eliminating the point cloud guidance. We also verify the per-view sparse depth map loss as discussed in Section 3.2. The results are shown in Figure 4. We first produce a
Figure 3. Illustration of the point cloud upsampling process. For each original 3D point (_e.g._, \(p_{i}\)), we add new 3D points (red points) between each of the nearest \(q\) neighbor points (blue points) and point \(p_{i}\) for each interpolation step.
reference image with the text prompt "an astronaut with a back-pack on a horse" using Stable Diffusion. Then we train three models with the same text prompt: using \(\mathcal{L}_{\text{point-cloud}}\) (the \(3rd\) row), using a designed per-view depth map loss (the \(2nd\) row), and without any geometry constraints (the \(1st\) row). We find that without any geometry constraints, the generated content suffers an obvious view inconsistency problem (red dashed boxes). Using our designed per-view depth map loss as geometry supervision alleviates the multi-face issue. However, the rendered images are less realistic and even broken (yellow dashed boxes) due to the sparsity of the point cloud and the inefficiency of the per-view supervision. It is worth noting that the result of using \(\mathcal{L}_{\text{point-cloud}}\) shows more details in both the "astronaut" and the "horse". That is, Points-to-3D with \(\mathcal{L}_{\text{point-cloud}}\) for geometry optimization can generate more realistic 3D content.
_Effect of 3D Points Upsampling_. In this section, we analyze the effect of upsampling the generated sparse 3D point cloud. As shown in Figure 5, we compare the rendered views of Points-to-3D trained with sparse (4096) 3D points \(P_{s}\) and upsampled denser (\(\sim\)500k) 3D points \(P_{d}\) as the geometry guidance, respectively. The \(1st\) column represents the original sparse points \(P_{s}\) produced by Point-E [35] given the reference image shown in Figure 2, and the points \(P_{d}\) upsampled via our designed rule. The \(2nd\sim 4th\) columns are three corresponding rendered views. We can see that the results guided by \(P_{d}\) are more realistic compared to those guided by \(P_{s}\). This is because a denser point cloud offers more supervision, encouraging the NeRF to learn a more compact geometry. Moreover, better geometry (depth map) can also guide ControlNet [59] to generate more geometry-consistent and realistic images that match the input text prompt.
_Effect of Adaptive Design in \(\mathcal{L}_{\text{point-cloud}}\)_. In this section, we illustrate the effect of the adaptive design in \(\mathcal{L}_{\text{point-cloud}}\). That is, in Equation 7 and Equation 8, we propose to ignore the supervision of those NeRF points with \(\tau_{2}<1-\widehat{\mathcal{D}}<\tau_{1}\) to let Points-to-3D adaptively adjust the geometry to match the text prompt. This adaptive design serves two main purposes: (a) it offers the capacity to create new details without changing the main shape of the 3D content, and (b) it can fill broken holes in the imperfect point cloud \(P_{d}\).
As shown in Figure 6, we visualize two generated 3D contents using Points-to-3D with the same reference image and sparse point cloud but different text prompts. The last three columns represent the rendered images, the rendered depth maps, and the rendered normals at the same camera pose, respectively. We can clearly observe that Points-to-3D can generate more specific new details to match different input text prompts based on the same point cloud guidance. In Figure 7, we analyze the effect of the adaptive design in filling holes in the imperfect point cloud. Given a reference image, Point-E [35] may produce non-uniform point clouds, _e.g._, broken holes in the chair back in this instance. If we force all NeRF points close to the point cloud to be the positive class and all other points the negative class, it is difficult to set an appropriate distance threshold for all 3D contents, which causes broken holes. For instance, we compare the results of rendered images and corresponding depth maps trained without and with the adaptive design in the \(1st\) and \(2nd\)
Figure 4. Illustration of the effect of our \(\mathcal{L}_{\text{point-cloud}}\). Given a reference image and a text prompt, our Points-to-3D with \(\mathcal{L}_{\text{point-cloud}}\) (the \(3rd\) row) can generate more realistic 3D content than both the per-view depth map loss (the \(2nd\) row) and that without any geometry constraints [28] (the \(1st\) row).
Figure 5. Comparison of rendered views of models trained with \(P_{s}\) and \(P_{d}\) as geometry guidance, respectively. The text prompt is “a Nissan GTR racing car”.
Figure 6. Visualization of two 3D models trained with the same reference image (generated by Stable Diffusion [42]) and the corresponding sparse 3D points but different texts.
Figure 7. Comparison of two 3D models trained with the same reference image and sparse 3D points shown in the \(1st\) column. The \(1st\) and the \(2nd\) rows denote training without and with adaptive design in \(\mathcal{L}_{\text{point-cloud}}\), respectively. The text prompt is “a wooden chair”.
row, respectively. Points-to-3D can naturally repair the broken holes in both geometry and appearance. We also analyze the effect of the depth map condition in our supplementary materials.
### Shape-Controllable Text-to-3D Generation
As special concepts and shapes are usually difficult to describe with text prompts but easy to convey with images, a mechanism to guide text-to-3D content generation with images is highly desirable. In this section, we evaluate Points-to-3D in generating view-consistent and shape-controllable 3D contents with a single reference image for geometry guidance. Considering that DreamFusion (Srivastava et al., 2017) and Magic3D (Srivastava et al., 2017) [25] use their proprietary text-to-image diffusion models (Beng et al., 2019; Wang et al., 2019) and neither releases the code, we mainly compare with Latent-NeRF (Krause et al., 2019) and SJC (Wang et al., 2019). As shown in Figure 8, we mainly compare two aspects: single-object generation and scene generation (consisting of multiple objects).
For single-object generation (the \(1st\sim 4th\) rows), Latent-NeRF (Krause et al., 2019) is prone to the view inconsistency problem and sometimes fails to generate reasonable content. SJC (Wang et al., 2019) looks a little better than Latent-NeRF in terms of view consistency of the generated objects; however, it also sometimes fails to generate content that matches the text description (_e.g._, the \(2nd\) and the \(4th\) rows). Our Points-to-3D can automatically generate view-consistent and more realistic single objects. It is worth noting that Points-to-3D can generate more lifelike details, _e.g._, the logos of Converse, Nike, GUCCI, and LV.
For more challenging scene generation (the \(5th\sim 8th\) rows), the inherent view inconsistency problem of Latent-NeRF (Krause et al., 2019) becomes more serious, _e.g._, multiple teapot spouts in the \(6th\) row and multiple hands or legs in the \(7th\) row. Besides, both Latent-NeRF and SJC can easily lose some concepts of the input text prompts, _e.g._, "motorbike" in the \(5th\) row, "ray" in the \(6th\) row, and "tuba" in the last row. In contrast, our Points-to-3D can create view-consistent 3D content and preserve the concepts contained in the text prompts.
Furthermore, Points-to-3D enables users to arbitrarily create or modify 3D content that has a similar shape to the reference image. We provide more comparisons in our supplementary materials.
Figure 8. Qualitative comparison with Latent-NeRF (Krause et al., 2019) and SJC (Wang et al., 2019) on single-object generation (the \(1st\sim 4th\) rows) and scene generation (the \(5th\sim 8th\) rows). The \(1st\) column denotes reference images used for Points-to-3D, where the top four are real images and the bottom four are synthetic images generated using Stable Diffusion (Wang et al., 2019). (Best viewed by zooming in.)
### Geometry Comparison
We compare the learned geometry of Points-to-3D and Latent-NeRF (Wang et al., 2018), both of which use Instant-NGP (Wang et al., 2018) as the scene model. As depicted in Figure 9, we show two generation results produced using two text prompts: "a lego man" and "a red converse allstar shoe". Each contains three views: a rendered RGB image and two views of the mesh. The meshes are extracted by Marching Cubes (Marcus et al., 2019) from the density field of the learned Instant-NGP. We can clearly observe that compared to the flawed meshes of Latent-NeRF, Points-to-3D generates more delicate meshes. That is, in addition to synthesizing view-consistent novel views, Points-to-3D can learn controllable and more compact geometry for text-to-3D generation.
### Compositional Generation
We analyze the effectiveness of Points-to-3D in generating compositional 3D content. As shown in Figure 10, by taking the manually composited sparse 3D points of multiple reference images as geometry guidance, Points-to-3D can perform view-consistent and shape-controllable text-to-3D generation. The results indicate that Points-to-3D enables users to freely composite objects using multiple reference images and generate more imaginative 3D content.
### Quantitative Comparisons
_CLIP R-precision._ In this section, we calculate the CLIP R-precision metric for Latent-NeRF (Wang et al., 2018), SJC (Wang et al., 2018), and our Points-to-3D. We compute CLIP R-precision following (Huang et al., 2018) on 50 text and 3D model pairs (shown in our supplementary materials) based on three CLIP image encoders (ViT-B/16, ViT-B/32, and ViT-L/14). For each 3D generation, we randomly select two rendered views for calculation. The results are reported in Table 1; the higher scores of Points-to-3D indicate that renderings of our generated 3D models more accurately match the text prompts.
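For reference, R-precision reduces to a top-1 retrieval check once CLIP embeddings are available. The sketch below assumes paired, L2-normalized image and text embeddings computed elsewhere (the CLIP encoding step itself is omitted), with one rendered view per prompt for simplicity rather than the two views per generation used above.

```python
import numpy as np

def clip_r_precision(image_emb, text_emb):
    """Fraction of rendered views whose own prompt is the top-1 text by cosine similarity.
    image_emb, text_emb: (n, d) L2-normalized embeddings with matching row order."""
    sims = image_emb @ text_emb.T            # (n, n) cosine similarities
    top1 = sims.argmax(axis=1)
    return float((top1 == np.arange(len(image_emb))).mean())
```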
_User Studies._ The CLIP R-precision metric focuses on how well rendered views match the text prompts, but it hardly reflects view consistency and image realism. We conduct user studies with 22 participants to evaluate the different methods based on user preferences. We ask the participants to give a preference score (ranging from \(1\sim 5\)) in terms of view consistency and prompt relevance for each anonymized method's generation. As shown in Figure 11, we report the average scores on a randomly composed evaluation set that consists of 36 generation results per method. We find that Points-to-3D is significantly preferred over both Latent-NeRF and SJC in terms of view consistency and prompt relevance. More detailed information about the user study is provided in our supplementary materials.
by performing score distillation to the 2D image diffusion model (ControlNet). Both qualitative and quantitative comparisons demonstrate the superiority of Points-to-3D in generating view-consistent and shape-controllable 3D contents.
|
2309.02371
|
Observation of elastic bound modes in the continuum in architected beams
|
We report the experimental observation of an elastic bound mode in the
continuum (BIC) in a compact region of an architected beam. We consider a long
slender beam with rigid masses attached at periodic intervals, with a compact
segment bounded by four protruding side beams. The key idea is to seek a mode
where the side beams move out-of-phase with the compact region, thereby
nullifying the forces and moments outside this region and resulting in a bound
mode. The structure is modeled using Euler-Bernoulli beam theory and the side
beams are designed by imposing equilibrium constraints required for a BIC.
Multiple BICs are found in the compact region, and for each BIC, we find a
one-parameter family of BIC supporting side beam designs. The predictions are
verified by three-dimensional finite element simulations, followed by their
experimental observation using laser Doppler vibrometry in a macro-scale
structure. Our approach allows to achieve BICs in an arbitrary sized compact
region of the architected beam. Our findings may open avenues for confining
elastic wave energy in compact regions for applications in sensors and
resonators.
|
Adib Rahman, Raj Kumar Pal
|
2023-09-05T16:34:44Z
|
http://arxiv.org/abs/2309.02371v1
|
# Observation of elastic bound modes in the continuum in architected beams
###### Abstract
We report the experimental observation of an elastic bound mode in the continuum (BIC) in a compact region of an architected beam. We consider a long slender beam with rigid masses attached at periodic intervals, with a compact segment bounded by four protruding side beams. The key idea is to seek a mode where the side beams move out-of-phase with the compact region, thereby nullifying the forces and moments outside this region and resulting in a bound mode. The structure is modeled using Euler-Bernoulli beam theory and the side beams are designed by imposing the equilibrium constraints required for a BIC. Multiple BICs are found in the compact region, and for each BIC, we find a one-parameter family of BIC-supporting side beam designs. The predictions are verified by three-dimensional finite element simulations, followed by their experimental observation using laser Doppler vibrometry in a macro-scale structure. Our approach makes it possible to achieve BICs in an arbitrarily sized compact region of the architected beam. Our findings may open avenues for confining elastic wave energy in compact regions for applications in sensors and resonators.
## I Introduction
Bound modes in the continuum (BICs) are a unique class of localized modes with two key properties: their wave amplitude diminishes to zero outside a compact region and their frequency is in the continuous spectrum (pass band) of bulk propagating modes [1]. In contrast, conventional bound modes reside within bandgaps, and the localized modes encountered in the passband typically exhibit leakage, with the wave amplitude gradually decreasing from the center of the wave [2; 3; 4]. The concept of BICs originated in quantum mechanics, introduced by von Neumann and Wigner in 1929, who utilized a complex artificial potential [5]. It was regarded as a mathematical anomaly, as such complex potentials were not possible in real materials. BICs were subsequently predicted and observed in several classical wave systems [6; 7]. Notably, in 1966, BICs were experimentally observed in acoustics through the 'wake shedding experiment' [8]. Today, BICs have become an active area of research across various scientific disciplines due to their leak-free energy-storage capacity with very high quality factors (Q factors) [9]. Potential applications of BICs encompass lasing [10; 11; 12], sensing [13], filtering [14; 15], supersonic surface acoustic devices [16; 17], vibration absorption [18], and wave guiding [19; 20; 21].
Recent advancements in manufacturing techniques have opened up new possibilities for exploring BICs in complex structures, particularly in the domains of photonic metamaterials. Two major types of BICs are symmetry-protected and accidental. Symmetry-protected BICs arise from the mismatch between the spatial symmetry of a localized mode and the symmetry of the propagating modes. Experimental observations of such symmetry-protected BICs have been reported in various systems, such as dielectric slabs with square arrays of cylinders [22], periodic chains of dielectric disks [23], and optical waveguides [24]. On the other hand, accidental BICs can be achieved through precise system parameter tuning to cancel their coupling with bulk propagating waves. One example of this category is the Fabry-Perot BIC [25], where the BIC is formed through the destructive interference of waves. In addition to these two types of BICs, recent research has explored quasi-BICs (QBICs) which have high Q factors [26; 27]. As true BIC-supporting structures are limited, quasi-BICs are emerging as an alternative.
In contrast to photonics, a major challenge in achieving elastic BICs is the simultaneous presence of transverse and longitudinal waves with distinct dispersion relations. BICs should not couple or hybridize with any propagating modes present in an elastic body. There have been a few works in recent years on predicting and observing BICs in elastic media. Examples include the prediction of elastic BICs in a structure comprising two periodic arrays of cylinders at a specific distance [28] and the observation of BICs in a chain of thin plates connected by slender beams by exploiting non-Hermitian effects [29]. BICs have also been observed in multi-physics domains, including chip-scale ring-shaped optomechanical microresonators [30], slab-on-substrate phononic crystals [31], and an elastic bar with an air-encapsulated cavity [32]. Cao _et al._ [18; 33] observed quasi-BICs in a semi-infinite plate attached to a resonant waveguide and predicted that these can be turned into BICs by tuning the geometric parameters [33]. All these BICs require specific material properties, boundary conditions, geometric features, and dimensions. For practical applications, it is desirable to have a general framework that can translate across material properties and generate BICs in arbitrarily sized compact regions.
This work builds on our prior work [21], where we predicted
how a family of BICs can be achieved in an arbitrary compact region of a spring-mass system by exploiting symmetry constraints. Here, we extend this concept to realize BICs in a compact region of an architected beam. In contrast to spring-mass chains, beams are continuous structures with multiple degrees of freedom at each point, namely transverse displacements and rotations. These degrees of freedom impose additional conditions for BICs. Here, we consider a periodic architected beam having an array of rigid masses. To achieve BICs in a compact region, four side beams are added, and the key idea is that they move out-of-phase with the periodic beam to nullify forces and moments at their joints.
The outline of the paper is as follows: section II presents the design and modeling approach. The structure is modeled using \(1D\) Euler-Bernoulli beam theory. Section III presents the BIC mode shapes determined using \(1D\) finite element analysis and reports a 1-parameter family of BIC supporting side beam designs. In section IV, \(3D\) finite element simulations and laser Doppler vibrometry based experimental measurements are presented, that verify and validate the existence of a BIC in the architected structure. The simulations are done using the beam theory-based design as a starting point to finalize a structure that simplifies fabrication. Finally, the conclusions, along with various sources of error and possible future extensions are presented in section V.
## II Proposed concept and modeling approach to design compact region
We first introduce the proposed architected beam and discuss the key idea of achieving BICs in a compact region by adding side beams. These side beams are designed by modeling the structure using a one-dimensional (\(1D\)) Euler Bernoulli beam theory. A description of this modeling approach is presented, followed by its numerical discretization procedure.
### Architected beams with side segments and symmetry consideration
Let us consider a homogeneous beam with rigid masses attached at periodic intervals of distance \(l\). An example is shown in the central beam in Fig. 1(a). We call this periodic architected beam as the main beam. Figure 1(b) displays a unit cell of the main beam with the key geometric variables labeled. It is a slender beam with rectangular cross-section and has two identical rigid cylinders at its center, one each at the top and bottom.
Our objective is to achieve a BIC in an arbitrary compact region, for example between the cross-sections labeled A and B in Fig. 1(a). We will model this architected structure using one-dimensional (\(1D\)) beam theory, which assumes that the beam deforms such that each cross-section remains rigid. Under this assumption, the degrees of freedom are the 3 translations and 3 rotations of each cross-section along the beam's axis. We restrict attention to wavelengths that are long compared to the beam thickness and to low frequencies, i.e., the first pass band of the beam. The lower frequency band has flexural modes with displacement along \(z\). For such modes, it suffices to consider two degrees of freedom at each cross section: transverse displacement \(u\) along \(z\) and rotation \(\theta\) about the \(y\)-axis, with the latter accounting for bending. The presence of side beams couples the torsional (rotation about \(x\)) and flexural modes near the compact region. However, as we discuss below, for BIC modes, the rotation about \(x\) is cancelled due to symmetry, and thus the number of relevant DOFs at each point along the beam cross-section is two.
A BIC between sections \(A\) and \(B\) will be a mode with displacement confined in this region and with zero force and moment at sections \(A\) and \(B\). Note that if the net force and net moment on section \(A\) (\(B\)) are zero, it will be at rest and no displacement field will be induced to the left (right) of point \(A\) (\(B\)). Our approach is to add side beams so that sections \(A\) and \(B\) are at rest and we obtain a BIC in the compact region. Let us discuss how the four side beams in Fig. 1(a) induce BICs and the reason for having two side beams on either side of the main beam at sections \(A\) and \(B\). The key idea is to have a mode where the side beams move out of phase, i.e., in the opposite direction to the main beam, thereby cancelling the net force and moment at sections \(A\) and \(B\). A single side beam will induce torsional rotation about the main beam axis due to the component of moment along \(x\). To cancel this moment, a second side beam is added. The two side beams on either side at \(A\) thus move in-phase with each other, but out-of-phase with the main beam. We will show later in Sec. III a family of side beams that can achieve exact cancellation of forces and moments.
We choose all the side beams to be identical and arrange them so that the center of the compact region between sections \(A\) and \(B\) has reflection symmetry about both \(x\) and \(y\) axes. Note that BICs do not require these symmetries and they are chosen to simplify the side beam design. Indeed, the only requirement is that the force \(F_{z}=0\) and moments \(M_{x}=M_{y}=0\) at sections \(A\) and \(B\). This requirement ensures zero displacement and rotation at these two sections, and consequently, outside the compact region. We remark here that the full design space of distinct side beams is sufficiently large, with multi-parameter families of solutions that satisfy these conditions. Imposing the constraints arising for symmetry, the problem of inducing BICs reduces to determining suitable side beams. As we are considering that the side beams' arrangement is reflection symmetric about the \(x\) and \(y\) axis, determining one side beam's design will suffice to complete the beam structure that can support BIC at a particular frequency.
Let us discuss how these reflection symmetries and equilibrium conditions impose restrictions on resulting bound mode shapes in the compact region. Each
symmetry can be represented by a linear transformation operator. This operator maps the position vectors of each point in the structure to its corresponding reflected point. In addition, the mode shapes are eigenvectors of this operator [34]. The reflection symmetry operator has two eigenvalues, \(\lambda=\pm 1\), and the bound mode shapes are thus even (\(\lambda=1\)) or odd (\(\lambda=-1\)) in the compact region about the symmetry axis. Let us analyze the consequence of reflection symmetry about the \(x\) axis. An odd mode shape about the \(x\) axis will induce a moment and thus rotation about \(x\) at sections \(A\) and \(B\) as the side arms move in opposite directions. The sections to the left of \(A\) and right of \(B\) will thus not be at rest and a bound mode is thus not possible with an odd mode shape about the \(x\) axis. In summary, a bound mode shape in the compact region will be either even or odd about the \(y\) axis and even about the \(x\) axis.
### Modeling with Euler-Bernoulli beam theory and numerical procedure
Let us derive the governing equations for free vibrations of the structure based on \(1D\) beam theory and discuss the finite element based procedure to solve them. Let \(u(x,t)\) and \(u_{p}(x,t)\) denote the transverse displacements of the main beam and side beam \(p\), respectively. The action functional for this structure is given by
\[S=\int_{0}^{T}\int_{0}^{L}\left[\frac{\rho A\dot{u}^{2}}{2}-\frac {EI\left(u^{\prime\prime}\right)^{2}}{2}+\sum_{p=1}^{N}\left(\frac{m\dot{u}^{2 }}{2}+\frac{I_{r}\dot{\theta}^{2}}{2}\right)\delta(x-pl)\right]dxdt\\ +\sum_{p=1}^{4}\frac{1}{\cos\varphi_{p}}\int_{0}^{T}\int_{0}^{L_ {p}}\left[\frac{\rho A\dot{u}_{p}^{2}}{2}-\frac{\cos^{4}\varphi_{p}EI\left(u_{ p}^{\prime\prime}\right)^{2}}{2}+\left(\frac{m_{p}\dot{u}_{p}^{2}}{2}+\frac{ \cos^{2}\varphi_{p}I_{rp}\dot{\theta}_{p}^{2}}{2}\right)\delta(x-L_{p})\right] dxdt \tag{1}\]
Here \(u^{\prime}\) and \(\dot{u}\) denote partial derivatives of \(u\) with respect to \(x\) and \(t\), respectively; \(\theta=u^{\prime}\) is the rotation of the section and \(L_{p}/\cos\varphi_{p}\) is the length of side beam \(p\). \(E\), \(I\), \(\rho\) and \(A\) are the Young's modulus, bending moment of inertia, density and cross-section area, respectively. The attached cylinders are assumed to be rigid with diameter \(d\). The bending moment of inertia is \(I=wt^{3}/12\) for a beam with width \(w\) and thickness \(t\). \(\varphi_{p}\) is the angle of side beam \(p\) with respect to the \(x\)-axis. We seek harmonic solutions at frequency \(\omega\) and impose a displacement field of the form \(u(x,t)=u(x)e^{i\omega t}\) to replace the time derivatives by \(i\omega\). The displacement field satisfies the Euler Lagrange equations, obtained by setting the variation of \(S\) to zero. This condition gives
Figure 1: (a) Schematic of proposed architected beam: rigid masses attached at periodic intervals along a homogeneous beam. Four side beams are added to get a BIC between A and B. (b) A unit cell of the periodic beam with the key geometric variables labeled.
\[\delta S =\int_{0}^{L}\left[-\omega^{2}\left(\rho Au\delta u+\sum_{p=1}^{N}(mu \delta u+I_{r}u^{\prime}\delta u^{\prime})\delta(x-pl)\right)-EIu^{\prime\prime }\delta u^{\prime\prime}\right]\ dx\] \[+\sum_{p=1}^{4}\frac{1}{\cos\varphi_{p}}\int_{0}^{L_{p}}\left[- \omega^{2}\left(\rho Au_{p}\delta u_{p}+\left(m_{p}u_{p}\delta u_{p}+\cos^{2} \varphi_{p}I_{rp}u_{p}^{\prime}\delta u_{p}^{\prime}\right)\delta(x-L_{p}) \right)-\cos^{4}\varphi_{p}EIu_{p}^{\prime\prime}\delta u_{p}^{\prime\prime} \right]\ dx=0. \tag{2}\]
Now, let us discuss the numerical procedure to discretize and solve the above equation. We use a finite element approximation, where the unknown degrees of freedom are restricted to be the displacements \(u\) and rotations \(u^{\prime}\) at the locations of the attached masses. We express \(u\) and \(u^{\prime}\) at a point in the structure as a weighted sum of piecewise cubic polynomials, i.e., having continuous first derivatives, with the weights being the degrees of freedom. We seek a solution that satisfies the governing equation (2) for any perturbation fields \(\delta u\) and \(\delta u^{\prime}\) that lie in the same space spanned by the degrees of freedom. Explicit expressions for the polynomials and the resulting equations are presented in the appendix. The resulting discretized eigenvalue problem for the structure may be written in the matrix form as
\[\omega^{2}\mathbf{Mu}=\mathbf{Ku}. \tag{3}\]
Here \(\mathbf{u}\) is the vector of unknown degrees of freedom, i.e., displacements and rotations at masses, and \(\mathbf{M}\), \(\mathbf{K}\) are the discretized mass and stiffness matrices, respectively.
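In practice, once \(\mathbf{M}\) and \(\mathbf{K}\) are assembled, the bound mode frequencies follow from a standard generalized eigensolve. The snippet below shows the solution step only, with small random symmetric positive-definite matrices standing in for the assembled beam matrices (the assembly itself, and the zero displacement and rotation boundary conditions at sections A and B, are not shown).

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 12                                   # number of retained DOFs (displacements and rotations)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)              # placeholder symmetric positive-definite mass matrix
K = B @ B.T + n * np.eye(n)              # placeholder stiffness matrix

evals, modes = eigh(K, M)                # solve K u = omega^2 M u
freqs_hz = np.sqrt(np.clip(evals, 0, None)) / (2 * np.pi)
```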
## III Numerical solution of architected beams supporting BICs
In this section, the mode shapes of BICs and a family of side beams that support these BICs are determined for a given main beam. Although our studies are presented for a specific choice of compact region, the concept and approach can be extended to an arbitrarily sized compact region and other material properties. A two-step process is used to design BIC-supporting structures using the beam model introduced above in Sec. II.2. The first step is to determine the bound mode frequencies by imposing zero displacement and rotation at sections \(A\) and \(B\) in the main beam. The next step is to determine the side beam dimensions that satisfy the equilibrium conditions required to keep sections \(A\) and \(B\) at rest. Finally, we verify whether these modes are indeed BICs, i.e., whether their frequencies lie in a pass band. This is done by performing a dispersion analysis that yields the pass and stop band frequencies.
Figure 2: (a) Frequencies of bound modes in the compact region of Fig. 1(a). (b) Mode shapes of first 3 modes, \(u\) is transverse displacement along the main beam. Markers indicate rigid mass locations, with similar marker in (a) for their frequency. (c) Dispersion diagram of the main beam from \(1D\) beam theory and \(3D\) elasticity. The first three mode frequencies lie on the lowest flexural band and are thus BICs.
### Bound mode frequencies and design of side beams
Having derived the governing equations for the proposed structure, let us now solve them numerically to determine BICs. The first step is to determine the bound modes by considering the compact region of the main beam only and explicitly enforcing zero displacement and rotation at sections \(A\) and \(B\) in Fig. 1(a). The resulting modes will, in general, not satisfy equilibrium conditions at sections \(A\) and \(B\). The second step is to determine the side beam dimensions so that the total structure (main and side beams together) satisfy the equilibrium conditions at sections \(A\) and \(B\). In these mode shapes, sections \(A\) and \(B\) will thus be at rest and have zero net force and moment, thereby ensuring that parts outside the compact region will be at rest. Thus, this procedure ensures bound modes in the structure.
In the first step, to determine the natural frequencies and mode shapes of BICs, we use the material properties of aluminum 6061 (Young's modulus \(E=68.9\) GPa, Poisson's ratio \(\nu=0.3\), and density \(\rho=2700\) kg/m\({}^{3}\)) for the beam and of neodymium magnet N35 (cylinder density \(\rho_{c}=7537.6\) kg/m\({}^{3}\)) for the cylinders, considering ease of fabrication. The key geometric variables (\(l,w,t,d,h\)) in the unit cell, see Fig. 1(b), are chosen to be (27.5 mm, 5 mm, 2.032 mm, 5 mm, 4.6 mm). The bound mode shapes and frequencies are determined by solving the eigenvalue problem (3) in the compact region with zero displacement and rotation boundary conditions. Figure 2(a) displays the six bound mode frequencies.
Next, we need to determine the side beam dimensions so that the structure supports a BIC in the compact region. There are several geometric variables for the side beams, as shown in Fig. 1(b) for a unit cell, as well as the angle \(\varphi_{1}\) between the main beam and side beam \(AC\), as shown in Fig. 1(a). Different sets of the geometric variables for the side beams can give BICs in the compact region. We fix \(\varphi\) as \(45^{\circ}\) to simplify the problem. Also, for ease of fabrication, the beam thickness and the cylinder diameters in the side beams are chosen to be the same as those in the main beam. Thus the design reduces to determining three geometric variables: the length (\(l_{s}\)) and width (\(w_{s}\)) of the side beams, and the cylinder height (\(h_{C}\)) at section \(C\) in Fig. 1(a).
Let us summarize the conditions on a side beam displacement field \(u_{p}\) needed to get a BIC. These conditions ensure that section \(A\) and the region to the left of it will be at rest. Recall the key idea that the two identical side beams at section \(A\), as in Fig. 1(a) move out of phase with the main beam, thereby canceling the force and moment at \(A\). Its displacement field has to satisfy the governing equations (2) at the bound mode frequency \(\omega\) under fixed boundary conditions at \(A\) (\(u_{p}=0\) and \(u_{p}^{\prime}=0\)). In addition, the resulting forces and moments from the side and main beams should add to zero so that section \(A\) is in equilibrium. Under the considered \(1D\) beam theory, the force and moment at section \(A\) are given by
\[F=EIu^{\prime\prime\prime}+\sum_{p=1}^{2}\cos^{3}\varphi_{p}\,EI_{p}u_{p}^{\prime\prime\prime},\qquad M=EIu^{\prime\prime}+\sum_{p=1}^{2}\cos^{2}\varphi_{p}\,EI_{p}u_{p}^{\prime\prime}.\]
Now, we derive the discrete approximations of the above conditions for the side beam having section \(C\) in Fig. 1(a). A side beam is modeled using a single finite element and the degrees of freedom are the displacements and rotations at the two ends (\(A\) and \(C\)). Since we seek solutions with section \(A\) fixed, the displacement field simplifies to \(u_{p}(x)=N_{3}(x/l_{s})\theta_{C}+N_{4}(x/l_{s})u_{C}\). Explicit expressions for \(N_{3}\), \(N_{4}\) are presented in appendix. Under this approximation, the governing equations of side beams and the equilibrium conditions at \(A\) then reduce to
\[\delta\theta_{C}=0 \implies \left(\frac{4EI_{s}}{l_{s}}-\frac{\omega^{2}m_{s}}{420}(4l_{s}^{ 2}+I_{C})\right)\theta_{C}+\] \[\left(\frac{11\omega^{2}m_{s}}{210}l_{s}-\frac{6EI_{s}}{l_{s}^{2} }\right)u_{C}=0, \tag{4a}\] \[\delta u_{C}=0 \implies \left(\frac{11\omega^{2}m_{s}}{210}l_{s}-\frac{6EI_{s}}{l_{s}^{2 }}\right)\theta_{C}+\] \[\left(\frac{12EI}{l_{s}^{3}}-\frac{\omega^{2}m_{s}}{420}(156+m_{ C})\right)u_{C}=0,\] (4b) \[F_{A}=0 \implies \frac{4EI_{s}}{l_{s}}\theta_{C}-\frac{12EI_{s}}{l_{s}^{2}}u_{C}\] \[-\frac{2EI}{l_{e}}\theta_{G}+\frac{6EI}{l_{e}^{2}}u_{G}=0,\] (4c) \[M_{A}=0 \implies \frac{12EI_{s}}{l_{s}^{2}}\theta_{C}-\frac{24EI_{s}}{l_{s}^{3}}u_ {C}-\] \[\frac{6EI}{l_{e}^{2}}\theta_{G}+\frac{12EI}{l_{e}^{3}}u_{G}=0. \tag{4d}\]
Here \(m_{s}=\rho l_{s}w_{s}t\) and \(I_{s}=w_{s}t^{3}/12\) are the mass and bending moment of inertia of the side beam, respectively, while \(m_{C}=\pi\rho_{c}d^{2}h_{C}/4\) and \(I_{C}=\frac{m_{C}}{12}\left(\frac{3d^{2}}{4}+h_{C}^{2}\right)\) are the mass and mass moment of inertia of the cylindrical mass at section \(C\). \(u_{G}\) and \(\theta_{G}\) are the displacement and rotation at section \(G\), corresponding to the mode shape of the compact region at frequency \(\omega\) (see Fig. 2(a,b)). The force and moment balance constraints assume that the two side beams at \(A\) move in phase with each other. Indeed, as discussed earlier, a BIC mode shape is symmetric about the \(x\)-axis.
The conditions for getting a BIC mode lead to a system of four nonlinear equations (4) with five unknown variables (\(u_{C},\theta_{C},l_{s},w_{s},h_{C}\)) related to the side beams.
To determine them, \(l_{s}\) is set to different fixed values in a wide range and the remaining variables are determined using the Newton-Raphson method. We determined side-beam dimensions that support the lowest frequency bound mode at 713 Hz, denoted by a black marker in Fig. 2(a). Figure 3 displays a 1-parameter family of solutions that we obtained as \(l_{s}\) is varied. Side beams for every solution in Fig. 3 induce the bound mode shown by the black curve in Fig. 2(b). Similarly, for the frequencies marked by the blue triangle and red square, we find families of design parameters which support the corresponding bound modes in Fig. 2(b). These design parameters are displayed in the appendix, Fig. 7.
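To make the solution procedure concrete, the Python sketch below traces the family of side-beam designs numerically by implementing Eqs. (4a)-(4d) as residuals and sweeping \(l_{s}\). The mode-shape values \(u_{G}\), \(\theta_{G}\), the initial guess, the \(l_{s}\) range, and the use of `scipy.optimize.fsolve` (a hybrid Newton-type solver) in place of a hand-written Newton-Raphson iteration are illustrative assumptions; the \(12EI/l_{s}^{3}\) term of Eq. (4b) is implemented with the side-beam inertia \(I_{s}\), which is also an assumption.

```python
import numpy as np
from scipy.optimize import fsolve

# Main-beam and material data from Sec. III (SI units).
E, rho, rho_c = 68.9e9, 2700.0, 7537.6
w, t, d = 5e-3, 2.032e-3, 5e-3
l_e = 27.5e-3 - d                       # element length between rigid masses
I_main = w * t**3 / 12.0
omega = 2.0 * np.pi * 713.0             # lowest bound-mode frequency (Fig. 2a)

# Placeholder compact-region mode-shape values at section G (assumed numbers;
# in practice they come from the eigenvector of problem (3) at omega).
u_G, theta_G = 1.0e-3, 0.05

def residuals(x, l_s):
    """Eqs. (4a)-(4d) with unknowns (theta_C, u_C, w_s, h_C) at fixed l_s."""
    theta_C, u_C, w_s, h_C = x
    m_s = rho * l_s * w_s * t
    I_s = w_s * t**3 / 12.0
    m_C = np.pi * rho_c * d**2 * h_C / 4.0
    I_C = m_C / 12.0 * (3.0 * d**2 / 4.0 + h_C**2)
    r1 = (4*E*I_s/l_s - omega**2*m_s/420*(4*l_s**2 + I_C)) * theta_C \
         + (11*omega**2*m_s/210*l_s - 6*E*I_s/l_s**2) * u_C
    r2 = (11*omega**2*m_s/210*l_s - 6*E*I_s/l_s**2) * theta_C \
         + (12*E*I_s/l_s**3 - omega**2*m_s/420*(156 + m_C)) * u_C
    r3 = 4*E*I_s/l_s*theta_C - 12*E*I_s/l_s**2*u_C \
         - 2*E*I_main/l_e*theta_G + 6*E*I_main/l_e**2*u_G
    r4 = 12*E*I_s/l_s**2*theta_C - 24*E*I_s/l_s**3*u_C \
         - 6*E*I_main/l_e**2*theta_G + 12*E*I_main/l_e**3*u_G
    return [r1, r2, r3, r4]

# Sweep l_s to trace the 1-parameter family of designs (cf. Fig. 3).
for l_s in np.linspace(20e-3, 40e-3, 5):
    sol, info, ok, msg = fsolve(residuals, x0=[0.05, 1e-3, 6e-3, 5e-3],
                                args=(l_s,), full_output=True)
    if ok == 1:
        print(f"l_s = {l_s*1e3:.1f} mm -> w_s = {sol[2]*1e3:.2f} mm, "
              f"h_C = {sol[3]*1e3:.2f} mm")
```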
### Dispersion analysis of the architected beams
To confirm whether the bound modes in Fig. 2(b) are indeed BICs, i.e., whether their frequencies lie in the pass band, we do a dispersion analysis of the main beam, which is periodic with the unit cell shown in Fig. 1(b). We work with the discrete approximation, where the degrees of freedom are \((u_{n},\theta_{n})\) at a section having rigid mass labeled \(n\). We seek traveling wave solutions of the form \(\mathbf{u}_{n}=\tilde{\mathbf{u}}e^{i\kappa n}\), where \(\kappa\) is the non-dimensional wave-number, \(\mathbf{u}_{n}=[\theta_{n},\ u_{n}]^{T}\) and \(\tilde{\mathbf{u}}=[\theta\ u]^{T}\). The discretized governing equations (3) for this section then reduce to an eigenvalue problem \(\omega^{2}\mathbf{M}_{n}(\kappa)\tilde{\mathbf{u}}=\mathbf{K}_{n}(\kappa)\tilde{\mathbf{u}}\) with
\[\mathbf{M}_{n}(\kappa)=\frac{m_{b}}{420}\begin{bmatrix}2l_{e}^{2}(4-3\cos\kappa)+I_{n}&26il_{e}\sin\kappa\\ -26il_{e}\sin\kappa&312+108\cos\kappa+m_{n}\end{bmatrix}, \tag{5a}\]
\[\mathbf{K}_{n}(\kappa)=\frac{EI}{l_{e}^{2}}\begin{bmatrix}4l_{e}(2+\cos\kappa)&-12i\sin\kappa\\ 12i\sin\kappa&24/l_{e}\,(1-\cos\kappa)\end{bmatrix}. \tag{5b}\]
Solving the eigenvalue problem for each \(\kappa\) in the interval \([0,\,\pi]\) gives two dispersion branches, denoted by red curves in Fig. 2(c). The first three bound mode frequencies in Fig. 2(a) lie on the lower red branch, implying that these are BICs.
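As a concrete illustration, the short Python sketch below evaluates the two flexural branches from the \(2\times 2\) eigenvalue problem of Eq. (5). It assumes the geometric values of Sec. III, treats the off-diagonal mass coupling as the Hermitian pair \(\pm 26\,i\,l_{e}\sin\kappa\), counts a single attached cylinder per node, and scales the attached-mass terms \(m_{n}\), \(I_{n}\) by \(420/m_{b}\) so that they enter with their physical values; these interpretive choices are assumptions made for illustration rather than statements from the paper.

```python
import numpy as np
from scipy.linalg import eig

# Unit-cell data from Sec. III (SI units); one attached cylinder per node is
# assumed here for simplicity.
E, rho, rho_c = 68.9e9, 2700.0, 7537.6
l, w, t, d, h = 27.5e-3, 5e-3, 2.032e-3, 5e-3, 4.6e-3
l_e = l - d
A, I = w * t, w * t**3 / 12.0
m_b = rho * A * l_e
m_n = np.pi * rho_c * d**2 * h / 4.0
I_n = m_n / 12.0 * (3.0 * d**2 / 4.0 + h**2)
m_hat, I_hat = 420.0 * m_n / m_b, 420.0 * I_n / m_b   # scaling assumed for Eq. (5a)

def branches(kappa):
    """Frequencies (Hz) of the two flexural branches at wave-number kappa, Eq. (5)."""
    c, s = np.cos(kappa), np.sin(kappa)
    M = (m_b / 420.0) * np.array([[2*l_e**2*(4 - 3*c) + I_hat, 26j*l_e*s],
                                  [-26j*l_e*s, 312 + 108*c + m_hat]])
    K = (E * I / l_e**2) * np.array([[4*l_e*(2 + c), -12j*s],
                                     [12j*s, 24/l_e*(1 - c)]])
    w2 = np.real(eig(K, M, right=False))
    return np.sqrt(np.sort(np.abs(w2))) / (2.0 * np.pi)

kappas = np.linspace(0.0, np.pi, 200)
disp = np.array([branches(k) for k in kappas])
print("lower branch spans %.0f-%.0f Hz" % (disp[:, 0].min(), disp[:, 0].max()))
```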
## IV \(3D\) numerical simulations and experimental results
This section presents the verification of our predictions based on the \(1D\) beam model using \(3D\) elasticity theory. The simulations are performed using the commercial finite-element analysis software COMSOL. Finally, we report on the experimental observation of a BIC under dynamic excitation with a shaker.
### Verification using \(3D\) elasticity theory
We model the finite beam structure shown in Fig. 4(b) using \(3D\) elasticity theory. Here, the motion at every point in a structure made of a linear elastic isotropic solid is governed by [35] \(\rho\ddot{\mathbf{u}}-[(\lambda+\mu)\nabla(\nabla\cdot\mathbf{u})+\mu\nabla^{2}\mathbf{u}]=0\), with \(\mathbf{u}=[u_{x}\ u_{y}\ u_{z}]^{T}\) being the vector of displacement components, and \(\mu\) and \(\lambda\) the Lamé constants of the solid. A finite element analysis is performed using the COMSOL Multiphysics software, and the domain is discretized using quadratic elements with tetrahedral geometry.
Let us first verify whether the \(1D\) beam model is accurate by comparing the corresponding dispersion branches of the two models for the unit cell of Fig. 1(b). Figure 2(c) displays this comparison, with the blue and red curves determined using the \(3D\) and \(1D\) models, respectively. The lower-frequency flexural branch is quite close for the two models, which demonstrates the effectiveness of the \(1D\) beam model in predicting flexural-mode-based BICs. In addition, the \(3D\) analysis also shows a quadratic bending branch along the \(y\)-direction, as well as linear longitudinal and torsional dispersion branches.
Let us now determine the final design using \(3D\) finite element analysis. All cylinders attached to the main and side beams are taken to be identical to simplify fabrication. Our starting design point is indicated by the
Figure 3: A 1-parameter family of side beams supports the lowest frequency BIC (black marker in Fig. 2(b)). The diamond-marked geometric dimensions are used for the experimental demonstration in Sec. IV.
diamond marker in Fig. 3, with side beam dimensions \((l_{s},w_{s},h_{C})=(28\) mm, 5.81 mm, 5.37 mm). We do a detailed \(3D\) analysis and make minor changes to the design predicted using the \(1D\) beam theory. There are two reasons for requiring modifications to the design predicted using the \(1D\) model. First, the rigid masses are assumed to be point masses, and the space occupied by their finite diameter \(d\) is neglected in the \(1D\) model. Second, to simplify assembly of the masses in the side beams, each side beam is lengthened to 40 mm and the masses are attached at a distance \(l_{s}\), as shown in Fig. 4(b). This differs from the \(1D\) beam model, where the cylindrical masses on the side beams are attached at their ends.
We search for suitable \(l_{s}\) and \(w_{s}\) by doing a parametric sweep over these variables near the starting design point using \(3D\) finite element simulations. This choice is guided by the existence of a 1-parameter family of valid design solutions of Eqn. (4), see Fig. 3. The sweep search yields \((l_{s},w_{s})=(28\) mm, 8.23 mm) when the height and diameter of all the attached cylinders are set to \(h=4.6\) mm and \(d=5\) mm, respectively. Figure 5(a) displays the BIC mode shape confined to the compact region for these side beam dimensions. Note that the frequency determined with the \(3D\) model (682.5 Hz) is close to that predicted by the \(1D\) model (713 Hz).
### Experimental observation of a BIC
Finally, we report on the experimental observation of the BIC shown in Fig. 5(a), using the dimensions determined from the \(3D\) analysis. The experimental setup is shown in Fig. 4(a). The structure in Fig. 4(b) is fabricated from a 0.08-inch-thick Aluminum 6061 sheet using water-jet cutting. Cylindrical Neodymium N35 magnets of 5 mm diameter and 4.6 mm height are placed at the top and bottom of the main and side beams. These dimensions are chosen since they are commercially available. The sample is clamped at both ends, and a permanent-magnet shaker (LDS V203) is used to apply a sinusoidal displacement at the center of the compact region. The excitation point is denoted here by \(A\), as shown in Fig. 5(a). A force sensor (PCB 208C01) is attached to the shaker to measure the applied force. The velocity at various points along the main beam is measured using a laser Doppler vibrometer (Polytec V FX-I-110).
Let us summarize the experimental procedure. We excite the structure at different frequencies in the interval \([650,750]\) Hz and determine the frequency response function. The excitation frequencies are indicated by markers in the response plot in Fig. 5(a). The velocity of a cylinder and the force applied by the shaker are measured by applying the excitation at a given frequency for 15 seconds. To allow transients to die down, the force and velocity data are recorded in the last 6 seconds of excitation. The beam is then kept at rest for 10 seconds before exciting it at the next frequency. This process is repeated to measure the velocity of each cylinder. The maximum velocity \(v\) and maximum force \(F\) at a given frequency are calculated from an FFT of the measured velocity and force. The normalized energy at every point is then determined as \(|v/F|^{2}\).
Figure 5(b) displays the measured response at the cylinders on the right side of the excitation point. We observe a peak in the frequency response of the excitation point \(A\) at 695 Hz. At this frequency, the response at points \(C\) and \(D\), lying at the boundary of and outside the compact region, is significantly lower (1 unit) than at the excitation point \(A\) (about 93 units). This observation confirms the existence of a BIC in the structure at 695 Hz. We also measured the frequency response at the cylinders to the left of the excitation point. Note that, due to reflection symmetry, the corresponding symmetric points on the left should have identical responses. We quantified the difference in response at points \(B\), \(C\), \(D\) and the side beams with respect to their corresponding symmetric points on the left at the BIC frequency. Our experiments showed a 3% difference at point \(B\) and at the side beams, and a 10% difference at points \(C\) and \(D\).
Figure 4: (a) Experimental set-up. The beam is excited at the center of the compact region using a shaker. A force sensor is attached to the shaker. The velocity at various points along the beam is measured using a laser vibrometer. (b) Zoomed-in view, indicating the dimensions \(l_{s}\) and \(w_{s}\) in a 40 mm side beam length.
Figure 5(c) displays a comparison between experiments and simulations for the frequency response at the excitation point. The simulation has a response peak (around 1000 units) at 682.5 Hz, which is the frequency of the mode shape in Fig. 5(a). Let us remark on the various sources of error that result in deviation from a true BIC. The discrepancy between experiments and simulations is attributed to imperfections in manufacturing, with the fabricated beam width being around 4.8 mm instead of the designed 5 mm, and to limited precision in placing the cylinders at their exact locations. Another possible reason for the lower peak response in experiments is air damping [30] at macroscopic scales.
Finally, we note that the frequency response at point \(C\) is 0.137 units in the simulations, compared to 1000 units at point \(A\). Although significantly lower than the experimental value, it is not zero, in contrast to the prediction of the \(1D\) beam model. To understand this discrepancy, note that Euler-Bernoulli beam theory, which is used to predict a true BIC, assumes that each cross-section is non-deformable and undergoes only translations and rigid rotations [36]. Although this is an excellent approximation at low frequencies, small deviations arise when full \(3D\) effects are considered. The resulting displacement at \(C\) is thus a measure of the deviation of the exact \(3D\) solution from the \(1D\) beam theory. Notably, we do not attempt to satisfy the zero-displacement condition point-wise, but instead satisfy it in an average sense over the entire cross-section.
## V Conclusion
We introduced an architected structure that comprises a main beam with periodically attached masses and has a compact region with protruding side beams. Symmetry and equilibrium constraints are used to determine the conditions required for a BIC in this compact region. A \(1D\) beam model is derived using Euler-Bernoulli theory, and a finite element method is used to determine the bound modes of the structure. The conditions on the main and side beams required to support a BIC are derived, and a Newton-Raphson method is used to solve the resulting nonlinear equations. For each BIC, we find a 1-parameter family of side-beam designs that supports it. A dispersion analysis is conducted to confirm that their frequencies lie in the pass band and that they are thus BICs.
We verify the BIC predictions of the \(1D\) beam model using finite element analysis (FEA) based on \(3D\) elasticity. The \(1D\) model is found to be in good agreement for the low-frequency flexural modes under consideration. For ease of fabrication and assembly, and to account for the mismatch with \(3D\) elasticity, minor modifications to the side-beam design determined using the \(1D\) model are made by doing a parametric sweep over the width and length of the side beams using \(3D\) FEA.
Figure 5: (a) BIC mode shape at 682.5 Hz determined using \(3D\) FEA. (b) Measured frequency response at cylinders. A resonant peak occurs at 695 Hz corresponding to the BIC. (c) Comparison of frequency response at excitation point \(A\): simulation and experiments.
The designed structure is fabricated and excited over a range of frequencies around the BIC frequency. The experimentally measured frequency response at the excitation point shows a resonant peak close to the frequency predicted by FEA. At the resonant frequency, the fraction of energy leaking to the surroundings is reduced to a minimum, which demonstrates the existence of a BIC in the compact region. The experimental results are compared with the FEA results, and the possible reasons for discrepancies, along with the causes of deviation from a true BIC, are discussed.
Let us remark on some possible future extensions of our work. These concepts translate across length scales and material properties, and may find applications at the micro- and nano-scales. At those scales, the limitations associated with air damping and material damping may be significantly reduced. The idea of cancelling forces and moments by exploiting symmetry may be extended to realize bound modes and BICs in plates, shells, and \(3D\) architected solids.
###### Acknowledgements.
This work was supported by the U.S. National Science Foundation under Award No. 2027455
## Appendix A Finite element shape functions and matrices
Let us discuss an approximation we use to determine the effective bending stiffness of the beam segment between sections 1 and 2 in Fig. 6. Its compliance comes from three segments: the two segments of length \(d/2\) at the ends carrying the rigid masses, and the beam segment of length \(l_{e}=l-d\) between them. These three segments may be viewed as springs in series, so their effective stiffness is lower than that of any individual segment. The segments carrying the rigid masses (length \(d/2\)) have a significantly higher bending stiffness, and their contribution to the overall compliance is neglected. Only the beam segment of length \(l_{e}\) in Fig. 6 is used to determine the effective bending stiffness. Under this approximation, the rigid masses may be represented as point masses with a beam segment of length \(l_{e}\) between them.
Figure 6 displays a schematic of a beam finite element. The discrete degrees of freedom are the displacements and rotations at the locations of the rigid masses, labeled \([\theta_{1},\ u_{1}]^{T}\) and \([\theta_{2},\ u_{2}]^{T}\) in this figure. Let us denote the locations of points 1 and 2 by \(x_{1}\) and \(x_{2}\), respectively. In this element, we seek a solution of the form
\[u(x)=N_{1}(\xi)\theta_{1}+N_{2}(\xi)u_{1}+N_{3}(\xi)\theta_{2}+N_{4}(\xi)u_{2}, \tag{10}\]
where \(\xi\) is a local coordinate in the element given by \((x-x_{1})/l_{e}\) and taking values in \([0,1]\). \(N_{i}(\xi)\) are Hermite polynomial shape functions [37] and explicit expressions for the shape functions are
\[N_{1}=l_{e}\xi(\xi-1)^{2}, \tag{11a}\]
\[N_{2}=1-3\xi^{2}+2\xi^{3}, \tag{11b}\]
\[N_{3}=l_{e}\xi^{2}(\xi-1), \tag{11c}\]
\[N_{4}=\xi^{2}(3-2\xi). \tag{11d}\]
Here, Eqn. (10) may be written compactly as \(u(x)=\mathbf{N}\mathbf{u}^{T}\), with \(\mathbf{N}\) and \(\mathbf{u}\) being vectors having components \(N_{i}\) and \(u_{i}\), respectively.
Let us derive the contribution of the above beam segment to the governing equation. We substitute Eqn. (10) into Eqn. (2) and separate the terms with and without \(\omega^{2}\) into mass matrix, \(\mathbf{M}_{el}\) and stiffness matrix, \(\mathbf{K}_{el}\) respectively for an element. The various terms in Eqn. (2) then have the form \(\delta\mathbf{u}^{T}\mathbf{K}_{el}\mathbf{u}\) or \(\omega^{2}\delta\mathbf{u}^{T}\mathbf{M}_{el}\mathbf{u}\), where
\[\mathbf{K}_{el}=\int_{x_{1}}^{x_{2}}\frac{d^{2}\mathbf{N}^{T}}{dx^{2}}\,EI\,\frac{d^{2}\mathbf{N}}{dx^{2}}\,dx, \tag{12a}\]
\[\mathbf{M}_{el}=\int_{x_{1}}^{x_{2}}\rho A\,\mathbf{N}^{T}\mathbf{N}\,dx+\sum_{i=1}^{2}m_{i}\mathbf{N}^{T}(\xi_{i})\mathbf{N}(\xi_{i})+\sum_{i=1}^{2}I_{i}\frac{d\mathbf{N}^{T}(\xi_{i})}{dx}\frac{d\mathbf{N}(\xi_{i})}{dx}. \tag{12b}\]
Here, \(x_{1}\) and \(x_{2}\) are, respectively, the locations of sections 1 and 2 in Fig. 6. The \(m_{i}\) and \(I_{i}\) represent the mass and mass moment of inertia of the \(i^{th}\) rigid mass, respectively. Explicit expressions for these matrices are
\[\mathbf{M}_{el}=\frac{m_{b}}{420}\begin{bmatrix}4l_{e}^{2}+I_{1}&22l_{e}&-3l_{e}^{2}&13l_{e}\\ 22l_{e}&156+m_{1}&-13l_{e}&54\\ -3l_{e}^{2}&-13l_{e}&4l_{e}^{2}+I_{2}&-22l_{e}\\ 13l_{e}&54&-22l_{e}&156+m_{2}\end{bmatrix}, \tag{13a}\]
\[\mathbf{K}_{el}=\frac{EI}{l_{e}^{3}}\begin{bmatrix}4l_{e}^{2}&6l_{e}&2l_{e}^{2}&-6l_{e}\\ 6l_{e}&12&6l_{e}&-12\\ 2l_{e}^{2}&6l_{e}&4l_{e}^{2}&-6l_{e}\\ -6l_{e}&-12&-6l_{e}&12\end{bmatrix}. \tag{13b}\]
Figure 6: Schematic of a beam finite element with degrees of freedom \([\theta_{1},\ u_{1}]^{T}\) and \([\theta_{2},\ u_{2}]^{T}\) at sections 1 and 2, respectively.
Here, \(m_{b}=\rho Al_{e}\) is the mass of the beam in the element. Assembling \(\mathbf{M}_{el}\) and \(\mathbf{K}_{el}\) for every beam segment in the structure gives its mass matrix \(\mathbf{M}\) and stiffness matrix \(\mathbf{K}\). The governing equations result in an eigenvalue problem \(\omega^{2}\mathbf{M}\mathbf{u}=\mathbf{K}\mathbf{u}\). Here, \(\mathbf{u}\) is a vector whose components are the displacements and rotations at the rigid mass locations.
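A minimal Python sketch of this assembly-and-eigenvalue step is given below. It builds the global matrices from the element matrices of Eq. (13) for a short clamped-clamped chain of unit cells, which is how the bound modes of the compact region in Sec. III can be estimated. The number of elements, the splitting of each rigid mass between neighbouring elements, and the scaling of the point-mass terms by \(420/m_{b}\) inside the bracket of Eq. (13a) are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# Beam and mass parameters from Sec. III (SI units).
E, rho, rho_c = 68.9e9, 2700.0, 7537.6
l, w, t, d, h = 27.5e-3, 5e-3, 2.032e-3, 5e-3, 4.6e-3
l_e = l - d
A, I = w * t, w * t**3 / 12.0
m_b = rho * A * l_e
m_pt = np.pi * rho_c * d**2 * h / 4.0            # one attached cylinder
I_pt = m_pt / 12.0 * (3.0 * d**2 / 4.0 + h**2)

def element_matrices(m1, I1, m2, I2):
    """M_el and K_el of Eq. (13), DOF order [theta1, u1, theta2, u2].

    The point-mass terms are scaled by 420/m_b before entering the bracket,
    which is one interpretation of the notation of Eq. (13a)."""
    s = 420.0 / m_b
    M = (m_b / 420.0) * np.array(
        [[4*l_e**2 + s*I1, 22*l_e,        -3*l_e**2,        13*l_e],
         [22*l_e,          156 + s*m1,    -13*l_e,          54],
         [-3*l_e**2,       -13*l_e,       4*l_e**2 + s*I2,  -22*l_e],
         [13*l_e,          54,            -22*l_e,          156 + s*m2]])
    K = (E * I / l_e**3) * np.array(
        [[4*l_e**2, 6*l_e, 2*l_e**2, -6*l_e],
         [6*l_e,    12,    6*l_e,    -12],
         [2*l_e**2, 6*l_e, 4*l_e**2, -6*l_e],
         [-6*l_e,   -12,   -6*l_e,   12]])
    return M, K

n_el = 8                                          # elements in the compact region (assumed)
ndof = 2 * (n_el + 1)
Mg, Kg = np.zeros((ndof, ndof)), np.zeros((ndof, ndof))
for e in range(n_el):
    # Half of each rigid mass is shared between neighbouring elements (assumption).
    Me, Ke = element_matrices(m_pt/2, I_pt/2, m_pt/2, I_pt/2)
    idx = slice(2*e, 2*e + 4)
    Mg[idx, idx] += Me
    Kg[idx, idx] += Ke

# Clamped-clamped boundary conditions: remove (theta, u) at both ends.
free = np.arange(2, ndof - 2)
w2, modes = eigh(Kg[np.ix_(free, free)], Mg[np.ix_(free, free)])
freqs = np.sqrt(np.abs(w2)) / (2.0 * np.pi)
print("lowest bound-mode estimates [Hz]:", np.round(freqs[:6], 1))
```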
## Appendix B 1-parameter family of designs for other BICs
|
2310.17179
|
Linking intra- and extra-cellular metabolic domains via neural-network
surrogates for dynamic metabolic control
|
We outline a modeling and optimization strategy for investigating dynamic
metabolic engineering interventions. Our framework is particularly useful at
the early stages of research and development, often constrained by limited
knowledge and experimental data. Elucidating a priori optimal trajectories of
manipulatable intracellular fluxes can guide the design of suitable control
schemes, e.g., cyber(ge)netic or in-cell approaches, and the selection of
appropriate actuators, e.g., at the transcriptional or post-translational
levels. Model-based dynamic optimization is proposed to predict optimal
trajectories of target manipulatable intracellular fluxes. A challenge emerges
as existing models are often oversimplified, lacking insights into metabolism,
or excessively complex, making them difficult to build and implement. Here, we
use surrogates derived from steady-state solutions of constraint-based
metabolic models to link manipulatable intracellular fluxes to the process
exchange rates of structurally simple hybrid dynamic models. The latter can be
conveniently used in optimal control problems of metabolism. As a proof of
concept, we apply our method to a reduced metabolic network of
$\textit{Escherichia coli}$ considering two different scenarios of dynamic
metabolic engineering.
|
Sebastián Espinel-Ríos, José L. Avalos
|
2023-10-26T06:06:24Z
|
http://arxiv.org/abs/2310.17179v4
|
Linking intra- and extra-cellular metabolic domains via neural-network surrogates for dynamic metabolic control
###### Abstract
In this study, we aim to optimize biotechnological production by manipulating intracellular metabolic fluxes in microbial cell factories. Model-based dynamic optimization is proposed to determine the optimal dynamic trajectories of the manipulatable intracellular fluxes. A challenge emerges as existing models are often oversimplified, lacking insights into intracellular metabolism, or are excessively complex, leading to numerical and implementation challenges in optimal control (e.g., related to bilevel optimizations). We propose a solution involving a machine-learning surrogate derived from steady-state constraint-based metabolic modeling. This surrogate bridges the gap between manipulatable intracellular fluxes and process exchange rates. By integrating the surrogate model with simple macro-kinetic dynamic models, we can develop _hybrid_ machine-learning-supported _dynamic_ models. Conveniently, the manipulatable intracellular fluxes in these _augmented_ models can be exploited as dynamic optimization degrees of freedom. We apply this modeling and optimization strategy to a representative metabolic network that showcases common challenges in dynamic metabolic control. We also present an example of cybernetic control to counteract system uncertainties. Our approach facilitates the _in silico_ evaluation of dynamic metabolic interventions and can aid in the selection of suitable control and actuation strategies.
## I Introduction
Microbial cell factories are engineered microorganisms that are optimized for the biotechnological production of valuable metabolites and proteins from renewable resources. Metabolic engineering involves the rewiring of metabolic networks to enhance the production of native products or to introduce new (non-native) metabolic pathways into cells. As a result, these microbial cell factories can make a wide range of biochemicals, biofuels, biomaterials, and biopharmaceuticals [1].
Bioprocess efficiency is commonly assessed by the product titer, volumetric productivity rate, and yield. In conventional metabolic engineering approaches, the cell's metabolic steady state is optimized to favor the product pathway. However, maximizing the product pathway can divert resources away from biomass synthesis, resulting in a trade-off where increased product yields lead to decreased biomass yields and reduced volumetric productivities [2].
Dynamic metabolic control [2] has emerged as a means to address intrinsic metabolic trade-offs, potentially enhancing the production efficiency of bioprocesses. It focuses on dynamically adjusting manipulatable metabolic fluxes to enable transitions between different metabolic states, rather than maintaining a static metabolic flux distribution. Compared to _only_ exploiting extracellular/process exchange rates, manipulating _intracellular_ fluxes can enable more degrees of freedom for advanced metabolic applications, upgrading the toolbox of bioprocess optimization.
The manipulation of target intracellular metabolic fluxes can be achieved, e.g., by tuning the expression of enzymes (transcriptional level) or by modifying enzyme activity (post-transcriptional level) [3]. Control can be exerted externally, utilizing external signals in a cybernetic approach [4], or intracellularly, with control mechanisms encoded within the cell through genetic and molecular circuits [5]. Fig. 1 shows possible control configurations for dynamic metabolic control.
A key question arises: what should the optimal dynamic trajectories of the manipulatable fluxes be to maximize production efficiency? Knowing _a priori_ the optimal dynamic trajectories of the manipulatable intracellular fluxes can facilitate, e.g., the determination of whether a cybernetic or intracellular control approach is more suitable for implementation. Also, it could help to decide on the most appropriate actuation mechanism, e.g., either at the transcriptional or post-transcriptional level.
We propose employing model-based dynamic optimization to determine the optimal dynamic trajectories of the manipulatable intracellular metabolic fluxes that maximize production efficiency. To do so, we need a suitable dynamic model that links manipulatable intracellular metabolic fluxes
Fig. 1: Representation of dynamic metabolic engineering and possible control configuration strategies levering, e.g., two manipulatable intracellular fluxes. System inputs: \(\mathbf{u}\in\mathbb{R}^{n_{u}}\); system measurements: \(\mathbf{y}\in\mathbb{R}^{n_{y}}\).
to extracellular exchange rates1 such as substrate uptake, product excretion, and growth rates.
Footnote 1: Without loss of generality, here we assume that all products and substrates are exchanged with the extracellular medium, although one could also deal with intracellular products.
Unstructured and unsegregated macro-kinetic modeling may be the simplest way to model dynamic bioprocesses. These models consider biomass as a single component catalyzing growth and process exchange rates [6]. As such, they do not explicitly capture information on intracellular metabolism and are thus not suitable for applications in dynamic metabolic control exploiting _intracellular_ fluxes.
Constraint-based modeling, such as flux balance analysis (FBA), enables the prediction of steady-state metabolic flux distributions (cf. [7] for more details). These (underdetermined) models optimize a cellular objective function, often assumed to be the maximization of biomass synthesis. The optimization is constrained by the mass balances, constructed from the stoichiometric matrix of a metabolic network, which can be derived from gene-enzyme-reaction associations. Additional constraints related to, e.g., thermodynamic feasibility and resource allocation, can also be applied. The advantage of these models is their simplicity. In their most basic formulation, they require only the mass balance and irreversibility reaction constraints. Using constraint-based models, one can estimate _maximum_ theoretical fluxes or yields under specific conditions and assumed cellular objectives. However, the steady-state assumption limits the application of constraint-based modeling in _dynamic_ optimization schemes.
Dynamic versions of constraint-based modeling are available (cf. e.g. [4, 8]). Integrating them into process optimization, however, turns the task into a bilevel optimization and requires assumptions on the cell's _dynamic_ objective function, e.g., short-term/long-term goals. Furthermore, the numerical solution of bilevel optimizations often involves game theory-based assumptions (cf. [9] for more details). One has to decide whether the two optimizations collaborate (optimistic approach) or conflict (pessimistic approach). The optimistic approach simplifies the task, allowing the replacement of the inner optimization with its Karush-Kuhn-Tucker conditions. However, this introduces non-convexity, even if the original model was convex. Consequently, constraint-based dynamic models often present substantial challenges in the context of process optimization.
In this work, we employ neural networks, known for capturing complex relationships, to bridge the gap between the steady-state solutions of constraint-based models and macro-kinetic dynamic models. The neural network effectively creates a direct link between manipulatable intracellular fluxes and extracellular exchange rates. The resulting model can be regarded as a _hybrid_ machine-learning supported _dynamic_ model. This methodology simplifies process optimization by circumventing bilevel optimization schemes while still enabling the use of manipulatable intracellular fluxes as optimization degrees of freedom. Our contribution differs from previous approaches [10], where polynomial-based surrogates were employed to correlate one set of extracellular fluxes with another, neglecting the exploration and exploitation of the intracellular domain.
The remainder of this paper is organized as follows. Section 2 introduces our hybrid machine-learning-supported dynamic metabolic modeling strategy. In Section 3, we present a model-based dynamic optimization scheme that leverages manipulatable intracellular fluxes to maximize production efficiency, and we show how this can be incorporated into cybernetic control approaches. Section 4 serves as a test case, where we consider a representative metabolic network with two biotechnology-relevant degrees of freedom: one for modulating the trade-off between growth and product production, and another one for adjusting the ratio of products of interest.
## 2 Metabolic modeling strategy
We proceed to introduce each of the elements of our proposed metabolic modeling strategy and show how they are interconnected.
### _Constraint-based model in steady state_
A constraint-based model, often underdetermined, can be expressed as (adapted from [7]):
\[\max_{\mathbf{V}}\;F_{\mathrm{bio}}(\mathbf{V}), \tag{1a}\]
\[\mathrm{s.t.}\;\;\dot{\mathbf{m}}=\mathbf{S}\mathbf{V}=\mathbf{0}, \tag{1b}\]
\[\mathbf{V_{\min}}\leq\mathbf{V}\leq\mathbf{V_{\max}}, \tag{1c}\]
\[V_{i,\min}=0,\;\forall i\in\mathbb{I}, \tag{1d}\]
\[\mathbf{V_{\mathrm{man}}}=\mathbf{v_{\mathrm{man}}},\;\mathbf{V_{\mathrm{man}}}\subseteq\mathbf{V}, \tag{1e}\]
\[\mathbf{0}\leq\mathbf{c}(\mathbf{V}). \tag{1f}\]
Here, \(\mathbf{S}\in\mathbb{R}^{n_{m}\times n_{V}}\) is the stoichiometric matrix associated with the internal metabolites \(\mathbf{m}\in\mathbb{R}^{n_{m}}\) and metabolic fluxes \(\mathbf{V}\in\mathbb{R}^{n_{V}}\). \(\mathbb{I}\) represents the set of irreversible reactions, hence the lower bound equal to zero. In case of manipulatable intracellular fluxes, denoted as \(\mathbf{V_{\mathrm{man}}}\in\mathbb{R}^{n_{\mathrm{man}}}\), they can be constrained to their corresponding values \(\mathbf{v_{\mathrm{man}}}\in\mathbb{R}^{n_{\mathrm{man}}}\). Additional (non-linear) constraints \(\mathbf{c}:\mathbb{R}^{n_{V}}\rightarrow\mathbb{R}^{n_{c}}\) can be integrated to consider factors like resource allocation and thermodynamics. The assumed cellular objective function is represented by \(F_{\mathrm{bio}}:\mathbb{R}^{n_{V}}\rightarrow\mathbb{R}\), with \(\mathbf{V}\) being the decision variable of the optimization. Note that the model assumes steady-state conditions of the metabolism (cf. Eq. (1b)).
### _Mapping intra- and extra-cellular metabolic fluxes_
We categorize metabolic fluxes into intracellular fluxes, denoted as \(\mathbf{V_{\mathrm{int}}}\in\mathbb{R}^{n_{\mathrm{int}}}\), and extracellular or exchange fluxes, represented by \(\mathbf{V_{\mathrm{ext}}}\in\mathbb{R}^{n_{\mathrm{ext}}}\). Thus, \(\mathbf{V}:=[\mathbf{V_{\mathrm{int}}^{\mathsf{T}}},\mathbf{V_{\mathrm{ext}}^{\mathsf{T} }}]^{\mathsf{T}}\). To systematically explore the impact on the metabolism of varying intracellular fluxes, we employ a _grid search_ approach. We define sets \(\mathbb{V}_{\mathrm{man},1},\mathbb{V}_{\mathrm{man},2},\ldots,\mathbb{V}_{ \mathrm{man},n}\) representing possible values for each of the \(n\) manipulatable intracellular fluxes. The Cartesian product gives us all possible combinations of these flux values:
\[\mathbb{G}=\prod_{i=1}^{n}\mathbb{V}_{\mathrm{man},i}. \tag{2}\]
Each combination is applied as a constraint in Eq. (1e). Solving the constraint-based model in (1) results in a set of labels (extracellular/exchange fluxes) and features (manipulatable intracellular fluxes). A machine-learning approach
can be used to learn the mapping between the intra- and extra-cellular flux domains. Using neural networks, this can be expressed as:
\[\mathbf{V}_{\mathrm{ext}}=\mathbf{f}_{\mathbf{NN}}(\mathbf{V}_{\mathrm{man}},\mathbf{\Theta}), \tag{3}\]
where \(\mathbf{f}_{\mathbf{NN}}:\mathbb{R}^{n_{\mathrm{man}}}\times\mathbb{R}^{n_{\Theta}}\to\mathbb{R}^{n_{\mathrm{ext}}}\) represents the trained neural network and \(\mathbf{\Theta}\in\mathbb{R}^{n_{\Theta}}\) represents the corresponding parameters.
_Remark._ We could optionally also include the remaining (non-manipulatable) intracellular fluxes \(\mathbf{V}_{\mathrm{rem}}\in\mathbb{R}^{n_{V}-n_{\mathrm{man}}}\) as labels, ensuring continued insight into the intracellular metabolic flux distribution:
\[\mathbf{V}_{\mathrm{rem}}=\mathbf{f}_{\mathbf{NN}}(\mathbf{V}_{\mathrm{man}},\mathbf{\Theta}). \tag{4}\]
For simplicity, we consider Eq. (3) in the remainder of this study.
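To illustrate the grid-search step of Eq. (2) applied to the constraint-based problem (1), the Python sketch below sweeps one manipulatable flux and solves the resulting linear programs with `scipy.optimize.linprog`, collecting feature-label pairs for the surrogate of Eq. (3). The two-metabolite, four-flux stoichiometric matrix is a made-up placeholder, not the network of Fig. 2, and the flux names and bounds are assumptions chosen only so that the pipeline runs end to end.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder stand-in for Eq. (1): internal metabolites (A, B) and four
# irreversible fluxes [V_up, V_split, V_prod, V_gro]. NOT the network of Fig. 2.
S = np.array([[1.0, -1.0, -1.0, 0.0],    # A: uptake - split - product branch
              [0.0,  1.0,  0.0, -1.0]])  # B: split - growth precursor drain
n_v = S.shape[1]
v_up_max = 10.0
c = np.zeros(n_v)
c[3] = -1.0                               # maximize growth flux V_gro (linprog minimizes)

records = []                              # (manipulatable flux, exchange-rate labels)
for v_man in np.linspace(0.0, v_up_max, 51):        # grid over one flux, cf. Eq. (2)
    bounds = [(0.0, v_up_max), (0.0, None), (v_man, v_man), (0.0, None)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
    if res.success:
        records.append((v_man, res.x[0], res.x[3]))  # uptake and growth labels

records = np.array(records)
print(records[:3])
```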
### _Hybrid dynamic model linked to intracellular fluxes_
The neural-network surrogate described above can be integrated into macro-kinetic dynamic models, effectively creating a _hybrid_ machine-learning-supported _dynamic_ model. Therein, the process exchange rates are represented as functions of manipulatable intracellular fluxes. Let \(\mathbf{z}\in\mathbb{R}^{n_{z}}\) be the extracellular states, including the biomass dry weight \(Bio\in\mathbb{R}\). Without loss of generality, let us assume a batch process. The dynamics of the system follows:
\[\frac{\mathrm{d}\mathbf{z}(t)}{\mathrm{d}t}=Bio(t)\,\mathbf{q}\big(\mathbf{f}_{\mathbf{NN}}(\mathbf{V}_{\mathrm{man}}(t),\mathbf{\Theta}),h(\mathbf{z}(t),\mathbf{\theta}_{h})\big), \tag{5a}\]
\[q_{i}:=V_{\mathrm{ext},i}(\mathbf{V}_{\mathrm{man}}(t),\mathbf{\Theta})\,h(\mathbf{z}(t),\mathbf{\theta}_{h}),\quad\forall i\in\{1,2,...,n_{z}\}, \tag{5b}\]
\[\mathbf{z}(t_{0})=\mathbf{z}_{\mathbf{0}}. \tag{5c}\]
Here, \(\mathbf{q}:\mathbb{R}^{n_{\mathrm{man}}}\times\mathbb{R}^{n_{\Theta}}\times\mathbb{R}^{n_{z}}\times\mathbb{R}^{n_{h}}\to\mathbb{R}^{n_{z}}\) describes the biomass-specific exchange reaction rates of the macro-kinetic model, with \(q_{i}\in\mathbf{q}\). \(\mathbf{\theta}_{h}\in\mathbb{R}^{n_{h}}\) represents the parameters of the function \(h:\mathbb{R}^{n_{z}}\times\mathbb{R}^{n_{h}}\to\mathbb{R}\), while \(t\) and \(t_{0}\) denote the current and initial time, respectively. The function \(h\) accounts for _a priori_ known rate-limiting factors, e.g., substrate uptake limitation or product inhibition, often neglected by constraint-based models2. As such, \(V_{\mathrm{ext},i}\), obtained from Eq. (3), represents _maximum_ theoretical flux values predicted by the constraint-based model. Henceforth, we will omit the time-dependency of variables when clear from the context.
Footnote 2: In the absence of (known) rate limitations, one could optimistically assume \(h=1\) when, e.g., the carbon-/energy-source is available in the medium; otherwise, \(h=0\). This simplification can facilitate the numerical solution of the dynamic system.
## 3 Dynamic metabolic control
We now show how the manipulatable intracellular fluxes can serve as degrees of freedom for metabolic control via model-based dynamic optimization.
### _Dynamic optimization_
The dynamic optimization problem for maximizing the production efficiency of a batch process reads:
\[\max_{\mathbf{V}_{\mathrm{man}}(\cdot)}\;J_{p}(\mathbf{z},\mathbf{V}_{\mathrm{man}}), \tag{6a}\]
\[\mathrm{s.t.}\;\;\text{Eqs. (5a)-(5c)},\]
\[0\leq\mathbf{g}(\mathbf{z},\mathbf{V}_{\mathrm{man}}), \tag{6b}\]
where \(J_{p}:\mathbb{R}^{n_{z}}\times\mathbb{R}^{n_{\mathrm{man}}}\to\mathbb{R}\) is the objective function (e.g., volumetric productivity, economic profit, etc.). The function \(\mathbf{g}:\mathbb{R}^{n_{z}}\times\mathbb{R}^{n_{\mathrm{man}}}\to\mathbb{R}^{n_{g}}\) represents possible state and input constraints, addressing economic, technical, or safety considerations. The degree of freedom of the optimization problem is the _function_ of manipulatable fluxes \(\mathbf{V}_{\mathrm{man}}(\cdot)\) from \(t_{0}\) to the final prediction time \(t_{h}\), e.g., the final process time \(t_{f}\) in a batch.
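For orientation, a minimal single-shooting transcription of problem (6) is sketched below in Python. The placeholder exchange-rate map, the forward-Euler integration, the 18 control intervals, the consideration of only one manipulatable flux without a product-ratio constraint, and the use of `scipy.optimize.minimize` with L-BFGS-B instead of the CasADi/HILO-MPC tooling mentioned in Sec. 4 are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

t_f, n_int = 9.0, 18                      # batch time and number of control intervals
n_sub = 50                                # Euler sub-steps per interval
dt = t_f / n_int / n_sub
k_A = 0.04                                # Monod constant, Eq. (10)

def simulate(v4_profile):
    """Forward-Euler integration of a placeholder version of Eq. (5)."""
    z = np.array([120.0, 0.0, 0.0, 0.0, 0.001])   # [A_ext, D_ext, F_ext, G_ext, Bio]
    for v4 in v4_profile:
        for _ in range(n_sub):
            h = max(z[0], 0.0) / (max(z[0], 0.0) + k_A)
            # Placeholder biomass-specific rates [q_A, q_D, q_F, q_G, mu].
            q = np.array([-10.0, 0.5*(10.0 - v4), v4, 0.0, 0.05*(10.0 - v4)])
            z = z + dt * z[4] * q * h
    return z

def neg_product(v4_profile):
    z = simulate(v4_profile)
    return -(z[2] + z[3])                 # maximize final product concentration

res = minimize(neg_product, x0=np.full(n_int, 5.0),
               bounds=[(0.0, 10.0)] * n_int, method="L-BFGS-B")
print("optimal final product:", -res.fun)
print("optimal V4 profile:", np.round(res.x, 2))
```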
### _Counteracting system uncertainties: cybernetic approach_
Let us consider a cybernetic approach where the manipulatable intracellular fluxes can be adjusted online using _external_ inputs. Compared to purely in-cell feedback control approaches, cybernetics offers in general a higher degree of controllability, adaptability, and tunability. For simplicity, we assume that the external input \(\mathbf{u}_{\mathbf{e}}\in\mathbb{R}^{n_{\mathbf{u}_{\mathbf{e}}}}\) directly influences the activity of the metabolic enzymes catalyzing the manipulatable intracellular fluxes. This matches, e.g., a post-translational control scenario. Therefore, to achieve a target \(\mathbf{V}_{\mathrm{man}}\), one could compute the inputs to be applied to the plant via a suitable actuator. This relationship follows:
\[\mathbf{u}_{\mathbf{e}}=\mathbf{f}_{\mathbf{u}}(\mathbf{V}_{\mathrm{man}},\mathbf{\theta}_{u}), \tag{7}\]
where \(\mathbf{f}_{\mathbf{u}}:\mathbb{R}^{n_{\mathrm{man}}}\times\mathbb{R}^{n_{\theta_{u}}}\to\mathbb{R}^{n_{\mathrm{man}}}\). \(\mathbf{\theta}_{u}\in\mathbb{R}^{n_{\theta_{u}}}\) represents the parameters of \(\mathbf{f}_{\mathbf{u}}\). Here we assume one input per manipulatable intracellular flux.
_Remark._ In _cybernetic_ approaches (control at the transcriptional level) [4], Eq. (7) should be reconsidered to account for possible time-delays arising from gene transcription and translation processes.
In cybernetic approaches, one could counteract the short-term effects of model-plant mismatch and disturbances by repeatedly solving the optimization problem in (6). This enables the re-evaluation of the optimal control problem in real-time, based on the current state of the system, effectively implementing model predictive control (MPC) [11]. At sampling time \(t_{k}\), for a batch process, this can be formulated as:
\[\max_{\mathbf{u}(\cdot)}\;J_{p}(\mathbf{z},\mathbf{V}_{\mathrm{man}}), \tag{8a}\]
\[\mathrm{s.t.}\;\;\text{Eqs. (5a)-(5c), (6b), (7)},\]
\[\mathbf{z}(t_{k})=\mathbf{z}_{\mathbf{k}}. \tag{8b}\]
Note that the decision variable of the optimization changes from the _function_\(\mathbf{V}_{\mathrm{man}}(\cdot)\) in (6) to the _function_\(\mathbf{u}(\cdot)\) in (8) since the feedback control is implemented using a cybernetic approach. Furthermore, \(t_{h}\) can move along with the process time, representing a _moving_ prediction horizon, or remain constant, indicative of a _shrinking_ prediction horizon. In
batch processes with a predetermined final process time, we often set \(t_{h}=t_{f}\), resulting in a shrinking horizon.
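For completeness, a bare-bones shrinking-horizon loop in the spirit of problem (8) is sketched below. It reuses the same illustrative placeholder model as the optimization sketch above, applies a 0.97 plant-model mismatch as in Sec. 4.4, and assumes that the first optimized move can be applied directly to the plant, i.e., that the actuator map of Eq. (7) is ideal; none of the numerical choices reproduce the paper's HILO-MPC implementation.

```python
import numpy as np
from scipy.optimize import minimize

t_f, dt_s, dt = 9.0, 0.5, 0.01            # batch time, sampling time, Euler step
k_A, n_s = 0.04, int(9.0 / 0.5)

def step(z, v4, factor=1.0, hours=dt_s):
    """Integrate the placeholder model over one sampling interval."""
    for _ in range(int(hours / dt)):
        h = factor * max(z[0], 0.0) / (max(z[0], 0.0) + k_A)
        q = np.array([-10.0, 0.5*(10.0 - v4), v4, 0.0, 0.05*(10.0 - v4)])
        z = z + dt * z[4] * q * h
    return z

def solve_ocp(z_k, n_left):
    """Re-solve the remaining-horizon problem from the current plant state."""
    def cost(u):
        z = z_k.copy()
        for v4 in u:
            z = step(z, v4)               # nominal model prediction
        return -(z[2] + z[3])
    res = minimize(cost, np.full(n_left, 5.0),
                   bounds=[(0.0, 10.0)] * n_left, method="L-BFGS-B")
    return res.x

z_plant = np.array([120.0, 0.0, 0.0, 0.0, 0.97 * 0.001])   # mismatched initial biomass
for k in range(n_s):
    u_plan = solve_ocp(z_plant, n_s - k)                    # shrinking horizon
    z_plant = step(z_plant, u_plan[0], factor=0.97)         # apply first move to the plant
print("closed-loop final product:", z_plant[2] + z_plant[3])
```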
## 4 Representative metabolic network
We consider a representative metabolic network (cf. Fig. 2) that captures common features in metabolic engineering. The network consists of a single carbon source \(A\) and two products of interest \(F\) and \(G\). We assume that a specific ratio of the latter products is desirable in the bioprocess for optimal product formulation, downstream processing, or end-product properties. For example, this can be the case of biopolymers, where a given ratio of monomers defines the final polymer characteristics (tensile strength, melting point, etc.) [12]. In addition, a by-product \(D\) is associated with the biomass formation process due to the necessity of \(ATP\) as a biomass precursor. A core aspect underlined by this network is the trade-off between biomass formation and the synthesis of products of interest. An illustrative scenario is when all available carbon flux is channeled through the product pathway, \(V_{4}=V_{\text{ext},1}\), leading to the absence of biomass precursors (\(ATP\), \(B\), and \(C\)) and, consequently, no growth. The growth rate is represented by \(V_{\text{ext},5}\).
\(V_{4}\) and \(V_{6}\) are manipulatable intracellular fluxes, \(\mathbf{V_{\text{man}}}:=[V_{4},V_{6}]^{\mathsf{T}}\). Modulating \(V_{4}\) can divert the metabolic flux away from biomass synthesis to favor product formation. Therefore, \(V_{4}\) unlocks a way to modulate the product-biomass trade-off. On the other hand, the fine-tuning of \(V_{6}\) offers a mechanism to balance the production ratio of \(F\) and \(G\).
_Remark_. We express the extracellular states in \(\mathrm{mmol/L}\), except for biomass which is expressed in \(\mathrm{g/L}\). The metabolic fluxes are biomass-specific, expressed in \(\mathrm{mmol/g/h}\), except for the growth rate which is in \(\mathrm{g/g/h}\).
### _Neural-network surrogate of FBA_
Following the approach described in Section 2, we define the sets
\[\mathbb{V}_{\mathrm{man},1} :=\left\{0,0.2,0.4,\ldots,V_{\text{ext},1,\mathrm{max}}\right\} \tag{9a}\] \[\mathbb{V}_{\mathrm{man},2} :=\left\{0,0.01V_{4},0.02V_{4},\ldots,0.5V_{4}\right\}, \tag{9b}\]
where \(V_{4}\in\mathbb{V}_{\mathrm{man},1}\) and \(V_{6}\in\mathbb{V}_{\mathrm{man},2}\). The maximum element value of \(\mathbb{V}_{\mathrm{man},2}\) is \(0.5V_{4}\) to ensure feasibility3. The Cartesian product of these sets follows Eq. (2). Each element of \(\mathbb{G}\) is a possible pair \((V_{4},V_{6})\). We solved an FBA problem as described in (1), considering constraints represented by Eqs. (1b)-(1e). The assumed cell's objective function in (1a) was set to the maximization of the growth rate, given \(0\leq V_{\text{ext},1}\leq V_{\text{ext},1,\mathrm{max}}\), \(V_{\text{ext},1,\mathrm{max}}=10\). The explored yield trade-off space from the FBA solutions is exemplified by Fig. 3.
Footnote 3: Steady-state mass balances give \(V_{\text{ext},3}=V_{5}-V_{6}\) and \(V_{5}=V_{4}-V_{6}\). Ensuring \(V_{\text{ext},3}\geq 0\) (irreversibility) leads to \(V_{5}\geq V_{6}\), hence \(V_{6}\leq 0.5V_{4}\).
We trained a feedforward neural network4 to obtain a machine-learning model as in Eq. (3). The coefficient of determination for the parity plots of the growth rates and product exchange rates was \(R^{2}=1.00\) (not shown), indicating a _perfect_ fit. The latter was expected as the data generated for the neural-network surrogate was obtained _in silico_ without any source of system uncertainty.
Footnote 4: One hidden layer with four neurons, rectified linear unit (ReLU) activation function. 15 % of the data was used for testing, 80 % of the remaining data was used for training, and 20 % for validation.
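The surrogate-fitting step can be reproduced schematically with scikit-learn as in the sketch below. Only the architecture and data splits of footnote 4 (one hidden layer of four ReLU units, 15 % test, then 80/20 train/validation) are taken from the text; the training data here are generated from a simple analytic placeholder rather than from the FBA solutions of Sec. 4.1.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the FBA-generated dataset: features are the two
# manipulatable fluxes (V4, V6); labels are exchange rates from a placeholder map.
rng = np.random.default_rng(0)
V4 = rng.uniform(0.0, 10.0, 2000)
V6 = rng.uniform(0.0, 0.5, 2000) * V4
X = np.column_stack([V4, V6])
Y = np.column_stack([V4 - V6,                 # product F exchange rate (placeholder)
                     V6,                      # product G exchange rate (placeholder)
                     0.05 * (10.0 - V4)])     # growth rate (placeholder)

# 15 % test split; of the remainder, 80/20 train/validation (footnote 4).
X_tmp, X_test, Y_tmp, Y_test = train_test_split(X, Y, test_size=0.15, random_state=1)
X_tr, X_val, Y_tr, Y_val = train_test_split(X_tmp, Y_tmp, test_size=0.20, random_state=1)

f_NN = MLPRegressor(hidden_layer_sizes=(4,), activation="relu",
                    max_iter=5000, random_state=1)
f_NN.fit(X_tr, Y_tr)
print("validation R^2:", f_NN.score(X_val, Y_val),
      " test R^2:", f_NN.score(X_test, Y_test))
```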
### _Hybrid macro-kinetic model_
The macro-kinetic dynamic model in batch for the metabolic system described in Fig. 2 was formulated as in Eqs. (5a)-(5c), with \(\mathbf{z}:=[A_{\text{ext}},D_{\text{ext}},F_{\text{ext}},G_{\text{ext}},Bio]^{ \mathsf{T}}\). Note that \(q_{i}\) is defined as a function of the manipulatable intracellular fluxes. Furthermore, we assumed Monod-type substrate limitation, whose parameters are widely known for several microorganisms and carbon sources. For the case study, we consider \(h\) in Eq. (5b) as:
\[h(A_{\text{ext}})=\frac{A_{\text{ext}}}{A_{\text{ext}}+k_{A}}, \tag{10}\]
where \(k_{A}=0.04\,\mathrm{mmol/L}\) is the assumed substrate affinity constant.
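A compact simulation of the resulting hybrid model, Eqs. (5) and (10), is sketched below with `scipy.integrate.solve_ivp`. The surrogate `f_NN` is replaced by a hand-written placeholder map (a trained network would be called in its place), and the two-stage flux profile and all rate values are illustrative assumptions rather than results from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_A = 0.04   # mmol/L, Monod affinity constant of Eq. (10)

def f_NN(V_man):
    """Placeholder for the surrogate of Eq. (3): maximum biomass-specific rates
    [q_A, q_D, q_F, q_G, mu] for given (V4, V6); a trained network would be used here."""
    V4, V6 = V_man
    q_A = -10.0                    # substrate uptake (negative: consumption)
    q_F, q_G = V4 - V6, V6         # product excretion rates (illustrative)
    q_D = 0.5 * (10.0 - V4)        # by-product tied to growth (illustrative)
    mu = 0.05 * (10.0 - V4)        # growth rate (illustrative)
    return np.array([q_A, q_D, q_F, q_G, mu])

def rhs(t, z, V_man):
    """Batch dynamics of Eq. (5): z = [A_ext, D_ext, F_ext, G_ext, Bio]."""
    A_ext, Bio = z[0], z[4]
    h = max(A_ext, 0.0) / (max(A_ext, 0.0) + k_A)      # Eq. (10)
    return Bio * f_NN(V_man) * h

z0 = [120.0, 0.0, 0.0, 0.0, 0.001]
# Two-stage profile: growth phase (V4 = 0) for 2 h, then production (V4 = 10).
sol1 = solve_ivp(rhs, (0.0, 2.0), z0, args=((0.0, 0.0),), max_step=0.01)
sol2 = solve_ivp(rhs, (2.0, 9.0), sol1.y[:, -1], args=((10.0, 3.0),), max_step=0.01)
print("final F_ext + G_ext =", sol2.y[2, -1] + sol2.y[3, -1])
```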
### _Dynamic optimization scenarios_
We performed several dynamic optimizations using the optimal control problem defined in (6), aiming to maximize the final concentration of the products of interest, denoted as \(J_{p}=F_{\text{ext}}(t_{f})+G_{\text{ext}}(t_{f})\). The optimization degree of freedom
Figure 3: Yield trade-offs space for production of (a) \(F\) against \(Bio\) and (b) \(G\) against \(Bio\).
Figure 2: Representative metabolic network with 11 irreversible fluxes \(V_{i}\). Orange arrows indicate extracellular/exchange fluxes. Dotted arrows denote the pathways leading to the products of interest, \(\mathbf{m}:=[A,B,C,D,E,F,G,ATP]^{\mathsf{T}}\), \(\mathbf{z}:=[A_{\text{ext}},D_{\text{ext}},F_{\text{ext}},G_{\text{ext}},Bio]^{ \mathsf{T}}\).
was \(\boldsymbol{V_{\text{man}}}(\cdot)|_{t_{0}}^{t_{h}}\) with \(t_{h}=t_{f}=9\,\mathrm{h}\). We furthermore included a constraint demanding a specific final product ratio:
\[r_{G}=\frac{G_{\text{ext}}(t_{f})}{G_{\text{ext}}(t_{f})+F_{\text{ext}}(t_{f})}. \tag{11}\]
The predicted optimal trajectories of manipulatable intracellular fluxes for six different scenarios, under \(\boldsymbol{z_{0}}=[120,0,0,0,0.001]^{\text{T}}\), are presented in Fig. 4-(a)-(b). These scenarios correspond to S.1: \(r_{G}=0.2\), S.2: \(r_{G}=0.3\), S.3: \(r_{G}=0.4\), S.4: \(r_{G}=0.6\), S.5: \(r_{G}=0.7\), and S.6: \(r_{G}=0.8\). The dynamic trajectories of relevant extracellular states for the latter scenarios are shown in Fig. 4-(c)-(h).
For all cases, the optimization predicted a two-stage fermentation profile, indicated by the trajectory of \(V_{4}\). During roughly the first 2 h, \(V_{4}\approx 0\) allowed for biomass accumulation in the batch (_growth phase_), followed by \(V_{4}\approx 10\,\mathrm{mmol}/\mathrm{g}/\mathrm{h}\), where all flux was diverted to the product pathway (_production phase_). Such a two-stage fermentation addresses the trade-off between biomass growth and product synthesis toward maximizing product volumetric productivity. \(V_{6}\) followed a more complex trajectory, finely tuned to comply with the demanded final product ratios.
_Remark._ We considered piece-wise constant manipulatable intracellular fluxes with intervals of \(0.01\,\mathrm{h}\) to approximate the _function_ \(\boldsymbol{V_{\text{man}}}(\cdot)\), which is otherwise infinite-dimensional. The constraint-based model was solved in CasADi [13]. The training of the neural network and the neural-network-supported dynamic optimizations were performed in HILO-MPC [14].
### _Metabolic cybernetic case_
Now, we examine a scenario where the manipulatable intracellular fluxes are controlled using a cybernetic approach. Furthermore, we introduce system uncertainty to the _plant_ by scaling down both \(h\) in Eqs. (5a)-(5b) and \(Bio(t_{0})\) by a factor of 0.97. The optimization/controller, on the other hand, employed the _nominal_ model as described in the previous sections.
Solving a shrinking-horizon MPC with piece-wise constant manipulatable intracellular fluxes over intervals of \(0.5\,\mathrm{h}\), demanding \(r_{G}=0.5\), leads to the results in Fig. 5. The increased sampling interval of \(0.5\,\mathrm{h}\) is more _realistic_ in terms of process monitoring, i.e., for updating the current state of the plant in a feedback loop. For the sake of generality, we plot the target manipulated flux values instead of the external inputs in Fig. 5. We assume that external inputs are available to modulate the manipulatable fluxes (cf. Eq. (7)).
Overall, MPC finely adjusted the trajectories of the manipulatable intracellular fluxes to account for the reduced rates and biomass initial concentration. Both the open-loop optimization and the MPC controller achieved an \(r_{G}=0.50\), as demanded. However, while the open-loop controller reached \(A_{\text{ext}}(t_{f})=45.8\,\mathrm{mmol}/\mathrm{L}\), MPC was able to achieve almost full substrate depletion \(A_{\text{ext}}(t_{f})=0.8\,\mathrm{mmol}/\mathrm{L}\), producing about 1.6-fold more net final product (\(F_{\text{ext}}(t_{f})+G_{\text{ext}}(t_{f})\)).
## 5 Conclusion
We presented a method that employs machine learning to bridge manipulatable intracellular metabolic fluxes with process exchange rates. This _surrogate_ model is trained with data obtained from solutions of a constraint-based metabolic model. Consequently, a _hybrid_ machine-learning-supported _dynamic_ model can be formulated, where the exchange process rates depend on the manipulatable intracellular fluxes. This approach not only enables dynamic extracellular state predictions given changes in the intracellular metabolism, but also facilitates dynamic optimization using intracellular fluxes explicitly as optimization degrees of freedom.
We applied this methodology to a representative metabolic network, highlighting common challenges in metabolic engineering such as the trade-off between product and biomass
Fig. 4: (a)-(b) Predicted optimal trajectories of the manipulatable intracellular metabolic fluxes for six different scenarios. (c)-(h) Predicted dynamic profiles of relevant extracellular states for the considered scenarios. Line styles distinguish \(A_{\text{ext}}\), \(F_{\text{ext}}\), \(G_{\text{ext}}\), and \(\mathrm{Bio}\times 10\).
yields, and the balancing of product ratios. Using this network, we demonstrated our modeling and optimization approach considering two intracellular metabolic fluxes as degrees of freedom for dynamic optimization. Additionally, we outlined a cybernetic scheme for real-time flux adjustments to counteract system uncertainties.
The proposed approach can contribute to the quest for developing advanced biotechnological processes. It facilitates efficient _in silico_ testing of diverse dynamic metabolic engineering strategies, actuation mechanisms, and control approaches. This is especially beneficial when constrained by limited information or system knowledge.
|
2308.15524
|
Interfacing Electron and Neutrino Quasielastic Scattering Cross Sections
with the Spectral Function in GENIE
|
Progress in neutrino-nucleus cross section models is being driven by the need
for highly accurate predictions for the neutrino oscillation community. These
sophisticated models are being developed within a microscopic description of
the nucleus with the goal of encompassing all reaction modes relevant for the
accelerator neutrino program. The disconnect between these microscopic models
and the event generators that will be used in the next generation of
experiments represents a critical obstacle that must be overcome in order to
precisely measure the neutrino oscillation parameters. To this end we have
developed a Fortran wrapper for lepton-nucleus quasielastic (QE) scattering
within the GENIE event generator as a proof of principle, with the broader goal
of creating an efficient pipeline for incorporating advanced theoretical models
in event generators. As a demonstration of this interface, we have implemented
the Spectral Function model into GENIE, offering a more complete description of
the nuclear ground state, as well as the ability to provide quantifiable
theoretical uncertainties. We validate this implementation and compare its
predictions against data and against QE models already available in GENIE.
|
Minerba Betancourt, Steven Gardiner, Noemi Rocco, Noah Steinberg
|
2023-08-29T18:00:01Z
|
http://arxiv.org/abs/2308.15524v1
|
Interfacing Electron and Neutrino Quasielastic Scattering Cross Sections with the Spectral Function in GENIE
###### Abstract
Progress in neutrino-nucleus cross section models is being driven by the need for highly accurate predictions for the neutrino oscillation community. These sophisticated models are being developed within a microscopic description of the nucleus with the goal of encompassing all reaction modes relevant for the accelerator neutrino program. The disconnect between these microscopic models and the event generators that will be used in the next generation of experiments represents a critical obstacle that must be overcome in order to precisely measure the neutrino oscillation parameters. To this end we have developed a Fortran wrapper for lepton-nucleus quasielastic (QE) scattering within the GENIE event generator as a proof of principle, with the broader goal of creating an efficient pipeline for incorporating advanced theoretical models in event generators. As a demonstration of this interface, we have implemented the Spectral Function model into GENIE, offering a more complete description of the nuclear ground state, as well as the ability to provide quantifiable theoretical uncertainties. We validate this implementation and compare its predictions against data and against QE models already available in GENIE.
+
Footnote †: preprint: FERMILAB-PUB-23-458-CSAID-ND-T
## I Introduction
The next generation of large accelerator-based neutrino oscillation experiments, namely DUNE and Hyper-K, will require an evolution in our understanding and modeling of neutrino-nucleus interactions in order to meet their design goals [1; 2]. These experiments aim to not only measure the standard neutrino oscillation parameters, but also to challenge the three neutrino paradigm and search for other physics beyond the Standard Model [3; 4]. This requires accurate predictions for all SM (and BSM) processes as well as a quantification of the associated systematic errors involved. These experiments rely on neutrino event generators for the above, which makes the accuracy of such generators of paramount importance. Fortunately, modern neutrino event generators have a plethora of new lepton-nucleus scattering data to benchmark against with higher precision [5; 6; 7], in new exclusive channels [8; 9; 10; 11; 12], with highly differential data [13], and on new targets [14; 15]. These new results have without a doubt shown that the empirical models used in many event generators cannot simultaneously describe the data across the landscape of experiments.
A common practice among generators is to stitch together disparate models, each describing a different reaction mechanism - quasielastic, two-particle two-hole (2p2h), resonance production, deep inelastic scattering. These are woven together to cover the phase space probed by neutrino experiments. The lack of a unified framework for each of the components leads to large ad hoc tunes being applied, interaction by interaction, to reach agreement with the data [16; 17; 18; 19]. These tunes tend to give inconsistent results across nuclear targets, and even across experiments using the same nuclear target. Additionally, such empirical treatments provide no way to rigorously assess the theoretical uncertainty associated with the underlying physics, obscuring the final systematic errors obtained on the sought after oscillation parameters.
Reaching the \(\mathcal{O}(1)\%\) precision in the neutrino cross section predictions needed for neutrino oscillation analyses will require basing our models on first-principles nuclear theory, a consistent treatment of the different reaction mechanisms relevant to describing experimental data, and the implementation of such models in our event generators to estimate signal and background predictions [20]. In this article, we will describe a new interface developed for the GENIE event generator [21; 22] which enables an efficient implementation of the Spectral Function model for the description of the quasi-elastic region. This interface can be easily adapted to accommodate other nuclear models.
The Spectral Function and extended factorization scheme provide a unified framework able to describe the different reaction mechanisms within the same model while providing an accurate description of nuclear dynamics. Furthermore, this framework allows one to consistently estimate the theoretical error of the calculations; preliminary studies in this direction have been carried out in Refs. [23; 24]. Section II discusses the motivation for and implementation of a theory interface, while Secs. III and IV give details on the factorization scheme and Spectral Functions used in the model. Finally, in Secs. VI and VII we validate and test the implementation against inclusive and exclusive electron- and neutrino-nucleus scattering data.
## II Theory Interface
As the number and sophistication of lepton-nucleus interaction models grow, one of the most time-consuming bottlenecks is the implementation of these models into event generators. Currently this must be done by a specialist with specific knowledge of a particular event generator. Models are typically added one at a time, often requiring both translation between programming languages and adaptation to existing software infrastructure.
An example of this is the SuSAv2 implementation in GENIE, in which the theoretical model is designed only for inclusive interactions, but the event generator must be able to deliver fully exclusive predictions [25].
The need for a less labor-intensive pipeline for theorists to contribute models to event generators has motivated development of simple interfaces for integrating external calculations [26]. In the GENIE neutrino event generator a first step in this direction was taken through the creation of a hadron tensor table framework [21]. In this framework pre-computed tables of hadronic response tensor elements, defined on a two-dimensional grid in energy and momentum transfer, are provided to GENIE for sampling of the final-state lepton kinematics. The hadron tensor can be contracted with a generic leptonic tensor to compute either charged lepton or neutrino scattering cross sections. The tensor table framework has been adopted for the inclusion of the CRPA QE model, SuSAv2 QE+2p2h model, and the Valencia 2p2h model [27; 28; 29; 30].
While the tensor table strategy allows for a speedy implementation of these models into GENIE, the framework has several drawbacks. The current GENIE format for tensor tables is inclusive, meaning that the outgoing nucleon kinematics must be sampled separately from those of the final-state lepton. This has the potential to lead to large disagreements in nucleon momentum and angle distributions [31]. Additionally, there are questions of consistency between the underlying nuclear ground state used to generate the tensor tables and the ground state used in GENIE to select target nucleons. Finally, there is no ability to manipulate the underlying theory parameters involved in the calculation of the hadron tensor elements themselves. This ability can be useful for studying systematic uncertainties, which must otherwise be estimated by less well-motivated methods.
As a first step towards a more flexible interface which addresses these challenges, we have removed the barrier between theorists' original codes and GENIE by creating a Fortran wrapper to directly interface these codes with the GENIE event generator. The choice to create a Fortran wrapper as opposed to any other programming language was based on a survey of many theorists in the neutrino-nucleus scattering community in which a majority of theorists had implementations of their models written in Fortran [32]. The first wrapper developed is specifically for predictions of QE scattering within the Impulse Approximation (IA). In this scheme, described further in Secs. III and IV, lepton-nucleus scattering is factorized into the incoherent sum of collisions with individual nucleons. The nuclear ground state is described by a probability density known as the Spectral Function (SF) which specifies the energy and momentum distributions of bound nucleons. Realistic Spectral Functions include both short- and long-range correlations between constituent nucleons. Given an input Spectral Function, our wrapper allows for a calculation of the hadronic response tensor from an external theory code written in Fortran. This capability can then be used by GENIE to produce events and compute differential cross sections. In the following sections we will give more detail about the factorization scheme used; contrast the Spectral Function against other, more simple nucleon momentum distributions; and validate and compare the model predictions against charged lepton and neutrino scattering data.
## III Factorization of electron and neutrino quasielastic scattering
We report the expression of the fully exclusive lepton-nucleus differential cross section yielding single-nucleon emission. Within the IA, which is expected to hold for momentum transfers \(|\mathbf{q}|>400\,\mathrm{MeV}\), this can be written in the form
\[\begin{split} d\sigma=\sum_{\tau=n,p}\frac{\mathcal{N}_{\tau}\mathcal{C}}{32\pi^{2}E_{\mathbf{p}}E_{\mathbf{p^{\prime}}}E_{\mathbf{k^{\prime}}}E_{\mathbf{k}}}P_{\tau}(\mathbf{p},E)\times\\ L_{\mu\nu}\tilde{A}_{\tau}^{\mu\nu}\delta(E_{\mathbf{k}}+E_{N_{i}}-E_{\mathbf{k^{\prime}}}-E_{\mathbf{p^{\prime}}})\,d^{3}\mathbf{p}\,dE\,d^{3}\mathbf{k^{\prime}}\,.\end{split} \tag{1}\]
In Eq. 1, \(k\) (\(k^{\prime}\)) and \(p\) (\(p^{\prime}\)) denote the four-momenta of the initial (final) lepton and initial (final) struck nucleon, respectively, and \(E_{\mathbf{p}}\) is the on-shell energy of a particle with 3-momentum \(\mathbf{p}\). The leptonic tensor is completely determined by the lepton kinematics and is given separately for charged leptons and neutrinos as
\[L_{\mu\nu}=\begin{cases}\mathrm{CC},\mathrm{NC}&8(k_{\mu}k^{\prime}_{\nu}+k^{\prime}_{\mu}k_{\nu}-k\cdot k^{\prime}g_{\mu\nu}\pm i\epsilon_{\mu\nu\rho\sigma}k^{\rho}k^{\prime\sigma})\\ \mathrm{EM}&2(k_{\mu}k^{\prime}_{\nu}+k^{\prime}_{\mu}k_{\nu}+[m_{\mathbf{k}}^{2}-k\cdot k^{\prime}]g_{\mu\nu})\,.\end{cases} \tag{2}\]
The upper sign (\(+\)) should be taken for neutrinos and the lower (-) for anti-neutrinos. We use the symbol \(m_{\mathbf{k}}\) to represent the mass of the particle with 3-momentum \(\mathbf{k}\). The coupling factor \(\mathcal{C}\) depends on the probe and is given by
\[\mathcal{C}=\begin{cases}\mathrm{CC}&G_{F}^{2}|\mathrm{V}_{\mathrm{ud}}|^{2}\\ \mathrm{NC}&G_{F}^{2}\\ \mathrm{EM}&2e^{4}/Q^{4}\,,\end{cases} \tag{3}\]
where \(Q^{2}=-q^{2}>0\).
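As a concrete illustration of Eq. 2, the sketch below builds the CC/NC leptonic tensor from the incoming and outgoing lepton four-momenta. It is an illustrative stand-alone snippet, not GENIE code; the metric signature \((+,-,-,-)\) and the convention \(\epsilon_{0123}=+1\) are assumptions of the example.

```python
import itertools
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

def levi_civita():
    """Rank-4 Levi-Civita tensor with eps[0,1,2,3] = +1 (convention assumed here)."""
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        sign, p = 1, list(perm)
        for i in range(4):                     # parity = (-1)^(number of inversions)
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    sign = -sign
        eps[perm] = sign
    return eps

def leptonic_tensor(k, kp, neutrino=True):
    """CC/NC leptonic tensor of Eq. 2 with lower indices (mu, nu).
    k, kp: contravariant four-momenta (E, px, py, pz) of the initial/final lepton."""
    k, kp = np.asarray(k, float), np.asarray(kp, float)
    k_lo, kp_lo = g @ k, g @ kp                # lower the indices
    kdotkp = k @ g @ kp
    sym = np.outer(k_lo, kp_lo) + np.outer(kp_lo, k_lo) - kdotkp * g
    antisym = np.einsum('mnrs,r,s->mn', levi_civita(), k, kp)
    sign = 1.0 if neutrino else -1.0           # + for neutrinos, - for anti-neutrinos
    return 8.0 * (sym + sign * 1j * antisym)
```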
\(\tilde{A}_{\tau}^{\mu\nu}\) is the nucleon-level response tensor for a bound nucleon with isospin \(\tau\). In the IA, \(\tilde{A}^{\mu\nu}\) is just the free nucleon response tensor but with the energy transfer \(\omega\) modified to account for the energy that must be given to the residual nucleus to free the bound nucleon,
\[\tilde{A}^{\mu\nu}=\langle p^{\prime}|j_{1b}^{\dagger\,\mu}(\tilde{q})|p\rangle\langle p|j_{1b}^{\nu}(\tilde{q})|p^{\prime}\rangle. \tag{4}\]
In the above we have assumed that the nuclear current operator is made up of only one-body currents, i.e.,
\[J^{\mu}_{\mathrm{nuclear}}=\sum_{i}j_{1b}^{\mu}\,. \tag{5}\]
The single nucleon Spectral Function \(P_{\tau}({\bf p},E)\) describes the distribution of momentum and removal energy for bound nucleons of isospin \(\tau\). Asymmetric nuclei like \({}^{40}\)Ar necessarily have different Spectral Functions for protons and neutrons, so it is important that Eq. 1 allow for this. For the case of symmetric nuclei we can ignore isospin breaking effects and easily set \(P_{p}({\bf p},E)=P_{n}({\bf p},E)\). The binding energy of each nucleon is given by \(\epsilon_{B}=M_{f}+m_{p}-M_{i}\) where \(M_{f(i)}\) is the mass of the final (initial) nucleus. In Eq. 4 we follow the DeForest prescription of using free nucleon spinors and form factors, evaluated using on shell nucleon four-momenta but a modified four momentum transfer [33]
\[\tilde{q}=p^{\prime}-(E_{\bf p},{\bf p})=q-(\epsilon_{B},{\bf 0})=(\tilde{ \omega},{\bf q})\,. \tag{6}\]
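In code, the DeForest prescription of Eq. 6 amounts to shifting the energy transfer while leaving the 3-momentum transfer untouched; a minimal sketch (variable names are ours) is:

```python
import numpy as np

def modified_q(q, eps_B):
    """Eq. 6: q_tilde = q - (eps_B, 0), with q = (omega, qx, qy, qz)
    and eps_B = M_f + m_p - M_i the binding energy of the struck nucleon."""
    q_tilde = np.array(q, dtype=float)
    q_tilde[0] -= eps_B
    return q_tilde
```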
The nucleon current operator is given by
\[j^{\mu}_{1b} =\gamma^{\mu}F_{1}^{V}(\tilde{Q}^{2})+i\sigma^{\mu\nu}\frac{\tilde {q}_{\nu}}{2M}F_{2}^{V}(\tilde{Q}^{2}) \tag{7}\] \[+\gamma^{\mu}\gamma^{5}F_{A}(\tilde{Q}^{2})+\frac{\tilde{q}^{\mu }}{M}\gamma^{5}F_{P}(\tilde{Q}^{2}).\]
Finally, the form factors used in Eq. 7 in the case of charged lepton scattering are related to those used in neutrino scattering by the Conserved Vector Current (CVC) hypothesis. This relationship allows vector form factors derived from precision electron scattering experiments to be readily implemented in neutrino-nucleus cross section predictions. Several parameterizations of the Dirac and Pauli form factors \(F_{1,2}^{p,n}\) exist in GENIE and can be configured by the user. For the axial form factor we consider the dipole model with \(M_{A}=1.0\) GeV [34], but the z-expansion parameterizations extracted from neutrino-Deuterium scattering [35] as well as from Lattice QCD [36; 37; 38] also exist in GENIE. For charged lepton scattering we set \(F_{A}=F_{P}=0\).
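For reference, the dipole axial form factor used here takes the simple form below (a sketch with \(M_{A}=1.0\) GeV as in the text; the normalization \(F_{A}(0)=g_{A}\approx-1.27\) is a commonly quoted value and not taken from GENIE's configuration).

```python
def dipole_axial_ff(Q2, MA=1.0, gA=-1.27):
    """Dipole axial form factor F_A(Q^2) = g_A / (1 + Q^2 / M_A^2)^2,
    with Q^2 in GeV^2 and M_A in GeV."""
    return gA / (1.0 + Q2 / MA**2) ** 2

# example: F_A at Q^2 = 0.5 GeV^2
print(dipole_axial_ff(0.5))
```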
This model simultaneously describes both charged lepton and neutrino-nucleus scattering. Comparisons against inclusive and semi-exclusive electron scattering data have already highlighted several modeling deficiencies in the current generation of neutrino event generators [39; 7].
## IV Spectral Function
The Spectral Function of a nucleon with isospin \(\tau\in\{p,n\}\) and momentum \({\bf k}\) can be written as
\[P_{\tau}({\bf k},E) =\sum_{n}|\langle 0|\,[\,|k\rangle\,|\Psi_{n}^{A-1}\rangle\,]\,|^{2}\,\delta(E+E_{0}-E_{n}^{A-1})\] \[=P_{\rm MF}({\bf k},E)+P_{\rm corr}({\bf k},E)\,, \tag{8}\]
where \(|k\rangle\) is the single-nucleon, plane-wave state, \(|0\rangle\) is the ground state of the Hamiltonian with energy \(E_{0}\), while \(|\Psi_{n}^{A-1}\rangle\) and \(E_{n}^{A-1}\) are the energy eigenstates and eigenvalues of the remnant nucleus with \((A-1)\) particles. The Spectral Function in Eq. 8 is a sum of a mean field (MF) and a correlation (corr) term with distinct energy dependence. Both exclusive and inclusive electron scattering experiments have shown that the correlation piece dominates for momenta above \(k_{f}\), is essentially universal, and comprises approximately 20% of the single nucleon strength [40; 41; 42; 43; 44; 45]. The momentum distribution of the initial nucleon is obtained by integrating the Spectral Function over the removal energy
\[n_{\tau}(k)=\int dE\,P_{\tau}({\bf k},E)\,. \tag{9}\]
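For a Spectral Function provided as a table on a \((|\mathbf{p}|,E)\) grid (as in Sec. V), Eq. 9 reduces to a sum over the energy axis; a minimal sketch, with a hypothetical grid layout, is:

```python
import numpy as np

def momentum_distribution(E_grid, P_table):
    """n(k) from Eq. 9: integrate a tabulated P(|p|, E) over removal energy E.
    P_table[i, j] = P(p_grid[i], E_grid[j]); rectangle rule on a uniform E grid."""
    dE = E_grid[1] - E_grid[0]
    return P_table.sum(axis=1) * dE
```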
Nuclear models currently included in GENIE are based on either the Relativistic Fermi Gas (RFG) or the Local Fermi Gas (LFG), the latter of which uses a density-dependent Fermi momentum. Ad-hoc modifications of the models include fixed high momentum tails stitched onto the original RFG momentum distributions [46] or (in the LFG) a shift in strength from \(k<k_{f}\) to \(k>k_{f}\)[21], mimicking a correlation tail. In either case this leads to an incorrect relationship between the nucleon momentum and removal energy. Spectral Functions for finite nuclei have been derived from experiment and different theoretical approaches (QMC, LDA, SCGF, CBF) [47; 48; 49; 50; 41]. In this work we utilize the \({}^{12}\)C and \({}^{16}\)O Spectral Functions obtained within the Correlated Basis Function (CBF) approach, where the MF piece has been fit to \((e,e^{\prime}p)\) scattering data and the correlation contribution is computed using the Local Density Approximation (LDA) [48]. We also assume that the SF for protons and neutrons are the same, and we ignore any isospin breaking effects. We note that the availability of several Spectral Functions from different underlying nuclear models is an advantage as it presents an opportunity to quantify related theoretical uncertainties.
Figure 1 displays the initial-state nucleon momentum distribution for true QE events on \({}^{12}\)C produced using GENIE and the RFG, LFG, and Spectral Function representations of the target nucleus. It is clearly visible that the normalization of the RFG in the mean field (low momentum) region is much larger than that of the LFG and SF. The LFG completely lacks the correlation tail, which is put in by hand in the RFG but exists _a priori_ in the SF. Measurements from MINERvA and T2K of single transverse kinematic imbalance observables have shown that the largest disagreement between models exists in this SRC-dominated region of \(200<p_{n}<700\) MeV. The SF initial state agrees better with the data in this region than nuclear models based on the RFG and LFG [51; 52].
## V GENIE implementation
The first step of the GENIE implementation involves some minor code adjustments to allow use of a precomputed SF provided in the form of a data file. Each SF
data file contains a table of \(|\mathbf{p}|,E,P(|\mathbf{p}|,E)\) triples arranged on a regular grid. The SF is normalized so that
\[\begin{split}&\int P(\mathbf{p},E)d^{3}\,\mathbf{p}\,dE\\ &\approx 4\pi\Delta|\mathbf{p}|\Delta E\sum_{ij}|\mathbf{p}_{i}|^{2 }P(|\mathbf{p}_{i}|,E_{j})\\ &=\sum_{ij}P_{\text{bin\,ij}}=1,\end{split} \tag{10}\]
where \(|\mathbf{p}_{i}|\) and \(E_{j}\) are evaluated at the midpoint of each bin on the grid. The values of \(|\mathbf{p}|\) and \(E\) are sampled for an initial nucleon using a two-dimensional histogram like the one shown in Fig. 2. The bins of this histogram have been filled with the same probability mass value \(P_{\text{bin\,ij}}\) from Eq. 10. To approximate the SF, a 2D bin is sampled according to the probability mass distribution, and then specific values of \(|\mathbf{p}|\) and E are chosen uniformly within its boundaries. Finally, a direction for the initial nucleon is chosen isotropically.
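The sampling procedure just described can be sketched as follows; this is a simplified illustration rather than the actual GENIE code, and the array names, grid spacings \(\Delta|\mathbf{p}|\) and \(\Delta E\), and bin-midpoint arrays are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng()

def build_bin_probs(p_mid, P_table, dp, dE):
    """Probability mass per (|p|, E) bin, following Eq. 10."""
    P_bin = 4.0 * np.pi * dp * dE * (p_mid[:, None] ** 2) * P_table
    return P_bin / P_bin.sum()              # enforce normalization to 1

def sample_nucleon(p_mid, E_mid, P_bin, dp, dE):
    """Draw (p, E): pick a bin by its probability mass, choose |p| and E
    uniformly inside it, then draw an isotropic direction."""
    flat = rng.choice(P_bin.size, p=P_bin.ravel())
    i, j = np.unravel_index(flat, P_bin.shape)
    p_mag = rng.uniform(p_mid[i] - dp / 2, p_mid[i] + dp / 2)
    E = rng.uniform(E_mid[j] - dE / 2, E_mid[j] + dE / 2)
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    p_vec = p_mag * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return p_vec, E
```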
New code was also added to GENIE (in the form of a C++ class called UnifiedQELPXSec) to compute the quasielastic differential cross section according to the expression from Eq. 1. The new code takes advantage of the flexibility of the formalism in Sec. III to simultaneously describe electron and neutrino scattering. Based on the projectile of interest, GENIE sets up any necessary constants, form factors, or other calculation ingredients from GENIE internals, minimizing the need for code duplication. Utilizing the same model and code for charged lepton and neutrino scattering allows for parameter constraints obtained from charged lepton scattering experiments to be consistently and immediately applied to neutrino scattering (as well as vice versa). Our implementation utilizes the wrapper described in Sec. II to compute the nucleon-level response tensor of Eq. 4 using an external Fortran code. The results are then fed back to GENIE to compute the differential cross section.
In order to remove the energy-conserving delta function of Eq. 1, we utilize a change of variables by working within the center of momentum (CM) frame of the initial lepton and the struck nucleon. In this reference frame, a formal replacement can be made
\[\begin{split}&\delta(E_{\mathbf{k}}+E_{N_{i}}-E_{\mathbf{k}^{ \prime}}-E_{\mathbf{p}^{\prime}})\,d^{3}\mathbf{k}^{\prime}\rightarrow\\ &\frac{\sqrt{1+(\gamma^{2}-1)(1-\cos^{2}\theta_{0})}}{|\mathbf{v }_{\mathbf{k}^{\prime}}-\mathbf{v}_{\mathbf{p}^{\prime}}|}|\mathbf{k}^{\prime} _{0}|^{2}\,d\phi_{0}\,d\cos\theta_{0}\,,\end{split}\]
where \(\mathbf{k}^{\prime}_{0}\) is the final lepton 3-momentum in the CM frame, \(\gamma\) is the Lorentz factor for the boost between lab and CM frames, and \(\mathbf{v}_{\mathbf{k}^{\prime}}\) (\(\mathbf{v}_{\mathbf{p}^{\prime}}\)) is the lab-frame velocity of the final lepton (final nucleon). The CM frame final lepton scattering angles \(\theta_{0}\) and \(\phi_{0}\) are measured between \(\mathbf{k}^{\prime}_{0}\) and \(\mathbf{v}\), the velocity of the CM frame as measured in the lab frame. This choice of variables is convenient for Monte Carlo (MC) sampling, and is also done for existing QE simulations in recent releases of GENIE.
By using the Spectral Function as a normalized probability density, we can integrate over the 4D phase space of the initial nucleon using MC methods. The differential cross section can be computed as
\[\frac{d\sigma}{d\cos\theta_{0}d\phi_{0}} =\int P(\mathbf{p},E)F(\mathbf{p},E)dEd^{3}\mathbf{p}\] \[=\langle F(\mathbf{p},E)\rangle\approx\frac{1}{N}\sum_{k=1}^{N}F( \mathbf{p}_{k},E_{k})\,,\]
where \(F(\mathbf{p}_{k},E_{k})\) is basically the cross section of Eq. 1 with the Spectral Function factored out. Nucleon variables are drawn for each trial from the Spectral Function, and the lepton angles are easily integrated over.
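Schematically, the MC average above can be coded as follows, where `sample_nucleon` stands for a sampler of the Spectral Function (such as the sketch in Sec. V) and `F` for the integrand of Eq. 1 with the Spectral Function factored out; both are placeholders, not GENIE internals.

```python
def mc_xsec(F, sample_nucleon, n_trials=100_000):
    """Estimate d(sigma) / dcos(theta_0) dphi_0 = <F(p, E)> by averaging F
    over nucleon kinematics (p, E) drawn from the Spectral Function."""
    total = 0.0
    for _ in range(n_trials):
        p_vec, E = sample_nucleon()
        total += F(p_vec, E)
    return total / n_trials
```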
While the above constitutes a novel implementation of the Spectral Function into GENIE, we must mention
Figure 1: Initial nucleon momentum distributions for \({}^{12}\)C for models using the Relativistic Fermi Gas (RFG: Blue), Local Fermi Gas (LFG: Black), and Spectral Function (SF: Red). Momentum distributions have been obtained from 100,000 simulated electron \({}^{12}\)C scattering events at \(E_{\text{beam}}=1\,\text{GeV}\) in GENIE.
Figure 2: Two dimensional probability mass distribution of initial nucleon momentum and removal energy for the \({}^{12}\)C SF implemented in GENIE. S and P shells are visible at low momentum and removal energy.
past work on another numerical implementation called GENIE + \(\nu T\)[53]. This work focused on inclusive observables and studied the corresponding shift in extracted oscillation parameters when the SF is used as the base model. While the origin of the physics model in the GENIE + \(\nu T\) implementation is the same as in the present work, several differences must be noted. First, the kinematic sampling is done differently. In Ref. [53], values of \(Q^{2}\) are generated for sampling the lepton kinematics, as was typical in GENIE releases before major version 3; our implementation generates \((\cos\theta_{0},\phi_{0})\) pairs which fully retain correlations with the outgoing nucleon. This enables our implementation to deliver exclusive cross section predictions needed for analyzing the data of current and future oscillation experiments using liquid argon time projection chambers. The goals of our implementation are also different. First and foremost, the present work serves as a test case for our Fortran wrapper and a verification that the implementation is done correctly. Our SF implementation is also part of a larger effort to improve lepton-nucleus scattering models in event generators, with a hope to develop a consistent scheme which encompasses all reaction mechanisms. One alternative avenue is the development of the ACHILLES event generator, also based on the SF model, which aims to root each portion of the event-generation pipeline in microscopic nuclear theory [54; 55].
## VI Validation
As a validation of our implementation, we first compare our GENIE SF results against inclusive electron scattering data and standalone calculations (i.e., outside of any event generator) using the same Spectral Function and form factors for inclusive neutrino scattering. In Fig. 3 we show predictions using the GENIE SF model against inclusive electron scattering data on \({}^{12}\)C for beam energies of 0.961 and 1.108 GeV, both taken at an electron scattering angle of \(37.5^{\circ}\)[56]. We see here that the peak locations and widths are well described by the SF model, though final state interactions will slightly shift the peaks towards lower energy transfer through interference effects [57]. The GENIE SF model slightly underpredicts the height of the peaks, but this is to be expected. The inclusion of two-body currents in Eq. 5 leading to multi-nucleon knockout increases the predicted cross section especially at energy transfers beyond the QE peak and before resonance production. Furthermore the interference between one- and two-body currents leading to single nucleon knockout is known to increase the cross section at the QE peak [58; 59; 60]. Given the missing interaction mechanisms just mentioned, the satisfactory agreement between the GENIE SF predictions and the inclusive data is a useful validation of the implementation. Below in Fig. 4 we also show inclusive double differential muon neutrino cross sections at \(E_{\nu}=1\,\)GeV at fixed muon scattering angles of \(20^{\circ},30^{\circ}\), and \(40^{\circ}\). Predictions from the GENIE SF match the standalone calculations (labeled "Rocco SF" in the figure), again validating the implementation.
## VII Exclusive Cross Section Predictions
As discussed earlier, exclusive cross sections are a more powerful discriminator between different neutrino-nucleus cross section models. To this end we compare the GENIE SF implementation to both the SuSAv2 QE and G2018 QE models implemented in GENIE. We include only the quasielastic components of each model for
Figure 3: Inclusive double differential cross sections vs. energy transfer from \(e^{-}\)–\({}^{12}\)C scattering at \(\theta_{e^{\prime}}=37.5^{\circ}\) for beam energies of 0.961 (red) and 1.108 (blue) GeV. Data are shown as points in the same colors, with shaded bands indicating statistical plus systematic errors.
Figure 4: Inclusive double differential cross sections vs. energy transfer from \(\nu_{\mu}\)–\({}^{12}\)C QE scattering at 1 GeV and several muon scattering angles: 20, 30, and 40 degrees. Solid lines are the GENIE SF implementation and the dashed lines are predictions from the SF model of Noemi Rocco.
consistency in the comparisons. We begin with exclusive electron scattering measurements from e4\(\nu\), where the \((e,e^{\prime}p)_{0\pi}\) topology has been measured on a variety of targets and across multiple beam energies [7]. We focus on transverse kinematic imbalance variables (TKI) which are sensitive to different reaction mechanisms and are independent of incident lepton energy [52; 61; 62]. The differential cross section in transverse momentum defined as
\[\mathbf{P}_{\mathrm{T}}=\mathbf{P}_{\mathrm{T}}^{\ell^{\prime}}+\mathbf{P}_{\mathrm{T}}^{\mathrm{p}}\,, \tag{11}\]
for 1.161 GeV electrons on \({}^{12}\)C, compared to predictions from G2018, SuSAv2, and the SF, is shown in Fig. 5. Quasielastic scattering has been shown to be the dominant component at low \(\mathrm{P}_{\mathrm{T}}\), where Fermi motion dictates the normalization and width of the cross section. Inelastic contributions, NN correlations, and significant intra-nuclear re-scattering or re-absorption of the outgoing hadronic system (FSI) contribute as a broad tail at higher values of \(\mathrm{P}_{\mathrm{T}}\), above the Fermi momentum.
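For completeness, the transverse imbalance of Eq. 11 is computed from the measured final-state momenta by projecting out the components transverse to the beam; a minimal sketch (beam along \(z\), names are illustrative) is:

```python
import numpy as np

def delta_pT(p_lepton, p_proton, beam_dir=(0.0, 0.0, 1.0)):
    """Eq. 11: vector sum of the final-state lepton and proton momentum
    components transverse to the beam direction."""
    p_lepton = np.asarray(p_lepton, dtype=float)
    p_proton = np.asarray(p_proton, dtype=float)
    n = np.asarray(beam_dir, dtype=float)
    n /= np.linalg.norm(n)
    pT_lep = p_lepton - np.dot(p_lepton, n) * n
    pT_pro = p_proton - np.dot(p_proton, n) * n
    return pT_lep + pT_pro
```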
Figure 5 shows that the G18 model significantly overpredicts the normalization at low \(\mathrm{P}_{\mathrm{T}}\). The SF model shows excellent agreement with the data at low \(\mathrm{P}_{\mathrm{T}}\), where it should be remarked that, since the simulations include only the QE interaction, the predictions should _always_ undershoot the data. The lowest \(\mathrm{P}_{\mathrm{T}}\) bin shows a mild overprediction from the SuSAv2 model, but otherwise SuSAv2 describes the data well. The cross section serves as a proxy for the initial nucleon momentum, as can be seen from the similarities in shape and normalization between the cross sections in Fig. 5 and the momentum distributions of Fig. 1.
Moving from electron scattering to neutrinos, we next examine CCQE-like (also known as CC0\(\pi\)) scattering from the MINERvA experiment. The ability of a model to simultaneously describe electron and neutrino scattering is crucial to leveraging the extremely high precision charged lepton data available. To this end we first examine data from the Low Energy (LE) period of MINERvA, with an average neutrino energy \(\langle E_{\nu}\rangle=3\,\mathrm{GeV}\). We focus on another derived TKI variable, \(p_{n}\), which is an estimator for the initial neutron momentum under the CCQE hypothesis [61]. In Fig. 6 we show the measured \(p_{n}\) distribution at MINERvA against QE predictions from the SuSAv2 and SF models.
The predicted \(p_{n}\) distribution from the SF matches the data very well in width and peak position and is slightly narrower than the SuSAv2 prediction, which reflects the broader initial nucleon momentum distribution of the LFG used by SuSAv2 in GENIE as seen in Fig. 1.
The same analysis measured the leading proton scattering angle spectrum, which is sensitive to FSI but also to the way in which the final state nucleon phase space is sampled [31]. As the SuSAv2 implementation is inclusive there is no guarantee that the final state nucleon kinematics will be correctly generated, as opposed to the fully exclusive nature of the SF implementation. Figure 7 shows the proton scattering angle spectrum, with the SuSAv2 QE prediction being significantly larger and slightly broader at the QE peak than the SF prediction. As 2p2h and other inelastic channels will contribute over the entire range of proton scattering angles, the SuSAv2 prediction leads to an over-estimation of the cross section.
The final MINERvA data set for comparison is the triple differential CCQE-like measurement from the Medium Energy period, with an average neutrino energy of 6 GeV [13]. In this analysis, data are binned in muon longitudinal and transverse momentum as well as \(E_{\mathrm{avail}}\), defined by
\[E_{\mathrm{avail}}=\sum T_{\mathrm{proton}}+\sum T_{\pi^{\pm}}+\sum E_{ \mathrm{particle}}\,. \tag{12}\]
Figure 5: Differential cross section in \(p_{T}\) from (e,e’p)\({}_{0\pi}\) events for 1.159 GeV \(e^{-}\)–\({}^{12}\)C scattering. Simulation predictions from three different GENIE models, where only true QE events are shown.
Figure 6: MINERvA differential cross section in \(p_{n}\) (initial neutron momentum) with data (black points) compared to the SF (red) and SuSAv2 (green) predictions. Data from [61].
In the above, \(T_{\rm proton}\) is the proton kinetic energy, \(T_{\pi^{\pm}}\) is the charged pion kinetic energy, and \(E_{\rm particle}\) is the total energy of any other final state particle except neutrons [13]. This kinematic variable when summed with the lepton energy is used as an estimator for the true neutrino energy by experiments like NOvA and MicroBooNE. In this measurement the signal is \(0\pi\) events, so \(E_{\rm avail}\) is just the sum of the kinetic energies of all detected protons.
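A literal translation of Eq. 12 is given below (a sketch; the particle representation as `(pdg_code, kinetic_energy, total_energy)` tuples is an assumption of the example).

```python
def e_avail(particles):
    """E_avail of Eq. 12: kinetic energy of protons and charged pions,
    total energy of any other particle, with neutrons excluded.
    `particles` is an iterable of (pdg_code, kinetic_energy, total_energy)."""
    total = 0.0
    for pdg, T, E in particles:
        if pdg == 2112:                        # skip neutrons
            continue
        if pdg == 2212 or abs(pdg) == 211:     # protons and charged pions
            total += T
        else:
            total += E
    return total
```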
Figure 8 shows this triple differential cross section for \(1.5\,{\rm GeV}<p_{\parallel}<3.5\,{\rm GeV}\) with QE predictions from the SF, SuSAv2, and G2018 models. As this sample contains high-energy neutrinos, there is again the expectation that each simulation's prediction should undershoot the data, which we see for each of the three models. It is interesting to note that even though the three included models have vastly different theoretical underpinnings, each leads to a similar prediction in the QE region.
Our last data comparison is with T2K data on oxygen. An oxygen Spectral Function computed using CBF theory has also been provided in GENIE, enabling the SF model to be validated against multiple nuclear targets [48]. We compare against double differential cross sections in muon momentum and cosine of the muon scattering angle for \(CC0\pi\) events on oxygen from T2K [6]. The lower beam energy of T2K, which peaks at around 600 MeV, means that inelastic contributions from resonance production and DIS are smaller than in other neutrino experiments. In Figure 9 we compare predictions from the SF and SuSAv2 models in 6 different bins of \(\cos\theta_{\mu}\). SF predictions are consistently below the data, as is to be expected since 2p2h interactions are still significant at these kinematics, together with small contributions from resonance production. This is to be compared to the SuSAv2 QE predictions, which are already close to the data and even overshoot it at forward muon angles.
## VIII Discussion
The growing quantity, quality, and dimensionality of charged lepton and neutrino-nucleus scattering data present increasingly strong constraints on event generator predictions. To meet the precision simulation needs of future experiments, an efficient pathway for the implementation of more realistic, theory driven models which start from a microscopic picture of the nucleus will be invaluable. We have highlighted some practical difficulties in including such new models in neutrino event generators like GENIE, and we have created an interface for Fortran-based QE cross-section calculations as a first step to overcome these difficulties.
We have also discussed some of the limitations of the available models in GENIE, focusing on the highly approximate representations of the nuclear ground state currently available. We have shown how the Spectral Function provides a more complete picture of the nucleus with the correct relationship between nucleon momentum and removal energy, as well as naturally including correlations between nucleons. This more complex model for the nuclear ground state leads to marked differences in exclusive cross-section predictions as can be seen in both electron and neutrino scattering.
Finally, the inclusion of the Spectral Function model within GENIE allows for multiple avenues for continuing improvement. The first is the ability of this model, and the code as implemented, to predict electron and neutrino scattering cross sections simultaneously. This will allow information gathered from precision charged lepton scattering experiments to be more effectively used to refine neutrino scattering predictions. While this work is limited to the quasielastic region, it is important to mention that the SF formalism has been generalized to include two-body current and pion-production mechanisms [63]. However, the process of extending the interface to encompass these additional contributions is not straightforward and would necessitate further advancements beyond the present scope.
In contrast to similar previous efforts based on pre-computed hadron tensors, the interface we have devised provides greater ability to manipulate input parameters and study their impact on the simulation predictions. In particular, our interface allows for estimation of theoretical uncertainties through direct variations of the adopted nucleon form factors and the use of multiple Spectral Function tables calculated using different nuclear model assumptions [24; 64].
## IX Acknowledgments
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics and Fermilab LDRD awards (N.S).
Figure 7: MINERvA differential cross section in \(\theta_{p}\) (proton scattering angle) with data (black points) compared to the SF (red) and SuSAv2 (green) predictions. Data from [61].
|
2306.03832
|
Stochastic Principal-Agent Problems: Efficient Computation and Learning
|
We introduce a stochastic principal-agent model. A principal and an agent
interact in a stochastic environment, each privy to observations about the
state not available to the other. The principal has the power of commitment,
both to elicit information from the agent and to provide signals about her own
information. The players communicate with each other and then select actions
independently. Each of them receives a payoff based on the state and their
joint action, and the environment transitions to a new state. The interaction
continues over a finite time horizon. Both players are far-sighted, aiming to
maximize their total payoffs over the time horizon. The model encompasses as
special cases extensive-form games (EFGs) and stochastic games of incomplete
information, partially observable Markov decision processes (POMDPs), as well
as other forms of sequential principal-agent interactions, including Bayesian
persuasion and automated mechanism design problems.
We consider both the computation and learning of the principal's optimal
policy. Since the general problem, which subsumes POMDPs, is intractable, we
explore algorithmic solutions under hindsight observability, where the state
and the interaction history are revealed at the end of each step. Though the
problem becomes more amenable under this condition, the number of possible
histories remains exponential in the length of the time horizon, making
approaches for EFG-based models infeasible. We present an efficient algorithm
based on the inducible value sets. The algorithm computes an
$\epsilon$-approximate optimal policy in time polynomial in $1/\epsilon$.
Additionally, we show an efficient learning algorithm for an episodic
reinforcement learning setting where the transition probabilities are unknown.
The algorithm guarantees sublinear regret $\tilde{O}(T^{2/3})$ for both players
over $T$ episodes.
|
Jiarui Gan, Rupak Majumdar, Debmalya Mandal, Goran Radanovic
|
2023-06-06T16:20:44Z
|
http://arxiv.org/abs/2306.03832v3
|
# Sequential Principal-Agent Problems with Communication: Efficient Computation and Learning
###### Abstract
We study a sequential decision making problem between a principal and an agent with incomplete information on both sides. In this model, the principal and the agent interact in a stochastic environment, and each is privy to observations about the state not available to the other. The principal has the power of commitment, both to elicit information from the agent and to provide signals about her own information. The principal and the agent communicate their signals to each other, and select their actions independently based on this communication. Each player receives a payoff based on the state and their joint actions, and the environment moves to a new state. The interaction continues over a finite time horizon, and both players act to optimize their own total payoffs over the horizon. Our model encompasses as special cases stochastic games of incomplete information and POMDPs, as well as sequential Bayesian persuasion and mechanism design problems. We study both computation of optimal policies and learning in our setting. While the general problems are computationally intractable, we study algorithmic solutions under a _conditional independence_ assumption on the underlying state-observation distributions. We present a polynomial-time algorithm to compute the principal's optimal policy up to an additive approximation. Additionally, we show an efficient learning algorithm in the case where the transition probabilities are not known beforehand. The algorithm guarantees sublinear regret for both players.
## 1 Introduction
Many problems in economic theory involve sequential reasoning between multiple parties with asymmetric access to information (Ross, 1973; Jensen and Meckling, 1976; Bolton and Dewatripont, 2004; Ljungqvist and Sargent, 2018). For example, in contract theory, one party (the principal) delegates authority and decision-making power to another (the agent), and the goal is to come up with mechanisms to ensure that the agent's actions are aligned with the principal's utilities. This broad class of _principal-agent problems_ leads to many research questions about information design and optimal strategic behaviors, with broad-ranging applications from governance and public administration to e-commerce and financial services. In particular, _algorithmic_ techniques for optimal decision making and learning are crucial for obtaining effective solutions to real-world problems in this domain.
In this paper, we consider a general framework for sequential principal-agent problems with communication and study algorithmic problems related to the computation and learning of optimal solutions under the framework. In our framework, the interaction between the principal and the agent takes place in a stochastic environment over multiple time steps. In each step, both players are privy to information not available to the other and make partial observations about the environment. The players can communicate their private information to influence
each other and, based on this communication, take actions that jointly influence the state of the environment. Each player has their own payoff, and we study the _non-zero sum case_ where the payoffs need not sum to zero. The players are _far-sighted_: their goal is to maximize their expected total payoffs over the entire horizon of the game. Technically, these are stochastic games with partial information on both sides (Aumann and Maschler, 1995; Mertens et al., 2015).
In the spirit of the principal-agent problem, we assume that the principal has the power of _commitment_, both to elicit information from the agent and to provide signals about her own information to coordinate their joint actions. A commitment is a binding agreement for the principal to act according to the committed strategy; technically, we have a Stackelberg game (Von Stackelberg, 2010). The agent acts optimally in response to the commitment. As a result, our model incorporates both sequential information design (or Bayesian persuasion (Kamenica and Gentzkow, 2011)) (Gan et al., 2022; Wu et al., 2022) and sequential mechanism design (Zhang and Conitzer, 2021) as special cases, as well as several other models of stochastic games (Shapley, 1953) with commitment, and POMDPs.
We focus on a finite time horizon and the total reward setting. We investigate both the _full information_ setting, where all parameters of the underlying game are known to both players and their goal is to design optimal policies, and the _partial information_ setting, where the parameters are not given beforehand and have to be learned by interacting in the environment. Based on these two settings, we design algorithms to compute or learn the principal's optimal policy, which is in general history-dependent.
### Our Results
Since the general setting of our model is computationally intractable--the problem is PSPACE-hard even for the special case of finite-horizon POMDPs (Papadimitriou and Tsitsiklis, 1987)--we focus on a special case with a _decomposability assumption_ on the underlying state-observation distributions. Under this assumption, the current state is independent of the history conditioned on the players' observations at the current time step. The model under the decomposability assumption is fairly expressive. It encompasses settings where the state is fully observable to the players (e.g., stochastic games or repeated games) or is revealed at the end of each time step, as well as settings where observations can be interpreted as external parameters generated based on internal Markovian states (e.g., (Gan et al., 2022; Wu et al., 2022)).
Our first main result is a polynomial-time algorithm to compute the principal's optimal policy based on the construction of inducible payoff sets. Our algorithm computes \(\epsilon\)-optimal solutions up to an additive approximation \(\epsilon\). The key technical difficulty in our algorithm is to characterize the one-step solutions in a dynamic programming formulation as projections of convex polytopes that can be efficiently approximated up to an additive factor.
Next, we study the partial-information case. We consider a typical setting in the reinforcement learning (RL) literature where the players are not given the transition probabilities beforehand and need to learn them through interaction. The setting is episodic and consists of \(T\) episodes. As our second main result, we present a computationally efficient algorithm that guarantees sublinear \(\widetilde{\mathcal{O}}(\operatorname{poly}(M,H)\cdot T^{2/3})\) regret for both players, where \(M\) is the size of the model and \(H\) is the horizon length of each episode. The bound matches a \(\Omega(T^{2/3})\) lower bound by Bernasconi et al. (2022) for a sequential persuasion model. Our learning algorithm uses _reward-free exploration_ from the recent RL literature, and relies on efficient computation of optimal policies that are _approximately_ incentive compatible. The latter is achieved via a variant of our algorithm for the full-information case.
### Related Work
The principal-agent problem is a well-known concept in economics studies (see, e.g., Ross, 1973, Myerson, 1982, Milgrom and Roberts, 1986, Makris, 2003). Models featuring sequential interactions and communications have also been proposed and studied (Myerson, 1986, Forges, 1986). Our work follows the same modeling approach as these early works and generalizes the one-shot versions of the respective types of principal-agent problems, including in particular information design (Kamenica and Gentzkow, 2011), mechanism design (Sandholm, 2003), and models that are a mixture of both (Myerson, 1982, Castiglioni et al., 2022, Gan et al., 2022c). In the more recent literature, there has been a growing interest in the algorithmic aspects of sequential principal-agent problems. The computation and learning of sequential extensions of various forms of principal-agent interactions have been studied. These include models of sequential information design (e.g., Celli et al., 2020, Gan et al., 2022b, a, Wu et al., 2022, Bernasconi et al., 2022), mechanism design (e.g., Zhang and Conitzer, 2021, Cacciamani et al., 2023), and more broadly, various types of sequential Stackelberg games (e.g., Letchford and Conitzer, 2010, Letchford et al., 2012, Bosansky et al., 2017, Harris et al., 2021, Collina et al., 2023). Our model directly generalizes many of these models in the finite-horizon setting with far-sighted players, where our algorithms directly apply and our results are complementary to results in the respective works. To name a few, Gan et al. (2022b) studied the infinite-horizon information design model and showed that the general setting with a far-sighted receiver (agent) is inapproximable, though an efficient algorithm exists when the receiver is myopic. The work leaves open the possibility of designing efficient algorithms for the corresponding finite-horizon setting of the same model, which we consider in this paper. Bernasconi et al. (2022) also studied a similar model based on extensive-form games (EFGs) and presented efficient computation and learning algorithms. EFGs are less expressive than POMDP-based models in the sense that the number of possible trajectories of the game is usually bounded by the size of the problem (i.e., the game tree), whereas in the POMDP model, this can grow exponentially in the size of the problem. Hence, the algorithms for EFG-based models do not translate to efficient algorithms for ours. Besides the above two works, Wu et al. (2022) also studied the learning problems in the information design model but in the case of a myopic agent. In the domain of mechanism design, Zhang and Conitzer (2021) studied an infinite-horizon model that is a POMDP for the principal and an MDP for the agent. They presented a linear programming algorithm to compute the principal's optimal mechanism, though the formulation is exponentially large in the input size of the problem.
## 2 Preliminaries
A principal (\(\mathsf{P}\)) and an agent (\(\mathsf{A}\)) interact in a finite-horizon partially observable Markov decision process (POMDP) \(\mathcal{M}=\langle S,A,\Omega,p,\mathbf{r}\rangle\). \(S\) is a finite state space of the environment. \(A=A^{\mathsf{P}}\times A^{\mathsf{A}}\) is a finite joint action space. \(\Omega=\Omega^{\mathsf{P}}\times\Omega^{\mathsf{A}}\) is a finite joint observation space. The tuple \(p=(p_{h})_{h}\) consists of a transition function \(p_{h}:S\times A\rightarrow\Delta(S\times\Omega)\) for each time step \(h\), which generates a random state-observation pair \((s,\mathbf{\omega})\sim p_{h}(\cdot\,|\,s,\mathbf{a})\) when a joint action \(\mathbf{a}\in A\) is performed at state \(s\in S\). The tuple \(\mathbf{r}=(\mathbf{r}_{h})_{h}\) consists of a pair \(\mathbf{r}_{h}=(r_{h}^{\mathsf{P}},r_{h}^{\mathsf{A}})\) of reward functions for each time step \(h\): \(r_{h}^{\mathsf{P}}:S\times A\rightarrow\mathbb{R}\) generates a reward for the principal and \(r_{h}^{\mathsf{A}}:S\times A\rightarrow\mathbb{R}\) generates a reward for the agent. Without loss of generality, we assume that the rewards are bounded in \([0,1]\). Let \(H\) be the horizon length of the process. The interaction at each time step \(h=1,\ldots,H\) is as follows:
1. **Observation:** The principal observes \(\omega_{h}^{\mathsf{P}}\) and the agent observes \(\omega_{h}^{\mathsf{A}}\).
2. **Communication:** The principal elicits the agent's observation. The agent reports \(\widetilde{\omega}^{\mathsf{A}}_{h}\in\Omega^{\mathsf{A}}\) (not necessarily the same as \(\omega^{\mathsf{A}}_{h}\)) and the principal responds with a coordination signal \(g_{h}\) based on \(\omega^{\mathsf{P}}_{h}\) and \(\widetilde{\omega}^{\mathsf{A}}_{h}\). The agent observes \(g_{h}\) in addition to her observation \(\omega^{\mathsf{A}}_{h}\) above.
3. **Action:** The principal and the agent perform actions \(a^{\mathsf{P}}_{h}\) and \(a^{\mathsf{A}}_{h}\), respectively, based on their observations and the information exchange above.
4. **Rewards and Transitioning:** Rewards \(r^{\mathsf{P}}_{h}(s_{h},\mathbf{a}_{h})\) and \(r^{\mathsf{A}}_{h}(s_{h},\mathbf{a}_{h})\) are generated for the principal and agent, respectively (where \(\mathbf{a}_{h}=(a^{\mathsf{P}}_{h},a^{\mathsf{A}}_{h})\)). The next state and observations are drawn: \((s_{h+1},\mathbf{\omega}_{h+1})\sim p_{h}(\cdot\,|\,s_{h},\mathbf{a}_{h})\).
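For concreteness, the components of \(\mathcal{M}=\langle S,A,\Omega,p,\mathbf{r}\rangle\) and the per-step kernels used in the interaction above can be collected in a small container such as the sketch below (our own illustrative structure, not tied to any existing codebase).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = int
JointAction = Tuple[int, int]   # (principal action, agent action)
JointObs = Tuple[int, int]      # (principal observation, agent observation)

@dataclass
class PrincipalAgentPOMDP:
    states: List[State]
    actions_P: List[int]
    actions_A: List[int]
    obs_P: List[int]
    obs_A: List[int]
    horizon: int
    # p_h(s', omega' | s, a): step h, current state, joint action -> distribution
    transition: Callable[[int, State, JointAction], Dict[Tuple[State, JointObs], float]]
    # r_h(s, a) -> (principal reward, agent reward), both assumed in [0, 1]
    reward: Callable[[int, State, JointAction], Tuple[float, float]]
```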
Following the paradigm of the principal-agent model, we consider the setting where the principal _commits_ to a signaling strategy and the agent reacts to this commitment. Both players are _far-sighted_ and aim to maximize their total reward obtained over the \(H\) time steps, and we take the principal's perspective to study her optimal commitment. At a high level, this is a Stackelberg game between the principal and the agent. The model generalizes several types of principal-agent interactions, including information design (where the principal is the observer and the agent acts), mechanism design (where the agent is the observer and the principal acts), and stochastic games with commitment and coordination (where the environment is fully observable).
### Dynamic Policies
We consider history-dependent dynamic policies of the principal (dynamic policies hereafter), which are functions of the history of interaction between the players. The history up until time step \(h\) is a sequence \(\sigma_{h}=(s_{1},\mathbf{\omega}_{1},\widetilde{\omega}_{1},g_{1},\mathbf{a}_{1 };s_{2},\mathbf{\omega}_{2},\widetilde{\omega}_{2},g_{2},\mathbf{a}_{2};\dots;s_{h },\mathbf{\omega}_{h},\widetilde{\omega}_{h},g_{h},\mathbf{a}_{h})\). We let \(\Sigma_{h}\) denote the set of all such sequences. Each player observes the part of the sequence visible to them. Similarly to the notation above, we let \(\sigma^{\mathsf{P}}_{h}\in\Sigma^{\mathsf{P}}_{h}\) and \(\sigma^{\mathsf{A}}_{h}\in\Sigma^{\mathsf{A}}_{h}\) denote sequences observed by the principal and the agent at time step \(h\), where \(\Sigma^{\mathsf{P}}_{h}\) and \(\Sigma^{\mathsf{A}}_{h}\) denote the corresponding sets consisting of all such sequences. Namely, \(\sigma^{\mathsf{P}}_{h}\) is the part of \(\sigma_{h}\) without any \(\omega^{\mathsf{A}}_{\ell}\) and \(a^{\mathsf{A}}_{\ell}\), and \(\sigma^{\mathsf{A}}_{h}\) is the part without any \(\omega^{\mathsf{P}}_{\ell}\) and \(a^{\mathsf{P}}_{\ell}\). Let \(\Sigma^{\mathsf{P}}=\bigcup_{h=0}^{H}\Sigma^{\mathsf{P}}_{h}\) and \(\Sigma^{\mathsf{A}}=\bigcup_{h=0}^{H}\Sigma^{\mathsf{A}}_{h}\), where \(\Sigma^{\mathsf{P}}_{0}=\Sigma^{\mathsf{A}}_{0}=\{\varnothing\}\) contain only the empty sequence. Note that assuming that the players only observe their own actions is without loss of generality. For example, the case where the principal observes the agent's actions can be captured by this model by encoding \(a^{\mathsf{A}}_{\ell}\) into \(\omega^{\mathsf{P}}_{\ell+1}\). Additionally, we also assume that each player's own action is encoded in their observation at the next time step, so that when the principal elicits the agent's observation, she also elicits information about the action the agent performed.
**Principal's Policy.** In the most generic sense, a dynamic policy takes the form \(\pi:\Sigma^{\mathsf{P}}\times\Omega\to\Delta(G\times A^{\mathsf{P}})\). Namely, \(\pi\) defines a policy \(\pi(\sigma):\Omega\to\Delta(G\times A^{\mathsf{P}})\) for every possible sequence \(\sigma\in\Sigma^{\mathsf{P}}\), which outputs a joint distribution over a signal space \(G\) and the principal's action space \(A^{\mathsf{P}}\), given an input pair \((\omega^{\mathsf{P}},\widetilde{\omega}^{\mathsf{A}})\in\Omega\) consisting of the principal's observation \(\omega^{\mathsf{P}}\) and the agent's report \(\widetilde{\omega}^{\mathsf{A}}\). According to the revelation principle, it is without loss of generality to consider policies that are _incentive compatible_ (IC) [see e.g., Myerson, 1986]. That is, we can associate each signal \(g\in G\) with an action of the agent and think of it as an action recommendation for the agent; moreover, the policy incentivizes the agent's _truthful_ response--to report her true observation and perform the recommended action. This simplifies the policy space to IC policies of the form \(\pi:\Sigma^{\mathsf{P}}\times\Omega\to\Delta(A)\), whereby the principal draws a joint action \(\mathbf{a}=(a^{\mathsf{P}},a^{\mathsf{A}})\in A\), sends \(a^{\mathsf{A}}\) as an action recommendation to the agent, and performs \(a^{\mathsf{P}}\) herself.
**Agent's Response.** The principal announces to the agent the dynamic policy she commits to. The agent then faces a meta-POMDP, where the observation at each time step \(h\) consists of both the observation \(\omega_{h}\) from the environment as well as the coordination signal \(a_{h}^{\mathsf{A}}\) from the principal. Hence, the agent reacts by playing according to an optimal policy for this meta-POMDP, which is in general also history-based. When the principal's policy is IC, responding truthfully is optimal for the agent.
## 3 Computing an Optimal Policy
Unsurprisingly, the problem of computing an optimal policy is intractable because our model generalizes POMDPs. Solving a finite-horizon POMDP is known to be PSPACE-complete (Papadimitriou and Tsitsiklis, 1987). The PSPACE-hardness remains even in the special case of information design, where the principal observes the state directly (but does not act) and the agent acts (but makes no observation); as well as the special case of mechanism design, where the agent observes the state directly (but does not act) and the principal acts (but does not observe). This can be seen by considering zero-sum instances of our model, where the players' payoffs sum to zero. Consider for example the case of information design. Since the game is zero-sum, it is optimal for the principal to not send any signals in this case (if signaling were able to improve the principal's payoff, the agent would be better-off ignoring the signals). Hence, knowing the maximum attainable payoff of the principal in this case is equivalent to knowing (the negative of) the agent's maximum attainable payoff, which amounts to solving a POMDP.
This means that an efficient algorithm is unlikely. Due to this complexity barrier, we will focus on a special case where the state is conditionally independent of the history. We present an efficient algorithm to compute near-optimal strategies in this case.
**Conditionally Independent States.** We consider the case where the transition function can be factorized as
\[p_{h}(s^{\prime},\boldsymbol{\omega}^{\prime}\,|\,s,\mathbf{a})=\phi_{h}(o^{ \prime}\,|\,s,\mathbf{a})\cdot\mu_{h}(s^{\prime},\boldsymbol{\omega}^{\prime} \,|\,o^{\prime}), \tag{1}\]
where \(o^{\prime}\) is an additional public observation that is observed by both players.1 In this case, the next state \(s^{\prime}\) is conditionally independent from the history given the public observation \(o^{\prime}\). We refer to this case as CIS (conditionally independent states). Though a special case of our model, CIS encompasses various important settings, including the ones where the state is fully observable to the players (e.g., stochastic games or repeated games (Collina et al., 2023)) or is revealed at the end of each time step, as well as settings where observations are interpreted as external parameters generated based on internal Markovian states (e.g., (Gan et al., 2022; Wu et al., 2022)).
Footnote 1: Note that this additional observation does not require any extension to our model as we can view \((o,\omega^{\mathsf{P}})\) and \((o,\omega^{\mathsf{A}})\) as the players’ observations in our original model.
Let \(O\) be the set of all possible public observations. CIS helps decouple the subsequent process after the occurrence of \(o\) from the previous history. We can view the process following the occurrence of \(o\) at time step \(h\) as a subgame \(\mathcal{G}_{h}(o)\). Our approach is to construct the set \(\mathcal{V}_{h}(o)\subseteq\mathbb{R}^{2}\) of _inducible payoff pairs_ for each subgame \(\mathcal{G}_{h}(o)\) via backward induction. Each vector \(\mathbf{v}=(v^{\mathsf{P}},v^{\mathsf{A}})\in\mathcal{V}_{h}(o)\) is a pair of values for the principal and the agent, respectively, that can be induced as their expected payoffs in the subgame by some policy \(\pi\) of the principal. The approach is analogous to constructing the value function for an MDP (where the value is a vector in our case). Similar algorithms have been used, e.g., by MacDermed et al. (2011) and Letchford et al. (2012), to approximate correlated or Stackelberg equilibria in stochastic games.
Without loss of generality, we assume that the process starts from a deterministic public observation \(o_{\star}\) at time step 1. The principal's optimal payoff is then \(\max_{\mathbf{v}\in\mathcal{V}_{1}(o_{\star})}v^{\mathsf{P}}\). Once we use backward induction to construct all the inducible payoff sets, the distribution \(\pi(\cdot\,|\,\sigma,o,\boldsymbol{\omega})\) output by a policy \(\pi\) that induces the optimal payoff will be computed via a forward construction. We present these backward and forward computation processes next.
### Backward Construction of Inducible Payoff Sets
We use backward induction to approximate the inducible payoff sets. To distinguish, we let \(\widehat{\mathcal{V}}_{h}(o)\) denote the approximation of \(\mathcal{V}_{h}(o)\) we aim to obtain. Without loss of generality, we can assume that in the last time step \(H\) all rewards are 0. Hence, we have \(\mathcal{V}_{H}(o)=\{(0,0)\}\) for all \(o\in O\) as the base case. Now suppose that we have obtained \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) for all \(o^{\prime}\in O\) and they are all convex polytopes. We move one time step backward to construct \(\widehat{\mathcal{V}}_{h}(o)\) for each \(o\in O\) (and in the meantime show that they are also convex polytopes).
In the same vein as the Bellman equation, every \(\mathbf{v}\in\mathcal{V}_{h}(o)\) can be expressed as follows:
\[\mathbf{v}=\sum_{s,\boldsymbol{\omega},\mathbf{a}}\mu_{h}(s, \boldsymbol{\omega}\,|\,o)\cdot\bar{\pi}(\mathbf{a}\,|\,\boldsymbol{\omega}) \cdot\left(\mathbf{r}_{h}(s,\mathbf{a})+\sum_{o^{\prime}}\phi_{h}(o^{\prime} \,|\,s,\mathbf{a})\cdot\mathbf{v}^{\prime}(\mathbf{a},\boldsymbol{\omega},o^{ \prime})\right), \tag{2}\]
where \(\bar{\pi}:\Omega\to\Delta(A)\) is a (partial) policy defined for the first time step of \(\mathcal{G}_{h}(o)\), and each \(\mathbf{v}^{\prime}(\mathbf{a},\boldsymbol{\omega},o^{\prime})\in\mathcal{V}_{ h+1}(o^{\prime})\) is a pair of payoffs to be induced in the next stage. The pairs \(\mathbf{v}^{\prime}(\mathbf{a},\boldsymbol{\omega},o^{\prime})\in\mathcal{V}_{ h+1}(o^{\prime})\) depend on the interaction history encoded in \((\mathbf{a},\boldsymbol{\omega},o^{\prime})\) before the principal selects the policy for the next time step, where: \(\mathbf{a}=(a^{\mathsf{P}},a^{\mathsf{A}})\) consists of the action \(a^{\mathsf{P}}\) played by the principal and the action \(a^{\mathsf{A}}\) recommended to the agent; and \(\boldsymbol{\omega}=(\omega^{\mathsf{P}},\omega^{\mathsf{A}})\) consists of \(\omega^{\mathsf{P}}\) as the principal's observation and \(\omega^{\mathsf{A}}\) as the agent's report. These are elements observed by the principal and the use of a dynamic policy allows the next payoff pairs to depend on them.
Note that the expression assumes the truthful response of the agent: the agent's observation generated by \(\mu_{h}\) is the same as her report to \(\bar{\pi}\), and the action she performs (which determines the rewards and the transition to \(o^{\prime}\)) is the same as the one recommended by \(\bar{\pi}\). To ensure that this truthful behavior is indeed incentivized, we need the following IC constraint for every \(\omega^{\mathsf{A}}\in\Omega^{\mathsf{A}}\) (what the agent observes) and \(\widetilde{\omega}^{\mathsf{A}}\in\Omega^{\mathsf{A}}\) (what the agent reports):
\[\sum_{s,\omega^{\mathsf{P}},\mathbf{a}}\mu_{h}(s,\omega^{\mathsf{ P}}\,|\,o,\omega^{\mathsf{A}})\cdot\bar{\pi}(\mathbf{a}\,|\,\boldsymbol{\omega}) \cdot\left(r_{h}^{\mathsf{A}}(s,\mathbf{a})+\sum_{o^{\prime}}\phi_{h}(o^{ \prime}\,|\,s,\mathbf{a})\cdot v^{\prime\mathsf{A}}(\mathbf{a},\boldsymbol{ \omega},o^{\prime})\right)\] \[\geq \,\sum_{a^{\mathsf{A}}}\,\max_{\tilde{a}^{\mathsf{A}}\in A_{h}^{ \mathsf{A}}}\,\sum_{s,\omega^{\mathsf{P}},a^{\mathsf{P}}}\,\mu_{h}(s,\omega^{ \mathsf{P}}\,|\,o,\omega^{\mathsf{A}})\cdot\bar{\pi}(\mathbf{a}\,|\,\omega^{ \mathsf{P}},\widetilde{\omega}^{\mathsf{A}})\cdot\left(r_{h}^{\mathsf{A}}\left( s,a^{\mathsf{P}},\tilde{a}^{\mathsf{A}}\right)+\right.\] \[\left.\qquad\qquad\sum_{o^{\prime}}\phi_{h}\left(o^{\prime}\,|\,s,a^{\mathsf{P}},\tilde{a}^{\mathsf{A}}\right)\cdot v^{\prime\mathsf{A}}( \mathbf{a},\omega^{\mathsf{P}},\widetilde{\omega}^{\mathsf{A}},o^{\prime}) \right), \tag{3}\]
where \(\mu_{h}(s,\omega^{\mathsf{P}}\,|\,o,\omega^{\mathsf{A}})\propto\mu_{h}(s, \boldsymbol{\omega}\,|\,o)\) is the conditional probability derived from \(\mu_{h}\). In other words, upon observing \(\omega^{\mathsf{A}}\), the agent's expected payoff under her truthful response is at least as much as what she could obtain if she reported a different observation \(\widetilde{\omega}^{\mathsf{A}}\) and performed a best action \(\tilde{a}^{\mathsf{A}}\) in response to every possible recommendation \(a^{\mathsf{A}}\) from the principal. Note that the agent's action \(a^{\mathsf{A}}\) that parameterizes the last term \(v^{\prime\mathsf{A}}(\mathbf{a},\omega^{\mathsf{P}},\widetilde{\omega}^{ \mathsf{A}},o^{\prime})\) represents the principal's recommendation, so it does not change with the action \(\tilde{a}^{\mathsf{A}}\) the agent actually performs.
**Building \(\widehat{\mathcal{V}}_{h}(o)\).** Therefore, deciding (approximately) whether a vector \(\mathbf{v}\) is contained in \(\mathcal{V}_{h}(o)\) amounts to deciding whether Eqs. (2) and (3) are satisfied by some valuation of the variables
\(\{\bar{\pi}(\mathbf{a}\,|\,\boldsymbol{\omega}):\mathbf{a}\in A,\boldsymbol{\omega} \in\Omega\}\) and \(\{\mathbf{v}^{\prime}(\mathbf{a},\boldsymbol{\omega},o^{\prime}):\mathbf{a}\in A,\boldsymbol{\omega}\in\Omega,o^{\prime}\in O\}\) (highlighted in blue in the constraints), where we also require
\[\bar{\pi}(\cdot\,|\,\boldsymbol{\omega})\in\Delta(A) \text{for }\boldsymbol{\omega}\in\Omega \tag{4}\] \[\mathbf{v}^{\prime}(\mathbf{a},\boldsymbol{\omega},o^{\prime}) \in\widehat{\mathcal{V}}_{h+1}(o^{\prime}) \text{for }\mathbf{a}\in A,\ \boldsymbol{\omega}\in\Omega,\ o^{\prime}\in O \tag{5}\]
so that \(\bar{\pi}(\cdot\,|\,\boldsymbol{\omega})\) is a valid distribution and every \(\mathbf{v}^{\prime}(\mathbf{a},\boldsymbol{\omega},o^{\prime})\) is inducible in the next subgames.
The constraints are non-linear as they involve quadratic terms in Eqs. (2) and (3) as well as maximization operators in Eq. (3). Nevertheless, as we will shortly show, as long as each \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) is given by its _half-space representation_ (i.e., as a set of linear constraints of the form \(\mathrm{A}\cdot\mathbf{v}\leq\mathbf{b}\)), we can linearize the constraints, so we obtain a linear polytope \(\mathcal{P}\) whose projection onto the dimensions of \(\mathbf{v}\) is \(\mathcal{V}_{h}(o)\). To ensure that \(\mathcal{V}_{h}(o)\) can be plugged back into Eq. (5) in the next induction step, we need the half-space representation of the projection, and in particular we need to eliminate the other variables in the representation and keep only \(\mathbf{v}\). This can be done approximately in polynomial time given that \(\mathbf{v}\) is two-dimensional: roughly speaking, we can discretize the space \([0,H]^{2}\), check the inducibility of points in the discrete space, and compute the half-space representation of the convex hull of the inducible points (we have \(\mathcal{V}_{h}(o)\subseteq[0,H]^{2}\) since all rewards are bounded in \([0,1]\)); see the proof of Lemma 1 for more details.
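The discretize-and-hull step just described, together with the final maximization of the principal's payoff over the resulting polytope, can be sketched as follows using SciPy; the inducibility check itself is problem-specific and is represented here by a precomputed list of inducible payoff pairs.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import ConvexHull

def halfspace_rep(inducible_points):
    """Half-space representation A v <= b of the convex hull of
    two-dimensional inducible payoff pairs (v_P, v_A)."""
    hull = ConvexHull(np.asarray(inducible_points, dtype=float))
    A, b = hull.equations[:, :2], -hull.equations[:, 2]
    return A, b

def max_principal_payoff(A, b):
    """Solve max v_P subject to A v <= b (the LP mentioned below)."""
    res = linprog(c=[-1.0, 0.0], A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
    return -res.fun, res.x

# toy usage with hypothetical inducible payoff pairs
points = [(0.0, 0.0), (1.0, 0.2), (0.6, 1.0), (0.2, 0.8)]
A, b = halfspace_rep(points)
print(max_principal_payoff(A, b))
```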
Repeating the induction procedure till \(h=1\), we obtain \(\widehat{\mathcal{V}}_{1}(o_{\star})\) as well as the principal's (approximately) optimal payoff by solving \(\max_{\mathbf{v}\in\widehat{\mathcal{V}}_{1}(o_{\star})}v^{\mathsf{P}}\) (which is an LP). The following lemma summarizes our result.
**Lemma 1**.: _For any constant \(\epsilon>0\), the half-space representation of \(\widehat{\mathcal{V}}_{1}(o_{\star})\) can be computed in time \(\mathrm{poly}(|\mathcal{M}|,H,1/\epsilon)\), such that \(\widehat{\mathcal{V}}_{1}(o_{\star})\subseteq\mathcal{V}_{1}(o_{\star})\) and \(\max_{\mathbf{v}\in\widehat{\mathcal{V}}_{1}(o_{\star})}v^{\mathsf{P}}\geq \max_{\mathbf{v}\in\mathcal{V}_{1}(o_{\star})}v^{\mathsf{P}}-\epsilon\)._
Proof.: For every \(o\in O\), let \(\bar{\mathcal{V}}_{h}(o)\) denote the set of payoff vectors \(\mathbf{v}\) satisfying Eqs. (2) to (5). Note that \(\bar{\mathcal{V}}_{h}(o)\) is different from the set \(\mathcal{V}_{h}(o)\) of inducible payoffs--the latter is defined by constraints in the same form but with the exact inducible set \(\mathcal{V}_{h+1}(o^{\prime})\) instead of \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) in Eq. (5). For ease of description, throughout this proof, we say that \(\widehat{\mathcal{V}}_{h}(o)\) is an _\(\varepsilon\)-approximation of \(\mathcal{V}_{h}(o)\)_ if \(\widehat{\mathcal{V}}_{h}(o)\subseteq\mathcal{V}_{h}(o)\) and, for every \(\mathbf{v}\in\mathcal{V}_{h}(o)\), there exists \(\mathbf{v}^{\prime}\in\widehat{\mathcal{V}}_{h}(o)\) such that \(v^{\prime\mathsf{P}}\geq v^{\mathsf{P}}-\varepsilon\) and \(v^{\prime\mathsf{A}}=v^{\mathsf{A}}\). Namely, the approximation only compromises on the principal's payoff. This is important because it ensures that the same response of the agent can be incentivized under both the exact and the approximate inducible sets, which further ensures smooth changes of the approximation through the induction process we present next.
We now prove the lemma by induction; the key is the following induction step. Suppose that the following properties hold for all \(o\in O\):
1. \(\widehat{\mathcal{V}}_{h+1}(o)\) is defined by \(\mathcal{O}(H/\delta)\) many linear constraints.
2. \(\widehat{\mathcal{V}}_{h+1}(o)\) is an \(\varepsilon\)-approximation of \(\mathcal{V}_{h+1}(o)\).
We will show that for every \(o\in O\) we can compute \(\widehat{\mathcal{V}}_{h}(o)\) satisfying Property 1 and Property 2 with a factor \(\varepsilon^{\prime}=\varepsilon+\delta\), in time polynomial in \(1/\delta\). Once this holds, picking \(\delta=\epsilon/H\) gives the desired result.
Recall that rewards are bounded in \([0,1]\), so \(\mathcal{V}_{h}(o)\) is contained in \([0,H]^{2}\) for all \(o\in O\). Our approach is to slice \(\bar{\mathcal{V}}_{h}(o)\) along the dimension of the principal's payoff, compute the end points of each slice, and construct the convex hull of all the end points as an approximation of \(\bar{\mathcal{V}}_{h}(o)\). Specifically, let
\[W=\{0,\ \delta,\ 2\delta,\ \ldots,\ H-\delta,\ H\}\]
be a discretization of \([0,H]\), and let \(\mathcal{W}\) be a set consisting of the following points:
* \(\check{\mathbf{v}}_{w}\in\arg\min_{\mathbf{v}\in\bar{\mathcal{V}}_{h}(o):v^{\mathsf{P}}=w}v^{\mathsf{A}}\) and \(\hat{\mathbf{v}}_{w}\in\arg\max_{\mathbf{v}\in\bar{\mathcal{V}}_{h}(o):v^{\mathsf{P}}=w}v^{\mathsf{A}}\) for each \(w\in W\), which are the two end points of each slice.
* (Arbitrary) \(\check{\mathbf{v}}_{*}\in\arg\min_{\mathbf{v}\in\bar{\mathcal{V}}_{h}(o)}v^{\mathsf{A}}\) and \(\hat{\mathbf{v}}_{*}\in\arg\max_{\mathbf{v}\in\bar{\mathcal{V}}_{h}(o)}v^{\mathsf{A}}\), which are extreme points of \(\bar{\mathcal{V}}_{h}(o)\) with the minimum and maximum payoffs of the agent.
It shall be clear that the choice of these points ensures that we can approximate any inducible payoff pair with at most \(\delta\) compromise on the principal's payoff and no compromise on the agent's payoff. (In particular, the addition of \(\check{\mathbf{v}}_{*}\) and \(\hat{\mathbf{v}}_{*}\) ensures that we do not miss the extreme values of the agent's inducible payoff that may not be attained at any of the slices.) All the points can be computed efficiently by solving LPs that minimize (or maximize) \(v^{\mathsf{A}}\) (where we also treat \(\mathbf{v}\) as variables), subject to the linearized version of Eqs. (2) to (5), with the additional constraint \(v^{\mathsf{P}}=w\) for the slice end points.
We then construct \(\widehat{\mathcal{V}}_{h}(o)\) as the convex hull of the inducible payoff vectors in this set, which can be done efficiently via standard algorithms in computational geometry (e.g., Chan's algorithm (Chan, 1996)). This way, Property 1 holds for \(\widehat{\mathcal{V}}_{h}(o)\) because \(\widehat{\mathcal{V}}_{h}(o)\) has at most \(\mathcal{O}(H/\delta)\) vertices while it is in \(\mathbb{R}^{2}\). Meanwhile, \(\widehat{\mathcal{V}}_{h}(o)\) is a \(\delta\)-approximation of \(\bar{\mathcal{V}}_{h}(o)\) according to our definition.
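For intuition, the following is a minimal numerical sketch of this slice-and-hull step. It assumes the linearized constraints have already been assembled into LP oracles; `solve_slice` and `solve_extreme` are illustrative placeholders (not routines defined in this paper), and the collected point set is assumed to be full-dimensional so that the convex hull lives in \(\mathbb{R}^{2}\).

```python
import numpy as np
from scipy.spatial import ConvexHull

def approximate_inducible_set(solve_slice, solve_extreme, H, delta):
    """Slice-and-hull approximation of the 2-D inducible payoff set.

    solve_slice(w, sense)  -> the agent's min/max payoff on the slice v^P = w,
                              or None if the slice is infeasible.
    solve_extreme(sense)   -> a payoff pair (v^P, v^A) minimizing/maximizing v^A
                              over the whole feasible region.
    Both callables are assumed to wrap an LP solver over the linearized
    constraints; they are placeholders rather than provided routines.
    """
    points = []
    for w in np.arange(0.0, H + delta / 2, delta):        # the grid W
        for sense in ("min", "max"):
            v_a = solve_slice(w, sense)
            if v_a is not None:
                points.append((w, v_a))                   # end points of the slice
    for sense in ("min", "max"):                          # extreme agent payoffs
        points.append(solve_extreme(sense))
    hull = ConvexHull(np.asarray(points, dtype=float))
    # Each row of hull.equations is (a1, a2, b) with a1*x + a2*y + b <= 0,
    # i.e. the half-space representation needed in the next induction step.
    return hull.equations
```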
It remains to confirm that \(\widehat{\mathcal{V}}_{h}(o)\) is an \((\varepsilon+\delta)\)-approximation of \(\mathcal{V}_{h}(o)\). Indeed, since \(\widehat{\mathcal{V}}_{h}(o)\) is a \(\delta\)-approximation of \(\bar{\mathcal{V}}_{h}(o)\), this follows readily as long as \(\bar{\mathcal{V}}_{h}(o)\) is an \(\varepsilon\)-approximation of \(\mathcal{V}_{h}(o)\). To see the latter, consider an arbitrary \(\mathbf{v}\in\mathcal{V}_{h}(o)\). By definition, \(\mathbf{v}\) can be induced by some \(\bar{\pi}\) and a set \(F=\{\mathbf{v}^{\prime}(\mathbf{a},\mathbf{\omega},o^{\prime})\in\mathcal{V}_{h+1}(o^{\prime}):\mathbf{a}\in A,\mathbf{\omega}\in\Omega,o^{\prime}\in O\}\) of payoff vectors according to Eqs. (2) to (5) (with the exact set \(\mathcal{V}_{h+1}(o^{\prime})\) instead of \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) in Eq. (5)). By the induction hypothesis, \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) is an \(\varepsilon\)-approximation of \(\mathcal{V}_{h+1}(o^{\prime})\), so for every \(\mathbf{v}^{\prime}\in F\), there exists \(\tilde{\mathbf{v}}^{\prime}\in\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) such that \(\tilde{v}^{\prime\mathsf{P}}\geq v^{\prime\mathsf{P}}-\varepsilon\) and \(\tilde{v}^{\prime\mathsf{A}}=v^{\prime\mathsf{A}}\). Using \(\tilde{\mathbf{v}}^{\prime}\) instead of \(\mathbf{v}^{\prime}\), the same policy \(\bar{\pi}\) then induces the desired \(\tilde{\mathbf{v}}\in\bar{\mathcal{V}}_{h}(o)\) to approximate \(\mathbf{v}\): From the agent's perspective, the payoffs are the same under both \(\tilde{\mathbf{v}}^{\prime}\) and \(\mathbf{v}^{\prime}\), so the same response can be incentivized. Moreover, according to Eq. (2), the overall difference between \(\tilde{v}^{\mathsf{P}}\) and \(v^{\mathsf{P}}\) is at most \(\varepsilon\) because \(\sum_{s,\mathbf{\omega},\mathbf{a}}\mu_{h}(s,\mathbf{\omega}\,|\,o)\cdot\bar{\pi}(\mathbf{a}\,|\,\mathbf{\omega})=1\). Hence, \(\tilde{v}^{\mathsf{P}}\geq v^{\mathsf{P}}-\varepsilon\) and \(\bar{\mathcal{V}}_{h}(o)\) is an \(\varepsilon\)-approximation of \(\mathcal{V}_{h}(o)\).
Linearizing Eqs. (2) and (3). In order to remove the maximization operators in Eq. (3), we introduce auxiliary variables \(\{y(a^{\mathsf{A}},\omega^{\mathsf{A}},\widetilde{\omega}^{\mathsf{A}}):a^{\mathsf{A}}\in A^{\mathsf{A}},\ \omega^{\mathsf{A}},\widetilde{\omega}^{\mathsf{A}}\in\Omega^{\mathsf{A}}\}\) to capture the maximum values on the right hand side of Eq. (3). We replace the right hand side of Eq. (3) with \(\sum_{a^{\mathsf{A}}\in A^{\mathsf{A}}}y(a^{\mathsf{A}},\omega^{\mathsf{A}},\widetilde{\omega}^{\mathsf{A}})\) and add the following constraint to force each \(y(a^{\mathsf{A}},\omega^{\mathsf{A}},\widetilde{\omega}^{\mathsf{A}})\) to be an upper bound of the corresponding maximum value.
\[y(a^{\mathsf{A}},\omega^{\mathsf{A}},\widetilde{\omega}^{\mathsf{A}}) \geq\sum_{s,\omega^{\mathsf{P}},a^{\mathsf{P}}}\ \mu_{h}(s,\omega^{\mathsf{P}}\,|\,o,\omega^{\mathsf{A}})\cdot\bar{\pi}( \mathbf{a}\,|\,\omega^{\mathsf{P}},\widetilde{\omega}^{\mathsf{A}})\cdot\left(r_ {h}^{\mathsf{A}}\left(s,a^{\mathsf{P}},\tilde{a}^{\mathsf{A}}\right)+\right.\] \[\left.\sum_{o^{\prime}}\phi_{h}\left(o^{\prime}\,|\,s,a^{\mathsf{P }},\tilde{a}^{\mathsf{A}}\right)\cdot v^{\prime\mathsf{A}}(\mathbf{a},\omega^{ \mathsf{P}},\widetilde{\omega}^{\mathsf{A}},o^{\prime})\right)\qquad\qquad\text{ for }\tilde{a}^{\mathsf{A}}\in A^{\mathsf{A}} \tag{6}\]
Next we remove the quadratic terms in Eqs. (2) and (6). We introduce auxiliary variables \(\mathbf{z}(\mathbf{a},\mathbf{\omega},o^{\prime})\) to replace the products \(\bar{\pi}(\mathbf{a}\,|\,\mathbf{\omega})\cdot\mathbf{v}^{\prime}(\mathbf{a},\mathbf{\omega},o^{\prime})\). Recall that \(\mathbf{v}^{\prime}(\mathbf{a},\mathbf{\omega},o^{\prime})\) can take any value in \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\). Suppose that \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\), as a convex polytope, is given by a set of linear constraints as \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})=\{\mathbf{v}:\mathsf{A}\cdot\mathbf{v}\leq\mathbf{b}\}\). Then the image of \(\widehat{\mathcal{V}}_{h+1}(o^{\prime})\) under the transformation \(\mathbf{z}(\mathbf{a},\mathbf{\omega},o^{\prime})=\bar{\pi}(\mathbf{a}\,|\,\mathbf{\omega})\cdot\mathbf{v}^{\prime}(\mathbf{a},\mathbf{\omega},o^{\prime})\) is exactly the set of vectors satisfying
\[\mathsf{A}\cdot\mathbf{z}(\mathbf{a},\mathbf{\omega},o^{\prime})\leq\bar{\pi}(\mathbf{a}\,|\,\mathbf{\omega})\cdot\mathbf{b}. \tag{7}\]
This is the only constraint needed (for each \((\mathbf{a},\mathbf{\omega},o^{\prime})\)) after the replacement. This finishes the linearization of Eqs. (2) to (5).
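To see why the substitution is exact, the following self-contained check illustrates the identity behind Eq. (7) on an assumed toy polytope (a box, used purely for illustration): scaling a bounded polytope \(\{\mathbf{v}:\mathsf{A}\cdot\mathbf{v}\leq\mathbf{b}\}\) by a weight \(\bar{\pi}\in[0,1]\) yields exactly \(\{\mathbf{z}:\mathsf{A}\cdot\mathbf{z}\leq\bar{\pi}\cdot\mathbf{b}\}\).

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy bounded polytope {v : A v <= b}: the box [0, 1] x [0, 2] (illustration only).
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 2.0, 0.0])

for _ in range(1000):
    v = rng.uniform([0.0, 0.0], [1.0, 2.0])   # a point inside the polytope
    pi = rng.uniform(0.0, 1.0)                # a probability weight pi(a | omega)
    z = pi * v                                # the bilinear product to be eliminated
    # Every such product satisfies the substituted linear constraint ...
    assert np.all(A @ z <= pi * b + 1e-12)
    # ... and conversely, for pi > 0, any z with A z <= pi b scales back into the polytope.
    if pi > 1e-9:
        assert np.all(A @ (z / pi) <= b + 1e-9)
print("z = pi * v  <=>  A z <= pi * b verified on random samples")
```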
We remark that, ideally, we would like to obtain the exact half-space representation of the projection, so we can compute \(\mathcal{V}_{h}(o)\) exactly. Nevertheless, there is in general no efficient algorithm that computes the projection of a polytope onto \(\mathbb{R}^{2}\) in half-space representation as the number of half-spaces may be exponential in the number of facets of the original polytope (Amenta and Ziegler, 1996). In our problem, this exact approach is even more demanding and requires that the number of constraints in the half-space representation of \(\mathcal{V}_{h}(o)\) grows at most logarithmically, in order for the whole approach to run efficiently--otherwise, the number of constraints would grow exponentially in \(H\) through the backward induction process. We highlight these challenges and leave open the tractability of computing an _exact_ optimal policy.
### Forward Construction of an Optimal Policy
So far we have obtained the maximum inducible payoff but not yet a dynamic policy to achieve this payoff. The following efficient procedure fulfills this task based on the inducible payoff sets we computed above. Instead of computing an explicit description of a dynamic policy \(\pi\)--which would be exponentially large as it specifies a distribution for each possible sequence--the procedure outputs the distribution \(\pi(\cdot\,|\,\sigma_{h-1}^{\mathsf{P}};o_{h},\mathbf{\omega}_{h})\) for any given sequence \(\sigma_{h-1}^{\mathsf{P}}=(o_{1},\mathbf{\omega}_{1},\mathbf{a}_{1};\ldots;o_{h-1}, \mathbf{\omega}_{h-1},\mathbf{a}_{h-1})\in\Sigma_{h-1}^{\mathsf{P}}\) extended by observations \(o_{h}\in O\) and \(\mathbf{\omega}_{h}\in\Omega\) (where every \(\omega_{\ell}^{\mathsf{A}}\) means a report of the agent, and \(a_{\ell}^{\mathsf{A}}\) means an action recommendation). The procedure allows the principal to compute the policy on-the-fly.
1. Initialize: \(\mathbf{v}=\operatorname{argmax}_{\mathbf{v}\in\widehat{\mathcal{V}}_{1}(o_{ 1})}v^{\mathsf{P}}\) and \(o=o_{1}\).
2. For \(\ell=1,\ldots,h-1\): * Fix \(\mathbf{v}\) and \(o\), and solve (the linearized version of) Eqs. (2) to (5); record \(\bar{\pi}\) and \(\mathbf{v}^{\prime}\) in the solution. * Update \(\mathbf{v}=\mathbf{v}^{\prime}(\mathbf{a}_{\ell},\mathbf{\omega}_{\ell},o_{\ell+1})\) and \(o=o_{\ell+1}\).
3. Output \(\pi(\cdot\,|\,\sigma_{h-1},o_{h},\mathbf{\omega}_{h})=\bar{\pi}(\cdot\,|\,\mathbf{ \omega}_{h})\).
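The procedure can be rendered schematically as follows. This is only a sketch: `solve_step` stands for solving the linearized program at a fixed \((\mathbf{v},o)\) and is not a routine defined above, and we read Step 3 as performing one more solve at \((\mathbf{v},o_{h})\) to obtain the step-\(h\) distribution.

```python
def policy_on_the_fly(history, o_h, omega_h, argmax_V1, solve_step):
    """Schematic on-the-fly evaluation of pi( . | sigma_{h-1}; o_h, omega_h).

    history    : [(o_1, omega_1, a_1), ..., (o_{h-1}, omega_{h-1}, a_{h-1})]
    argmax_V1  : payoff pair maximizing v^P over the approximate set V_1(o_1)
    solve_step : placeholder LP oracle; given (v, o) it returns (pi_bar, v_next),
                 where pi_bar[omega] is a distribution over joint actions and
                 v_next[(a, omega, o_next)] is the promised continuation payoff.
    """
    v = argmax_V1                                              # step 1: initialize
    observations = [o for (o, _, _) in history] + [o_h]
    for ell, (o_ell, omega_ell, a_ell) in enumerate(history):  # step 2: replay history
        _, v_next = solve_step(v, o_ell)
        v = v_next[(a_ell, omega_ell, observations[ell + 1])]
    pi_bar, _ = solve_step(v, o_h)                             # solve at the current step
    return pi_bar[omega_h]                                     # step 3: output
```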
This leads to the following main result of this section.
**Theorem 2**.: _Given CIS, the distribution \(\pi(\cdot\,|\,\sigma^{\mathsf{P}};o,\mathbf{\omega})\) of an \(\epsilon\)-optimal policy \(\pi\) of the principal can be computed in time \(\operatorname{poly}(|\mathcal{M}|,H,1/\epsilon)\), for any \((\sigma^{\mathsf{P}};o,\mathbf{\omega})\in\Sigma^{\mathsf{P}}\times O\times\Omega\)._
The approach based on constructing inducible payoff sets can also be extended to handle a _subgame perfect equilibrium_ style refinement of the solution concept, where the principal is required to play optimally in each subgame (or nearly optimally up to an allowed threshold). This can be achieved by refining the inducible payoff sets accordingly. Such refinement can be useful when the principal is required to use credible threats in their commitment.
## 4 Learning to Commit
We now turn to an episodic online setting where the transition model \(p:S\times A\to\Delta(S\times\Omega)\) is not known to the players beforehand. Let there be \(T\) episodes. At the beginning of each episode, the principal commits to a new policy based on outcomes of the previous episodes. Each episode proceeds in \(H\) time steps the same way as the model in the previous sections.
We will present a learning algorithm that guarantees sublinear regrets for both players given the CIS assumption. The algorithm is _centralized_ and requires the agent to behave truthfully as it does not guarantee exact IC during the course of learning. However, the policy obtained
by the algorithm is IC in the limit when the number of episodes approaches infinity. Indeed, in the case where the model is unknown to the agent either, IC in the limit is a more relevant concept as the agent cannot decide how to optimally deviate from the truthful behavior and has to learn that along with the principal. More importantly, the fact that the algorithm guarantees sublinear regret for the agent also provides a tangible incentive for the agent to participate in the centralized learning.
We start by defining the players' regrets.
Regret Definition. We let \(\mathbf{v}_{\pi,f}=\mathbb{E}_{\pi,f}\left[\sum_{h=1}^{H}\mathbf{r}(s_{h},\mathbf{a}_{h})\right]\) denote the players' expected payoff vector induced by a policy \(\pi\) of the principal and a response strategy \(f\) of the agent under the _true_ transition model \(p\). When \(f\) is not specified, \(\mathbf{v}_{\pi}\) denotes the vector induced by \(\pi\) and the agent's _truthful_ response to \(\pi\). Moreover, \(\mathbf{v}_{\pi,*}\) denotes the vector induced by \(\pi\) and the agent's _best_ response to \(\pi\). The principal's regret \(\text{Reg}^{\mathsf{P}}\) is defined with respect to the optimal policy under the true model:
\[\text{Reg}^{\mathsf{P}}=\sum_{t=1}^{T}\left(\max_{\pi\in\Pi}v_{\pi}^{\mathsf{ P}}-v_{\pi_{t}}^{\mathsf{P}}\right),\]
where \(\Pi\) denotes the set of IC policies, and \(\pi_{t}\) denotes the policy the principal commits to in the \(t\)-th episode. The agent's regret is defined with respect to her optimal response to the principal's policy:
\[\text{Reg}^{\mathsf{A}}=\sum_{t=1}^{T}\left(v_{\pi_{t},*}^{\mathsf{A}}-v_{\pi _{t}}^{\mathsf{A}}\right),\]
which is a dynamic regret as the benchmark changes according to the policy in each episode.
Our approach is built on reward-free exploration, which is an RL paradigm where learning happens before a reward function is provided (Jin et al., 2020). It has been shown in a series of recent works that efficient learning is possible under this paradigm (Jin et al., 2020; Kaufmann et al., 2021; Menard et al., 2021). In particular, we will use the sample complexity bound in Lemma 3. At a high level, our algorithm proceeds by first conducting reward-free exploration to learn a sufficiently accurate estimate of the true model. Based on the estimate we then solve relaxed version of the policy optimization problem to obtain a policy. Using this policy in the remaining episodes guarantees sublinear regret for both players.
**Lemma 3** ((Jin et al., 2020, Lemma 3.6)).: _Consider an MDP \((S,A,p)\) (without a reward function specified) with horizon length \(H\). There exists an algorithm which learns a model \(\widehat{p}\) after \(\widetilde{\mathcal{O}}\left(\frac{H^{5}|S|^{2}\cdot|A|}{\epsilon^{2}}\right)\) episodes of exploration, such that with probability at least \(1-q\), for any reward function \(r\) and policy \(\pi\), it holds that \(\left|V_{1}^{\pi}(s;r)-\widehat{V}_{1}^{\pi}(s;r)\right|\leq\epsilon\) for all states \(s\) and time steps \(h\in\{1,\dots,H\}\), where \(V_{1}^{\pi}\) and \(\widehat{V}_{1}^{\pi}\) are the value functions of \(\pi\) under the models \(p\) and \(\widehat{p}\), respectively.2_
Footnote 2: The notation \(\widetilde{\mathcal{O}}\) omits logarithmic factors. In the original statement of Jin et al. (2020), \(\pi\) is non-stationary (time-dependent) but _independent_ of the history. However, the proof of the lemma also applies to history-dependent policies. The dependence on \(H\) in the sample complexity can be further improved with better reward-free exploration algorithms (Kaufmann et al., 2021; Menard et al., 2021), but this is not a focus of ours.
With the above result, we can learn a model \(\widehat{p}\) for our problem. Let \(\widehat{\mathbf{v}}_{\pi,f}\), \(\widehat{\mathbf{v}}_{\pi}\), and \(\widehat{\mathbf{v}}_{\pi,*}\) denote the payoff vectors induced under model \(\widehat{p}\). Lemma 3 then translates to the result stated in Lemma 4 in our setting. Note that the process facing the principal and the agent jointly during the learning process is effectively an MDP under the CIS assumption, where the effective state space is \(O\times\Omega\).
**Lemma 4**.: _Given CIS, a model \(\widehat{p}\) can be learned after \(\widetilde{\mathcal{O}}\left(\frac{H^{5}|O|^{2}|\Omega|^{2}|A|}{\epsilon^{2}}\right)\) episodes of exploration, such that \(\left|v_{\pi,f}^{\mathsf{A}}-\widehat{v}_{\pi,f}^{\mathsf{A}}\right|\leq\epsilon\) and \(\left|v_{\pi,f}^{\mathsf{P}}-\widehat{v}_{\pi,f}^{\mathsf{P}}\right|\leq\epsilon\) hold with probability at least \(1-q\) for any policy \(\pi\) and response strategy \(f\) of the agent._
Hence, the value functions change smoothly with the learned model \(\widehat{p}\). However, this smoothness is insufficient for deriving a sublinear bound on the principal's regret because of the IC constraints in our problem. The set \(\widehat{\Pi}\) of IC policies does not change smoothly with the model \(\widehat{p}\). To deal with this non-smoothness, we need to relax the IC constraints, and we define \(\epsilon\)-IC policies.
**Definition 5** (\(\epsilon\)-IC policies).: A policy \(\pi\) is \(\epsilon\)-IC (with respect to model \(\widehat{p}\)) if \(\widehat{v}_{\pi,*}^{\mathsf{A}}-\widehat{v}_{\pi}^{\mathsf{A}}\leq\epsilon\).
That is, when an \(\epsilon\)-IC policy is committed, the agent cannot improve her expected payoff by more than \(\epsilon\) if she deviates from her truthful response. We let \(\widehat{\Pi}_{\epsilon}\) denote the set of \(\epsilon\)-IC policies. Given Lemma 4, the following lemma is immediate; it implies that optimizing over \(\widehat{\Pi}_{2\epsilon}\) instead of \(\widehat{\Pi}\) ensures that the payoff yielded for the principal is at least as much as her optimal payoff under the true model (up to a small error). Meanwhile, the payoff loss introduced by this relaxation for the agent is also small.
**Lemma 6**.: \(\Pi\subseteq\widehat{\Pi}_{2\epsilon}\) _holds with probability at least \(1-q\)._
Proof.: By Lemma 4, the gap between \(\widehat{v}_{\pi,*}^{\mathsf{A}}-\widehat{v}_{\pi}^{\mathsf{A}}\) and \(v_{\pi,*}^{\mathsf{A}}-v_{\pi}^{\mathsf{A}}\) is at most \(2\epsilon\) with high probability. Since every \(\pi\in\Pi\) is IC, we have \(v_{\pi,*}^{\mathsf{A}}-v_{\pi}^{\mathsf{A}}\leq 0\) and in turn \(\widehat{v}_{\pi,*}^{\mathsf{A}}-\widehat{v}_{\pi}^{\mathsf{A}}\leq 2\epsilon\), so \(\pi\in\widehat{\Pi}_{2\epsilon}\).
With the above results, our approach to the learning problem is as follows.
1. Run reward-free exploration to obtain a model \(\widehat{p}\) that satisfies Lemma 3.
2. Find an optimal policy (w.r.t \(\widehat{p}\)) of the principal from \(\widehat{\Pi}_{2\epsilon}\) and use it in the remaining rounds.
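As a rough illustration, the schedule implied by these two steps can be sketched as follows; the split between exploration and commitment uses one standard choice of \(\epsilon\) that recovers the rate in Theorem 8 (ignoring logarithmic factors), and `reward_free_explore`, `solve_relaxed_policy`, and `run_episode` are placeholders rather than routines defined in this paper.

```python
import math

def explore_then_commit(T, H, n_O, n_Omega, n_A,
                        reward_free_explore, solve_relaxed_policy, run_episode):
    """Schematic explore-then-commit schedule for the two-step approach."""
    zeta = H**5 * n_O**2 * n_Omega**2 * n_A
    eps = (zeta / T) ** (1.0 / 3.0)                # balances the two regret sources
    n_explore = min(T, math.ceil(zeta / eps**2))   # ~ zeta^{1/3} T^{2/3} episodes
    p_hat = reward_free_explore(n_explore)         # step 1: reward-free exploration
    pi = solve_relaxed_policy(p_hat, 2 * eps)      # step 2: optimize over Pi_{2eps}
    for _ in range(T - n_explore):                 # commit in the remaining episodes
        run_episode(pi)
    return pi
```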
In particular, a near-optimal strategy for Step 2 can be computed efficiently according to Lemma 7, via an approach similar to Theorem 2. This leads to an efficient algorithm that guarantees sublinear regret for both players stated in Theorem 8.
**Lemma 7**.: _Given CIS, the distribution \(\pi_{h}(\cdot\,|\,\sigma,o,\boldsymbol{\omega})\) of a \(\delta\)-optimal strategy \(\pi\in\widehat{\Pi}_{\epsilon}\) of the principal can be computed in time \(\mathrm{poly}(|\mathcal{M}|,H,1/\epsilon)\), for any \((\sigma,o,\boldsymbol{\omega})\in\Sigma^{\mathsf{P}}\times O\times\Omega\) (where the \(\delta\)-optimality is with respect to the model \(\widehat{p}\))._
Proof sketch.: The proof is similar to the approach to computing a near-optimal strategy in \(\Pi\) (Lemma 1 and Theorem 2). Instead of maintaining two-dimensional sets of inducible payoffs, we introduce an additional dimension to capture the agent's maximum attainable payoff, i.e., the payoff given by her best response. (When we want to compute strategies that are exactly IC, the maximum attainable payoff should be the same as the agent's truthful payoff, so there is no need to maintain this additional dimension.) Hence, each \(\mathbf{v}\in\mathcal{V}(o)\) is now a tuple \((v^{\mathsf{P}},v^{\mathsf{A}},v_{*}^{\mathsf{A}})\).
The inducibility of a tuple \(\mathbf{v}=(v^{\mathsf{P}},v^{\mathsf{A}},v_{*}^{\mathsf{A}})\) is characterized by the following constraints. First, we impose the same constraint as Eq. (2) on the first two dimensions of \(\mathbf{v}\), so that they capture the players' payoffs under the agent's truthful response. In order for the third dimension \(v_{*}^{\mathsf{A}}\) to capture the agent's maximum attainable payoff, we use a constraint similar to Eq. (3):
\[v_{*}^{\mathsf{A}}\geq\sum_{\omega^{\mathsf{A}}}\max_{\widetilde {\omega}^{\mathsf{A}}}\ \sum_{a^{\mathsf{A}}}\ \max_{\tilde{a}^{\mathsf{A}}}\ \sum_{s,\omega^{\mathsf{P}},a^{\mathsf{P}}}\ \mu_{h}(s,\omega^{\mathsf{P}}\,|\,o,\omega^{ \mathsf{A}})\cdot\bar{\pi}(\mathtt{a}\,|\,\omega^{\mathsf{P}},\widetilde{\omega }^{\mathsf{A}})\cdot\left(r_{h}^{\mathsf{A}}\left(s,a^{\mathsf{P}},\tilde{a}^{ \mathsf{A}}\right)+\right.\] \[\left.\sum_{o^{\prime}}\phi_{h}\left(o^{\prime}\,|\,s,a^{ \mathsf{P}},\tilde{a}^{\mathsf{A}}\right)\cdot{v_{*}^{\prime}}^{\mathsf{A}}( \mathtt{a},\omega^{\mathsf{P}},\widetilde{\omega}^{\mathsf{A}},o^{\prime}) \right). \tag{8}\]
The remaining constraints are the same as Eqs. (4) and (5).
All the non-linear constraints can be linearized the same way as the approach described in Section 3.1. Hence, we can efficiently approximate \(\mathcal{V}_{h}(o)\) by examining the inducibility of points on a sufficiently fine-grained grid in \([0,H]^{3}\), which contains \(\operatorname{poly}(H,1/\delta)\) many points, and constructing the convex hull of these points. (Note that there is no need to ensure no compromise on the agent's payoff as was essential in the proof of Lemma 1. This is because \(\epsilon\)-IC is defined with respect solely to the agent's expected payoff at the beginning of the game instead of that at every time step. Hence, using points on a regular grid suffices for the purpose of the approximation in this proof.) The half-space representation of the convex hull can be computed efficiently given that it is in \(\mathbb{R}^{3}\) [10]. Eventually, an optimal \(\pi\in\widehat{\Pi}_{\epsilon}\) corresponds to a solution to \(\max_{\mathbf{v}\in\mathcal{V}_{1}(o_{*})}v^{\mathsf{P}}\) subject to \(v^{\mathsf{A}}\geq v^{\mathsf{A}}_{*}-\epsilon\), and we can use the same forward construction procedure in Section 3.2 to compute \(\pi_{h}(\cdot\left|\sigma,o,\boldsymbol{\omega}\right)\).
Note that Eq. (8) only requires \(v^{\mathsf{A}}_{*}\) to be an upper bound of the maximum attainable payoff instead of the exact value. This suffices for our purpose because any \((v^{\mathsf{P}},v^{\mathsf{A}},v^{\mathsf{A}}_{*})\) in the feasible set \(\mathcal{V}_{1}(o_{\star})\cap\left\{\mathbf{v}:v^{\mathsf{A}}\geq v^{\mathsf{ A}}_{*}-\epsilon\right\}\) also implies the inclusion of \((v^{\mathsf{P}},v^{\mathsf{A}},\bar{v}^{\mathsf{A}}_{*})\) in the same feasible set, where \(\bar{v}^{\mathsf{A}}_{*}\) is the actual maximum attainable payoff induced by the policy that induces \((v^{\mathsf{P}},v^{\mathsf{A}},v^{\mathsf{A}}_{*})\) according to our formulation.
**Theorem 8**.: _There exists an algorithm that guarantees regret \(\widetilde{\mathcal{O}}(\zeta^{1/3}T^{2/3})\) for both players with probability \(1-q\), where \(\zeta=H^{5}\left|O\right|^{2}\left|\Omega\right|^{2}\left|A\right|\). The computation involved in implementing the algorithm takes time \(\operatorname{poly}(\left|\mathcal{M}\right|,H,T)\) given CIS._
Proof.: We run reward-free exploration to obtain a model \(\widehat{p}\) with error bound \(\epsilon\). This takes \(\widetilde{\mathcal{O}}(\zeta/\epsilon^{2})\) rounds according to Lemma 4. Next, we compute an \(\epsilon\)-optimal strategy \(\pi\in\widehat{\Pi}_{2\epsilon}\) and use it in the remaining rounds. According to Lemma 7, this can be done efficiently.
By assumption, rewards are bounded in \([0,1]\) so the regrets are at most \(1\) for both players in each of the exploration rounds. In each of the remaining rounds, the agent's regret is
\[v^{\mathsf{A}}_{\pi,*}-v^{\mathsf{A}}_{\pi}\leq\underbrace{\left|\widehat{v}^{\mathsf{A}}_{\pi,*}-\widehat{v}^{\mathsf{A}}_{\pi}\right|}_{\leq 2\epsilon\text{ as }\pi\in\widehat{\Pi}_{2\epsilon}}+\underbrace{\left|\widehat{v}^{\mathsf{A}}_{\pi,*}-v^{\mathsf{A}}_{\pi,*}\right|+\left|\widehat{v}^{\mathsf{A}}_{\pi}-v^{\mathsf{A}}_{\pi}\right|}_{\text{each }\leq\epsilon\text{ by Lemma 4}}\leq 4\epsilon.\]
a broad class of succinctly represented \(n\)-player games (Papadimitriou and Roughgarden, 2008; Jiang and Leyton-Brown, 2011), but computing an optimal one is hard even in one-shot settings (Papadimitriou and Roughgarden, 2005). Identifying tractable special cases of the \(n\)-agent setting or scalable algorithms for practical applications could be interesting directions for future work.
Our work is theoretical by nature. We are not aware of any direct negative social impact but we acknowledge the possibility of misuse of the algorithms by malicious entities whose objectives can be formulated as sequential principal-agent problems. Therefore, specific applications of the algorithms should be carefully assessed to ensure safe and lawful utilization.
|
2307.14901
|
Text-guided Foundation Model Adaptation for Pathological Image
Classification
|
The recent surge of foundation models in computer vision and natural language
processing opens up perspectives in utilizing multi-modal clinical data to
train large models with strong generalizability. Yet pathological image
datasets often lack biomedical text annotation and enrichment. Guiding
data-efficient image diagnosis from the use of biomedical text knowledge
becomes a substantial interest. In this paper, we propose to Connect Image and
Text Embeddings (CITE) to enhance pathological image classification. CITE
injects text insights gained from language models pre-trained with a broad
range of biomedical texts, leading to adapt foundation models towards
pathological image understanding. Through extensive experiments on the
PatchGastric stomach tumor pathological image dataset, we demonstrate that CITE
achieves leading performance compared with various baselines especially when
training data is scarce. CITE offers insights into leveraging in-domain text
knowledge to reinforce data-efficient pathological image classification. Code
is available at https://github.com/Yunkun-Zhang/CITE.
|
Yunkun Zhang, Jin Gao, Mu Zhou, Xiaosong Wang, Yu Qiao, Shaoting Zhang, Dequan Wang
|
2023-07-27T14:44:56Z
|
http://arxiv.org/abs/2307.14901v1
|
# Text-guided Foundation Model Adaptation for Pathological Image Classification
###### Abstract
The recent surge of foundation models in computer vision and natural language processing opens up perspectives in utilizing multimodal clinical data to train large models with strong generalizability. Yet pathological image datasets often lack biomedical text annotation and enrichment. Guiding data-efficient image diagnosis from the use of biomedical text knowledge becomes a substantial interest. In this paper, we propose to **C**onnect **I**mage and **T**ext **E**mbeddings (CITE) to enhance pathological image classification. CITE injects text insights gained from language models pre-trained with a broad range of biomedical texts, leading to adapt foundation models towards pathological image understanding. Through extensive experiments on the PatchGastric stomach tumor pathological image dataset, we demonstrate that CITE achieves leading performance compared with various baselines especially when training data is scarce. CITE offers insights into leveraging in-domain text knowledge to reinforce data-efficient pathological image classification. Code is available at [https://github.com/Yunkun-Zhang/CITE](https://github.com/Yunkun-Zhang/CITE).
Keywords: Foundation models, Multi-modality, Model Adaptation, Pathological image classification.
## 1 Introduction
Deep learning for medical imaging has achieved remarkable progress, leading to a growing body of parameter-tuning strategies [1, 2, 3]. Those approaches are often designed to address disease-specific problems with limitations in their generalizability. In parallel, foundation models [4] have surged in computer vision [5, 6] and natural language processing [7, 8] with growing model capacity and data size, opening up perspectives in utilizing foundation models and large-scale clinical data for diagnostic tasks. However, pure imaging data can be insufficient to adapt foundation models with large model capacity to the medical field. Given the complex tissue characteristics of pathological whole slide images (WSI), it is crucial to develop adaptation strategies allowing (1) training data efficiency, and (2) data fusion flexibility for pathological image analysis.
Although foundation models promise a strong generalization ability [4], there is an inherent domain shift between medical and natural concepts in both vision
and language modalities. Pre-trained biomedical language models are increasingly applied to medical context understanding [9, 10, 11]. Language models prove to be effective in capturing semantic characteristics with a lower data acquisition and annotation cost in medical areas [12]. Such property is desired to address the dilemma of medical imaging cohorts, where well-annotated, high-quality medical imaging cohorts are expensive to collect and curate compared with text inputs [13]. In addition, vision-language models demonstrate the importance of joining multi-modal information for learning strong encoders [5, 6, 14]. Thus, connecting visual representations with text information from biomedical language models becomes increasingly critical to adapting foundation models for medical image classification, particularly in the challenging setting of data deficiency.
In this study, we propose CITE, a data-efficient adaptation framework that **C**onnects **I**mage and **T**ext **E**mbeddings from foundation models to perform pathological image classification with limited training samples (see Fig. 1). To enable language comprehension, CITE makes use of large language models pre-trained on biomedical text datasets [11, 10] with rich and professional biomedical knowledge. Meanwhile, for visual understanding, CITE only introduces a small number of trainable parameters to a pre-trained foundation model, for example, CLIP [5] and INTERN [6], in order to capture domain-specific knowledge without modifying the backbone parameters. In this framework, we emphasize the utility of text information to play a substitutive role as traditional classification heads, guiding the adaptation of the vision encoder. A favorable contribution of our approach is to retain the completeness of both pre-trained models, enabling a low-cost adaptation given the large capacity of foundation models. Overall, our contributions are summarized as follows:
1. We demonstrate the usefulness of injecting biomedical text knowledge into foundation model adaptation for improved pathological image classification.
2. CITE introduces only a small number of extra model parameters (\(\sim\)0.6% of the vision encoder), meanwhile keeping the pre-trained models frozen during
Figure 1: **Connecting Image and Text Embeddings.** Our CITE emphasizes a text-guided model adaptation. An image with the visual prompt is processed through a vision encoder and a projection layer. The text knowledge is embedded by a text encoder, where a stop-gradient operation is applied. Classification prediction is made by the similarity between image and text embeddings. During adaptation, the visual prompt and the projection are tuned while the pre-trained encoders are frozen.
adaptation, leading to strong compatibility with a variety of backbone model architectures.
3. CITE is simple yet effective that outperforms supervised learning, visual prompt tuning, and few-shot baselines by a remarkable margin, especially under the data deficiency with limited amounts of training image samples (_e.g._, using only 1 to 16 slides per class).
## 2 Related Work
**Medical Image Classification.** Deep learning for medical image classification has long relied on training large models from scratch [15, 1]. Also, fine-tuning or linear-probing the pre-trained models obtained from natural images [16, 17, 18] is reasonable. However, those methods are supported by sufficient high-quality data expensive to collect and curate [19]. In addition, task-specific models do not generalize well with different image modalities [2]. To tackle this issue, we emphasize the adaptation of foundation models in a data-efficient manner.
**Vision-Language Pre-training.** Recent work has made efforts in pre-training vision-language models. CLIP [5] collects 400 million image-text pairs from the internet and trains aligned vision and text encoders from scratch. LiT [20] trains a text encoder aligned with a fixed pre-trained vision encoder. BLIP-2 [14] trains a query transformer by bootstrapping from pre-trained encoders. REACT [21] fixes both pre-trained encoders and tunes extra gated self-attention modules. However, those methods establish vision-language alignment by pre-training on large-scale image-text pairs. Instead, we combine pre-trained unimodal models on downstream tasks and build a multi-modal classifier with only a small amount of data.
**Model Adaptation via Prompt Tuning.** Prompt tuning proves to be an efficient adaptation method for both vision and language models [22, 23]. Originating from natural language processing, "prompting" refers to adding (manual) text instructions to model inputs, whose goal is to help the pre-trained model better understand the current task. For instance, CoOp [22] introduces learnable prompt parameters to the text branch of vision-language models. VPT [23] demonstrates the effectiveness of prompt tuning with pre-trained vision encoders. In this study, we adopt prompt tuning for adaptation because it is lightweight and only modifies the input while keeping the whole pre-trained model unchanged. However, existing prompt tuning methods lack expert knowledge and understanding of downstream medical tasks. To address this challenge, we leverage large language models pre-trained with biomedical text to inject medical domain knowledge.
**Biomedical Language Model Utilization.** Biomedical text mining promises to offer the necessary knowledge base in medicine [9, 10, 11]. Leveraging language models pre-trained with biomedical text for medical language tasks is a common application. For instance, Alsentzer et al. [9] pre-train a clinical text model with BioBERT [10] initialization and show a significant improvement on five clinical language tasks. However, the potential of biomedical text information in medical imaging applications has not been explicitly addressed. In our efforts, we emphasize the importance of utilizing biomedical language models for adapting foundational vision models into cancer pathological analysis.
## 3 Methodology
Fig. 2 depicts an overview of our approach CITE for data-efficient pathological image classification. CITE jointly understands the image features extracted by vision encoders pre-trained with natural imaging, and text insights encoded in large language models pre-trained with biomedical text (_e.g._, BioLinkBERT [11] which captures rich text insights spanning across biomedical papers via citations). We connect text and imaging by a projection and classify the images by comparing the cosine similarity between image and text embeddings.
Importantly, we introduce two low-cost sets of trainable parameters to the vision encoder in order to adapt the model with the guidance of text information. They are (1) prompt tokens in the input space to model task-specific information, and (2) a projection layer in the latent space to align image and text embeddings. During model adaptation, we freeze the pre-trained encoders and only tune the introduced parameters, which not only saves remarkable training data and computational resources but also makes our approach favorable with various foundation model architectures.
### Connecting Text and Imaging
An image \(I\) to be classified is processed through a pre-trained vision encoder to generate the image embedding \(x_{v}\) with dimension \(d_{v}\), where \(v\) stands for "vision":
\[x_{v}=\texttt{VisionEncoder}(I)\qquad\qquad x_{v}\in\mathbb{R}^{d_{v}}. \tag{1}\]
Figure 2: **An overview of CITE.** (a) The pathological images are cut into patches. (b) The class token, image tokens, and learnable prompt tokens are concatenated. (c) The tokens are processed by a pre-trained vision transformer to generate image embeddings. Those 3 steps refer to _learning visual prompt_ (Sec. 3.2). (d) The image is recognized as the class with maximum cosine similarity between image and text embeddings. (e) The class names are processed by a biomedical language model to generate text embeddings. Those 2 steps _connect text and imaging_ (Sec. 3.1).
For the label information, we encode the class names \(T_{c}\) (\(c\in[1,C]\)) with a pre-trained biomedical language model instead of training a classification head (see Fig. 2(e)). We tokenize and process \(T_{c}\) through the language encoder to generate the text embedding \(x_{l}^{c}\) with dimension \(d_{l}\), where \(l\) stands for "language":
\[x_{l}^{c}=\texttt{LanguageEncoder}(\texttt{Tokenizer}(T_{c}))\hskip 28.452756ptx _{l}^{c}\in\mathbb{R}^{d_{l}}. \tag{2}\]
Vision-language models like CLIP [5] contain both a vision encoder and a language encoder, which provide well-aligned embeddings in the same feature space. In this case, prediction \(\hat{y}\) is obtained by applying softmax on scaled cosine similarities between the image and text embeddings (see Fig. 2(d)):
\[p(\hat{y}=c|I)=\frac{\exp(\text{sim}(x_{l}^{c},x_{v})/\tau)}{\sum_{c^{\prime}= 1}^{C}\exp(\text{sim}(x_{l}^{c^{\prime}},x_{v})/\tau)}, \tag{3}\]
where \(\text{sim}(\cdot,\cdot)\) refers to cosine similarity and \(\tau\) is the temperature parameter.
For irrelevant vision and language encoders, we introduce an extra projection layer to the end of the vision encoder to map the image embeddings to the same latent space as the text embeddings. We replace \(x_{v}\) in Eq. (3) with \(x_{v}^{\prime}\):
\[x_{v}^{\prime}=\texttt{Projection}(x_{v})\hskip 56.905512ptx_{v}^{\prime}\in \mathbb{R}^{d_{l}}. \tag{4}\]
During adaptation, the extra parameters are updated by minimizing the cross-entropy of the predictions from Eq. (3) and the ground truth labels.
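A condensed PyTorch-style sketch of Eqs. (1)–(4) is given below. The encoder call and the temperature value are stand-ins for the pre-trained models and hyper-parameters detailed in Section 4; only the projection (together with the visual prompt of Section 3.2) would receive gradients.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CITEHead(nn.Module):
    """Classify image embeddings by cosine similarity to fixed text embeddings."""

    def __init__(self, text_embeddings, d_v, tau=0.01):
        super().__init__()
        # Text embeddings x_l^c (C x d_l), pre-computed once by the frozen
        # biomedical language model; detach() mirrors the stop-gradient in Fig. 1.
        self.register_buffer("text", F.normalize(text_embeddings.detach(), dim=-1))
        # Trainable projection mapping image features into the text space, Eq. (4).
        self.proj = nn.Linear(d_v, text_embeddings.shape[-1], bias=False)
        self.tau = tau  # temperature of Eq. (3); the value here is only a placeholder

    def forward(self, x_v):
        x = F.normalize(self.proj(x_v), dim=-1)   # x'_v, unit-normalized
        return x @ self.text.t() / self.tau       # scaled cosine similarities

# Schematic training step: cross-entropy on the similarity logits updates only
# the projection (and prompt tokens), while both encoders stay frozen.
# loss = F.cross_entropy(head(vision_encoder(images)), labels)
```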
### Learning Visual Prompt
Medical concepts exhibit a great visual distribution shift from natural images, which makes it impractical for a fixed vision encoder to capture task-specific information in few-shot scenarios. Visual prompt tuning (VPT [23]) is a lightweight adaptation method that can alleviate such an inherent difference by tuning only the prompt tokens added to the visual inputs of a fixed vision transformer [24], showing impressive performance especially under data deficiency. Thus, we adopt VPT to adapt the vision encoder in our approach.
A vision transformer first cuts the image into a sequence of \(n\) patches and projects them to patch embeddings \(E_{0}\in\mathbb{R}^{n\times d_{v}}\), where \(d_{v}\) represents the visual embedding dimension. A \(\texttt{CLS}\) token \(c_{0}\in\mathbb{R}^{d_{v}}\) is prepended to the embeddings, together passing through \(K\) transformer layers \(\{L_{v}^{k}\}_{k=1,2,\dots,K}\). \(\texttt{CLS}\) embedding of the last layer output is the image feature \(x_{v}\). Following the setting of shallow VPT, we concatenate the learnable prompt tokens \(\mathbf{P}=[\mathbf{p}^{1},\dots,\mathbf{p}^{p}]\in\mathbb{R}^{p\times d_{v}}\), where \(p\) is the prompt length, with \(\texttt{CLS}\) token \(c_{0}\) and patch embeddings \(E_{0}\) before they are processed through the first transformer layer:
\[\begin{split}[c_{1},\mathbf{Z}_{1},E_{1}]&=L_{v}^{1}([c_{0},\mathbf{P},E_{0}])\\ [c_{k},\mathbf{Z}_{k},E_{k}]&=L_{v}^{k}([c_{k-1},\mathbf{Z}_{k-1},E_{k-1}])\qquad k=2,3,\dots,K\\ x_{v}&=c_{K}\qquad\qquad\qquad x_{v}\in\mathbb{R}^{d_{v}},\end{split}\tag{5}\]
where \([\cdot,\cdot]\) refers to concatenation along the sequence length dimension, and \(\mathbf{Z}_{k}\in\mathbb{R}^{p\times d_{v}}\) represents the output embeddings of the \(k\)-th transformer layer at the position of the prompts (see Fig. 2(a-c)). The prompt parameters are updated together with the projection layer introduced in Section 3.1.
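The shallow prompt insertion of Eq. (5) can be sketched as follows. The backbone interface is simplified: `embed_patches` and `transformer` are assumed helper hooks standing in for the patch-embedding and transformer-layer stages of an actual ViT implementation, and the token width `d_v` is backbone-dependent.

```python
import torch
import torch.nn as nn

class ShallowVisualPrompt(nn.Module):
    """Prepend p learnable prompt tokens to the token sequence of a frozen ViT."""

    def __init__(self, vit, d_v, prompt_length=1):
        super().__init__()
        self.vit = vit.eval()                      # frozen backbone
        for param in self.vit.parameters():
            param.requires_grad_(False)
        self.prompt = nn.Parameter(torch.empty(1, prompt_length, d_v))
        nn.init.normal_(self.prompt, std=0.02)

    def forward(self, images):
        tokens = self.vit.embed_patches(images)    # assumed hook: [B, 1 + n, d_v]
        cls_tok, patches = tokens[:, :1], tokens[:, 1:]
        prompt = self.prompt.expand(images.shape[0], -1, -1)
        x = torch.cat([cls_tok, prompt, patches], dim=1)   # [c_0, P, E_0]
        x = self.vit.transformer(x)                # assumed hook: layers L_v^1..L_v^K
        return x[:, 0]                             # CLS output c_K = image feature
```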
## 4 Experimental Settings
**Dataset.** We adopt the PatchGastric [25] dataset, which includes histopathological image patches extracted from H&E stained whole slide images (WSI) of stomach adenocarcinoma endoscopic biopsy specimens. There are 262,777 patches of size \(300\times 300\) extracted from 991 WSIs at x20 magnification. The dataset contains 9 subtypes of gastric adenocarcinoma. We choose 3 major subtypes including "well differentiated tubular adenocarcinoma", "moderately differentiated tubular adenocarcinoma", and "poorly differentiated adenocarcinoma" to form a 3-class grading-like classification task with 179,285 patches from 693 WSIs. We randomly split the WSIs into _train_ (20%) and _validation_ (80%) subsets for measuring the model performance. To extend our evaluation into the real-world setting with insufficient data, we additionally choose 1, 2, 4, 8, or 16 WSIs with the largest numbers of patches from each class as the training set. The evaluation metric is patient-wise accuracy, where the prediction of a WSI is obtained by a soft vote over the patches, and accuracy is averaged class-wise.
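For concreteness, the patient-wise metric can be computed as in the following sketch, which assumes patch-level softmax outputs have already been collected; array and variable names are illustrative.

```python
import numpy as np

def patient_wise_accuracy(patch_probs, patch_slide_ids, slide_labels, n_classes=3):
    """Soft-vote patch probabilities per WSI, then average accuracy class-wise.

    patch_probs     : (N, n_classes) softmax outputs for N patches
    patch_slide_ids : (N,) slide identifier of each patch
    slide_labels    : dict mapping slide id -> ground-truth class index
    Assumes every class appears at least once among the evaluated slides.
    """
    hits = np.zeros(n_classes)
    totals = np.zeros(n_classes)
    for sid in np.unique(patch_slide_ids):
        mean_prob = patch_probs[patch_slide_ids == sid].mean(axis=0)  # soft vote
        pred, true = int(mean_prob.argmax()), slide_labels[sid]
        totals[true] += 1
        hits[true] += float(pred == true)
    return float((hits / totals).mean())   # class-wise averaged (balanced) accuracy
```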
**Implementation.** We use CLIP ViT-B/16 [5] as the visual backbone, with input image size \(224\times 224\), patch size \(16\times 16\), and embedding dimension \(d_{v}=512\). We adopt BioLinkBERT-large [11] as the biomedical language model, with embedding dimension \(d_{l}=1,024\). To show the extensibility of our approach, we additionally test on vision encoders including ImageNet-21k ViT-B/16 [26, 24] and INTERN ViT-B/16 [6], and biomedical language model BioBERT-large [10]. Our implementation is based on CLIP4, HuggingFace5 and MMClassification6.
Footnote 4: [https://github.com/openai/CLIP](https://github.com/openai/CLIP)
Footnote 5: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
Footnote 6: [https://github.com/open-mmlab/mmclassification](https://github.com/open-mmlab/mmclassification)
**Training Details.** Prompt length \(p\) is set to 1. We resize the images to \(224\times 224\) to fit the model and follow the original data pipeline in PatchGastric [25]. A class-balanced sampling strategy is adopted by choosing one image from each class in turn. Training is done with 1,000 iterations of stochastic gradient descent (SGD), and the mini-batch size is 128, requiring 11.6GB of GPU memory and 11 minutes on two NVIDIA GeForce RTX 2080 Ti GPUs. All our experiment results are averaged on 3 random seeds unless otherwise specified.
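One way to realize the class-balanced sampling described above is sketched below; it simply cycles through the classes, drawing one image index per class in turn, and is illustrative rather than the exact sampler used in our implementation.

```python
import random
from itertools import cycle

def class_balanced_batches(indices_by_class, batch_size=128):
    """Yield index batches drawing one image from each class in turn."""
    pools = {c: cycle(random.sample(list(idx), len(idx)))  # shuffled, repeating pools
             for c, idx in indices_by_class.items()}
    class_order = cycle(sorted(indices_by_class))
    while True:                                            # the training loop draws 1,000 batches
        yield [next(pools[next(class_order)]) for _ in range(batch_size)]

# Usage sketch:
# for step, batch in zip(range(1000), class_balanced_batches(indices_by_class)): ...
```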
## 5 Results
**CITE consistently outperforms all baselines under all data scales.** Fig. 3 shows the classification accuracy on the PatchGastric dataset of our approach
compared with baseline methods and related works, including (1) R50-21k: fine-tune the whole ResNet50 [27] backbone pre-trained on ImageNet-21k [26]. (2) Linear probe: train a classification head while freezing the backbone encoder. (3) Fine-tune: train a classification head together with the backbone encoder. (4) CLAM [18]: apply an attention network on image features to predict pseudo labels and cluster the images. (5) Zero-shot [5]: classify images to the nearest text embeddings obtained by class names, without training. (6) Few-shot [28]: cluster image features of the training data and classify images to the nearest class center. (7) VPT [23]: train a classification head together with visual prompts. Note that CLIP ViT-B/16 vision encoder is adopted as the backbone for (2)-(7). Our CITE outperforms all baselines that require training classification heads, as well as image feature clustering methods, demonstrating the key benefit of leveraging additional biomedical text information for pathological image classification.
**CITE shows a favorable improvement when data is scarce.** When only one training slide per class is available, CITE achieves a remarkable performance, outperforming all baselines by a significant margin (from 51.4% to 60.2%). As data deficiency is commonly seen in medical tasks, CITE presents an appealing property to handle data-limited pathological analysis. Together, our findings demonstrate that adding domain-specific text information provides an efficient means to guide foundation model adaptation for pathological image diagnosis.
**Visual prompt and text information are both necessary.** We conduct ablation studies to show the effectiveness of visual prompt learning and text information. From the results in Table 1, we demonstrate that visual prompt
Figure 3: **Accuracy on the PatchGastric [25] 3-category classification task.** R50-21k refers to ResNet50 [27] backbone pre-trained on ImageNet-21k [26]. Other methods adopt CLIP ViT-B/16 [5] backbone. Averaged results and standard deviation (error bars) of 3 runs are displayed. Our CITE consistently outperforms all baselines under all data fractions, showing a remarkable improvement under data deficiency.
learning outperforms fine-tuning as the adaptation method, and in-domain text information outperforms classification heads. Combining the two components yields the best results under all data scales. Importantly, text information is particularly effective when training data is extremely scarce (1 slide per class).
**CITE shows model extensibility.** We evaluate our approach with additional backbones and biomedical language models to assess its potential extensibility. Table 2 displays the findings of our approach compared with linear probe and fine-tune baselines. The results demonstrate that CITE is compatible with a variety of pre-trained models, making it immune to upstream model modifications. The text information encoded in biomedical language models allows vision models pre-trained with natural imaging to bridge the domain gap without task-specific pre-training on medical imaging. Importantly, when using both the vision and language encoders of CLIP ViT-B/16, our approach still outperforms the baselines by a remarkable margin (47.7% to 60.1%), demonstrating the impor
| Prompt | Text | 1 | 2 | 4 | 8 | 16 | All |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✗ | ✗ | 39.1±0.6 | 39.0±0.8 | 44.1±2.2 | 51.7±1.6 | 57.1±0.3 | 66.0±1.2 |
| ✓ | ✗ | 47.9 | 57.6±0.5 | 57.6±0.2 | 60.6±0.4 | 62.2±0.6 | 66.1±0.9 |
| ✓ | ✓ | **60.1±0.9** | **59.0±0.1** | **60.9±0.9** | **63.2±0.2** | **65.9±0.5** | **68.7±0.6** |
Table 1: **Ablation study of CITE with and without prompt and text.** We report the average accuracy and standard deviation. When prompt is not used, we fine-tune the whole vision backbone. When text is not used, we adopt the traditional classification head. Each component improves the performance.
| Visual | Method | Textual | 1 | 2 | 4 | 8 | 16 | All |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CLIP ViT-B/16 | Linear | - | 47.7±0.1 | 49.9±0.1 | 51.2±0.1 | 60.3±0.1 | 61.4±0.1 | 65.4±0.1 |
| CLIP ViT-B/16 | Fine-tune | - | 39.1±1.2 | 39.0±1.2 | 44.1±1.2 | 51.7±1.2 | 57.1±1.2 | 66.3±1.2 |
| CLIP ViT-B/16 | CITE | CLIP | 60.1±0.9 | 59.0±0.1 | **60.9±0.9** | 63.2±0.2 | 65.9±0.5 | 68.7±0.6 |
| CLIP ViT-B/16 | CITE | BLB | **60.2±1.2** | **59.1±1.2** | 60.3±0.8 | **66.4±0.7** | **67.9±0.4** | **69.7±0.1** |
| IN-21k ViT-B/16 | Linear | - | 46.7±0.7 | 45.8±1.6 | 53.4±1.2 | 59.5±0.5 | 60.6±0.6 | 66.5±0.8 |
| IN-21k ViT-B/16 | Fine-tune | - | 48.0±0.3 | 49.6±0.1 | 50.8±0.1 | 59.3±0.3 | 62.2±0.4 | 66.3±0.2 |
| IN-21k ViT-B/16 | CITE | BB | 51.4±1.4 | 51.8±1.3 | 56.6±1.9 | 62.7±1.0 | 64.0±0.5 | 67.2±1.4 |
| IN-21k ViT-B/16 | CITE | BLB | **52.4±1.5** | **52.7±0.8** | **57.0±0.9** | **62.8±1.2** | **64.5±1.1** | **67.4±0.7** |
| INTERN ViT-B/16 | Linear | - | 47.3±0.2 | 47.2±0.2 | 52.4±0.5 | 59.7±0.3 | 63.1±0.2 | 66.8±0.7 |
| INTERN ViT-B/16 | Fine-tune | - | 42.0±0.3 | 46.0±0.3 | 51.0±0.9 | 60.4±0.1 | 62.7±0.5 | 68.2±0.4 |
| INTERN ViT-B/16 | CITE | BB | **51.7±0.1** | **55.4±1.8** | **59.6±0.3** | **66.4±0.8** | **68.1±0.8** | |
| INTERN ViT-B/16 | CITE | BLB | 48.4±5.2 | 49.1±5.5 | 57.9±0.8 | 65.3±0.4 | 67.9±0.8 | 69.4±0.9 |
Table 2: **CITE fits in with various pre-trained encoders.** We include CLIP ViT-B/16 [5], ImageNet-21k ViT-B/16 [26] and INTERN ViT-B/16 [6] visual encoders, combined with CLIP textual encoder [5], BioBERT (BB) [10] and BioLinkBERT (BLB) [11] language models. The highest performance of each visual encoder is bolded. For each combination, CITE consistently outperforms linear and fine-tune baselines.
tance of multi-modal information. While CLIP gains such modality matching through pre-training, our CITE shows an appealing trait: irrelevant vision and language models can be combined to exhibit similar multi-modal insights on pathological tasks without the need for joint pre-training.
## 6 Conclusion
Adapting powerful foundation models into medical imaging constantly faces data-limited challenges. In this study, we propose CITE, a data-efficient and model-agnostic approach to adapt foundation models for pathological image classification. Our key contribution is to inject meaningful medical domain knowledge to advance pathological image embedding and classification. By tuning only a small number of parameters guided by biomedical text information, our approach effectively learns task-specific information with only limited training samples, while showing strong compatibility with various foundation models. To augment the current pipeline, the use of synthetic pathological images is promising [29]. Also, foundation training on multi-modal medical images is of substantial interest to enhance model robustness under data-limited conditions [30].
|
2308.15242
|
Distance Labeling for Families of Cycles
|
For an arbitrary finite family of graphs, the distance labeling problem asks
to assign labels to all nodes of every graph in the family in a way that allows
one to recover the distance between any two nodes of any graph from their
labels. The main goal is to minimize the number of unique labels used. We study
this problem for the families $\mathcal{C}_n$ consisting of cycles of all
lengths between 3 and $n$. We observe that the exact solution for directed
cycles is straightforward and focus on the undirected case. We design a
labeling scheme requiring $\frac{n\sqrt{n}}{\sqrt{6}}+O(n)$ labels, which is
almost twice less than is required by the earlier known scheme. Using the
computer search, we find an optimal labeling for each $n\le 17$, showing that
our scheme gives the results that are very close to the optimum.
|
Arseny M. Shur, Mikhail Rubinchik
|
2023-08-29T12:06:48Z
|
http://arxiv.org/abs/2308.15242v1
|
# Distance Labeling for Families of Cycles+
###### Abstract
For an arbitrary finite family of graphs, the distance labeling problem asks to assign labels to all nodes of every graph in the family in a way that allows one to recover the distance between any two nodes of any graph from their labels. The main goal is to minimize the number of unique labels used. We study this problem for the families \(\mathcal{C}_{n}\) consisting of cycles of all lengths between \(3\) and \(n\). We observe that the exact solution for directed cycles is straightforward and focus on the undirected case. We design a labeling scheme requiring \(\frac{n\sqrt{n}}{\sqrt{6}}+O(n)\) labels, which is almost twice less than is required by the earlier known scheme. Using the computer search, we find an optimal labeling for each \(n\leq 17\), showing that our scheme gives the results that are very close to the optimum.
Keywords: Distance labeling, Graph labeling, Cycle
## 1 Introduction
_Graph labeling_ is an important and active area in the theory of computing. A typical problem involves a parametrized finite family \(\mathcal{F}_{n}\) of graphs (e.g., all planar graphs with \(n\) nodes) and a natural function \(f\) on nodes (e.g., distance for _distance labeling_ or adjacency for _adjacency labeling_). The problem is to assign labels to all nodes of every graph in \(\mathcal{F}_{n}\) so that the function \(f\) can be computed solely from the labels of its arguments. Note that the algorithm computing \(f\) knows \(\mathcal{F}_{n}\) but not a particular graph the nodes belong to. The main goal is to minimize the number of distinct labels or, equivalently, the maximum length of a label in bits. Additional goals include the time complexity of computing both \(f\) and the labeling function. In this paper, we focus solely on the main goal.
The area of graph labeling has a rather long history, which can be traced back at least to the papers [6, 7]. The main academic interest in this area is in finding the limits of efficient representation of information. For example, the adjacency labeling of \(\mathcal{F}_{n}\) with the minimum number of labels allows one to build the smallest "universal" graph, containing all graphs from \(\mathcal{F}_{n}\) as induced subgraphs. Similarly, the optimal distance labeling of \(\mathcal{F}_{n}\) gives the smallest "universal" matrix, containing the distance matrices of all graphs from \(\mathcal{F}_{n}\) as principal minors.
The distributed nature of labeling makes it also interesting for practical applications such as distributed data structures and search engines [2; 8; 12], routing protocols [9; 19] and communication schemes [20].
The term _distance labeling_ was coined by Peleg in 1999 [18], though some of the results are much older [15; 20]. Let us briefly recall some remarkable achievements. For the family of all undirected graphs with \(n\) nodes it is known that the labels of length at least \(\frac{n}{2}\) bits are necessary [16; 17]. The first labeling scheme with \(O(n)\)-bit labels was obtained by Graham and Pollak [15]. The state-of-the-art labeling by Alstrup et al. [2] uses labels of length \(\frac{\log 3}{2}n+o(n)\) bits1 and allows one to compute the distances in \(O(1)\) time (assuming the word-RAM model).
Footnote 1: Throughout the paper, log stands for the binary logarithm.
For planar graphs with \(n\) nodes, the lower bound of \(\Omega(\sqrt[3]{n})\) bits per label and a scheme using \(O(\sqrt{n}\log n)\) bits per label were presented in [13]. Recently, Gawrychowski and Uznanski [14] managed to shave the log factor from the upper bound. Some reasons why the gap between the lower and upper bounds is hard to close are discussed in [1].
For trees with \(n\) nodes, Peleg [18] presented a scheme with \(\Theta(\log^{2}n)\)-bits labels. Gavoille et al. [13] proved that \(\frac{1}{8}\log^{2}n\) bits are required and \(\approx 1.7\log^{2}n\) bits suffice. Alstrup et al. [4] improved these bounds to \(\frac{1}{4}\log^{2}n\) and \(\frac{1}{2}\log^{2}n\) bits respectively. Finally, Freedman et al. [10] reached the upper bound \((\frac{1}{4}+o(1))\log^{2}n\), finalizing the asymptotics up to lower order terms.
Further, some important graph families require only polynomially many labels and thus \(O(\log n)\) bits per label. Examples of such families include interval graphs [11], permutation graphs [5], caterpillars and weighted paths [4].
Cycles are among the simplest graphs, so it may look surprising that graph labeling problems for the family \(\mathcal{C}_{n}\) of all cycles up to length \(n\) are not settled yet. A recent result [3] states that \(n+\sqrt[3]{n}\) labels are necessary and \(n+\sqrt{n}\) labels are sufficient for the _adjacency_ labeling of \(\mathcal{C}_{n}\). Still, there is a gap between the lower and the upper bounds. (As in the case of planar graphs, this is the gap between \(\sqrt[3]{n}\) and \(\sqrt{n}\), though at a different level.)
To the best of our knowledge, no papers were published yet on the _distance_ labeling of the family \(\mathcal{C}_{n}\). However, there are some folklore results2, namely, a lower bound of \(\Omega(n^{4/3})\) labels and a labeling scheme requiring \(O(n^{3/2})\) labels; together they produce another gap between \(\sqrt[3]{n}\) and \(\sqrt{n}\).
Footnote 2: Communicated to us by E. Porat.
In this paper we argue that the upper estimate is correct. We describe a distance labeling scheme for \(\mathcal{C}_{n}\) that uses almost half as many labels as the folklore scheme. While this is a rather small improvement, we conjecture that our scheme produces labelings that are optimal up to an additive \(O(n)\) term. To support this conjecture, we find the optimal number of labels for the families \(\mathcal{C}_{n}\) up to \(n=17\). Then we compare the results of our scheme with an extrapolation of the optimal results, demonstrating that the difference is a linear term with a small constant. Finally, we describe several improvements to our scheme that further reduce this constant.
## 2 Preliminaries
Given two nodes \(u,v\) in a graph, the _distance_\(d(u,v)\) is the length of the shortest \((u,v)\)-path. Suppose we are given a finite family \(\mathcal{F}=\{(V_{1},E_{1}),\ldots,(V_{t},E_{t})\}\) of graphs and an arbitrary set \(\mathcal{L}\), the elements of which are called _labels_. A _distance labeling_ of \(\mathcal{F}\) is a function \(\phi:V_{1}\cup\cdots\cup V_{t}\rightarrow\mathcal{L}\) such that there exists a function \(d^{\prime}:\mathcal{L}^{2}\rightarrow\mathbb{Z}\) satisfying, for every \(i\) and each \(u,v\in V_{i}\), the equality \(d^{\prime}(\phi(u),\phi(v))=d(u,v)\). Since \(d(u,u)=0\), no label can appear in the same cycle twice. Thus for every single graph in \(\mathcal{F}\) we view labels as unique names for nodes and identify each node with its label when speaking about paths, distances, etc.
In the rest of the paper, _labeling_ always means distance labeling. A _labeling scheme_ (or just _scheme_) for \(\mathcal{F}\) is an algorithm that assigns labels to all nodes of all graphs in \(\mathcal{F}\). A scheme is _valid_ if it outputs a distance labeling.
We write \(C_{n}\) for the undirected cycle on \(n\) nodes and let \(\mathcal{C}_{n}=\{C_{3},\ldots,C_{n}\}\). The family \(\mathcal{C}_{n}\) is the main object of study in this paper. We denote the minimum number of labels in a labeling of \(\mathcal{C}_{n}\) by \(\lambda(n)\).
### Warm-up: Labeling Directed Cycles
Consider a distance labeling of the family \(\mathcal{C}_{n}^{\bigcirc}=\{C_{3}^{\bigcirc},\ldots,C_{n}^{\bigcirc}\}\) of _directed_ cycles. Here, the distance between two vertices is the length of the _unique_ directed path between them. We write \(\lambda_{D}(n)\) for the minimum number of labels needed to label \(\mathcal{C}_{n}^{\bigcirc}\). A bit surprisingly, the exact formula for \(\lambda_{D}(n)\) can be easily found.
Proposition 1: \(\lambda_{D}(n)=\frac{n^{2}+2n+n\bmod 2}{4}\)_._
Proof: Let \(u\) and \(v\) be two labels in \(C_{i}^{\bigcirc}\). Then \(d(u,v)+d(v,u)=i\). Hence \(u\) and \(v\) cannot appear together in any other cycle. Thus every two cycles have at most one label in common. Then \(C_{n-1}^{\bigcirc}\) contains at least \(n-2\) labels unused for \(C_{n}^{\bigcirc}\), \(C_{n-2}^{\bigcirc}\) contains at least \(n-4\) labels unused for both \(C_{n}^{\bigcirc}\) and \(C_{n-1}^{\bigcirc}\), and so on. This gives the total of at least \(n+(n-2)+(n-4)+\cdots+n\bmod 2\) labels, which sums up exactly to the stated formula. To build a labeling with this number of labels, label cycles in decreasing order; when labeling \(C_{i}^{\bigcirc}\), reuse one label from each of the larger cycles such that no label is reused twice.
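As a quick illustration (ours, not part of the original text), the closed form can be checked in a few lines of Python against the telescoping sum \(n+(n-2)+(n-4)+\cdots\) from the proof:

```python
def lambda_directed(n):
    """Closed form from Proposition 1."""
    return (n * n + 2 * n + n % 2) // 4

def lambda_directed_series(n):
    """The sum n + (n-2) + (n-4) + ... + (n mod 2) from the counting argument."""
    return sum(range(n, 0, -2))

# e.g. lambda_D(5) = 9 and lambda_D(6) = 12; the two expressions agree for every n
assert all(lambda_directed(n) == lambda_directed_series(n) for n in range(3, 200))
```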
### Basic Facts on Labeling Undirected Cycles
From now on, all cycles are undirected, so the distance between two nodes in a cycle is the length of the shortest of two paths between them. The maximal distance in \(C_{i}\) is \(\lfloor i/2\rfloor\). We often view a cycle as being drawn on a circumference, with equal distances between adjacent nodes, and appeal to geometric properties.
We say that a set \(\{u,v,w\}\) of labels occurring in a cycle is a _triangle_ if each of the numbers \(d(u,v),d(u,w),d(v,w)\) is strictly smaller than the sum of the other two. Note that three nodes are labeled by a triangle if and only if they form an acute triangle on a circumference (see Fig. 1).
Lemma 1: _In a labeling of a family of cycles each triangle appears only once._
Proof: If \(\{u,v,w\}\) is a triangle in \(C_{i}\), then \(d(u,v)+d(u,w)+d(v,w)=i\).
Lemma 1 implies the only known lower bound on the number of labels for \(\mathcal{C}_{n}\).
Proposition 2: _Each labeling of \(\mathcal{C}_{n}\) contains \(\Omega(n^{4/3})\) distinct labels._
Proof: As \(n\) tends to infinity, the probability that a random triple of labels of \(C_{n}\) forms a triangle approaches the probability that three random points on a circumference generate an acute triangle. The latter probability is \(1/4\): this is a textbook exercise in geometric probability. Thus, the set of labels of \(C_{n}\) contains \(\Omega(n^{3})\) triangles. By Lemma 1, the whole set of labels contains \(\Omega(n^{4})\) distinct triangles; they contain \(\Omega(n^{4/3})\) unique labels.
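The probability \(1/4\) invoked here is easy to confirm numerically; the short sketch below (our own illustration) samples random triples on a circumference and uses the standard criterion that an inscribed triangle is acute exactly when each of the three arcs between consecutive points is shorter than a semicircle:

```python
import random

def acute_fraction(trials=200_000, seed=0):
    """Estimate P(three uniform random points on a circumference form an acute triangle)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = sorted(random.random() for _ in range(3))
        arcs = (b - a, c - b, 1.0 - (c - a))   # the three arcs, circumference normalized to 1
        hits += max(arcs) < 0.5                # acute iff every arc is shorter than a semicircle
    return hits / trials

print(acute_fraction())   # prints a value close to 0.25
```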
By _diameter_ of a cycle \(C_{i}\) we mean not only the maximum length of a path between its nodes (i.e., \(\lfloor i/2\rfloor\)) but also any path of this length. From the context it is always clear whether we speak of a path or of a number.
Lemma 2: _For any labeling of two distinct cycles \(C_{i}\) and \(C_{j}\) there exist a diameter of \(C_{i}\) and a diameter of \(C_{j}\) such that every label appearing in both cycles belongs to both these diameters._
Proof: Let \(i>j\) and let \(\mathcal{L}\) be the set of common labels of \(C_{i}\) and \(C_{j}\). We assume \(\#\mathcal{L}\geq 3\) as otherwise the diameters trivially exist. By Lemma 1, \(\mathcal{L}\) contains no triangles. Hence for every triple of elements of \(\mathcal{L}\) the maximum distance is the sum of the two other distances. Let \(u,v\) be labels with the maximum distance in \(\mathcal{L}\). We have \(d(u,v)\leq\lceil i/2\rceil-1\), because there are no larger distances in \(C_{j}\). Then the shortest \((u,v)\)-path in \(C_{i}\) is unique. Since \(d(u,v)=d(u,w)+d(w,v)\) for any \(w\in\mathcal{L}\) by the maximality of \(d(u,v)\), all labels from \(\mathcal{L}\) appear on this unique path, and hence on any diameter containing this path.
Though \(C_{j}\) may contain two shortest \((u,v)\)-paths, all labels from \(\mathcal{L}\) appear on one of them. Indeed, let \(w,z\in\mathcal{L}\setminus\{u,v\}\). Considering the shortest \((u,v)\)-path in \(C_{i}\), we have w.l.o.g. \(d(u,w)+d(w,z)+d(z,v)=d(u,v)\). This equality would be violated if \(w\) and \(z\) belong to different shortest \((u,v)\)-paths in \(C_{j}\). Thus, \(C_{i}\) has a diameter, containing the shortest \((u,v)\)-path with all labels from \(\mathcal{L}\) on it.
The known upper bound for cycle labeling is based on the following _folklore labeling scheme_. For each \(C_{i}\in\mathcal{C}_{n}\) we choose arbitrary adjacent nodes \(u\) and \(v\) and cover \(C_{i}\) by two disjoint paths: the path \(P_{1}\) contains \(\lceil i/2\rceil\) nodes, including \(u\), while \(P_{2}\) contains \(\lfloor i/2\rfloor\) nodes, including \(v\) (see the example in Fig. 2). Each node from \(P_{1}\) gets the label \((1,d_{1},m_{1})\), where \(d_{1}\) is the distance to \(u\) and \(m_{1}=i\bmod\left\lceil\sqrt{n}\right\rceil\); each node from \(P_{2}\) gets the label \((2,d_{2},m_{2})\), where \(d_{2}\) is the distance to \(v\), \(m_{2}=\left\lfloor i/\big{\lceil}\sqrt{n}\big{\rceil}\right\rfloor\).
Proposition 3 (folklore): _The folklore scheme is valid and uses \(\frac{3}{4}n\sqrt{n}+O(n)\) labels to label \(\mathcal{C}_{n}\)._
Figure 1: Triangles in a labeled cycle. The sets \(\{u,w,z\}\) and \(\{v,w,z\}\) are triangles, while \(\{u,v,w\}\) and \(\{u,v,z\}\) are not.
Proof: We prove the validity describing a procedure that derives the distance between two nodes from their labels (for illustration, see Fig. 2). Suppose that the labels \((b,d,m)\) and \((b^{\prime},d^{\prime},m^{\prime})\) appear together in some (unknown) cycle. If \(b=b^{\prime}\), the two labels belong to the same path, so the distance between them is \(|d-d^{\prime}|\). If \(b\neq b^{\prime}\), we compute the length \(i\) of the cycle from the pair \((m,m^{\prime})\). The two analysed labels are connected by a path of length \(d+d^{\prime}+1\) and by another path of length \(i-d-d^{\prime}-1\); comparing these lengths, we get the distance.
To compute the number of triples \((b,d,m)\) used for labeling, note that there are two options for \(b\), \(\lfloor n/2\rfloor+1\) options for \(d\), and \(\left\lceil\sqrt{n}\right\rceil\) options for \(m\), for the total of \(n\sqrt{n}+O(n)\) options. However, some triples are never used as labels. The label \((b,d,m)\) is unused iff for every number \(i\leq n\) compatible with \(m\), the length of the path \(P_{b}\) in \(C_{i}\) is less than \(d\). If \(b=1\), the maximum \(i\) compatible with \(m\) is \(i=(\lceil\sqrt{n}\rceil-1)\lceil\sqrt{n}\rceil+m\), which is \(O(\sqrt{n})\) away from \(n\). Therefore, each of \(O(\sqrt{n})\) values of \(m\) gives \(O(\sqrt{n})\) unused labels, for the total of \(O(n)\). Let \(b=2\). The maximum \(i\) compatible with \(m\) is \(i=(m+1)\lceil\sqrt{n}\rceil-1\). The number of impossible values of \(d\) for this \(m\) is \((\lfloor n/2\rfloor-1)-(\lfloor i/2\rfloor-1)=\frac{\sqrt{n}-m}{2}\cdot\sqrt{n}+ O(\sqrt{n})\). Summing up these numbers for \(m=0,\ldots,\lceil\sqrt{n}\rceil-1\), we get \(\frac{n\sqrt{n}}{4}+O(n)\) unused labels. In total we have \(\frac{n\sqrt{n}}{4}+O(n)\) unused labels out of \(n\sqrt{n}+O(n)\) possible; the difference gives us exactly the stated bound.
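To make the decoding procedure concrete, the following Python sketch (our own illustration; the node numbering and the placement of \(u\) and \(v\) are chosen for convenience) assigns the folklore labels and recovers all pairwise distances from the labels alone:

```python
import math

def folklore_labels(n, i):
    """Label nodes 0..i-1 of C_i (in cyclic order); node 0 plays the role of u, node i-1 of v."""
    s = math.ceil(math.sqrt(n))
    labels = {}
    for d in range(math.ceil(i / 2)):        # path P1 (contains u): labels (1, d, i mod s)
        labels[d] = (1, d, i % s)
    for d in range(i // 2):                  # path P2 (contains v): labels (2, d, i div s)
        labels[i - 1 - d] = (2, d, i // s)
    return labels

def folklore_distance(n, la, lb):
    """Recover the distance between two nodes of an (unknown) cycle from their labels alone."""
    s = math.ceil(math.sqrt(n))
    (b1, d1, m1), (b2, d2, m2) = la, lb
    if b1 == b2:                             # same path: distance along the path
        return abs(d1 - d2)
    residue, quotient = (m1, m2) if b1 == 1 else (m2, m1)
    i = quotient * s + residue               # cycle length recovered from the pair (m1, m2)
    return min(d1 + d2 + 1, i - d1 - d2 - 1) # the shorter of the two connecting paths

n = 40                                       # label the whole family C_3, ..., C_40 and verify
for i in range(3, n + 1):
    lab = folklore_labels(n, i)
    for a in range(i):
        for b in range(i):
            assert folklore_distance(n, lab[a], lab[b]) == min(abs(a - b), i - abs(a - b))
```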
## 3 More Efficient Labeling Scheme and Its Analysis
We start with more definitions related to labeled cycles. An _arc_ is a labeled path, including the cases of one-node and empty paths. The labels on the arc form a string (up to reversal), and we often identify the arc with this string. In particular, we speak about _substrings_ and _suffixes_ of arcs. "To label a path \(P\) with an arc \(a\)" means to assign labels to the nodes of \(P\) to turn \(P\) into a copy of \(a\).
By _intersection_ of two labeled cycles we mean the labeled subgraph induced by all their common labels. Clearly, this subgraph is a collection of arcs. Lemma 2 says that the intersection is a subgraph of some diameter of each cycle. The intersections of two arcs and of a cycle and an arc are defined in the same way.
An _arc labeling_ of a family of cycles is a labeling with the property that the intersection of any two cycles is an arc. Arc labelings are natural and easy to work with. Note that the folklore scheme produces an arc labeling: the intersection of two cycles is either empty or coincides with the path \(P_{1}\) or the path \(P_{2}\) of the smaller cycle. The schemes defined below produce arc labelings as well.
_2-arc labeling scheme._ In this auxiliary scheme, cycles are labeled sequentially. The basic step is "label a cycle with two arcs". Informally, we partition the cycle
into two paths of equal or almost equal length and label each of them with an arc (or with its substring of appropriate length). To specify details and reduce ambiguity, we define this step in the function \(\mathsf{label}(a_{1},a_{2},C_{j})\) below.
```
1:function\(\mathsf{label}(a_{1},a_{2},C_{j})\)
2:if\(|a_{1}|+|a_{2}|<j\) or \(\min\{|a_{1}|,|a_{2}|\}<\lceil j/2\rceil-1\)then
3: return error\(\triangleright\) not enough labels for \(C_{j}\)
4:else
5: label any path in \(C_{j}\) of length \(\min\{|a_{2}|,\lfloor j/2\rfloor+1\}\) by a suffix of \(a_{2}\)
6: label the remaining path in \(C_{j}\) by a suffix of \(a_{1}\)
```
The definition says that we label with suffixes (rather than arbitrary substrings) of arcs and use the longest possible suffix of the _second_ arc. By default, we suppose that both suffixes can be read on the cycle in _the same direction_ (i.e., the last labels from \(a_{1}\) and \(a_{2}\) are at the distance \(\approx j/2\)).
The 2-arc scheme starts with a set \(A\) of \(m\) pairwise disjoint arcs of sufficient length.
It calls the function \(\mathsf{label}()\) for each pair of arcs in \(A\). As a result, it produces a family of up to \(\frac{m(m+1)}{2}\) labeled cycles. Lemma 3 below proves that the result is a labeling; we call it the _2-arc labeling_. In Fig. 3, such a labeling of the family \(\mathcal{C}_{5}\) is shown.
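The following Python sketch (our own rendering, in which an arc is a list of labels and a labeled cycle is the list of its labels in cyclic order) mirrors the function \(\mathsf{label}()\) and the 2-arc scheme; the helper is_distance_labeling() checks that all cycles assign consistent distances to shared labels:

```python
from math import ceil, floor

def label(a1, a2, j):
    """Label C_j with suffixes of the arcs a1 and a2 (lists of labels), as in label() above."""
    if len(a1) + len(a2) < j or min(len(a1), len(a2)) < ceil(j / 2) - 1:
        raise ValueError(f"not enough labels for C_{j}")
    k2 = min(len(a2), floor(j / 2) + 1)      # number of nodes labeled from a suffix of a2
    return a2[-k2:] + a1[-(j - k2):]         # labels of C_j listed in cyclic order

def cyclic_distance(cycle, u, v):
    p, q, j = cycle.index(u), cycle.index(v), len(cycle)
    return min(abs(p - q), j - abs(p - q))

def is_distance_labeling(labeled_cycles):
    """True iff every pair of labels gets the same distance in every cycle containing both."""
    seen = {}
    for cyc in labeled_cycles.values():
        for u in cyc:
            for v in cyc:
                d = cyclic_distance(cyc, u, v)
                if seen.setdefault((u, v), d) != d:
                    return False
    return True

# 2-arc scheme: one cycle per pair of distinct arcs; here four disjoint arcs label C_10, ..., C_5
arcs = [[f"{c}{k}" for k in range(6)] for c in "abcd"]
pairs = [(x, y) for i, x in enumerate(arcs) for y in arcs[i + 1:]]
family = {j: label(a1, a2, j) for j, (a1, a2) in zip(range(10, 4, -1), pairs)}
assert is_distance_labeling(family)
```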
Lemma 3: _The 2-arc labeling scheme is valid and produces arc labelings._
Proof: The intersection of two cycles is an arc (possibly, empty) by construction, so it suffices to prove that the output of the 2-arc scheme is a labeling. Thus we need to define the function \(d^{\prime}\) on labels. This is possible iff for every two labels \(u,v\) the distance \(d(u,v)\) is the same for each cycle containing both \(u\) and \(v\). Since the scheme uses each pair of arcs once, the labels \(u,v\) sharing several cycles belong to some arc \(a\in A\). The intersection of a cycle \(C_{i}\) with \(a\) is at most the diameter of \(C_{i}\). Hence \(d(u,v)\) in \(C_{i}\) is the same as \(d(u,v)\) in \(a\), and this property holds for any cycle shared by \(u\) and \(v\). Thus the scheme is valid.
Remark 1: For a 2-arc labeling of the family \(\mathcal{C}_{n}\) one can take \(\sqrt{2n}+O(1)\) arcs of length \(\frac{n}{2}+O(1)\) each, to the total of \(\frac{n\sqrt{n}}{\sqrt{2}}+O(n)\) labels. So the 2-arc labeling beats the folklore labeling, which requires \(\frac{3n\sqrt{n}}{4}+O(n)\) labels by Proposition 3.
Next, we develop the idea of 2-arc labeling to obtain a scheme that, as we believe, produces asymptotically optimal labelings for the families \(\mathcal{C}_{n}\).
Figure 3: Example: 2-arc labeling for the family \(\mathcal{C}_{5}\).
Chain labeling scheme. First we present the 2-arc labeling scheme for the family \(\mathcal{C}_{n}\) as greedy Algorithm 1, which labels cycles in the order of decreasing length and proceeds in _phases_ until all cycles are labeled. Each phase starts with creating a new arc with the function \(\mathsf{create}(arc,length)\); then the function \(\mathsf{label}()\) is called in a loop, each time using the new arc and one of the earlier created arcs. The length of the new arc is taken to barely pass the length condition for the first call to \(\mathsf{label}()\). In the preliminary phase 1 (lines 1-3), two arcs are created and the largest cycle is labeled. Other phases are iterations of the **while** loop in lines 4-10. See Fig. 3 for an example.
```
1:\(\mathsf{create}(a_{0},\lceil n/2\rceil)\); \(\mathsf{create}(a_{1},\lfloor n/2\rfloor)\)
2:\(\mathsf{label}(a_{0},a_{1},C_{n})\)
3:\(i\gets 2\); \(j\gets n-1\)\(\triangleright\) next arc to create; next cycle to label
4:while\(j>2\)do\(\triangleright\) start of \(i\)th phase
5:\(\mathsf{create}(a_{i},\lceil j/2\rceil-1)\)\(\triangleright\) minimum length of an arc needed to label \(C_{j}\)
6:\(k\gets 0\)\(\triangleright\) next arc to use
7:while\(k<i\) and \(j>2\)do
8:\(\mathsf{label}(a_{k},a_{i},C_{j})\)
9:\(j\gets j-1\); \(k\gets k+1\)
10:\(i\gets i+1\)
```
**Algorithm 1** Greedy 2-arc labeling scheme for \(\mathcal{C}_{n}\)
The _chain scheme_ is a modification of Algorithm 1 that allows one to use previously created arcs more efficiently. The difference is in the first parameter of the \(\mathsf{label}\) function (line 8). During phase \(i\), a _chain_ is a path labeled by the concatenation \(a_{0}a_{1}\cdots a_{i-1}\) of the strings labeling all previously created arcs. Though formally the chain is an arc, the distance between labels in the chain may differ from the distance between the same labels in already labeled cycles. For example, the string \(a_{0}a_{1}\) labels both the cycle \(C_{n}\) with the diameter \(\lfloor n/2\rfloor+1\) and a path of diameter \(n-1\). However, with some precautions the chain can be used for labeling cycles. The chain scheme is presented below as Algorithm 2. The auxiliary function \(\mathsf{trim}(c)\) deletes the suffix of the chain \(c\) that was used to label a cycle on the current iteration of the inner loop.
```
1:\(\mathsf{create}(a_{0},\lceil n/2\rceil)\); \(\mathsf{create}(a_{1},\lfloor n/2\rfloor)\)
2:\(\mathsf{label}(a_{0},a_{1},C_{n})\)
3:\(i\gets 2\); \(j\gets n-1\)\(\triangleright\) next arc to create; next cycle to label
4:while\(j>2\)do\(\triangleright\) start of phase \(i\)
5:\(c\gets a_{0}a_{1}\cdots a_{i-1}\)\(\triangleright\) chain for phase \(i\)
6:\(\mathsf{create}(a_{i},\lceil j/2\rceil-1)\)\(\triangleright\) minimum length of an arc needed to label \(C_{j}\)
7:while\(|c|\geq\lceil j/2\rceil-1\) and \(j>2\)do
8:\(\mathsf{label}(c,a_{i},C_{j})\)
9:\(j\gets j-1\); \(\mathsf{trim}(c)\)\(\triangleright\) deleting the just used suffix from the chain
10:\(i\gets i+1\)
```
**Algorithm 2** Chain labeling scheme for \(\mathcal{C}_{n}\)
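A direct transcription of Algorithm 2 in the same style (our own sketch, reusing label() and is_distance_labeling() from the sketch above; fresh() stands in for create() and hands out previously unused labels, and the inner loop carries an extra guard so that label() always receives enough labels) gives a numerical cross-check of both the validity and the label count:

```python
from itertools import count

def chain_scheme(n, fresh):
    """Label C_n, C_{n-1}, ..., C_3 following Algorithm 2; fresh(k) returns k brand-new labels."""
    arcs = [fresh(ceil(n / 2)), fresh(floor(n / 2))]           # a_0 and a_1
    cycles = {n: label(arcs[0], arcs[1], n)}
    j = n - 1
    while j > 2:                                               # each outer iteration is a phase
        chain = [x for a in arcs for x in a]                   # chain a_0 a_1 ... a_{i-1}
        arcs.append(fresh(ceil(j / 2) - 1))                    # new arc a_i
        while j > 2 and len(chain) >= ceil(j / 2) - 1 and len(chain) + len(arcs[-1]) >= j:
            cycles[j] = label(chain, arcs[-1], j)
            used = j - min(len(arcs[-1]), floor(j / 2) + 1)    # length of the chain suffix just used
            chain = chain[:-used]                              # trim(c)
            j -= 1
    return cycles

ids = count()
family = chain_scheme(60, lambda k: [next(ids) for _ in range(k)])
assert is_distance_labeling(family)
used_labels = len({u for cyc in family.values() for u in cyc})
print(used_labels, round(60 * 60 ** 0.5 / 6 ** 0.5))           # label count vs. the n*sqrt(n)/sqrt(6) estimate
```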
To prove validity of the chain scheme, we need an auxiliary lemma.
Lemma 4: _Let \(j\) be the length of the longest unlabeled cycle at the beginning of i'th phase of Algorithm 2, \(i\geq 2\). Then every substring of length \(\leq\lfloor j/2\rfloor+1\) of the chain \(c=a_{0}a_{1}\cdots a_{i-1}\) labels an arc in a cycle \(C_{j^{\prime}}\) for some \(j^{\prime}>j\)._
Proof: Note that if a string labels an arc in an already labeled cycle, then every its substring does the same. We proceed by induction on \(i\). From line 2 we see that each substring of length \(\lfloor n/2\rfloor+1\) of the string \(a_{0}a_{1}\) labels a diameter in \(C_{n}\). Hence we have the base case \(i=2\). Since \(j\) decreases with each phase, the inductive hypothesis implies that at the start of \((i-1)\)th phase each substring of length \(\lfloor j/2\rfloor+1\) of \(a_{0}a_{1}\cdots a_{i-2}\) labels an arc in some already labeled cycle. Let \(C_{\hat{j}}\) be the first cycle labeled at this phase. Then a diameter of \(C_{\hat{j}}\) is labeled with a suffix of \(a_{0}a_{1}\cdots a_{i-2}\), say, \(a^{\prime}\), and the remaining arc is labeled with the whole string \(a_{i-1}\). Since \(\hat{j}>j\), both the prefix \(a_{0}a_{1}\cdots a_{i-2}\) and the suffix \(a^{\prime}a_{i-1}\) of the chain \(c\) have the desired property: each substring of length \(\leq\lfloor j/2\rfloor+1\) labels an arc in an already labeled cycle. As these prefix and suffix of \(c\) intersect by a substring \(a^{\prime}\) of length \(\geq\lfloor j/2\rfloor+1\), the whole chain \(c\) has this property. This proves the step case and the lemma.
Lemma 5: _The chain labeling scheme is valid and builds an arc labeling._
Proof: To prove that Algorithm 2 builds a labeling, it suffices to check the following property: in every cycle \(C_{j}\), each pair of labels either do not appear in larger cycles or appear in some larger cycle at the same distance as in \(C_{j}\). This property trivially holds for \(C_{n}\), so let us consider a cycle \(C_{j}\) labeled at \(i\)'th phase, \(i\geq 2\). The cycle \(C_{j}\) is labeled by two arcs: a substring of the chain \(c=a_{0}\cdots a_{i-1}\) and a suffix of the new arc \(a_{i}\). We denote them by \(c^{\prime}\) and \(a^{\prime}_{i}\) respectively.
Suppose that a pair of labels \((u,v)\) from \(C_{j}\) appeared in a larger cycle. Since all substrings of \(c\) used for labeling together with \(a_{i}\) are disjoint, \(u\) and \(v\) belong to the same arc (\(c^{\prime}\) or \(a^{\prime}_{i}\)). Then the shortest \((u,v)\)-path in \(C_{j}\) is within this arc. If \(u,v\) are in \(a^{\prime}_{i}\), then \(d(u,v)\) in \(C_{j}\) is the same as in the larger cycle containing \(a_{i}\). If \(u,v\) are in \(c^{\prime}\), then \(d(u,v)\) in \(C_{j}\) is the same as in the larger cycle containing the arc \(c^{\prime}\); as \(|c^{\prime}|\leq\lceil j/2\rceil-1\), such a cycle exists by Lemma 4. Hence we proved that Algorithm 2 indeed builds a labeling; it is an arc labeling by construction.
We call the labelings obtained by chain scheme _chain labelings_. Now we estimate the efficiency of the scheme.
Theorem 1: _A chain labeling of a family \(\mathcal{C}_{n}\) of cycles uses \(\frac{n\sqrt{n}}{\sqrt{6}}+O(n)\) labels._
We first need a simpler estimate.
Lemma 6: _Algorithm 2 labels \(\mathcal{C}_{n}\) using \(O(\sqrt{n})\) phases and \(O(n\sqrt{n})\) labels._
Proof: Let us compare the runs of Algorithms 1 and 2 on the family \(\mathcal{C}_{n}\). Suppose that \(\ell_{i}\) is the length of \(a_{i}\) for Algorithm 2 and \(N_{i}\) is the number of cycles labeled
by Algorithm 2 during the first \(i\) phases. For Algorithm 1, we denote the same parameters by \(\ell_{i}^{\prime}\) and \(N_{i}^{\prime}\). Note that \(N_{1}=N_{1}^{\prime}=1\).
Let \(i\geq 2\). If \(N_{i-1}=N_{i-1}^{\prime}\), then \(\ell_{i}=\ell_{i}^{\prime}\) as both algorithms begin the \(i\)'th phase with the same value of \(j\). During this phase, Algorithm 1 labels \(i\) cycles, while Algorithm 2 labels at least \(i\) cycles (the length of the chain allows this), and possibly more. Hence \(N_{i}\geq N_{i}^{\prime}\). If \(N_{i-1}>N_{i-1}^{\prime}\), Algorithm 2 begins the \(i\)'th phase with smaller value of \(j\) compared to Algorithm 1. Then \(\ell_{i}\leq\ell_{i}^{\prime}\) and again, the length of the chain allows Algorithm 2 to label at least \(i\) cycles during the \(i\)'th phase. Hence \(N_{i}>N_{i}^{\prime}\) (or \(N_{i}=N_{i}^{\prime}=n-2\) if both algorithms completed the labeling during this phase). Therefore, Algorithm 2 uses at most the same number of phases and at most the same number of labels as Algorithm 1. The latter uses \(\sqrt{2n}+O(1)\) arcs and thus \(O(n\sqrt{n})\) labels. The lemma is proved.
Proof (of Theorem 1): Let us count distinct _pairs_ of labels. First we count the pairs of labels that appear together in some cycle. We scan the cycles in the order of decreasing length and add "new" pairs (those not appearing in larger cycles) to the total. After applying Algorithm 2 to \(\mathcal{C}_{n}\), each cycle \(C_{j}\) is labeled by two arcs of almost equal length: one of them is a substring of the chain \(c\) and the other one is a suffix of the current arc \(a_{i}\). All pairs from \(c\) in \(C_{j}\) are not new by Lemma 4. All pairs between \(c\) and \(a_{i}\) are new by construction, and their number is \(\frac{j^{2}}{4}+O(j)\). Over all cycles \(C_{j}\), this gives \(\frac{n^{3}}{12}+O(n^{2})\) distinct pairs in total. All pairs from \(a_{i}\) are new if and only if \(C_{j}\) is the first cycle labeled in a phase. By Lemma 6, Algorithm 2 spends \(O(\sqrt{n})\) phases. Hence there are \(O(\sqrt{n})\) arcs \(a_{i}\), each containing \(O(n^{2})\) pairs of labels, for the total of \(O(n^{5/2})\) pairs. Therefore, the number of pairs that appear together in a cycle is \(\frac{n^{3}}{12}+O(n^{5/2})\).
Next we count the pairs of labels that _do not_ appear together. The labels in such a pair belong to different arcs. Let \(u,v\) be from \(a_{i^{\prime}}\) and \(a_{i}\) respectively, \(i^{\prime}<i\). If \(u\) and \(v\)_appear_ together, then the largest cycle containing both \(u\) and \(v\) was labeled at phase \(i\). Indeed, earlier phases have no access to \(v\), and if \(u\) and \(v\) share a cycle labeled at a later phase, then Lemma 4 guarantees that they also appear in a larger cycle. Thus, to get the total number of pairs that do not appear together we count, for each phase \(i\), the pairs \((u,v)\) such that \(u\) is from the chain, \(v\) is from \(a_{i}\), and neither of the cycles labeled during this phase contains both \(u\) and \(v\).
There are three reasons why neither of the cycles labeled during phase \(i\) contains both \(u\) from the chain \(c\) and \(v\) from \(a_{i}\). First, this can be the last phase, which is too short to use \(u\). Since \(|a_{i}|<n\) and \(|c|=O(n\sqrt{n})\) by Lemma 6, the last phase affects \(O(n^{5/2})\) pairs. Second, \(u\) can belong to a short prefix of \(c\) that remained unused during the phase due to the condition in line 7 of Algorithm 2. This prefix is shorter than \(a_{i}\), so this situation affects less than \(|a_{i}|^{2}\) pairs. As the number of phases is \(O(\sqrt{n})\) (Lemma 6), the total number of such pairs is \(O(n^{5/2})\). Third, \(v\) can belong to a prefix of \(a_{i}\) that was unused for the cycle containing \(u\). The number of labels from \(a_{i}\) that were unused for _at least one_ cycle during phase \(i\) does not exceed \(|a_{i}|-|a_{i+1}|\), which gives \(O(n)\) labels over all phases. Each such label \(v\) is responsible for \(O(n\sqrt{n})\) pairs by Lemma 6, for the total of \(O(n^{5/2})\) pairs. Thus there are \(O(n^{5/2})\) pairs that do not appear together.
Putting everything together, we obtain that a chain labeling of \(\mathcal{C}_{n}\) contains \(p=\frac{n^{3}}{12}+O(n^{5/2})\) pairs of labels. Hence the number \(ch(n)\) of labels is
\[ch(n)=\sqrt{2p}+O(1)=\frac{n\sqrt{n}}{\sqrt{6}}\cdot\sqrt{1+O(n^{-1/2})}+O(1)= \frac{n\sqrt{n}}{\sqrt{6}}+O(n),\]
as required.
## 4 Chain Labelings vs Optimal Labelings
The chain labeling uses almost half as many labels as the folklore labeling (Theorem 1 versus Proposition 3). However, it is not clear how good this new labeling is, given that the known lower bound (Proposition 2) looks rather weak. In this section we describe the results of an experimental study we conducted to justify Conjecture 1, stating that the chain labeling is asymptotically optimal.
Conjecture 1: \(\lambda(n)=\frac{n\sqrt{n}}{\sqrt{6}}+O(n)\)_._
We proceed in three steps, which logically follow one another.
Step 1: Compute as many values of \(\lambda(n)\) as possible and compare them to \(\frac{n\sqrt{n}}{\sqrt{6}}\).
Outline of the search algorithm. To compute \(\lambda(n)\), we run a recursive depth-first search, labeling cycles in the order of decreasing length. The upper bound _max_ on the total number of labels is a global variable. The recursive function \(\mathsf{labelCycle}(j,L,D)\) gets the length \(j\) of the cycle to label, the set \(L\) of existing labels, and the table \(D\) of known distances between them. The function runs an optimized search over all subsets of \(L\). When it finds a subset \(X\) that is both
* _compatible_: all labels from \(X\) can be assigned to the nodes of \(C_{j}\) respecting the distances from \(D\), and
* _large_: labeling \(C_{j}\) with \(X\cup Y\), where the set \(Y\) of labels is disjoint with \(L\), holds the total number of labels below the upper bound _max_,
it labels \(C_{j}\) with \(X\cup Y\), adds \(Y\) to \(L\) to get some \(L^{\prime}\), adds newly defined distances to \(D\) to get some \(D^{\prime}\), and compares \(j\) to \(3\). If \(j=3\), the function reports \((L^{\prime},D^{\prime})\), sets _max_\(=\#L^{\prime}\), and returns; otherwise, it calls \(\mathsf{labelCycle}(j{-}1,L^{\prime},D^{\prime})\). The value of _max_ in the end of search is reported as \(\lambda(n)\).
Results. We managed to find \(\lambda(n)\) for \(n\leq 17\); for \(n=17\) the algorithm made over \(5\cdot 10^{12}\) recursive calls. Computing \(\lambda(18)\) would probably require a cluster. The witness labelings can be found in the Appendix; the numbers \(\lambda(n)\) fit well between the bounds \(\frac{n\sqrt{n}}{\sqrt{6}}\) and \(\frac{n(\sqrt{n}+1)}{\sqrt{6}}\) (see Fig. 4). As a side result, we note that almost all optimal labelings we discovered are _arc labelings_.
If we view the "corridor" in Fig. 4 as an extrapolation for \(\lambda(n)\) for big \(n\), we have to refer \(ch(n)\) to this corridor.
Step 2: Estimate the constant in the \(O(n)\) term in Theorem 1, to compare \(ch(n)\) to the results of Step 1.
Results: We computed \(ch(n)\) for many values of \(n\) in the range \([10^{3}..10^{7}]\). In all cases, \(ch(n)\approx\frac{n(\sqrt{n}+1.5)}{\sqrt{6}}\), which is only \(\frac{n}{2\sqrt{6}}\) away from the "corridor" in Fig. 4.
The natural next question is whether we can do better.
Step 3: Find resources to improve the chain scheme to come closer to \(\lambda(n)\).
In order to reduce the amount of resources "wasted" by the chain scheme, we describe three improving tricks. An example of their use is an optimal labeling of \(\mathcal{C}_{14}\) presented in Fig. 5.
_Trick 1: reusing ends of arcs._ During a phase, if a cycle is labeled with the strings \(a_{1}\cdots a_{j}\) from the new arc and \(c_{x}\cdots c_{x+j}\) or \(c_{x}\cdots c_{x+j+1}\) from the chain, then it is correct to use for the next cycle the string \(c_{x-j+1}\cdots c_{x}\) (resp., \(c_{x-j}\cdots c_{x}\)), thus reusing the label \(c_{x}\); see, for example, \(C_{13}\) and \(C_{12}\) in Fig. 5.
The function label() is defined so that the above situation happens only in the beginning of the phase, so this trick saves 1 or 2 labels in the chain. Still, sometimes this leads to labeling an additional cycle during a phase.
_Trick 2: using chain remainders._ At the end of a phase, we memorize the remainder \(c\) of the chain and the current arc \(a\). Thus, at any moment we have the set \(S\) of such pairs of strings (initially empty). Now, before labeling a cycle we check whether \(S\) contains a pair \((c,a)\) that can label this cycle. If yes, we extract \((c,a)\) from \(S\) and run a "mini-phase", labeling successive cycles with \(c\) as the arc and \(a\) as the chain; when the mini-phase ends, with \(a^{\prime}\) being the chain remainder, we put the pair \((a^{\prime},c)\) in \(S\) and proceed to the next cycle. Otherwise, we label the current cycle as usual. In Fig. 5, the pair \((12,abcdef)\) was added to \(S\) after phase 2. Later, the cycles \(C_{6}\) and \(C_{5}\) were labeled during a mini-phase with this pair; note that trick 1 was used to label \(C_{5}\).
_Trick 3: two-pass phase._ We combine the last two phases as follows. Let \(a\) be the arc in the penultimate phase and the prefix \(a^{\prime}\) of \(a\) was unused for labeling the last cycle during the phase. As this cycle consumed \(|a|-|a^{\prime}|\) labels from the arc, for the last phase we need the arc of the same (up to a small constant) length. We create such an arc of the form \(\hat{a}a^{\prime}\), where the labels from \(\hat{a}\) are new (if \(|a^{\prime}|>|a|/2\), no new labels are needed). During the last phase, we reverse both the arc and the chain: the chain \(c\) is cut from the beginning, and the arc \(\hat{a}a^{\prime}\) is cut from the end. In this way, the labels from \(a^{\prime}\) will not meet the labels from \(c\) for the second time, so the phase will finish correctly. An additional small trick can help sometimes: for a cycle, we use one less symbol from the arc than the maximum possible. This charges the chain by an additional label but this label can be reused from the previous cycle by employing trick 1. In Fig. 5, this was done for \(C_{8}\). As a result, \(a^{\prime}=v\) did not meet \(1,\ldots,6\) from the chain and we were able to label \(C_{4}\) and \(C_{3}\) with no additional labels.
Results. We applied the _enhanced chain scheme_, which uses tricks 1-3, to label many families \(\mathcal{C}_{n}\) for \(n\in[10^{3}..10^{7}]\). In all cases, we get the number of labels \(ch^{+}(n)\approx\frac{n(\sqrt{n}+1)}{\sqrt{6}}\), which is exactly the upper bound of the corridor in Fig. 4. In Table 1, we compare the results of the chain schemes to the known optima.
Overall, the results gathered in Fig. 4 and Table 1, together with the behavior of \(ch(n)\) and \(ch^{+}(n)\) for large \(n\), give substantial support to Conjecture 1.
## 5 Discussion and Future Work
The main open problem for distance labeling of the families \(\mathcal{C}_{n}\) is the gap between the lower bound \(\lambda(n)=\Omega(n\sqrt[3]{n})\) and the upper bound \(\lambda(n)=O(n\sqrt{n})\). Our results suggest that the upper bound provides the correct asymptotics but improving the lower bound probably needs some new approach.
Figure 5: An optimal labeling of \(\mathcal{C}_{14}\) by the enhanced chain scheme.
This is quite similar to the situation with distance labeling of planar graphs. Here, the gap (in terms of the length of a label in bits, i.e., in logarithmic scale) is between \(\Omega(\sqrt[3]{n})\) and \(O(\sqrt{n})\), and there is evidence [14, 1] that the upper bound gives the correct asymptotics but the existing approach does not allow one to improve the lower bound. A similar gap between the cube root and the square root appears in the adjacency labeling problem for \(\mathcal{C}_{n}\) [3].
As a possible approach to the improvement of the lower bound for \(\lambda(n)\) we propose to study the number \(\lambda_{k}(n)\) of labels needed to label the family \(\mathcal{C}_{n,k}=\{C_{n},C_{n-1},\ldots,C_{n-k+1}\}\), starting from small \(k\). Algorithm 1 and Lemma 2 imply \(\lambda_{2}(n)=\lambda_{3}(n)=1.5n+O(1)\) but already the next step is not completely trivial.
|
2306.12081
|
A degree reduction method for an efficient QUBO formulation for the
graph coloring problem
|
We introduce a new degree reduction method for homogeneous symmetric
polynomials on binary variables that generalizes the conventional degree
reduction methods on monomials introduced by Freedman and Ishikawa. We also
design a degree reduction algorithm for general polynomials on binary
variables, simulated on the graph coloring problem for random graphs, and
compared the results with the conventional methods. The simulated results show
that our new method produces reduced quadratic polynomials that contain fewer
variables than the reduced quadratic polynomials produced by the conventional
methods.
|
Namho Hong, Hyunwoo Jung, Hyosang Kang, Hyunjin Lim, Chaehwan Seol, Seokhyun Um
|
2023-06-21T07:56:56Z
|
http://arxiv.org/abs/2306.12081v2
|
# A degree reduction method for an efficient QUBO formulation for the graph coloring problem
###### Abstract.
We introduce a degree reduction method for symmetric polynomials on binary variables. We also design a degree reduction algorithm for general polynomials on binary variables, simulate it on the graph coloring problem for random graphs, and compare the results with the conventional method. The data show that our method produces quadratic polynomials with fewer variables than the conventional method. The algorithm for our new degree reduction method is robust and applies to any QUBO formulation for quantum annealing systems.
Key words: Degree reduction, Graph coloring, QUBO, Quantum annealing
## 1. Introduction
A graph is a 1-dimensional object that consists of vertices and edges. Two vertices are called adjacent if they are connected by an edge, and a graph is called simple if there are no multiple edges joining adjacent vertices. A vertex coloring is an assignment of "colors" to vertices such that no two adjacent vertices are assigned the same color. The graph coloring problem is to find the minimum number of colors that properly colors the graph. The graph coloring problem has a wide range of applications such as scheduling [9], register allocation in compilers [3], and frequency assignment in wireless communications [2].
The graph coloring problem is NP-hard, meaning that there is no known polynomial-time algorithm that solves the problem [5, 6]. One of the heuristic methods for solving the graph coloring problem is quantum annealing, which uses the quantum tunneling effect [1, 8]. To use a quantum annealing system such as D-Wave's for solving graph coloring problems, we should first formulate the problem as a QUBO (quadratic unconstrained binary optimization) problem [10, 11]. This means the utility functions should be given as polynomials of degree two on binary variables. Since the utility functions are of higher degree in general, we need to apply degree reduction methods to obtain a quadratic polynomial [12].
The utility function for the graph coloring problem is formulated on binary variables encoding the color assigned to each vertex, where \(V\) and \(E\) are the sets of vertices and edges in the graph \(G\). In general this utility polynomial has degree higher than two, so the conventional monomial-wise degree reduction must be applied to each high-degree monomial. Each time it is applied to a monomial, new auxiliary variables are produced. The more auxiliary variables we have, the heavier the load put on a quantum annealing system, and the number of auxiliary variables easily exceeds the number of original variables. For example, the utility polynomial \(Q_{G}\) for the complete graph \(G\) of vertex size \(8\) has \(24\) variables and \(1429\) monomials. After the monomial degree reduction, there are \(1156\) new auxiliary variables (cf. Table 3).
In this paper, we propose a new method called the **symmetric reduction** (**SymmRed** for short) that produces fewer auxiliary variables than the monomial reduction. The main idea is to reduce symmetric polynomials. In §2, we state two main results (Theorems 1 and 2) on reducing symmetric polynomials with positive and negative coefficients. We prove them in §3 and §4. In §5, we describe the algorithm for the symmetric reduction (cf. Tables 1 and 2) and show its efficiency in lowering the number of auxiliary variables. We tested the monomial degree reduction method and the symmetric reduction method on random graphs and complete graphs of various vertex sizes.
## 2. The main results
Let us redefine the binomial coefficient symbol \(\binom{n}{m}\) for integers \(n,m\) as follows:
\[\binom{n}{m}=\left\{\begin{array}{cl}\frac{n!}{m!(n-m)!}&\text{if }m\leq n \\ 0&\text{otherwise}\end{array}\right..\]
Let \(S_{n}^{(m)}\) be the collection of all subsets of \(\{1,\cdots,n\}\) consisting of \(m\) distinct integers. Let \(P_{n}^{(m)}(\mathbf{x})\) be the homogeneous polynomial of degree \(m\) on \(n\) variables \(\mathbf{x}=(x_{1},\ldots,x_{n})\):
\[P_{n}^{(m)}(\mathbf{x})=\sum_{\{i_{1},\cdots,i_{m}\}\in S_{n}^{(m)}}x_{i_{1}} \cdots x_{i_{m}}.\]
We will call \(P_{n}^{(m)}\) the **symmetric \(m\)-polynomial on the variable \(\mathbf{x}\)**. For example, the symmetric \(3\)-polynomial on \(5\) variables \(x_{1},\cdots,x_{5}\) is the following.
\[P_{5}^{(3)}(\mathbf{x}) =x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{2}x_{5}+x_{1}x_{3}x_{4}+ x_{1}x_{3}x_{5}\] \[\qquad+x_{1}x_{4}x_{5}+x_{2}x_{3}x_{4}+x_{2}x_{3}x_{5}+x_{2}x_{4} x_{5}+x_{3}x_{4}x_{5}.\]
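In code, \(P_{n}^{(m)}\) is just the \(m\)-th elementary symmetric polynomial of the binary variables. The small Python helper below (our own notation, reused in the checks further down) also encodes the binomial-coefficient convention defined above:

```python
from itertools import combinations
from math import comb

def binom(n, m):
    """Binomial coefficient with the convention above: 0 unless 0 <= m <= n."""
    return comb(n, m) if 0 <= m <= n else 0

def P(x, m):
    """Symmetric m-polynomial P_n^(m) at a binary tuple x; for binary x it equals binom(sum(x), m)."""
    return sum(all(x[i] for i in idx) for idx in combinations(range(len(x)), m))
```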
The following two theorems are the keys to the symmetric reduction.
**Theorem 1**.: _Let \(n>m>2\) and_
\[L(n,m)=\binom{n-2}{m-2},\quad d=\left\lfloor\frac{n-1}{2}\right\rfloor.\]
_For \(1\leq i\leq d\), define_
\[a_{i}(n,m)= (4i-1)\binom{n-2}{m-2}-2i\binom{2i}{m-1}+2i\binom{2i-2}{m-1}+ \binom{2i-2}{m-2}, \tag{1}\] \[b_{i}(n,m)= -2\binom{n-2}{m-2}+\binom{2i}{m-1}-\binom{2i-2}{m-1}. \tag{2}\]
_Then_
\[P_{n}^{(m)}(\mathbf{x})=L(n,m)\sum_{1\leq i<j\leq n}x_{i}x_{j}+\min_{w_{i}\in \{0,1\}}\left[\sum_{i=1}^{d}w_{i}\left(a_{i}(n,m)+b_{i}(n,m)\sum_{j=1}^{n}x_{ j}\right)\right].\]
**Theorem 2**.: _Let \(n>m>2\) and \(d=\left\lfloor\dfrac{n-m+2}{2}\right\rfloor\). For \(0\leq i\leq d\), define_
\[a_{i}(m) =(m-1)\left(\binom{m+2i-1}{m}-\binom{m+2i-3}{m}\right), \tag{3}\] \[b_{i}(m) =\binom{m+2i-4}{m-1}-\binom{m+2i-2}{m-1}. \tag{4}\]
_Then_
\[-P_{n}^{(m)}(\mathbf{x})=\min_{w_{i}\in\{0,1\}}\sum_{i=1}^{d}w_{i}\left(a_{i}( m)+b_{i}(m)\sum_{j=1}^{n}x_{j}\right).\]
Let us explain the underlying ideas in Theorem 1 with the example of \(P_{5}^{(3)}(\mathbf{x})\). The goal is to find a quadratic polynomial that attains the same values as \(P_{5}^{(3)}\) with the help of some auxiliary variables \(w_{1},\ldots,w_{d}\). The only possible way to formulate such a quadratic polynomial is the following:
\[P_{5}^{(3)}(\mathbf{x})=\sum_{1\leq i<j\leq 5}c_{ij}x_{i}x_{j}+\sum_{i}d_{i}x _{i}+\min_{w_{j}=0,1}\sum_{j=1}^{d}w_{j}\left(a_{j}+\sum_{i=1}^{5}b_{i,j}x_{i }\right). \tag{5}\]
In fact, this is the general form of any degree reduction. We omitted the constant term in Equation (5) since the minimum of \(P_{n}^{(m)}\) is always \(0\). Let us assume that \(b_{i,j}=b_{j}\) depends only on \(j\), and that \(c_{i,j}=c\) and \(d_{i}=0\) are constants. Let \(l=l(\mathbf{x})=x_{1}+\cdots+x_{5}\) be the arithmetic sum (_not_ the binary sum) of all values in the variable \(\mathbf{x}\). Then the sequence \(A_{l}\) defined by
\[A_{l}=P_{5}^{(3)}(\mathbf{x})-c\sum_{i,j}x_{i}x_{j} \tag{6}\]
is the sum of arithmetic progressions on \(l\):
\[A_{l}=\min_{w_{j}=0,1}\left[\sum_{j=1}^{d}w_{j}\left(a_{j}+b_{j}l\right)\right]. \tag{7}\]
We observe that
\[P_{5}^{(3)}(\mathbf{x})=\binom{l}{3},\quad\sum_{i<j}x_{i}x_{j}=\binom{l}{2}.\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \(l(\mathbf{x})\) & \(P_{5}^{(3)}(\mathbf{x})\) & \(\sum_{i<j}x_{i}x_{j}\) & \(A_{l}\) & \(\min_{w_{1}=0,1}w_{1}(7-5l(\mathbf{x}))\) & \(\min_{w_{2}=0,1}w_{2}(3-l(\mathbf{x}))\) \\ \hline \hline
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
2 & 0 & 1 & \(-3\) & \(-3\) & 0 \\
3 & 1 & 3 & \(-8\) & \(-8\) & 0 \\
4 & 4 & 6 & \(-14\) & \(-13\) & \(-1\) \\
5 & 10 & 10 & \(-20\) & \(-18\) & \(-2\) \\ \end{tabular}
\end{table}
Table 1. The sequence \(A_{l}\) in Equation (6) when \(c=3\) and the relative arithmetic progressions.
When \(c=3\), the sequence \(A_{0},A_{1},\ldots,A_{5}\) becomes a decreasing sequence of non-positive integers, as in Table 1. Such a sequence can always be expressed as the sum of several decreasing arithmetic progressions of non-positive integers. With \(\min\) and the auxiliary variables \(w_{j}\), the contribution of the arithmetic progression \(a_{j}+b_{j}l\) never exceeds \(0\).
Table 1 shows the values of \(A_{l}\) and the arithmetic progressions that exhaust \(A_{l}\) when added together. The first two nonzero values \(-3,-8\) of \(A_{l}\) follow the arithmetic progression \(7-5l\). By subtracting the sequence \(\min w_{1}(7-5l)\) from \(A_{l}\) index-wise, we get a sequence that has two negative values \(-1,-2\) at \(l=4,5\). We can eliminate these with the sequence \(\min w_{2}(3-l)\). Thus we get the following equation.
\[P_{5}^{(3)}(\mathbf{x})=3\sum_{i<j}x_{i}x_{j}+\min_{w_{1},w_{2}=0,1}\left[w_{1 }(7-5\sum_{i=1}^{5}x_{i})+w_{2}(3-\sum_{i=1}^{5}x_{i})\right].\]
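This identity is easy to confirm exhaustively over all \(2^{5}\) binary assignments (a check of ours, using the helper P defined above; the minimum over \(w_{1},w_{2}\) is taken term by term, since each auxiliary variable switches its own term on or off independently):

```python
from itertools import product

for x in product((0, 1), repeat=5):
    s = sum(x)
    quadratic = 3 * sum(x[i] * x[j] for i in range(5) for j in range(i + 1, 5))
    assert P(x, 3) == quadratic + min(0, 7 - 5 * s) + min(0, 3 - s)
```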
Now let us explain the idea of Theorem 2 with the example of \(-P_{5}^{(3)}(\mathbf{x})\). Let \(l=l(\mathbf{x})\) be the arithmetic sum of \(\mathbf{x}\) as before, and define \(B_{l}=-P_{5}^{(3)}(\mathbf{x})\). The sequence \(B_{l}\) is already a decreasing sequence of non-positive integers, as shown in Table 2. The first two negative values \(-1,-4\) follow the arithmetic progression \(8-3l\). Subtracting the sequence \(\min w_{1}(8-3l)\) from \(B_{l}\) leaves two values \(0,-3\) at \(l=4,5\). These can be exhausted by the sequence \(\min w_{2}(12-3l)\). Thus we get the following equation.
\[-P_{5}^{(3)}(\mathbf{x})=\min_{w_{1},w_{2}=0,1}\left[w_{1}(8-3\sum_{i=1}^{5}x_ {i})+w_{2}(12-3\sum_{i=1}^{5}x_{i}).\right]\]
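The same kind of exhaustive check also covers the general coefficient formulas (1)-(4). The sketch below (our own test harness, built on the binom helper defined earlier; it exploits the fact that both sides depend on \(\mathbf{x}\) only through \(l=x_{1}+\cdots+x_{n}\)) verifies Theorems 1 and 2 for all small \(n\) and \(m\):

```python
def check_theorem1(n, m):
    d, L = (n - 1) // 2, binom(n - 2, m - 2)
    a = [(4*i - 1)*L - 2*i*binom(2*i, m - 1) + 2*i*binom(2*i - 2, m - 1) + binom(2*i - 2, m - 2)
         for i in range(1, d + 1)]
    b = [-2*L + binom(2*i, m - 1) - binom(2*i - 2, m - 1) for i in range(1, d + 1)]
    return all(binom(l, m) == L*binom(l, 2) + sum(min(0, a[i] + b[i]*l) for i in range(d))
               for l in range(n + 1))

def check_theorem2(n, m):
    d = (n - m + 2) // 2
    a = [(m - 1)*(binom(m + 2*i - 1, m) - binom(m + 2*i - 3, m)) for i in range(1, d + 1)]
    b = [binom(m + 2*i - 4, m - 1) - binom(m + 2*i - 2, m - 1) for i in range(1, d + 1)]
    return all(-binom(l, m) == sum(min(0, a[i] + b[i]*l) for i in range(d))
               for l in range(n + 1))

# exhaustive check over every value of l = sum(x) for small n and all 2 < m < n
assert all(check_theorem1(n, m) and check_theorem2(n, m)
           for n in range(4, 10) for m in range(3, n))
```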
In the next sections, we will prove Theorems 1 and 2 for the general symmetric polynomial \(P_{n}^{(m)}\). It is worth pointing out two facts.
1. There are \(n-1\) negatives in the sequence \(A_{l}\). Thus we need \(\lfloor(n-1)/2\rfloor\) auxiliary variables for \(A_{l}\). Also, the coefficients \(a_{j}\) and \(b_{j}\) depend on both \(n\) and \(m\).
2. There are \(n-m+2\) negatives in \(B_{l}\), so we need \(\lfloor(n-m+2)/2\rfloor\) auxiliary variables. We do not need the constant \(n\) to determine \(a_{j}\) and \(b_{j}\), because the sequence \(B_{l}\) is determined by \(m\) only.
\begin{table}
\begin{tabular}{c|c|c|c} \(l(\mathbf{x})\) & \(-P_{5}^{(3)}(\mathbf{x})\) & \(\min_{w_{1}=0,1}w_{1}(8-3l(\mathbf{x}))\) & \(\min_{w_{2}=0,1}w_{2}(12-3l(\mathbf{x}))\) \\ \hline \hline
0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 \\
3 & \(-1\) & \(-1\) & 0 \\
4 & \(-4\) & \(-4\) & 0 \\
5 & \(-10\) & \(-7\) & \(-3\) \\ \end{tabular}
\end{table}
Table 2. The values of \(-P_{5}^{(3)}(\mathbf{x})\) and the relative arithmetic progressions.
## 3. Proof of Theorem 1
Let \(\mathbf{x}=(x_{1},\cdots,x_{n})\) and
\[l=l(\mathbf{x})=\sum_{i=1}^{n}x_{i} \tag{8}\]
be the sum of all values in the variables \(x_{1},\ldots,x_{n}.\) Our first task is to find a value \(c=c(n,m)\) so that
\[A_{l}=P_{n}^{(m)}(\mathbf{x})-c\sum_{i<j}x_{i}x_{j}\]
becomes a decreasing sequence on \(l.\) For each \(i\geq 0\) satisfying \(n-i-2\geq m,\) we observe that
\[A_{n-i} =\binom{n-i}{m}-c\binom{n-i}{2},\] \[A_{n-i-1} =\binom{n-i-1}{m}-c\binom{n-i-1}{2},\] \[A_{n-i-2} =\binom{n-i-2}{m}-c\binom{n-i-2}{2}.\]
We want these three values to form a decreasing sequence for any such \(i\). This is possible only when
\[(A_{n-i}-A_{n-i-1})-(A_{n-i-1}-A_{n-i-2})\] \[=\left(\binom{n-i-1}{m-1}-\binom{n-i-2}{m-1}\right)-c\left(\binom {n-i-1}{1}-\binom{n-i-2}{1}\right)\] \[=\binom{n-i-2}{m-2}-c\leq 0.\]
Thus we get
\[c=\max_{i}\binom{n-i-2}{m-2}=\binom{n-2}{m-2}.\]
Next, let us find \(a_{j},b_{j}\) satisfying Equation (7). For \(i\geq 1,\)
\[\binom{2i}{m}-\binom{n-2}{m-2}\binom{2i}{2} =\sum_{k=1}^{i}a_{k}+2i\sum_{k=1}^{i}b_{k}, \tag{9}\] \[\binom{2i+1}{m}-\binom{n-2}{m-2}\binom{2i+1}{2} =\sum_{k=1}^{i}a_{k}+(2i+1)\sum_{k=1}^{i}b_{k}. \tag{10}\]
Thus we get
\[\sum_{k=1}^{i}b_{k}=\binom{2i}{m-1}-2i\binom{n-2}{m-2}. \tag{11}\]
This gives the formula (2).
Next, let us take (9) + (10) to get
\[\binom{2i}{m}+\binom{2i+1}{m}-\binom{n-2}{m-2}\left(\binom{2i}{2}+\binom{2i+ 1}{2}\right)=2\sum_{k=1}^{i}a_{k}+(4i+1)\sum_{k=1}^{i}b_{k}\]
Applying Equation (11), we get
\[\sum_{k=1}^{i}a_{k}=\binom{n-2}{m-2}\binom{2i+1}{2}-(m-1)\binom{2i+1}{m}.\]
This derives Equation (1).
## 4. Proof of Theorem 2
With the same notation for \(l\) in Equation (8), let us define \(B_{l}=-P_{n}^{(m)}(\mathbf{x})\). For each \(j\geq 0\), we want to find \(a_{j},b_{j}\) that satisfy
\[B_{l}=\min_{w_{j}=0,1}\left[\sum_{j=1}^{d}w_{j}\left(a_{j}+b_{j}l\right)\right].\]
For each \(i\geq 1\),
\[-{m+2i-2\choose m}= \sum_{k=1}^{i}a_{k}+(m+2i-2)\sum_{k=1}^{i}b_{k}, \tag{13}\] \[-{m+2i-1\choose m}= \sum_{k=1}^{i}a_{k}+(m+2i-1)\sum_{k=1}^{i}b_{k}. \tag{12}\]
By taking (12)-(13), we get
\[-{m+2i-2\choose m-1}=\sum_{k=1}^{i}b_{k}. \tag{14}\]
Thus we get Equation (4). By taking (12)+(13), we get
\[2\sum_{k=1}^{i}a_{k}+(2m+4i-3)\sum_{k=1}^{i}b_{k}=-{m+2i-1\choose m}-{m+2i-2 \choose m}.\]
Applying Equation (14), we get
\[\sum_{k=1}^{i}a_{k}=(m-1){m+2i-1\choose m}.\]
Therefore, we get Equation (3).
## 5. Simulated results
In this section, we introduce two algorithms, **MaxSymm** and **SymmRed**. **MaxSymm** is an algorithm that finds a maximal symmetric polynomial in a given polynomial. **SymmRed** is a degree reduction algorithm that relies on **MaxSymm** and **MonoRed** (cf. §1).
Algorithm 1 is the pseudocode for the **MaxSymm** algorithm. We will say a polynomial \(q\) **lies in** a polynomial \(p\) if all monomials in \(q\) are monomials in \(p\). We will say a symmetric \(m\)-polynomial \(q\) is **maximal in** \(p\) if \(q\) lies in \(p\) and there is no larger symmetric \(m\)-polynomial that lies in \(p\). In this section, we introduce an algorithm to find maximal symmetric polynomials in \(p\).
Algorithm 2 is the pseudocode for the **SymmRed** algorithm. For each symmetric polynomial found by this algorithm, we apply Theorem 1 or 2 (cf. Lines 9 and 11 in Algorithm 2). We apply the monomial degree reduction (by Freedman and Ishikawa) to the remaining part.
```
1:procedureMaxSymm(\(p\), \(m\))\(\triangleright\)\(p\) is a polynomial, \(m\geq 3\) is a degree
2: Define \(A:=B:=C:=\) the list of all \(m\)-monomials in \(p\)
3:while\(C\) is not empty do
4:\(B\gets C\)
5:\(C\leftarrow\) the empty list (of polynomials)
6:for\(q\) in \(B\)do
7:for\(r\) in \(A\)do
8:\(S\leftarrow\) the set of all variables in \(q\) and \(r\)
9:\(s\longleftarrow\) the symmetric \(m\)-polynomial on \(S\)
10:if all monomials in \(s\) lie in \(p\)then
11:\(C.\)append(\(s\))
12:endif
13:endfor
14:endfor
15:endwhile
16:return\(B\)
17:endprocedure
```
**Algorithm 1** MaxSymm algorithm
Line 6 in Algorithm 2 defines a coefficient \(a\) of a symmetric polynomial. We choose \(a\) so that we can eliminate as many monomials as possible in Line 7. However, we may have more than one candidate for \(a\). For example, the polynomial below gives
two choices for \(a\) of different signs.
\[-x_{0}x_{1}x_{2}-x_{0}x_{1}x_{3}+x_{0}x_{2}x_{3}+x_{1}x_{2}x_{3}\] \[=\underbrace{-x_{0}x_{1}x_{2}-x_{0}x_{1}x_{3}-x_{0}x_{2}x_{3}-x_{1 }x_{2}x_{3}}_{\text{Apply Theorem 1}}+\underbrace{2x_{0}x_{2}x_{3}+2x_{1}x_{2}x_{3}}_{ \text{monomial-wise reduction}}\] \[=\underbrace{x_{0}x_{1}x_{2}+x_{0}x_{1}x_{3}+x_{0}x_{2}x_{3}+x_{1 }x_{2}x_{3}}_{\text{Apply Theorem 2}}+(\underbrace{-2x_{0}x_{1}x_{2}-2x_{0}x_{1}x_{3}}_{ \text{monomial-wise reduction}})\]
We tested the Symmetric Reduction algorithm on random \(p\)-graphs. We varied the number of vertices \(V=3,4,5,6,7,8\) and the probabilities \(p=0.75\), \(0.80\), \(0.85\), \(0.90\), \(0.95\), and \(1.00\). We obtained the average number of variables \(r_{i}\) and monomials \(N_{i}\) of the reduced polynomials for both reduction methods where \(i=1\) for symmetric reduction and \(i=2\) for monomial reduction. Note that when \(p=1.00\) we get the complete graph, so the numbers are fixed. In the simulation, we set the number of colors to be \(d=\lceil\log_{2}V\rceil\).
The results are shown in Table 3 and Figure 1. On average, the number of variables produced by the symmetric reduction method is approximately \(35.95\%\sim 38.90\%\) of that produced by the monomial reduction method, and the reduced polynomial obtained by the symmetric reduction method contains approximately \(41.12\%\sim 43.35\%\) of the monomials produced by the monomial reduction method.
Figure 1. Graphs of the average numbers of variables (vertical) versus the numbers of vertices (horizontal) of random \(p\)-graphs. The square dots are for the monomial reduction, whereas the round dots are for the symmetric reduction.
## 6. Conclusion
One of the main limitations that current quantum annealing systems have in solving real-world problems is the number of qubits available on the system. Thus, optimizing the number of variables in a QUBO formulation is essential.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\(p\)} & \multirow{2}{*}{\(V\)} & \multicolumn{2}{c|}{Symmetric reduction} & \multicolumn{2}{c|}{Monomial reduction} & \multirow{2}{*}{\(r_{1}/r_{2}\)} & \multirow{2}{*}{\(N_{1}/N_{2}\)} \\ \cline{3-3} \cline{5-8} & & \(r_{1}\) & \(N_{1}\) & \(r_{2}\) & \(N_{2}\) & & \(r_{1}/r_{2}\) & \(N_{1}/N_{2}\) \\ \hline \multirow{6}{*}{0.75} & 3 & 10.01 & 40.41 & 16.67 & 64.81 & 60.07\% & 62.34\% \\ & 4 & 16.92 & 76.14 & 30.49 & 125.90 & 55.49\% & 60.47\% \\ & 5 & 132.63 & 706.06 & 329.15 & 1604.47 & 40.30\% & 44.01\% \\ & 6 & 191.24 & 1033.05 & 483.05 & 2367.48 & 39.59\% & 43.63\% \\ & 7 & 265.02 & 1446.57 & 675.62 & 3324.94 & 39.23\% & 43.51\% \\ & 8 & 344.87 & 1896.55 & 886.97 & 4376.71 & 38.90\% & 43.35\% \\ \hline \multirow{6}{*}{0.80} & 3 & 10.59 & 43.35 & 17.81 & 69.83 & 59.45\% & 62.08\% \\ & 4 & 17.54 & 80.14 & 31.95 & 132.98 & 54.90\% & 60.27\% \\ & 5 & 140.79 & 757.57 & 350.27 & 1710.45 & 40.19\% & 44.00\% \\ & 6 & 204.09 & 1107.13 & 517.53 & 2540.70 & 39.44\% & 43.58\% \\ & 7 & 277.52 & 1518.67 & 709.22 & 3493.69 & 39.13\% & 43.47\% \\ & 8 & 368.10 & 2030.41 & 948.97 & 4688.19 & 38.79\% & 43.31\% \\ \hline \multirow{6}{*}{0.85} & 3 & 10.95 & 45.38 & 18.57 & 73.34 & 58.94\% & 61.87\% \\ & 4 & 18.20 & 84.57 & 33.55 & 140.83 & 54.26\% & 60.05\% \\ & 5 & 147.75 & 792.82 & 368.94 & 1804.23 & 40.05\% & 43.94\% \\ & 6 & 215.48 & 1173.98 & 550.00 & 2703.81 & 39.18\% & 43.42\% \\ & 7 & 296.99 & 1629.53 & 759.03 & 3743.98 & 39.13\% & 43.52\% \\ & 8 & 387.75 & 2144.74 & 1003.50 & 4962.17 & 38.64\% & 43.22\% \\ \hline \multirow{6}{*}{0.90} & 3 & 11.33 & 47.66 & 19.42 & 77.32 & 58.35\% & 61.64\% \\ & 4 & 18.80 & 88.62 & 35.00 & 148.05 & 53.70\% & 59.86\% \\ & 5 & 155.46 & 837.38 & 389.48 & 1907.39 & 39.92\% & 43.90\% \\ & 6 & 225.05 & 1231.30 & 579.07 & 2849.89 & 38.86\% & 43.21\% \\ & 7 & 314.98 & 1732.93 & 806.38 & 3981.91 & 39.06\% & 43.52\% \\ & 8 & 409.07 & 2269.40 & 1063.36 & 5262.93 & 38.47\% & 43.12\% \\ \hline \multirow{6}{*}{0.95} & 3 & 11.68 & 49.84 & 20.22 & 81.15 & 57.76\% & 61.41\% \\ & 4 & 19.39 & 92.74 & 36.48 & 155.38 & 53.16\% & 59.68\% \\ & 5 & 163.19 & 881.79 & 409.50 & 2008.00 & 39.85\% & 43.91\% \\ \cline{1-1} & 6 & 231.97 & 1278.27 & 608.69 & 2998.73 & 38.11\% & 42.63\% \\ \cline{1-1} & 7 & 328.62 & 1813.48 & 845.34 & 4177.61 & 38.87\% & 43.41\% \\ \cline{1-1} & 8 & 425.13 & 2374.63 & 1126.03 & 5577.85 & 37.90\% & 42.69\% \\ \hline \multirow{6}{*}{1.00} & 3 & 12 & 52 & 21 & 85 & 57.14\% & 61.18\% \\ & 4 & 20 & 97 & 38 & 163 & 52.63\% & 59.51\% \\ \cline{1-1} & 5 & 171 & 927 & 430 & 2111 & 39.77\% & 43.91\% \\ \cline{1-1} & 6 & 234 & 1306 & 639 & 3151 & 36.62\% & 41.45\% \\ \cline{1-1} & 7 & 345 & 1909 & 889 & 4397 & 38.81\% & 43.42\% \\ \cline{1-1} & 8 & 424 & 2405 & 1180 & 5849 & 35.93\% & 41.12\% \\ \hline \end{tabular}
\end{table}
Table 3. The average numbers of variables \(r_{i}\) and monomials \(N_{i}\) from the symmetric (\(i=1\)) and monomial (\(i=2\)) reduction methods on random \(p\)-graphs with \(V\) vertices and the ratios.
We showed that there is an alternative degree reduction method that produces fewer variables during the QUBO formulation. The algorithm we implemented for degree reduction is robust. The algorithm works better on polynomials that have higher symmetry. We expect that this method can be applied to QUBO formulations of various problems.
|
2306.02292
|
New cases of super-flares on slowly rotating solar-type stars and large
amplitude super-flares in G- and M-type main-sequence stars
|
In our previous work, we searched for super-flares on different types of
stars while focusing on G-type dwarfs using entire Kepler data to study
statistical properties of the occurrence rate of super-flares. Using these new
data, as a by-product, we found fourteen cases of super-flare detection on
thirteen slowly rotating Sun-like stars with rotation periods of 24.5 to 44
days. This result supports earlier conclusion by others that the Sun may
possibly have a surprise super-flare. Moreover, we found twelve and seven new
cases of detection of exceptionally large amplitude super-flares on six and
four main-sequence stars of G- and M-type, respectively. No large-amplitude
flares were detected in A, F, or K main-sequence stars. Here we present
preliminary analysis of these cases. The super-flare detection, i.e. an
estimation of flare energy, is based on a more accurate method compared to
previous studies. We fit an exponential decay function to flare light curves
and study the relation between e-folding decay time, $\tau$, vs. flare
amplitude and flare energy. We find that for slowly rotating Sun-like stars,
large values of $\tau$ correspond to small flare energies and small values of
$\tau$ correspond to high flare energies considered. Similarly, $\tau$ is large
for small flare amplitudes and $\tau$ is small for large amplitudes considered.
However, there is no clear relation between these parameters for large
amplitude super-flares in the main sequence G- and M-type stars, as we could
not establish clear functional dependence between the parameters via standard
fitting algorithms.
|
A. k. Althukair, D. Tsiklauri
|
2023-06-04T07:58:20Z
|
http://arxiv.org/abs/2306.02292v2
|
New cases of super-flares on slowly rotating solar-type stars and large amplitude super-flares in G- and M-type main-sequence stars.
###### Abstract
In our previous work, we searched for super-flares on different types of stars while focusing on G-type dwarfs using the entire Kepler data set to study the statistical properties of the occurrence rate of super-flares. Using these new data, as a by-product, we found fourteen cases of super-flare detection on thirteen slowly rotating Sun-like stars with rotation periods of 24.5 to 44 days. This result supports the earlier conclusion by others that the Sun may possibly have a surprise super-flare. Moreover, we found twelve and seven new cases of detection of exceptionally large amplitude super-flares on six and four main-sequence stars of G- and M-type, respectively. No large-amplitude flares were detected in A, F, or K main-sequence stars. Here we present a preliminary analysis of these cases. The super-flare detection, i.e. an estimation of flare energy, is based on a more accurate method than those of previous studies. We fit an exponential decay function to flare light curves and study the relation between e-folding decay time, \(\tau\), vs. flare amplitude and flare energy. We find that for slowly rotating Sun-like stars, large values of \(\tau\) correspond to small flare energies and small values of \(\tau\) correspond to high flare energies considered. Similarly, \(\tau\) is large for small flare amplitudes and \(\tau\) is small for large amplitudes considered. However, there is no clear relation between these parameters for large amplitude super-flares in the main sequence G- and M-type stars, as we could not establish a clear functional dependence between the parameters via standard fitting algorithms.
stars: activity -- stars: flare -- stars: rotation -- stars: solar-type -- stars: statistics -- Sun: flares
## 1 Introduction
It is believed that solar and stellar flares are powered by a physical process called magnetic reconnection, in which the connectivity of magnetic field lines in the atmospheres of stars changes rapidly (Masuda et al. 1994; Shibata et al. 1995). This is accompanied by acceleration of plasma particles and release of heat. The source of this kinetic and thermal energy is the energy stored in the magnetic field. Thus, the magnetic dynamo process, which is one of the possible means of generating magnetic field via bulk plasma flows, is of great importance for understanding what can power, and therefore be the source of, a flare or a super-flare. The energies of observed stellar flares lie in the wide range from \(10^{28}\) to \(10^{37}\) erg,
while the highest energy of any observed solar flare is approximately a few times \(10^{32}\) erg. Thus, the generally agreed terminology is that a super-flare should have an energy in excess of \(10^{34}\) erg. A plausible dynamo model capable of explaining the generation of magnetic energy sufficient to support super-flares has recently been suggested by Kitchatinov & Olemskoy (2016) and then further investigated in Katsova et al. (2018). In this scenario, rather than producing stellar cycles similar to the solar 11 year cycle, the dynamos in superflaring stars excite some quasi-stationary magnetic configuration with a much higher magnetic energy. Further, Kitchatinov et al. (2018) used a flux-transport model for the solar dynamo with fluctuations of the Babcock-Leighton type \(\alpha\)-effect to generate statistics of magnetic cycles. As a result, they concluded that the statistics of the computed energies of the cycles suggest that super-flares with energies in excess of \(10^{34}\) erg are not possible on the Sun.
Historical records suggest that no super-flares have occurred on the Sun in the last two millennia. In the past there were notable examples of the detection of super-flares on Sun-like stars. There are two references which support such a claim: Schaefer et al. (2000) and Nogami et al. (2014).
Schaefer et al. (2000) claimed to have identified nine cases of super-flares involving \(10^{33}\) to \(10^{38}\) erg on main-sequence Sun-like stars. Here, Sun-like means that the stars are on or near the main sequence, have spectral classes from F8 to G8, and are single (or have a very distant binary companion). The super-flare energy estimation by Schaefer et al. (2000) was based on photometric methods.
Nogami et al. (2014) reported the results of high dispersion spectroscopy of two 'super-flare stars', KIC 9766237, and KIC 9944137, using the Subaru/HDS telescope. These two stars are G-type main sequence stars, and have rotation periods of 21.8 days, and 25.3 days, respectively. Their spectroscopic results confirmed that these stars have stellar parameters similar to those of the Sun in terms of the effective temperature, surface gravity, and metallicity. By using the absorption line of Ca II 8542, the average strength of the magnetic field on the surface of these stars was estimated to be 1-20 G. The super-flare energy estimation by Nogami et al. (2014) was based on a semi-empirical method using the magnetic energy density times the volume of the flare. These results claim that the spectroscopic properties of these super-flare stars are very close to those of the Sun, and support the hypothesis that the Sun may have a super-flare. What causes super-flares is an open issue and many theories exist to explain their origin. Karak et al. (2020) have shown that sun-like slowly rotating stars, having anti-solar differential rotation, i.e. when equatorial regions of the star rotate slower than the polar regions, can produce a very strong magnetic field and that could be a possible explanation for the superflare. It is the anti-solar differential rotation that can produce strong fields in slowly rotating stars. A study conducted by Karak et al. (2020) focuses on mean-field kinematic dynamo modeling to investigate the behaviour of large-scale magnetic fields in different stars with varying rotation periods. They specifically consider two cases: stars with rotation periods larger than 30 days, which exhibit antisolar differential rotation (DR), and stars with rotation periods shorter than 30 days, which exhibit solar-like DR. The study supports the possible existence of antisolar differential rotation in slowly rotating stars and suggests that these stars may exhibit unusually enhanced magnetic fields and potentially produce cycles that are prone to the occurrence of super-flares. In general the transition from solar to anti-solar differential rotation happens somewhere around the Rossby number of unity. For the Sun, it is obviously solar-like, but when the rotation rate decreases, one expects to have an anti-solar differential rotation. This robust transition has been seen in many numerical simulations. Karak et al. (2015), for example, using global MHD convection simulations, consistently find anti-solar differential rotation when the star rotates slowly.
Statistical study of super-flares on different stellar types has been an active area of research (Maehara et al. 2012; Shibayama et al. 2013; Wu et al. 2015; He et al. 2015, 2018; Yang et al. 2017; Van Doorsselaere et al. 2017; Lu et al. 2019; Yang & Liu 2019; Gunther et al. 2020; Tu et al. 2020; Gao et al. 2022). Shibayama et al. (2013) studied statistics of stellar super-flares. These authors discovered that for Sun-like stars (with surface temperature 5600-6000 K and slowly rotating with periods longer than 10 days), the occurrence rate of super-flares with an energy of \(10^{34}-10^{35}\) erg is once in 800-5000 yr. Shibayama et al. (2013) confirmed the previous results of Maehara et al. (2012) in that the occurrence rate (\(dN/dE\)) of super-flares versus flare energy \(E\) shows a power-law distribution with \(dN/dE\propto E^{-\alpha}\), where \(\alpha\sim 2\). Such occurrence rate distribution versus flare energy is roughly similar to that for solar flares. Tu et al. (2020) identified and verified 1216 super-flares on 400 solar-type
stars by analyzing 2-minute cadence data from 25,734 stars observed during the first year of the TESS mission. The results indicate a higher frequency distribution of super-flares compared to the findings from the Kepler mission. This difference may be due to the fact that a significant portion of the TESS solar-type stars in the dataset are rapidly rotating stars. The power-law index \(\gamma\) of the super-flare frequency distribution was determined to be \(\gamma=2.16\pm 0.10\), which is consistent with the results obtained from the Kepler mission. The study highlights an extraordinary star, TIC43472154, which exhibits approximately 200 super-flares per year. Tu et al. (2020) analyzed the correlation between the energy and duration of super-flares, represented as \(T_{duration}\propto E^{\beta}\). They derived a power-law index \(\beta=0.42\pm 0.01\) for this correlation, which is slightly larger than the value of \(\beta=1/3\) predicted by magnetic reconnection theory. A similar conclusion was reached earlier by Maehara et al. (2015), who found that the duration of superflares, \(\tau\), scales as the flare energy, \(E\), according to \(\tau\propto E^{0.39\pm 0.03}\). Yang et al. (2023) analyzed TESS light curves from the first 30 sectors of TESS data with a two-minute exposure time. They identified a total of 60810 flare events occurring on 13478 stars and performed a comprehensive statistical analysis focusing on the characteristics of flare events, including their amplitude, duration, and energy. We believe that the method for flare energy estimation used in (Shibayama et al., 2013; Yang et al., 2017) is more accurate than the one used by Schaefer et al. (2000) and Nogami et al. (2014). We therefore base our flare energy estimate on the method of Shibayama et al. (2013) and Althukair & Tsiklauri (2023), referred to hereafter as Paper I.
In Paper I we searched for super-flares on stars of different spectral classes, while focusing on G-type dwarfs (solar-type stars), using Kepler data from quarters \(0-17\) with the purpose of studying the statistical properties of the occurrence rate of super-flares. Shibayama et al. (2013) studied statistics of stellar super-flares based on Kepler data in quarters \(0-6\) (\(Q0-Q6\)). In Paper I we investigated how the results are modified by adding more quarters, i.e. what the \(\alpha\) power-law is for data quarters \(0-17\) and \(7-17\). Here, using the more extended Kepler data, we also found 14 cases of detection of super-flares on 13 slowly rotating Sun-like stars in each of KIC 3124010, KIC 3968932, KIC 7459381, KIC 7459381, KIC 7821531, KIC 9142489, KIC 9528212, KIC 9963105, KIC 10275962, KIC 11086906, KIC 11199277, KIC 11350663 and KIC 11971032. Thus the main purpose of the present study is to present an analysis of these new 14 cases. The main novelty here is that the detection is based on the Shibayama et al. (2013) method, which is more accurate for the flare energy estimation than those of Schaefer et al. (2000) and Nogami et al. (2014). Our results support the earlier conclusion by others (Schaefer et al. (2000) and Nogami et al. (2014)) that the Sun may have a surprise super-flare. We stress that Paper I conducted a more comprehensive determination of stellar rotation periods, based on a robust method such as that used by McQuillan et al. (2014), in comparison to Shibayama et al. (2013). We believe that the more accurate period determination used in Paper I has led to the current new results, presented in this paper. In addition to the 14 cases of super-flares on slowly rotating Sun-like stars, we detected 12 and 7 super-flares with a large amplitude on six G-type and four M-type main-sequence stars, respectively.
Solar flares emit energy at all wavelengths, but their spectral distribution is still unknown. When white-light continuum emission is observed, the flares are referred to as "white-light flares" (WLF). Kretzschmar (2011) identified and examined visible light emitted by solar flares and found that the white light is present on average during all flares and must be regarded as a continuum emission. Kretzschmar (2011) also demonstrated that this emission is consistent with a black-body spectrum with a temperature of 9000 K and that the energy of the continuum contains roughly 70% of the total energy emitted by the flares. WLFs are among the most intense solar flares, and it has been demonstrated that an optical continuum appears anytime the flare's EUV or soft X-ray luminosity reaches a reasonably large threshold (McIntosh & Donnelly, 1972; Neidig & Cliver, 1983). Thus, optical continuum is presumably present in all flares but only in a few cases does it reach a measurable degree of brightness. This conclusion implies that WLFs are not fundamentally different from conventional flares Neidig (1989). Nonetheless, WLFs are important in flare studies because they are similar to stellar flares in many ways (Worden 1983) and because they represent the most extreme conditions encountered in solar optical flares Neidig (1989). Using observations from the Transiting Exoplanet Survey Satellite (TESS), Ilin et al. (2021) present four fully convective stars that exhibited white light flares of large size and long duration. The underlying flare amplitude as a fraction of the quiescent flux of two flares is greater than two. After the discovery
of the largest amplitude flares ever recorded on the L0 dwarf, which reached \(\Delta V\approx-11\) magnitude (Schmidt et al. 2016), Jackman et al. (2019) detected a large amplitude white-light super-flare on the L2.5 dwarf ULAS J224940.13\(-\)011236.9 with \(\Delta V\approx-10\) magnitude, which corresponds to a relative brightness ratio of 10000. This can be demonstrated as follows:
\[\Delta V=V_{\rm max}-V_{\rm min}\approx-10, \tag{1}\]
where \(V_{\rm max}\) is the apparent magnitude in the visible band corresponding to the flux at the maximum amplitude (\(F_{\rm max}\)), and \(V_{\rm min}\) is the apparent magnitude in the visible band corresponding to the flux at the minimum state (\(F_{\rm min}\)). Using the magnitude difference calculation :
\[V_{\rm max}-V_{\rm min}=-10=2.5\log_{10}(F_{\rm min}/F_{\rm max}), \tag{2}\]
it is clear that \(F_{\rm max}/F_{\rm min}=10^{4}\), i.e. a relative brightness ratio \(\Delta F/F\approx 10000\).
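As a quick numerical check of Eqs. (1) and (2), the short Python snippet below (ours, not part of the original analysis) converts a magnitude difference into the corresponding flux ratio:

```python
import numpy as np

def flux_ratio_from_delta_mag(delta_v):
    """Return F_max / F_min for a magnitude difference
    delta_v = V_max - V_min (brighter means more negative V)."""
    # From delta_v = 2.5 * log10(F_min / F_max):
    return 10.0 ** (-delta_v / 2.5)

print(flux_ratio_from_delta_mag(-10.0))  # ~1e4, i.e. a brightness ratio of 10000
```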
Stellar superflares have been studied in multiple wavelength bands, such as X-rays and the H\(\alpha\) band. It is important to mention relevant studies here: Wu et al. (2022) analyzed spectroscopic data from LAMOST DR7 and identified a stellar flare on an M4-type star that is characterized by an impulsive increase followed by a gradual decrease in the H\(\alpha\) line intensity. The H\(\alpha\) line, which corresponds to a specific transition in hydrogen, exhibits a Voigt profile during the flare. After the impulsive increase in the H\(\alpha\) line intensity, a clear enhancement was observed in the red wing of the H\(\alpha\) line profile. Additionally, the estimated total energy radiated through the H\(\alpha\) line during the flare is on the order of \(10^{33}\) erg, providing an indication of the overall energy release associated with the event. Chandra/HETGS time-resolved X-ray spectroscopic observations were used by Chen et al. (2022) to study the behaviour of stellar flares on EV Lac. They discovered distinct plasma flows caused by flares in the corona of EV Lac, but none of them provided evidence for the actual occurrence of stellar CMEs. In most flares, the flow of plasma is accompanied by a rise in the density and temperature of the coronal plasma.
Here we present the detection of 14 super-flares on 13 slowly rotating Sun-like stars, and 12 and 7 cases of large amplitude super-flares on six G-type dwarfs and four M-type dwarfs, respectively. Section 2 presents the method used, including the flare detection, the flare energy estimation, and the rotation period determination. Section 3 provides the main results of this study. Section 4 closes this work by providing our main conclusions.
## 2 Methods
### Flare Detection
We conducted an automated search for super-flares on main-sequence stars of types (A, F, G, K, M) based on the entire Kepler data, using our Python script on long cadence data from Data Release 25 (DR 25), following the method of (Maehara et al. 2012; Shibayama et al. 2013). The parameters for all targets observed by Kepler have been taken from the Kepler Stellar interactive table in the NASA Exoplanet Archive. The study was carried out on a sample of main sequence stars, comprising 2222, 10307, 25442, 10898, and 2653 stars for the spectral types of M, K, G, F, and A, respectively. The following is a brief description of this method. We generate light curves of the stars using the PDCSAP flux. Then, in order to be statistically precise, we computed the distributions of brightness variation by calculating the flux difference in adjacent time intervals between every two neighboring data points in the light curve. Then, we determine the flux difference value at which the area under the distribution equals 1% of the total area. In order to increase the threshold, the 1% value of the area was multiplied by a factor of three. The start time of a flare was defined as the time at which the flux difference between two consecutive points exceeded the threshold for the first time. To determine the end time of the flare, we computed the three standard deviations 3\(\sigma\) of the distribution of brightness variation. Figure 1 displays typical results of this method for KIC 9963105. The light curve of KIC 9963105 is shown in Figure 1(a). Figure 1(b) shows the distribution of the brightness difference between every two adjacent data points of the KIC 9963105 light curve. 1% of the total area under the distribution curve is represented by the green vertical line. The red vertical line represents the flare detection threshold value, which is equal to three
times 1% value of the area under the distribution curve. The blue vertical line is 3\(\sigma\) of the distribution of the brightness variation. To determine the flare end time, we fit a B-spline curve through three points on the relative flux (\(\Delta F/F_{\rm avg}\)) distributed around the flare. One point just before the flare, and the other two points five and eight hours after the flare peak, respectively. Then we subtract the B-spline curve from the relative flux in order to remove long-term brightness variations around the flare Shibayama et al. (2013). We define the flare end time as the time when the relative flux produced by the subtraction drops below the value of 3\(\sigma\) for the first time. After detecting flare events, conditions were applied to all flare candidates. These conditions are: the flare duration must exceed 0.05 days, corresponding to at least three data points two of them after the flare peak, and the flare's decline phase must be longer than its rising phase Shibayama et al. (2013). Only flare incidents meeting these criteria were analysed.
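A minimal Python sketch of the detection logic described above is given below. The function and variable names are illustrative, and the treatment of the 1% area level follows one plausible reading of the text; the authors' actual AFD.py implementation may differ in detail.

```python
import numpy as np

def detect_flare_starts(time, flux, tail_fraction=0.01, factor=3.0):
    """Sketch of the threshold-based flare detection described in the text."""
    dflux = np.diff(flux)                       # brightness change between adjacent points
    # Flux-difference value above which 1% of the distribution's area lies,
    # raised by a factor of three to set the detection threshold.
    level_1pc = np.quantile(dflux, 1.0 - tail_fraction)
    threshold = factor * level_1pc
    sigma3 = 3.0 * np.std(dflux)                # used later to define the flare end
    starts = time[1:][dflux > threshold]        # first points exceeding the threshold
    return starts, threshold, sigma3

# Example with synthetic data: a flat light curve with one injected spike.
t = np.linspace(0.0, 30.0, 1500)
f = 1.0 + 1e-4 * np.random.randn(t.size)
f[700:703] += np.array([8e-3, 5e-3, 2e-3])      # toy flare
print(detect_flare_starts(t, f)[0])
```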
### Energy Calculation
Schaefer et al. (2000) identified nine cases of super-flares involving \(10^{33}-10^{38}\) erg on normal solar-type stars. Their super-flare energy estimate has a large uncertainty; e.g. the Groombridge 1830 (HR 4550) total flare energy (in the blue band alone) is \(10^{35}\) erg with an uncertainty of a factor of a few, due to having only four points on the light curve.
The possibility that super-flares can be explained by magnetic energy stored on the star's surface was considered by Nogami et al. (2014). Using the Ca II 8542 absorption line, they estimate that the average magnetic field strength (\(B\)) of KIC 9766237 and KIC 9944137 is 1-20 Gauss, and that the super-flares of these targets have a total energy of \(10^{34}\) erg. Under the assumption that the energy released during the flare represents a fraction (\(f\)) of the magnetic energy stored around the spot area, their flare energy (\(E_{\rm flare}\)) was calculated as follows:
\[E_{\rm flare}\sim f\frac{B^{2}}{8\pi}L^{3}. \tag{3}\]
The length of the magnetic structure causing the flare (\(L\)) has been considered to be the same size as the spotted region, i.e. \(L=\sqrt{a\pi R_{*}^{2}}\), where \(a\) is the spot's area, giving
\[E_{\rm flare}\sim f\frac{B^{2}}{8\pi}(a\pi R_{*}^{2})^{3/2}. \tag{4}\]
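For orientation, Eq. (4) can be evaluated numerically as in the sketch below; the adopted values of \(f\), \(B\), the spot coverage fraction \(a\) and the stellar radius are illustrative assumptions, not values quoted by Nogami et al. (2014).

```python
import numpy as np

R_sun = 6.957e10          # cm
f = 0.1                   # assumed fraction of magnetic energy released
B = 1000.0                # assumed spot magnetic field strength, G
a = 0.01                  # assumed spot coverage fraction of the stellar disc
R_star = 1.0 * R_sun      # assumed stellar radius

E_flare = f * B**2 / (8.0 * np.pi) * (a * np.pi * R_star**2) ** 1.5   # Eq. (4)
print(f"E_flare ~ {E_flare:.2e} erg")   # ~7e33 erg for these illustrative inputs
```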
Our energy estimation for each flare depends on the star's luminosity (\(L_{\rm star}\)), flare amplitude (\(f_{\rm amp}\)), and flare duration (Shibayama et al., 2013; Yang et al., 2017). \(L_{\rm star}\), the amount of energy that the star
Figure 1: Illustration of the flare detection method used by Shibayama et al. (2013). (a) The light curve of KIC 9963105. (b) The distribution of brightness variation between each pair of adjacent data points in the light curve of KIC 9963105. The green vertical line represents the value of 1% of the total area under the curve, the red vertical line represents the flare detection threshold, and the blue vertical line represents 3\(\sigma\) of the brightness variation distribution.
emits in one second, is proportional to the star's radius \(R\) squared and its surface temperature \(T_{\rm eff}\) to the fourth power, and is obtained from the following equation:
\[L_{\rm star}=\sigma_{\rm SB}T_{\rm eff}^{4}4\pi R^{2}, \tag{5}\]
where \(\sigma_{\rm SB}\) is the Stefan-Boltzmann constant and \(4\pi R^{2}\) is the entire surface area of the star. The continuum emission from a white-light flare is consistent with blackbody radiation at around 9000 K, as suggested by (Hawley & Fisher 1992; Kretzschmar 2011). Based on (Shibayama et al. 2013; Yang et al. 2017; Gunther et al. 2020), we set \(T_{\rm flare}\) = 9000 K and derive the luminosity of the flare, treated as a blackbody emitter, as follows:
\[L_{\rm flare}(t)=\sigma_{\rm SB}T_{\rm flare}^{4}A_{\rm flare}, \tag{6}\]
where \(A_{\rm flare}\) is the flare's area, as determined by the formula:
\[A_{\rm flare}(t)=f_{\rm amp}(t)\pi R^{2}\frac{\int R_{\lambda}B_{\lambda}(T_{ \rm eff})d\lambda}{\int R_{\lambda}B_{\lambda}(T_{\rm flare})d\lambda}, \tag{7}\]
where \(f_{\rm amp}\) represents the flare amplitude for the relative flux and \(R_{\lambda}\) represents the Kepler instrument's response function (Caldwell et al. 2010). The Kepler photometer covers various wavelengths, from 420 to 900 nm. The Planck function at a specific wavelength, denoted by \(B_{\lambda}(T)\), is given by:
\[B_{\lambda}(T)=\frac{2hc^{2}/\lambda^{5}}{e^{hc/\lambda kT}-1}, \tag{8}\]
where \(h\) represents Planck's constant, \(c\) the speed of light, \(T\) the black body temperature, and \(k\) Boltzmann's constant. By substituting Eq.(7) into (6), we calculate the total flare energy by the integral of \(L_{\rm flare}\) over the flare duration :
\[E_{\rm flare}=\int_{t_{\rm start}}^{t_{\rm end}}L_{\rm flare}(t)dt. \tag{9}\]
We determine the energy of the flares using the Shibayama et al. (2013) energy estimation method, which assumes blackbody radiation from both the star and flare, with a fixed flare temperature of 10,000 K, to estimate the quiescent luminosity. We note that the Shibayama et al. (2013) energy estimation can have an error of up to 60% and yet this is more accurate than the one used by Schaefer et al. (2000) and Nogami et al. (2014). To improve the accuracy, Davenport (2016) proposed an alternative method for estimating the quiescent luminosity of each star to determine the actual energy of the flares. They used the Equivalent Duration (ED) parameter, which represents the integral under the flare in fractional flux units, as a relative energy measurement for each flare event without requiring flux calibration of the Kepler light curves. To calculate the actual energy of the flares emitted in the Kepler band pass (erg), the ED values (sec) are multiplied by the quiescent luminosity (erg/sec) of the respective star. This approach establishes an absolute scale for the relative flare energies, as the quiescent luminosity is individually estimated for each star.
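A hedged Python sketch of the energy estimate of Eqs. (5)-(9) is given below. For simplicity it assumes a flat spectral response over the 420-900 nm Kepler band instead of the actual response function of Caldwell et al. (2010), so it is an order-of-magnitude illustration rather than the authors' implementation.

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16     # cgs units
sigma_SB = 5.670e-5                             # erg cm^-2 s^-1 K^-4

def planck(lam_cm, T):
    """Planck function B_lambda(T), Eq. (8), in cgs units."""
    return (2.0 * h * c**2 / lam_cm**5) / np.expm1(h * c / (lam_cm * k_B * T))

def flare_energy(time_s, f_amp, R_cm, T_eff, T_flare=9000.0):
    """Sketch of Eqs. (5)-(9), with a flat 420-900 nm response assumed."""
    lam = np.linspace(420e-7, 900e-7, 500)                           # wavelengths in cm
    band_ratio = np.trapz(planck(lam, T_eff), lam) / np.trapz(planck(lam, T_flare), lam)
    A_flare = f_amp * np.pi * R_cm**2 * band_ratio                   # Eq. (7)
    L_flare = sigma_SB * T_flare**4 * A_flare                        # Eq. (6)
    return np.trapz(L_flare, time_s)                                 # Eq. (9)

# Toy example: a 0.06-day triangular flare with peak amplitude 0.01 on a solar-size star.
t = np.linspace(0.0, 0.061, 4) * 86400.0
amp = np.array([0.0, 0.01, 0.004, 0.0])
print(f"{flare_energy(t, amp, 6.957e10, 5750.0):.2e} erg")   # of order 1e34 erg
```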
### Rotational Period Determination
Light curve periods were calculated using the Lomb-Scargle periodogram, a common statistical approach for detecting and characterising periodic signals in sparsely sampled data. We used an oversampling factor of five (VanderPlas 2018) and the PDCSAP flux to generate a Lomb-Scargle periodogram for each light curve from Q2 to Q16. Furthermore, the period corresponding to the maximum power of the periodogram was allocated as the rotation period for the Kepler ID in a specific quarter. This value was estimated with an accuracy of a day without the decimal component because fractions of a day would not significantly alter the results, allowing us to automate the selection of the star's rotation period rather than selecting it manually. We set 0.5 days for periods shorter than a day and eliminated periods less than 0.1 days. Finally, for each Kepler ID, we chose the most frequent period across all quarters from Q2 to Q16. Following the McQuillan et al. (2014) technique, we required that the period chosen for all quarters be identified in at least two unique segments, with a segment defined as three consecutive Kepler quarters: (Q2, Q3, Q4), (Q5, Q6, Q7), (Q8, Q9, Q10), (Q11, Q12, Q13) and (Q14, Q15, Q16).
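The quarter-by-quarter period selection described above could be sketched as follows; the use of astropy's LombScargle and the exact rounding rules are our assumptions about one possible implementation, not a copy of the authors' script.

```python
import numpy as np
from collections import Counter
from astropy.timeseries import LombScargle

def quarter_period(time, flux, oversample=5):
    """Rotation-period estimate for one quarter: the period at maximum
    Lomb-Scargle power, rounded to whole days (0.5 d below one day,
    periods < 0.1 d discarded), as described in the text."""
    freq, power = LombScargle(time, flux).autopower(samples_per_peak=oversample)
    period = 1.0 / freq[np.argmax(power)]
    if period < 0.1:
        return None
    return 0.5 if period < 1.0 else float(round(period))

def star_period(quarters):
    """Most frequent quarter-wise period; `quarters` is a list of (time, flux) arrays."""
    periods = [p for t, f in quarters if (p := quarter_period(t, f)) is not None]
    return Counter(periods).most_common(1)[0][0] if periods else None
```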
## 3 Results
By performing an automated search for super-flares on G-type main-sequence stars during 1442 days of Kepler observation in all of the (DR 25) long-cadence data from Q0 to Q17, we found 14 super-flares on 13 slowly rotating Sun-like stars in each of (KIC 3124010, KIC 3968932, KIC 7459381, KIC 7459381, KIC 7821531, KIC 9142489, KIC 9528212, KIC 9963105, KIC 10275962, KIC 11086906, KIC 11199277, KIC 11350663 and KIC 11971032), with a surface temperature of \(5600K\leqslant T_{\rm eff}<6000K\), a surface gravity of \(\log~{}g>4.0\), and a rotational period \(P_{\rm rot}\) ranging between 24.5 and 44 days. Figure 2 shows seven light curves of these events. The left panels display light curves over a 90-day observation window. The blue arrow on the left panel indicates the observed super-flare, which met all conditions. The right panels show light curves of the super-flares zoomed in time. The blue squares represent the data points for a super-flare. We fitted an exponential decay function to the flare light curve to characterise the flares, shown by a red dashed curve. This exponential decay function is given by:
\[f(t)=a~{}e^{-t/\tau}+b \tag{10}\]
where \(f(t)\) is the relative flux as a function of time, \(a\) is the flare peak height, which is approximately equal to the relative flux at the flare peak, \(b\) is the relative flux in the quiescent state and \(\tau\) is the decay time of the flare, which is the time at which the relative flux is decreased to \(1/e\simeq\) 0.3679 of its initial value. The values of \(\tau\), \(a\) and \(b\) of the exponential decay function for each flare are shown in the right panels. Flare parameters and their duration, amplitudes, energies and \(\tau\) values are listed in Table 1. The rotation periods for these slowly rotating Sun-like stars were taken from McQuillan et al. (2014). Their flare energies lie in the range \((1.9-9.0)\times 10^{34}\) erg. The flare amplitude of the slowly rotating Sun-like stars is relatively small, ranging between 0.002 and 0.018. We also find that the duration of flares in all of these cases is the same, 0.061 days. This is probably due to the fact that the flare duration of all small-amplitude super-flares is two data points in time or _less_. Therefore, because one of the flare detection conditions in our code is that there must be at least two data points between the flare's peak and the end, cases with flare duration less than two points were not detected, and we end up with super-flares of the same duration, with two data points. Because it appears that there are no small amplitude super-flares with duration _greater_ than two data points, all flare durations end up the same. This selection effect is because we use long cadence light curve data, which has a 29.4-minute interval between each data point in time.
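A minimal sketch of the decay-phase fit of Eq. (10), using scipy's curve_fit as mentioned later in the text, is shown below; the initial guesses and the synthetic example are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t, a, tau, b):
    """Eq. (10): relative flux vs. time from the flare peak."""
    return a * np.exp(-t / tau) + b

def fit_decay(t_from_peak, rel_flux):
    """Fit the decay phase of a flare; initial guesses are illustrative."""
    p0 = (rel_flux.max(), 0.02, 0.0)             # peak height, tau ~ 0.02 d, quiescent level
    popt, _ = curve_fit(decay_model, t_from_peak, rel_flux, p0=p0)
    return dict(zip(("a", "tau", "b"), popt))

# Toy example with a synthetic decay of tau = 0.02 days.
t = np.linspace(0.0, 0.08, 5)
y = 0.008 * np.exp(-t / 0.02) + 1e-4 * np.random.randn(t.size)
print(fit_decay(t, y))
```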
We calculated the frequency distribution of the 14 super-flares on the 13 slowly rotating Sun-like stars and plotted a log scale histogram presenting this distribution as shown in Figure 3. The x-axis
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Kepler ID & T\({}_{\rm eff}\) & log g & Radius & \(P_{\rm rot}\)a & t\({}_{\rm start}\) & t\({}_{\rm end}\) & t\({}_{\rm peak}\) & amp & Flare Duration & \(\tau\) & Flare Energy \\ & (\(K\)) & & (\(R_{\odot}\)) & (day) & (BJD) & (BJD) & (BJD) & & (day) & (day) & (erg) \\ \hline
3124010 & 5688 & 4.46 & 1.01 & 25.90 & 913.24 & 913.30 & 913.26 & 0.008 & 0.061 & 0.017 & \(4.57\times 10^{34}\) \\
3968932 & 5716 & 4.39 & 0.96 & 24.56 & 868.88 & 868.94 & 868.90 & 0.004 & 0.061 & 0.026 & \(3.89\times 10^{34}\) \\
7459381 & 5635 & 4.27 & 1.11 & 26.19 & 312.31 & 312.37 & 312.33 & 0.005 & 0.061 & 0.017 & \(4.89\times 10^{34}\) \\
766385 & 5668 & 4.36 & 0.98 & 41.83 & 841.33 & 841.19 & 841.15 & 0.003 & 0.061 & 0.030 & \(1.94\times 10^{34}\) \\
7821531 & 5681 & 4.52 & 0.92 & 32.66 & 891.81 & 891.87 & 891.83 & 0.018 & 0.016 & 0.015 & \(6.63\times 10^{34}\) \\
9142489 & 5878 & 4.51 & 0.95 & 25.20 & 1561.13 & 1561.19 & 1561.15 & 0.004 & 0.061 & 0.028 & \(2.61\times 10^{34}\) \\
9528212 & 5872 & 4.42 & 0.97 & 61.43 & 1332.34 & 1332.40 & 1332.36 & 0.003 & 0.061 & 0.036 & \(2.31\times 10^{34}\) \\
9963105 & 5751 & 4.39 & 1.01 & 28.09 & 883.80 & 883.86 & 883.82 & 0.016 & 0.061 & 0.012 & \(9.00\times 10^{34}\) \\
10275962 & 5782 & 4.51 & 0.91 & 26.12 & 213.21 & 213.27 & 213.23 & 0.007 & 0.061 & 0.035 & \(3.71\times 10^{34}\) \\
10275962 & 5782 & 4.51 & 0.91 & 26.12 & 599.85 & 599.91 & 599.87 & 0.004 & 0.061 & 0.029 & \(2.04\times 10^{34}\) \\
11086906 & 5758 & 4.38 & 1.11 & 29.18 & 1206.01 & 1206.07 & 1206.03 & 0.002 & 0.061 & 0.018 & \(1.90\times 10^{34}\) \\
11199277 & 5638 & 4.49 & 0.92 & 29.00 & 325.43 & 325.49 & 325.45 & 0.008 & 0.061 & 0.029 & \(3.72\times 10^{34}\) \\
11350663 & 5966 & 4.49 & 0.96 & 36.92 & 1232.31 & 1232.37 & 1232.33 & 0.010 & 0.061 & \(0.014\) & \(5.21\times 10^{34}\) \\
11971032 & 5942 & 4.51 & 0.94 & 44.00 & 1231.92 & 1231.98 & 1231.94 & 0.006 & 0.061 & 0.030 & \(3.80\times 10^{34}\) \\ \hline \end{tabular}
\end{table}
Table 1: super-flares on slowly rotating Sun-like stars.
Figure 2: The left panels display the light curves of super-flares. The x-axis in the left panels is the time in (BJD) and the y-axis is the normalized flux. The blue arrows indicate the occurrence of super-flares. The right panels show a zoom in time of these super-flares. The x-axis in the right panels is the time from the flare peak in (day) and the y-axis is the relative flux (\(\Delta\rm F/F_{avg}\)). Each data point of a super-flare is represented by a blue square in the right panels. The dashed red curve indicates an exponential fit to the decay phase. \(\tau\) in the equation refers to the best-fit exponential decay time, \(a\) refers to the final value of the amplitude fit and \(b\) refers to the fitted relative flux in the quiescent state.
represents the flare's energy, and the y-axis represents the number of super-flares per star per year per unit of energy. Therefore, we calculated the weight for each bin using
\[w=\frac{3.16\times 10^{7}}{N_{\rm os}\times D\times E}, \tag{11}\]
where \(N_{\rm os}\) is the number of observed stars, \(D\) is the duration of the observation period in seconds, and \(E\) is the super-flare energy that belongs to that bin. From the number of stars in Table 3 in our previous work Althukair & Tsiklauri (2023), we estimated that the number of observed G-type dwarfs with \(5600K\leqslant T_{\rm eff}<6000K\) and \(P_{\rm rot}>10\) days is equal to 19160 stars. Since this distribution is related to slowly rotating Sun-like stars with \(P_{\rm rot}\) between 24.5 and 44 days, we estimated the number of the observed stars to be one-third of the original sample, i.e. 5635 stars, given that the average rotation
Figure 2: continued.
period is 34 days which is almost three times the period of 10 days. We estimated the probability of the occurrence of super-flares in slowly rotating Sun-like stars with \(P_{\rm rot}\) of 24.5 to 44 days. We found that the rate of super-flares incidence with the energy of \(4.54\times 10^{34}\) erg is \(1.94\times 10^{-4}\) flares per year per star, corresponding to a super-flare occurring on a star once every 5160 years. We calculated this value by multiplying the average energy from the x-axis by the average dN/dE from the y-axis, \(4.54\times 10^{34}\times 4.27\times 10^{-39}=1.94\times 10^{-4}\) flares per year per star, and by taking the reciprocal of \(1.94\times 10^{-4}\), we get 5160 which gives the number of years in which a flare occurs on a star. The frequency distribution of these 14 super-flares follows a power law relation \(dN/dE\propto E^{-\alpha}\) where \(\alpha=1.9\pm 0.2\). This is consistent with our previous result in Althukair & Tsiklauri (2023) for the frequency distribution of slowly rotating G-type dwarfs where \(\alpha=2.0\pm 0.1\).
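The bin weighting of Eq. (11) and the power-law index estimate can be sketched as follows; the number of bins and the log-log straight-line fit are illustrative choices, not necessarily those used to produce Figure 3.

```python
import numpy as np

SECONDS_PER_YEAR = 3.16e7

def flare_frequency(energies, n_stars, duration_s, n_bins=4):
    """Occurrence rate dN/dE (per star, per year, per erg) built from the
    per-flare weights of Eq. (11); one plausible reading of that equation."""
    weights = SECONDS_PER_YEAR / (n_stars * duration_s * energies)   # Eq. (11)
    edges = np.logspace(np.log10(energies.min()), np.log10(energies.max()), n_bins + 1)
    dNdE, _ = np.histogram(energies, bins=edges, weights=weights)
    centers = np.sqrt(edges[:-1] * edges[1:])                        # geometric bin centres
    return centers, dNdE

def power_law_index(centers, dNdE):
    """Power-law index alpha in dN/dE ~ E^(-alpha), from a log-log linear fit."""
    mask = dNdE > 0
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(dNdE[mask]), 1)
    return -slope
```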
In addition to the 14 cases of super-flares on slowly rotating Sun-like stars, we detected 12 super-flares with a large amplitude on 6 G-type dwarfs in each of KIC 5865248, KIC 6783223, KIC 7505473, KIC 10053146, KIC 10057002 and KIC 11709752. Figure 4 shows eight light curves of these events, in the same format as Figure 2. Table 2 shows the duration, amplitude, energy, and \(\tau\) values for these super-flares with their parameters. The energies of their flares range from \(1.67\times 10^{36}\) to \(1.42\times 10^{38}\) erg. The rotation period of KIC 11709752 was obtained by this work. As for the rotation period for the other five stars, no such data is available. Even applying the method described in Paper I does not allow period determination in these five cases. According to Yang et al. (2017), there are three possible reasons: (i) due to the inclination angle and low activity level, the light curve has a small amplitude at the accuracy level of Kepler; (ii) fast-rotating stars have spots at the poles (Schussler & Solanki 1992), making it hard to detect light variation through rotation; and (iii) the rotation period is longer than 90 days (a quarter), making it difficult (or impossible) to detect them in the frequency spectrum of the star. The flare amplitudes for these cases range between 4.05 and 35.60. These flares tend to last longer than the smaller amplitude flares of slowly rotating Sun-like stars, as their durations vary between 0.061 and 0.143 day. The \(\tau\) values of flares exhibiting large amplitude on G-type main-sequence stars are observed to be higher than those of flares occurring on slowly rotating Sun-like stars, as their values range between 0.014 and 0.058 days.
For stars of other spectral classes, no significant flares with large amplitudes were detected on main-sequence stars of types A, F, and K. Only M-type main-sequence stars manifested seven super-flares with large amplitude, on KIC 6580019, KIC 7123391, KIC 7341517 and KIC 9201463. Similar to Figures 2 and 4, Figure 5 displays the seven light curves for these events. The parameters of these super
Figure 3: A log-log scale histogram showing the distribution of flare frequency as a function of flare energy for the 14 super-flares on slowly rotating Sun-like stars. The distribution follows a power-law relation \(dN/dE\propto E^{-\alpha}\) where \(\alpha=1.9\pm 0.2\).
Figure 4: Same as Figure 2 but for large amplitude super-flares on G-type main-sequence stars.
flares, including their duration, amplitude, energy, and \(\tau\) values, are displayed in Table 3. These flares have energies between \(3.16\times 10^{33}\) and \(1.59\times 10^{35}\) erg, amplitudes between 3.91 and 15.14, and durations between 0.018 and 0.044 day. The \(\tau\) values for super-flares with large amplitude on M-type main-sequence stars vary from 0.030 to 0.049 days.
We examined whether there is a dependence between \(\tau\) vs. flare amplitude (\(f_{\rm amp}\)) and \(\tau\) vs. flare energy (\(E_{\rm flare}\)). Therefore, we graphically display six panels in Figure 6 showing the relationship between \(\tau\) and the amplitude of flares and \(\tau\) and the energy of flares in slow-rotating Sun-like stars 6(a, b), G-type stars 6(c, d) and M-type stars 6(e, f) respectively. In 6(a) for slowly-rotating Sun-like stars, we find that for small amplitude, \(\tau\) is large, and when the amplitude is large, \(\tau\) is consistently small in the
Figure 4: Same as Figure 2 but for large amplitude super-flares on G-type main-sequence stars
Figure 5: Same as Figures 2 and 4 but for large amplitude super-flares on M-type main-sequence stars.
range considered. The same applies to the relation between \(\tau\) and energy in Figure 6(b): we see that large \(\tau\) values correspond to small energies and small values of \(\tau\) correspond to large energies considered. On the contrary, there is no clear relation between \(\tau\) vs. \(f_{\rm amp}\) and \(\tau\) vs. \(E_{\rm flare}\) in G-type and M-type main sequence stars in Figure 6(c-f). However, as mentioned in the Introduction, according to Maehara et al. (2015), the duration of superflares, \(\tau\), scales as the flare energy, \(E\), according to \(\tau\propto E^{0.39\pm 0.03}\). Similarly, Tu et al. (2020) found that \(T_{duration}\propto E^{0.42\pm 0.01}\). It broadly follows from simple reconnection scaling arguments that \(\tau\propto E^{1/3}\). We believe that we could not deduce such scaling because of the small number of data points in Figure 6. We tried various fit functions using Python's _curve_fit_ and Excel's _trendline_ (referred to as a line of best fit) to visualize the general trend of the data. We could not find any reliable functional fit dependence between those parameters because _the coefficient of determination_, \({\rm R}^{2}\), which shows how well the data fit the regression model, is less than 0.5 for all those cases in Figure 6(a to f). Hence any attempted fit has been unreliable, as only a fit with \({\rm R}^{2}>0.5\) can be deemed acceptable. To determine the extent to which the two variables, \(\tau\) and flare amplitude, as well as \(\tau\) and flare energy, are correlated, we calculated the _Pearson Correlation Coefficient_ (\(r\)), which measures the strength and direction of the relationship between two variables, using IDL's built-in function \({\rm CORRELATE(X,Y)}\) and Python's function _scipy_._stats_._pearsonr_. Both IDL and Python gave the same values. The values for the Pearson correlation coefficient (\(r\)) between two datasets are listed in Table 4. We note that for the slowly rotating Sun-like star datasets, \(r=-0.592\) and \(r=-0.691\) for \(\tau\) vs. \(f_{amp}\) and \(\tau\) vs. \(E_{flare}\) respectively, which suggests a noticeable negative correlation between the variables. For the remaining 4 cases these \(r\) values are close to zero, which indicates a weak or nonexistent correlation between the two variables. In general, \(r\) varies from \(-1\) to \(1\). The extreme cases of \(r=\pm 1\) mean that there is clear linear correlation/anti-correlation. \(r=0\) means that there is no linear relation between the variables.
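As an illustration, the Pearson coefficient for \(\tau\) vs. \(f_{\rm amp}\) of the slowly rotating Sun-like sample can be reproduced from the Table 1 values with scipy.stats.pearsonr:

```python
import numpy as np
from scipy.stats import pearsonr

# tau (days) and flare amplitude for the 14 flares of Table 1, in table order.
tau = np.array([0.017, 0.026, 0.017, 0.030, 0.015, 0.028, 0.036,
                0.012, 0.035, 0.029, 0.018, 0.029, 0.014, 0.030])
f_amp = np.array([0.008, 0.004, 0.005, 0.003, 0.018, 0.004, 0.003,
                  0.016, 0.007, 0.004, 0.002, 0.008, 0.010, 0.006])

r, p_value = pearsonr(tau, f_amp)
print(f"r(tau, f_amp) = {r:.3f}")   # close to the -0.592 quoted in Table 4
```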
In the context of explaining the absence of large-amplitude flares in A-, F-, and K-type main-sequence stars, while they are only detected in G-type and M-type stars, we would like to remark on the following. Using the Kepler space telescope, Chang et al. (2018) studied
\begin{table}
\begin{tabular}{l l c c c c c c c c c c} \hline \hline Kepler ID & T\({}_{\rm eff}\) & log g & Radius & \(P_{\rm rot}\)a & t\({}_{\rm start}\) & t\({}_{\rm end}\) & t\({}_{\rm peak}\) & amp & Flare Duration & \(\tau\) & Flare Energy \\ & (\(K\)) & & (\(R_{\odot}\)) & (day) & (BJD) & (BJD) & (BJD) & & (day) & (day) & (erg) \\ \hline
5865248 & 5780 & 4.44 & 1 & NA & 1476.22 & 1476.33 & 1476.24 & 13.16 & 0.102 & 0.036 & 7.29 \(\times\) 10\({}^{37}\) \\
5865248 & 5780 & 4.44 & 1 & NA & 1496.09 & 1496.21 & 1496.11 & 8.78 & 0.123 & 0.041 & 9.56 \(\times\) 10\({}^{37}\) \\
5865248 & 5780 & 4.44 & 1 & NA & 1561.78 & 1561.86 & 1561.80 & 8.14 & 0.082 & 0.034 & 6.75 \(\times\) 10\({}^{37}\) \\
6738223 & 5780 & 4.44 & 1 & NA & 1510.76 & 1510.86 & 1510.78 & 35.60 & 0.102 & 0.030 & 8.6 \(\times\) 10\({}^{37}\) \\
7505473 & 5780 & 4.44 & 1 & NA & 1385.81 & 1385.95 & 1385.85 & 4.05 & 0.143 & 0.058 & 1.67 \(\times\) 10\({}^{36}\) \\
10053146 & 5780 & 4.44 & 1 & NA & 1411.27 & 1411.33 & 1411.29 & 22.75 & 0.061 & 0.040 & 1.42 \(\times\) 10\({}^{38}\) \\
10057002 & 5780 & 4.44 & 1 & NA & 1284.52 & 1284.58 & 1284.54 & 12.03 & 0.061 & 0.049 & 6.24 \(\times\) 10\({}^{37}\) \\
11709752 & 5780 & 4.44 & 1 & 0.5 & 1501.56 & 1501.62 & 1501.58 & 4.21 & 0.061 & 0.031 & 2.85 \(\times\) 10\({}^{37}\) \\
11709752 & 5780 & 4.44 & 1 & 0.5 & 1544.06 & 1544.13 & 1544.08 & 7.65 & 0.061 & 0.014 & 2.85 \(\times\) 10\({}^{37}\) \\
11709752 & 5780 & 4.44 & 1 & 0.5 & 1571.49 & 1571.55 & 1571.51 & 4.48 & 0.061 & 0.033 & 3.94 \(\times\) 10\({}^{37}\) \\
11709752 & 5780 & 4.44 & 1 & 0.5 & 1575.35 & 1575.43 & 1575.37 & 5.11 & 0.082 & 0.046 & 5.44 \(\times\) 10\({}^{37}\) \\
11709752 & 5780 & 4.44 & 1 & 0.5 & 1578.39 & 1578.45 & 1578.41 & 9.40 & 0.061 & 0.030 & 3.94 \(\times\) 10\({}^{37}\) \\ \hline \end{tabular}
\end{table}
Table 2: Large amplitude super-flares on G-type main-sequence stars.
\begin{table}
\begin{tabular}{l l c c c c c c c c c c} \hline \hline Kepler ID & T\({}_{\rm eff}\) & log g & Radius & \(P_{\rm rot}\)a & t\({}_{\rm start}\) & t\({}_{\rm end}\) & t\({}_{\rm peak}\) & amp & Flare Duration & \(\tau\) & Flare Energy \\ & (\(K\)) & & (\(R_{\odot}\)) & (day) & (BJD) & (BJD) & (BJD) & & (day) & (day) & (erg) \\ \hline
6580019 & 2661 & 5.28 & 0.12 & NA & 609.97 & 610.09 & 609.99 & 3.91 & 0.123 & 0.044 & 7.04\(\times\)10\({}^{33}\) \\
6580019 & 2661 & 5.28 & 0.12 & NA & 674.55 & 674.60 & 674.47 & 10.30 & 0.143 & 0.041 & 1.25\(\times\)10\({}^{34}\) \\
7123391 & 3326 & 5.12 & 0.19 & NA & 638.53 & 638.59 & 638.55 & 8.07 & 0.061 & 0.038 & 4.76\(\times\) 10\({}^{34}\) \\
7123391 & 3326 & 5.12 & 0.19 & NA & 692.09 & 692.19 & 692.13 & 12.38 & 0.102 & 0.018 & 1.23\(\times\) 10\({}^{35}\) \\
7123391 & 3326 & 5.12 & 0.19 & NA & 794.54 & 794.72 & 794.56 & 15.14 & 0.184 & 0.049 & 1.59\(\times\) 10\({}^{35}\) \\
7341517 & 2661 & 5.28 & 0.12 & NA & 877.95 & 878.12 & 877.99 & 5.27 & 0.163 & 0.041 & 4.82\(\times\) 10\({}^{33}\) \\
9201463 & 3319 & 5.14 & 0.18 & NA & 215.11 & 215.27 & 215.15 & 5.51 & 0.163 & 0.030 & 3.16\(\times\) 10\({}^{33}\) \\ \hline \end{tabular}
\end{table}
Table 3: Large amplitude super-flares on M-type main-sequence stars.
M dwarfs. They found a number of flare events with peak flux increases \(\Delta F/F\geq 1\). Magnetic fields of the M dwarfs are generated by a turbulent magnetic dynamo mechanism. This is due to their
\begin{table}
\begin{tabular}{c c c} \hline \hline Sample & \(\tau\) vs. \(f_{\rm amp}\) & \(\tau\) vs. \(E_{\rm flare}\) \\ \hline Slowly rotating sun like stars & -0.592 & -0.691 \\ G-type large amplitude flares & -0.149 & 0.141 \\ M-type large amplitude flares & -0.046 & -0.098 \\ \hline \end{tabular}
\end{table}
Table 4: The Pearson correlation coefficient between \(\tau\) vs. \(f_{\rm amp}\) and \(\tau\) vs. \(E_{\rm flare}\).
Figure 5: Same as Figures 2 and 4 but for large amplitude super-flares on M-type main-sequence stars.
deep convective zones, and this leads to very powerful flares compared to G-type stars (Davenport et al. 2014). As for G-type stars, the detection of strong flares in such stars has been known for some time, starting from (Maehara et al. 2012). Therefore it is not entirely surprising that we detected large-amplitude flares in G- and M-type stars. As for A-, F-, and K-type stars, we remark that, according to Pedersen et al. (2016), for flare generation stars must have: either a deep outer convection zone for F5-type stars
Figure 6: The left panels display scatter plots showing the relation between \(\tau\) values on the y-axis and the flare amplitude \(f_{\rm amp}\) on the x-axis, while the right panels display scatter plots showing the relation between \(\tau\) values on the y-axis and the flare energy \(E_{\rm flare}\) on the x-axis. For flares on slowly rotating Sun-like stars, (a) demonstrates that \(\tau\) values are large for low flare amplitudes but consistently small for high flare amplitudes. Likewise, (b) demonstrates that high \(\tau\) values correspond to low flare energies, whereas low \(\tau\) values correspond to high flare energies. For large amplitude super-flares on G-type dwarfs (c,d) and M-type dwarfs (e,f), \(\tau\) has no clear connection to \(f_{\rm amp}\) or \(E_{\rm flare}\).
and perhaps later types; or strong, radiatively driven winds for B5-type and earlier types; or strong large-scale magnetic fields for A- and B-type stars. Pedersen et al. (2016) and earlier works suggest that normal A-type stars have no such features and thus should not flare. However, flares and super-flares have previously been detected on such stars according to (Bai & Esamdin 2020) and references therein. The situation with K-type stars is somewhat of a 'gray area'. Stars less massive and cooler than our Sun are K dwarfs; and even fainter and cooler stars are the red-coloured M dwarfs. Thus K dwarfs are probably a borderline case where large-amplitude flares can occur.
## 4 Conclusions
Using our Python script on long cadence data from Data Release 25 (DR 25), we searched for super-flares on main-sequence stars of types (A, F, G, K, and M) based on the entire Kepler data following the method of (Maehara et al. 2012; Shibayama et al. 2013). The Kepler targets' parameters were retrieved from the Kepler Stellar interactive table in the NASA Exoplanet Archive. Using these data, we detected 14 super-flares on 13 Sun-like stars with a surface temperature of \(5600K\leqslant T_{\rm eff}<6000K\), and \(P_{\rm rot}\) in the range from 24.5 to 44 days. In addition, we found 12 and 7 cases of large amplitude super-flares on six and four main-sequence G- and M-type stars, respectively. Main-sequence stars of other spectral types A, F, and K showed no signs of large-amplitude super-flares. To characterise the flares, we fit an exponential decay function to the flare light curve given by \(f(t)=a~{}e^{-t/\tau}+b\). We study the relation between the decay time of the flare after its peak \(\tau\) vs. \(f_{\rm amp}\) and \(\tau\) vs. \(E_{\rm flare}\). For slowly rotating Sun-like stars, we find that \(\tau\) is large for small flare amplitudes and \(\tau\) is small for large flare amplitudes considered. Similarly, we find that large \(\tau\) values correspond to small flare energies and small \(\tau\) values correspond to high flare energies considered. However, for the main sequence stars of the G and M types, \(\tau\) has no apparent relation to \(f_{\rm amp}\) or \(E_{\rm flare}\). We experimented with several different fit functions between \(\tau\) vs. \(f_{\rm amp}\) and \(\tau\) vs. \(E_{\rm flare}\) to better see the underlying pattern in the data. Since the \(\rm R^{2}\) is less than 0.5 in these cases, we could not identify a reliable functional fit dependence between these parameters.
In conclusion, we believe that:
(i) the thirteen peculiar Kepler IDs that are Sun-like, slowly rotating with rotation periods of 24.5 to 44 days, and yet can produce a super-flare with energies in the range of \((2\)-\(9)\times 10^{34}\) erg; and
(ii) six G-type and four M-type Kepler IDs with exceptionally large amplitude super-flares, with the relative flux in the range \(\Delta F/F_{\rm avg}=4-35\);
defy our current understanding of stars and hence are worthy of further investigation.
## Acknowledgements
Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts.
Authors would like to thank Deborah Kenny of STScI for kind assistance in obtaining the data, Cozmin Timis and Alex Owen of Queen Mary University of London for the assistance in data handling at the Astronomy Unit.
A. K. Althukair wishes to thank Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia and Royal Embassy of Saudi Arabia Cultural Bureau in London, UK for the financial support of her PhD scholarship, held at Queen Mary University of London.
Authors would like to thank an anonymous referee whose comments greatly improved this manuscript.
## Data Availability
All data used in this study was generated by our bespoke Python script that can be found at [https://github.com/akthukair/AFD](https://github.com/akthukair/AFD) under the filename AFD.py and other files in the same GitHub repository. The data underlying this article were accessed from Mikulski Archive for Space Telescopes (MAST) [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html). The long-cadence Kepler light curves analyzed in this paper can be accessed via MAST STScI (2016). The Kepler Stellar parameters table for all targets can be found at Kepler Mission (2019). The derived data generated in this research will be shared on reasonable request to the corresponding author.
|
2301.09360
|
Noise crosscorrelations can induce instabilities in coupled driven
models
|
We study the effects of noise cross-correlations on the steady states of
driven, nonequilibrium systems, which are described by two stochastically
driven dynamical variables, in one dimension. We use a well-known
stochastically driven coupled model with two dynamical variables, where one of
the variables is autonomous being independent of the other, whereas the second
one depends explicitly on the former. Introducing cross-correlations of the two
noises in the two dynamical equations, we show that depending upon the details
of the nonlinear coupling between the dynamical fields, such cross-correlations
can induce instabilities in the models, that are otherwise stable in the
absence of any cross-correlations. { We argue that this is reminiscent of the
roughening transition found in the Kardar-Parisi-Zhang equation in dimensions
greater than two. Phenomenological implications of our results are discussed.
|
Sudip Mukherjee
|
2023-01-23T10:57:54Z
|
http://arxiv.org/abs/2301.09360v2
|
# Noise crosscorrelations can induce instabilities in coupled driven models
###### Abstract
We study the effects of noise cross-correlations on the steady states of driven, nonequilibrium systems, which are described by two stochastically driven dynamical variables, in one dimension. We use a well-known stochastically driven coupled model with two dynamical variables, where one of the variables is autonomous being independent of the other, whereas the second one depends explicitly on the former. Introducing cross-correlations of the two noises in the two dynamical equations, we show that depending upon the details of the nonlinear coupling between the dynamical fields, such cross-correlations can induce instabilities in the models, that are otherwise stable in the absence of any cross-correlations. We argue that this is reminiscent of the roughening transition found in the Kardar-Parisi-Zhang equation in dimensions greater than two. Phenomenological implications of our results are discussed.
## I Introduction
Time-dependent statistical descriptions of condensed matter systems are often made in terms of continuum, Langevin equations of the relevant dynamical variables driven by noises [1]. The noises represent inherent microscopic stochasticity of the dynamics. In equilibrium systems, such stochasticity arises from thermal fluctuations. Conditions of thermal equilibrium, known as the fluctuation-dissipation theorem (FDT) [1], ensures that the damping in the system is proportional to the noise variance, which is assumed to be Gaussian-distributed with a zero mean. The proportionality constant in fact gives the temperature. In nonequilibrium systems, there is no FDT, and hence, the damping and the noise variance are independent of each other. As a result, nonequilibrium steady states (NESS) are far more diverse and complex than their equilibrium counterparts.
Physical descriptions of many natural driven systems involve coupled dynamics of several degrees of freedom. Prominent examples include a driven symmetric mixture of a miscible binary fluid [2] and magnetohydrodynamics [3]. Stochastically driven binary fluid equations of the velocity and concentration gradient [4] and magnetohydrodynamics (MHD) equations of the velocity and magnetic fields [5] have been used to study turbulence in these systems. In equilibrium systems, conditions of thermal equilibrium, e.g., in the form of an FDT, ensure that the noise statistics have no role to play in determining the thermodynamic properties of the system. For instance, relaxational dynamics both without and with a conservation law for the order parameter [6] refer to the same equal-time, thermal equilibrium properties. In contrast to equilibrium systems, in the absence of any FDT, varying the noise statistics can result in distinctly different NESS in driven systems. For instance, the Kardar-Parisi-Zhang (KPZ) equation driven by white noises [7; 8], and its conserved counterpart (the CKPZ equation) driven by conserved noises [9] have very different universal properties.
Introduction of noise cross-correlations in a stochastically driven coupled model necessarily changes the noise distribution. Whether or not this can lead to a new NESS is a question of basic importance in nonequilibrium statistical mechanics. In fact, there are examples where non-zero cross-correlations of the two noises in the two dynamical equations are found to affect the scaling properties of the NESS. The presence of such noise cross-correlations in driven systems cannot be ruled out on the basis of any symmetry arguments or physical principles. Simpler reduced models have been proposed and further studied to explore the role of noise cross-correlations. For instance, noise cross-correlations in a nonconserved relaxational model for the complex scalar field turn out to be generally a relevant perturbation on the equilibrium states of the model near a critical point [10]. Subsequently, by using a coupled Burgers model originally proposed in [11], Refs. [11; 12] have shown that noise cross-correlations can lead to continuously varying scaling exponents in the NESS. Nonetheless, a general understanding of the effects of noise cross-correlations on the NESS of driven models still remains largely at an early stage.
In this work, we revisit the issue of the effects of noise crosscorrelations on the NESS of coupled driven models. Since we are interested in studying a question of principle, it suffices to work with simple models where explicit calculations can be performed relatively easily, but still nontrivial results can be obtained. To that end, we use a one-dimensional (1D) model [13; 14; 15], where one of the fields \(v(x,t)\) is autonomous, and satisfies the well-known 1D Burgers equation [16]. The second dynamical field \(b\) is dynamically influenced by \(v\), and follows an equation similar to the well-known passive scalar equation with a compressible velocity field. We investigated the universal spatio-temporal scaling properties. Using a specific structure of the noise crosscorrelations, we show that depending upon the specific model at hand, either it introduces instabilities to the NESS characterised by well-known scaling exponents, or is _irrelevant_ (in the renormalisation group or RG sense), leaving the long wave
length scaling properties unaffected. In the former case, the ultimate steady state that the instability by the noise cross-correlations lead to cannot be determined from the low order perturbation theory employed here. The remainder of this article is organised as follows. In Section II, we have introduced the models we have used, along with the form of the noise cross-correlations. Then in Section IV.2, we have discussed the details of the dynamic RG analysis on the model. In Section V, we summarise and conclude. Some of the intermediate technical details including the one-loop Feynman diagrams are given in Appendix for interested readers.
## II Model
Noise crosscorrelations can generically exist in any multi-variable system described by stochastically driven dynamical equations for the dynamical variables. Since we are trying to answer questions of principle, we use a simple, purpose-built 1D model that suffices for our purposes. The model consists of two dynamical variables \(v(x,t)\) and \(b(x,t)\). Of these two, we assume \(v\) to be _autonomous_, i.e., independent of \(b\), and to follow the well-known 1D Burgers equation [16], given by
\[\frac{\partial v}{\partial t}=\nu\partial_{xx}v+\frac{\lambda_{1}}{2}\partial _{x}v^{2}+f_{v}.\] (II.1)
Here, \(\nu>0\) is a diffusivity, and \(\lambda_{1}\) is a non-linear coupling constant that can be of either sign. Further, \(f_{v}\) is a conserved noise. Hereafter, we refer to \(v\) as the "Burgers velocity field" [16].
The second field \(b\) is assumed to be advected by the Burgers velocity \(v\)_passively_, i.e., it has no effect on the dynamics of \(v\). The dynamics of \(b\) follows [15]
\[\frac{\partial b}{\partial t}=\mu\partial_{xx}b+\lambda_{2}\partial_{x}(vb)+f _{b}.\] (II.2)
Here, \(\mu>0\) is a diffusivity, and \(\lambda_{2}\) is a non-linear coupling constant that can be of either sign. Further, \(f_{b}\) is a conserved noise. Following the nomenclature of Ref. [11], we call \(b\) the "Burgers magnetic field". Equations (II.1) and (II.2) can in fact be obtained from the coupled Burgers equations by dropping the feedback term in the equation for \(v\); see Refs. [11; 17]. These can also be written in terms of nonconserved height or displacement fields; see, e.g., Refs. [13; 18].
The noises \(f_{v}\) and \(f_{b}\) have zero mean, and are Gaussian-distributed. Their joint probability distribution is characterised by the three variances (written in the Fourier space and functions of frequency \(\omega\) and wavevector \(k\))
\[\langle f_{v}(k,\omega)f_{v}(-k,-\omega)\rangle = 2D_{v}k^{2},\] (II.3) \[\langle f_{b}(k,\omega)f_{b}(-k,-\omega)\rangle = 2D_{b}k^{2},\] (II.4) \[\langle f_{v}(k,\omega)f_{b}(-k,-\omega)\rangle = 2iD_{\times}k|k|,\] (II.5)
where \(D_{v}\), \(D_{b}>0\) necessarily, but the sign of \(D_{\times}\) is arbitrary. Reality of the noises means that the noise variance matrix constructed from (II.3)-(II.5) must have real, non-negative eigenvalues, which in turn implies \(D_{v}\,D_{b}\geq D_{\times}^{2}\). Equation (II.5) gives the noise cross-correlation here, which is _purely imaginary_ and _odd_ in \(k\), i.e., \(D_{\times}(k)=-D_{\times}(-k)\). The structure of (II.5) is dictated by the symmetry properties of (II.1) and (II.2). In line with [11], we assume \(v\) to be a _pseudo-scalar_ field and \(b\) to be a _scalar_ field (i.e., a vector and a pseudo-vector at dimensions \(d>1\)). This means Eqs. (II.1) and (II.2) are invariant under \(x\rightarrow-x,\,v\rightarrow-v,\,b\to b\). These symmetries hold true even if there is a "mean \(b\)", i.e., \(\langle b\rangle\neq 0\) [11]. On the other hand, if \(\langle b\rangle=0\), the model has a higher symmetry: Equations (II.1) and (II.2) are also invariant under \(x\rightarrow-x,\,v\rightarrow-v,\,b\rightarrow-b\). While our subsequent calculations specialise to \(\langle b\rangle=0\), we continue to impose invariance under \(x\rightarrow-x,\,v\rightarrow-v,\,b\to b\) only. This symmetry implies that the cross-correlation function \(\langle v(x,t)b(0,0)\rangle\) is an _odd_ function of \(x\). This in turn means, as in Ref. [11], \(\langle v(k,t)b(-k,0)\rangle\) is purely imaginary and odd in \(k\). Since \(\langle v(k,t)b(-k,0)\rangle\) is proportional to the noise cross-correlations, and the noises in (II.1) and (II.2) must follow the symmetries of the corresponding dynamical variables, (II.5) follows directly [19].
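As a side check of the constraint \(D_{v}D_{b}\geq D_{\times}^{2}\), the short sympy sketch below (our illustration, not part of the original analysis; we take \(k>0\), so that \(k|k|=k^{2}\)) verifies that the Hermitian noise covariance matrix built from (II.3)-(II.5) is positive semi-definite precisely under this condition.

```python
# Sketch (our illustration): positivity of the noise covariance built from
# Eqs. (II.3)-(II.5); for k > 0 the cross-correlation 2*i*D_x*k|k| = 2*i*D_x*k^2.
import sympy as sp

Dv, Db, k = sp.symbols('D_v D_b k', positive=True)
Dx = sp.symbols('D_x', real=True)          # D_x may have either sign

C = sp.Matrix([[2*Dv*k**2,       2*sp.I*Dx*k**2],
               [-2*sp.I*Dx*k**2, 2*Db*k**2]])     # Hermitian covariance matrix

# trace > 0 always; det >= 0 (hence both eigenvalues >= 0) iff D_v*D_b >= D_x**2
print(sp.simplify(C.det()))                 # -> 4*k**4*(D_b*D_v - D_x**2)
print([sp.simplify(ev) for ev in C.eigenvals()])
```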
## III Galilean invariance
Model equations (II.1) and (II.2) are invariant under a pseudo-Galilean transformation \(x\to x+v_{0}t,\,v\to v+v_{0},\,t\to t,\,\frac{\partial}{\partial t} \rightarrow\frac{\partial}{\partial t}+\lambda_{1}v_{0}\partial_{x}\), when \(\lambda_{1}=\lambda_{2}\). When \(\lambda_{1}\neq\lambda_{2}\), there is no Galilean invariance. However, as our calculations below show (see also Refs. [11; 12; 13; 20] for related discussions), even if \(\lambda_{1}\neq\lambda_{2}\), i.e., even when they are unequal microscopically, Galilean invariance is recovered and appears as an _emergent symmetry_ in the long wavelength limit, i.e., \(\lambda_{1}=\lambda_{2}\) in the renormalised theory, so long as \(\lambda_{1}\lambda_{2}>0\) holds. In contrast, if \(\lambda_{1}\lambda_{2}<0\), Galilean invariance is _not_ restored even in the long wavelength limit. Hence, Galilean invariance is genuinely broken even in the renormalised theory in this case. These are the two distinct cases, which we discuss separately below. We further consider a third case, in which we set \(\lambda_{1}=0\). Thus in this case, \(v\) satisfies the linear 1D diffusion equation. The corresponding equation in terms of a height field is the well-known Edwards-Wilkinson equation [8], forced by a nonconserved noise.
## IV Scaling
We are interested in calculating the scaling exponents which characterise the time-dependent correlation func
tions of \(v\) and \(b\):
\[C_{v}(r,t)\equiv\langle v(r,t)v(0,0)\rangle = |r|^{2\chi_{v}}f_{v}(|r|^{z_{v}}/t),\] (IV.1) \[C_{b}(r,t)\equiv\langle b(r,t)b(0,0)\rangle = |r|^{2\chi_{b}}f_{b}(|r|^{z_{b}}/t).\] (IV.2)
or their Fourier transformed versions
\[C_{v}(k,\omega)\equiv\langle|v(k,\omega)|^{2}\rangle = k^{2\tilde{\chi}_{v}}\tilde{f}_{v}(k^{z_{v}}/\omega),\] (IV.3) \[C_{b}(k,\omega)\equiv\langle|b(k,\omega)|^{2}\rangle = k^{2\tilde{\chi}_{b}}\tilde{f}_{b}(k^{z_{b}}/\omega),\] (IV.4)
in the long wavelength limit. Here \(\chi_{v}\) and \(z_{v}\) are the roughness and dynamic exponents of \(v(x,t)\); \(\chi_{b}\) and \(z_{b}\) are respectively the corresponding roughness and dynamic exponents of \(b(x,t)\); \(\tilde{\chi}_{v}\) (\(\tilde{\chi}_{b}\)) can be connected to \(\chi_{v}\) (\(\chi_{b}\)) by Fourier transform, giving
\[\tilde{\chi}_{v}=1+\chi_{v}+z_{v},\ \ \tilde{\chi}_{b}=1+\chi_{b}+z_{b}.\] (IV.5)
Further, \(f_{v,b}(|r|^{z}/t)\) and \(\tilde{f}_{v,b}(k^{z}/\omega)\) are dimensionless scaling functions of their respective arguments. Notice that we have allowed for two different dynamic exponents. If \(z_{v}=z_{b}\), then one gets _strong_ dynamic scaling, else, if \(z_{v}\neq z_{b}\), weak dynamic scaling ensues [21].
### Linear theory
The linear limit of the model equations is obtained by setting all nonlinear terms to zero, i.e., by setting \(\lambda_{1}=0=\lambda_{2}\). In this limit, all the two-point correlations can be calculated exactly. We have
\[\langle|v(k,\omega)|^{2}\rangle = \frac{2D_{v}k^{2}}{\omega^{2}+\nu^{2}k^{4}},\] (IV.6) \[\langle|b(k,\omega)|^{2}\rangle = \frac{2D_{b}k^{2}}{\omega^{2}+\mu^{2}k^{4}},\] (IV.7) \[\langle v(k,\omega)b(-k,-\omega)\rangle = \frac{2ik|k|D_{\times}}{(-i\omega+\nu k^{2})(i\omega+\mu k^{2})}.\] (IV.8)
These give the exact exponent values \(\chi_{v}=\chi_{b}=-1/2\), which may be obtained by inverse Fourier transforming the above correlators, and \(z_{v}=z_{b}=2\), corresponding to strong dynamic scaling in the linear theory. If noise crosscorrelations vanish, the linearised equations actually admit an FDT. In fact, if \(D_{\times}=0\), \(v\) and \(b\) fully decouple at the linear level, and by using FDT one can identify two "temperatures" \(T_{v}=D_{v}/\nu\) and \(T_{b}=D_{b}/\mu\) in the linear theory, which are in general unequal. But a non-zero noise cross-correlation breaks FDT even at the linear level, making it impossible to identify any temperature-like quantity. Lastly, it is straightforward to show by using (IV.6)-(IV.8) that the ratios of the equal-time correlators \(\langle v(x,0)v(0,0)\rangle\), \(\langle b(x,0)b(0,0)\rangle\), \(\langle v(x,0)b(x,0)\rangle\) are all just numbers.
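As a quick consistency check of these exponent values, the equal-time velocity correlator follows from integrating (IV.6) over frequency; the sympy sketch below (ours, not part of the original text) shows that the result is independent of \(k\), i.e., spatially white, consistent with \(\chi_{v}=-1/2\); the \(b\) correlator (IV.7) works identically.

```python
# Sketch (our illustration): equal-time correlator of v in the linear theory.
import sympy as sp

w, k, Dv, nu = sp.symbols('omega k D_v nu', positive=True)
C_vv = 2*Dv*k**2/(w**2 + nu**2*k**4)                 # Eq. (IV.6)
equal_time = sp.integrate(C_vv, (w, -sp.oo, sp.oo))/(2*sp.pi)
print(sp.simplify(equal_time))                       # -> D_v/nu, no k-dependence
```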
### Nonlinear effects
The presence of the nonlinear terms no longer allows enumeration of the exact scaling exponents for (II.1) and (II.2), unlike in the linear theory. Thus perturbative treatments are necessary. Naive perturbation theory produces diverging corrections to the model parameters. These divergences may be systematically handled within the framework of dynamic RG [6].
While the dynamic RG procedure is already well-documented in the literature [6], we give below a brief outline of this method for the convenience of the readers. It is useful to first cast the model equations (II.1) and (II.2) into a dynamic generating functional by introducing dynamic conjugate fields \(\tilde{v}(x,t)\) and \(\tilde{b}(x,t)\); see Ref. [22], see also Appendix A and Appendix B for some intermediate details. The dynamic generating functional is then averaged over the Gaussian distribution of the noises \(f_{v}\) and \(f_{b}\) with variances (II.3), (II.4) and (II.5). The momentum shell dynamic RG procedure consists of integrating over the short wavelength Fourier modes of \(v(x,t)\), \(b(x,t)\), \(\tilde{v}(x,t)\) and \(\tilde{b}(x,t)\) in the generating functional. This is then followed by rescaling of lengths and time. In particular, we follow the standard approach of initially restricting wavevectors to lie in a Brillouin zone: \(|q|<\Lambda\), where \(\Lambda\) is an ultra-violet cutoff of the order of the inverse of the lattice spacing \(a\), although its precise value is unimportant so far as the scaling in the long wavelength limit is concerned. The fields \(v(x,t)\), \(b(x,t)\) and their dynamic conjugates \(\tilde{v}(x,t)\), \(\tilde{b}(x,t)\) are then split into high and low wavevector parts: \(v(x,t)=v^{>}(x,t)+v^{<}(x,t)\), \(b(x,t)=b^{>}(x,t)+b^{<}(x,t)\) and \(\tilde{v}(x,t)=\tilde{v}^{>}(x,t)+\tilde{v}^{<}(x,t)\), \(\tilde{b}(x,t)=\tilde{b}^{>}(x,t)+\tilde{b}^{<}(x,t)\), where \((v,b)^{>}(x,t)\) and \((\tilde{v},\tilde{b})^{>}(x,t)\) are non-zero in the high wavevector range \(\Lambda/b<k<\Lambda,\,b>1\), whereas \((v,b)^{<}(x,t)\) and \((\tilde{v},\,\tilde{b})^{<}(x,t)\) are non-zero in the low wavevector range \(k<\Lambda/b\). Next, \(v^{>}(x,t)\), \(b^{>}(x,t)\) and \(\tilde{v}^{>}(x,t)\), \(\tilde{b}^{>}(x,t)\) are integrated out in the dynamic generating functional. Of course, this integration cannot be done exactly, but is done perturbatively in the anharmonic couplings in Appendix A. This perturbation theory is usually represented by Feynman diagrams, with the order of the perturbation theory given by the number of loops in the diagrams that we calculate; see Appendix B. Following this perturbative step, we rescale length by \(x=x^{\prime}\exp(l)\), in order to restore the UV cutoff back to \(\Lambda\). We further rescale time by \(t=t^{\prime}\exp(l\,z)\); whether or not \(z\) equals the actual dynamic exponent will be found as we go along. This is then followed by rescaling of \(v^{<}(x,t)\), \(b^{<}(x,t)\) and \(\tilde{v}^{<}(x,t)\), \(\tilde{b}^{<}(x,t)\), the long wavelength parts of \(v(x,t),\,b(x,t)\) and \(\tilde{v}(x,t)\), \(\tilde{b}(x,t)\); see Appendix B. We discuss (i) \(\lambda_{1}\lambda_{2}>0\) (Case I), (ii) \(\lambda_{1}\lambda_{2}<0\) (Case II), and (iii) \(\lambda_{1}=0,\,\lambda_{2}>0\) (Case III) separately below. Before we discuss the RG results in detail, we note that \(v\), being independent of \(b\), follows the well-known 1D Burgers equation [16], for which the scaling exponents \(\chi_{v}\) and \(z_{v}\) are known _exactly_, thanks to the Galilean invariance and FDT [8; 16]. This gives \(\chi_{v}=-1/2\), \(z_{v}=3/2\). The corresponding scaling exponents of \(b\), however, cannot be obtained exactly, necessitating perturbative approaches. The relevant one-loop Feynman diagrams for the model parameters and the noise strengths are given in Appendix B.1. Independent of the sign of \(\lambda_{1},\lambda_{2}\), the critical dimension of the model is two. Since we are interested in 1D, below the critical dimension, we use a fixed dimension RG scheme, the same as that used in the RG calculations on the 1D KPZ equation [8].
### Case I: Renormalisation group analysis
The one-loop Feynman diagrams, upon evaluation, produce the discrete recursion relations. These are given in Appendix B.2. This procedure is followed by rescaling of space, time and the fields, together with \({\cal L}=e^{dl}\approx 1+dl\) for small \(dl\) (here \({\cal L}\) is a running scale factor, not to be confused with the system size, which we formally take to be infinity), which ultimately gives the following RG flow equations:
\[\frac{d\nu}{dl} = \nu\left[z-2+\frac{g}{4}\right],\] (IV.9) \[\frac{d\mu}{dl} = \mu\left[z-2+\frac{g\psi^{2}}{2(1+P)P}+\frac{g(1-P)\psi^{2}}{(1+ P)^{2}P}\right],\] (IV.10) \[\frac{dD_{v}}{dl} = D_{v}\left[z-2\chi_{v}-3+\frac{g}{4}\right],\] (IV.11) \[\frac{dD_{b}}{dl} = D_{b}\left[z-2\chi_{b}-3+\frac{g\psi^{2}}{P(1+P)}-\frac{4\Gamma \alpha g\psi^{2}}{(1+P)^{3}}\right],\] (IV.12) \[\frac{d\lambda_{1}}{dl} = \lambda_{1}\left[\chi_{v}+z-1\right],\] (IV.13) \[\frac{d\lambda_{2}}{dl} = \lambda_{2}\bigg{[}\chi_{v}+z-1-\frac{\psi^{2}g}{(1+P)^{2}}+\frac {\psi g(3+P)}{2(1+P)^{2}}\] (IV.14) \[- \frac{g\psi}{2(1+P)}\bigg{]},\] \[\frac{dD_{\times}}{dl} = D_{\times}\left[z-\chi_{v}-\chi_{b}-3\right].\] (IV.15)
Here, \(g\equiv\frac{\lambda_{1}^{2}D_{v}}{\nu^{3}}\) is the dimensionless coupling constant, \(P\equiv\frac{\mu}{\nu}\) is the dimensionless magnetic Prandtl number, \(\Gamma\equiv D_{\times}^{2}/D_{v}^{2}\) and \(\alpha\equiv D_{v}/D_{b}\). All of these are non-negative by construction. For reasons of notational convenience, we define \(\Phi\equiv\alpha\Gamma\). In addition, \(\psi\equiv\lambda_{2}/\lambda_{1}\), which in the present case is positive. Notice that there are no relevant one-loop corrections to \(D_{\times}\). This is because the vertices in Eqs. (II.1) and (II.2) are \({\cal O}(k)\), and hence cannot generate any relevant corrections to the noise crosscorrelation (II.5), which scales as \(k|k|\) [12]. This argument for the lack of renormalisation of \(D_{\times}\) actually holds to all orders, making \(D_{\times}\) unrenormalised at any order in the perturbation theory.
Flow equations (IV.9)-(IV.15) can be used to calculate the flow equations for \(g\), \(P\), \(\psi\) and \(\Phi\). We find
\[\frac{dg}{dl} = g\left[1-\frac{g}{2}\right],\] (IV.16) \[\frac{dP}{dl} = Pg\left[\frac{\psi^{2}}{2P(1+P)}+\frac{\psi^{2}(1-P)}{P(1+P)^{2} }-\frac{1}{4}\right],\] (IV.17) \[\frac{d\psi}{dl} = \psi g\bigg{[}-\frac{\psi^{2}}{(1+P)^{2}}+\frac{\psi}{(1+P)^{2}} \bigg{]},\] (IV.18) \[\frac{d\Phi}{dl} = \Phi g\left[-\frac{1}{4}-\frac{\psi^{2}}{1+P}+\frac{4\Phi\psi^{2 }}{(1+P)^{3}}\right].\] (IV.19)
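The flows (IV.16)-(IV.19) follow from (IV.9)-(IV.15) by combining logarithmic derivatives of the dimensionless couplings. The sympy sketch below is our own consistency check (not part of the original derivation): it reproduces the \(g\), \(P\) and \(\psi\) flows for arbitrary \(P\), and checks the \(\Phi\) flow at the fixed point value \(P=1\), where the correction terms of (IV.12) and (IV.19) coincide.

```python
# Sketch (our consistency check): flows of g = lambda1^2 D_v / nu^3, P = mu/nu,
# psi = lambda2/lambda1 and Phi = D_x^2/(D_v D_b) from Eqs. (IV.9)-(IV.15).
import sympy as sp

g, P, psi, Phi = sp.symbols('g P psi Phi', positive=True)
z, chiv, chib = sp.symbols('z chi_v chi_b', real=True)

# logarithmic flows d ln(parameter)/dl read off Eqs. (IV.9)-(IV.15)
dln_nu = z - 2 + g/4
dln_mu = z - 2 + g*psi**2/(2*(1 + P)*P) + g*(1 - P)*psi**2/((1 + P)**2*P)
dln_Dv = z - 2*chiv - 3 + g/4
dln_Db = z - 2*chib - 3 + g*psi**2/(P*(1 + P)) - 4*Phi*g*psi**2/(1 + P)**3
dln_l1 = chiv + z - 1
dln_l2 = chiv + z - 1 - psi**2*g/(1 + P)**2 + psi*g*(3 + P)/(2*(1 + P)**2) - g*psi/(2*(1 + P))
dln_Dx = z - chiv - chib - 3

dln_g, dln_P = 2*dln_l1 + dln_Dv - 3*dln_nu, dln_mu - dln_nu
dln_psi, dln_Phi = dln_l2 - dln_l1, 2*dln_Dx - dln_Dv - dln_Db

# the four prints below all give 0
print(sp.simplify(dln_g - (1 - g/2)))                                        # Eq. (IV.16)
print(sp.simplify(dln_P - g*(psi**2/(2*P*(1 + P)) + psi**2*(1 - P)/(P*(1 + P)**2)
                             - sp.Rational(1, 4))))                          # Eq. (IV.17)
print(sp.simplify(dln_psi - g*(psi - psi**2)/(1 + P)**2))                    # Eq. (IV.18)
print(sp.simplify((dln_Phi - g*(-sp.Rational(1, 4) - psi**2/(1 + P)
                                + 4*Phi*psi**2/(1 + P)**3)).subs(P, 1)))     # Eq. (IV.19) at P = 1
```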
At the RG fixed point, \(dg/dl=0=dP/dl=d\psi/dl=d\Phi/dl\). This gives \(g^{*}=2,\,\psi^{*}=1\) and \(P^{*}=1\) as the _stable_ RG fixed point; here and below a superscript \({}^{*}\) refers to the RG fixed point value of any quantity. Notice that \(\psi^{*}=1\) is obtained from (IV.18) for any \(P\). Notice that \(\psi=0=g\) is _also_ a fixed point of (IV.16) and (IV.18); it is however globally unstable. Unsurprisingly, \(g^{*}\) is same as that for the 1D Burgers equation [8; 16], since \(g\) depends only on the parameters of (II.1), which is autonomous. This in turn gives \(\chi_{v}=-1/2,\,z_{v}=3/2\), which are _exact_, due to the Galilean invariance and FDT of the 1D Burgers equation [8; 16; 23]. Further, \(P^{*}=1\) means at the RG fixed point \(\nu^{*}=\mu^{*}\), i.e., the two renormalised diffusivities are equal, even if they were unequal microscopically. This further implies that the fields \(v\) and \(b\) have the _same_ dynamic exponent: \(z_{v}=z_{b}=z\). This is an example of _strong dynamic scaling_[21]. Furthermore, at the RG fixed point \(\psi^{*}=1\) implies that \(\lambda_{1}^{*}=\lambda_{2}^{*}\) at the RG fixed point, such that the fixed point is Galilean invariant, even if it were not so microscopically (but with \(\lambda_{1}\lambda_{2}>0\) microscopically). Thus, the Galilean invariance is an emergent symmetry, even though it is absent microscopically, a statement that holds even in the presence of noise crosscorrelations just as it does in its absence [13].
The RG flow of \(\Phi\) requires careful attention. We see that the fixed point condition \(d\Phi/dl=0\) produces the following fixed points, at each of which the spatial scaling of \(b\) given by \(\chi_{b}\) is analysed:
(i) \(\Phi^{*}=0\), a stable fixed point. At this RG fixed point, the noise crosscorrelations effectively vanish, and the long wavelength scaling properties of the system are identical to that without it: \(\chi_{b}=-1/4\).
(ii) There is a second fixed point given by the condition
\[-\frac{1}{4}-\frac{\psi^{2}}{1+P}+\frac{4\Phi\psi^{2}}{(1+P)^{3}}=0,\] (IV.20)
giving \(\Phi^{*}=3/2\equiv\Phi_{c1}\), obtained by using \(\psi^{*}=1=P^{*}\), which is _linearly unstable_. This implies that if \(\Phi(\ell=0)<\Phi_{c1}\), the system flows to the fixed point \(\Phi^{*}=0\), i.e., to a steady state that is statistically identical in the long wavelength limit to a state having no crosscorrelations at the microscopic level, and hence the scaling exponents take values identical to those _without_ cross-correlations. At this unstable fixed point \(\chi_{b}=-1\), different from its value at \(\Phi^{*}=0\), i.e., without cross-correlations. If, however, \(\Phi(\ell=0)>\Phi_{c1}\), then the system reaches a steady state not known from perturbation theories. Thus in this case, noise crosscorrelations remain _relevant_ in the RG sense.
(iii) Given that \(\Phi^{*}=\Phi_{c1}\) is a linearly unstable fixed point, such that if the "initial value" \(\Phi(l=0)>\Phi_{c1}\), \(\Phi(l)\) flows away as \(l\) grows, there should be another, presumably stable, "strong coupling" fixed point, which however cannot be accessed in this one-loop perturbation theory. This indicates an instability of the zero-crosscorrelation state, induced by noise crosscorrelations. It is instructive to find out how \(\Phi(l)\) diverges for sufficiently large \(l\). As \(\Phi(l)\) grows, (IV.19) reduces to
\[\frac{d\Phi}{dl}\approx\frac{4g\Phi^{2}\psi^{2}}{(1+P)^{3}}=\Phi^{2}.\] (IV.21)
Solving this, we find
\[\Phi(l)=\frac{\Phi(l=0)}{1-l\,\Phi(l=0)}.\] (IV.22)
This shows that \(\Phi(\ell)\) diverges as \(\ell\to l_{c}\equiv 1/\Phi(\ell=0)\) from below. In other words, \(\Phi(\ell)\) diverges as the system size reaches a _non-universal_ threshold \(a_{0}\exp(l_{c})\), where \(a_{0}\) is a small-scale cutoff. This happens so long as the "initial" or microscopic value \(\Phi(l=0)>\Phi_{c1}\). What is the nature of the steady state in this case? We note from (IV.12) that as \(\Phi(\ell)\) grows, \(\chi_{b}\) decreases continuously. This is of course unphysical. We cannot in fact follow the flow of \(\Phi(\ell)\) all the way to \(\ell\to l_{c}\), as the perturbation theory breaks down long before that.
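To illustrate these two regimes numerically, the sketch below (our illustration, using scipy; it is not part of the original analysis) integrates the \(\Phi\) flow at the Galilean-invariant fixed point, where (IV.19) with \(g=2\), \(\psi=1\), \(P=1\) reduces to \(d\Phi/dl=\Phi^{2}-\tfrac{3}{2}\Phi\): initial values below \(\Phi_{c1}=3/2\) flow to zero, whereas values above it blow up at a finite, non-universal scale, consistent with (IV.21) and (IV.22).

```python
# Sketch (our illustration): one-loop flow of Phi at g* = 2, psi* = 1, P* = 1,
# where Eq. (IV.19) becomes dPhi/dl = Phi**2 - 1.5*Phi.
from scipy.integrate import solve_ivp

def flow(l, y):
    phi = y[0]
    return [phi**2 - 1.5*phi]

def blow_up(l, y):           # stop once Phi becomes very large
    return y[0] - 1.0e3
blow_up.terminal = True

for phi0 in (0.5, 1.4, 1.6, 2.5):
    sol = solve_ivp(flow, (0.0, 20.0), [phi0], events=blow_up,
                    max_step=0.01, rtol=1e-8)
    if sol.t_events[0].size:
        print(f"Phi(0) = {phi0}: blows up near l = {sol.t_events[0][0]:.2f}")
    else:
        print(f"Phi(0) = {phi0}: flows to Phi = {sol.y[0, -1]:.3f}")
```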
Combining the fixed points of \(\psi\) and \(\Phi\), we thus find the following fixed points in the \(\psi-\Phi\) plane for \(g=2\):
(i) The origin (0,0). This has quite an interesting stability property - it is _marginally unstable_ along the \(\psi\)-direction, but _stable_ along the \(\Phi\)-direction! Naturally, the flow along the \(\Phi\)-axis is towards the origin.
(ii) A globally stable fixed point \((1,0)\). At this fixed point, the long wavelength scaling properties are identical to those with zero noise crosscorrelations.
(iii) A fixed point \((1,3/2)\) that is stable along the \(\psi\)-direction, but unstable along the \(\Phi\)-direction.
(iv) A putative globally stable "strong coupling" fixed point, which cannot be captured in our perturbation theory.
The presence of several fixed points suggests the existence of one or more separatrices, which separate different regions of the phase space having distinct behaviours. Since the origin is stable along the \(\Phi\)-axis, but unstable along the \(\psi\)-axis, there should be a separatrix originating from the origin delineating these behaviours. Linearising (IV.18) and (IV.19) about (0,0), and defining \(\Gamma_{1}=\Phi/\psi\) as the slope of the separatrix near the origin, we set \(d\Gamma_{1}/dl=0\) for the separatrix. This gives \(\Gamma_{1}=0\) near the origin, i.e., the \(\psi\)-axis. In the same way, we linearise about the unstable fixed point \((1,3/2)\), and define \(\tilde{\Gamma}_{1}=\delta\Phi/\delta\psi\), where \(\delta\Phi\) and \(\delta\psi\) are (small) fluctuations of \(\Phi\) and \(\psi\) from their fixed point values. For a separatrix that passes through \((1,3/2)\), we set \(d\tilde{\Gamma}_{1}/dl=0\), giving
\[\delta\Phi=-\frac{3}{4}\delta\psi,\] (IV.23)
giving the separatrix near the fixed point (1,3/2).
A schematic RG flow diagram of the model in the \(\psi-\Phi\) plane is shown in Fig. 1.
At this point, it is instructive to draw a formal analogy of this instability with the roughening transition in the KPZ equation at \(d>2\), a transition between the smooth phase and the perturbatively inaccessible rough phase. This can be accessed, e.g., by increasing the noise strength. Similarly in the present problem, by increasing \(\Phi\) one can observe a transition from a perturbatively accessible phase having no effects of the noise crosscorrelations to a perturbatively inaccessible phase, where noise crosscorrelations should be relevant in the RG sense, via an unstable fixed point that is reminiscent of a critical point.
While one-loop perturbative RG cannot predict the nature of the steady states near the putative strong coupling fixed point, we note that in the special limit with \(\lambda_{1}=\lambda_{2}\) and \(\nu=\mu\) "initially" (i.e., microscopically), these conditions remain satisfied under mode eliminations. It is therefore reasonable to expect that even at the strong coupling fixed point, these should hold in the long wavelength limit at least when these are satisfied microscopically. This in turn means \(z_{v}=z_{b}=3/2\) (strong dynamic scaling) at the strong coupling fixed point. Roughness exponent \(\chi_{b}\) however cannot be estimated in this way. Nonetheless, given that
Figure 1: Schematic RG flow diagram in the \(\psi-\Phi\)-plane for Case I. Arrows show the RG flow directions. The filled square represents the unstable fixed point, and the filled circle on the \(\psi\)-axis is a stable fixed point. The red broken line is the separatrix (IV.23).
\(\chi_{b}\neq\chi_{v}\) at the unstable fixed point \(\Phi^{*}=\Phi_{c1}\), we are tempted to speculate that \(\chi_{b}\neq\chi_{v}\) at the strong coupling fixed point as well. Furthermore, since \(\Phi\) should be non-zero at this strong coupling fixed point, we expect there \(\chi_{b}<\chi_{b}(\Phi=0)\). In addition, the topology of the RG flow lines suggests that at the perturbatively inaccessible strong coupling fixed point, \(\Phi^{*}>\Phi_{c1}\), the fixed point value of \(\Phi\) at the unstable fixed point. If this is the case, then we must have the hierarchy \(\chi_{b}(\text{strong\,coupling})<\chi_{b}(\Phi=\Phi_{c1})<\chi_{b}(\Phi=0)\). This runs in contrast to the KPZ equation at \(d>2\), where the strong coupling phase is also the "rough phase", being rougher than both the smooth phase and the phase at the roughening transition. The physical implication of this is not immediately clear to us. Numerical studies of the equations of motion or mode coupling approaches should help in this regard.
The different scaling exponents obtained in Case I are presented in a tabular form in Table 1 below.
Although we obviously cannot follow the RG flow of \(\Phi(l)\), starting from \(\Phi(l=0)>\Phi_{c1}\), all the way to infinity as \(\Phi(l)\) appears to diverge as \(l\to l_{c}\) from below, it is possible to speculate about the nature of the phases in this region of the parameter space. For this, we are guided by the fact that \(\psi=1\) and \(P=1\) are maintained by the perturbation theory, and hence at the strong coupling fixed point also. Given that the form of the noise crosscorrelations, as given in (II.5), _does not_ break the Galilean invariance by itself, this persuades us to speculate that \(\psi^{*}=1\) (corresponding to Galilean invariance) and \(P^{*}=1\) are stable at the strong coupling fixed point. All that the noise crosscorrelations can do is to generate a non-zero fixed value of \(\Phi\), leading to \(\chi_{b}\neq\chi_{v}\), something that already happens at the unstable fixed point. This suggests the existence of _another_ stable fixed point located at \(\psi=1\) and \(\Phi>\Phi_{c1}\). This is an Occam's razor-style argument which allows us to draw the simplest RG flow lines that are on the one hand physically intuitive and on the other consistent with the perturbatively obtained flow lines; see Fig. 2.
### Case II: Renormalisation group analysis
We now consider the case with \(\psi<0\), or \(\lambda_{1}\lambda_{2}<0\). As in Case I, there are no relevant corrections to \(D_{\times}\). To proceed further, we assume \(\lambda_{1}>0\) without any loss of generality. Then \(\lambda_{1}\lambda_{2}<0\) implies \(\lambda_{2}<0\). Writing \(\psi=-|\psi|\), flow equation (IV.18) takes the form
\[\frac{d|\psi|}{dl} = |\psi|g\bigg{[}-\frac{\psi^{2}}{(1+P)^{2}}-\frac{|\psi|}{(1+P)^{ 2}}\bigg{]}.\] (IV.24)
Thus, \(\psi^{*}=0\) is the _only_ RG fixed point, which is _stable_. Further, \(g^{*}=2\) at the RG fixed point, which is stable as before. With \(\psi^{*}=0\), Eq. (II.2) effectively decouples from Eq. (II.1); as a result, fluctuation corrections to \(\mu\) vanish, which means \(z_{b}=2\). However, the fluctuation corrections to \(\nu\) remain unaffected, giving \(z_{v}=3/2\) as before. Therefore \(z_{b}>z_{v}\), and \(P^{*}\to 0\) at the RG fixed point. This gives _weak dynamic scaling_ [13; 14; 15]. These in turn give
\[\frac{d\Phi}{dl}=-\frac{\Phi}{2}<0,\] (IV.25)
in the long wavelength limit. This means \(\Phi\) flows to zero rapidly near the RG fixed point in the thermodynamic limit. Thus, the effects of the noise cross-correlations are _irrelevant_ in the RG sense when \(\lambda_{1}\lambda_{2}<0\): even if noise crosscorrelations are present microscopically, the long wavelength scaling properties of the steady states are the same as those without them. A schematically drawn RG flow diagram in the \(|\psi|-\Phi\)-plane is shown in Fig. 3.
Unsurprisingly, the origin (0,0) is the only stable fixed point
\begin{table}
\begin{tabular}{c|c|c|c} \multicolumn{4}{c}{Case I fixed points and scaling exponents (\(g=2\))} \\ \hline \hline \(\psi^{*}=0\), \(\Phi^{*}=0\) & \(\psi^{*}=1\), \(\Phi^{*}=0\) & \(\psi^{*}=1\), \(\Phi^{*}=3/2\) & \(\psi^{*}=1\), strong coupling (\(\Phi^{*}>3/2\)) \\ Marginally unstable along \(\psi\), stable along \(\Phi\) & Linearly fully stable & Stable along \(\psi\), unstable along \(\Phi\) & Presumably fully stable \\ Linear theory for \(b\): \(\chi_{b}=-1/2\), \(z_{b}=2\) & \(\chi_{b}=-1/4\), \(z_{b}=3/2\) & \(\chi_{b}=-1\), \(z_{b}=3/2\) & \(z_{b}=3/2\) if \(\lambda_{1}=\lambda_{2}\), \(P=1\) microscopically; \(\chi_{b}\) not known \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fixed points in the \(\psi-\Phi\) plane and the associated scaling exponents (with \(g=2\)) in Case I (see text).
Figure 2: Conjectured global RG flow diagram in the \(\psi-\Phi\)-plane for Case I, constructed by using Occam’s razor style arguments. Arrows show the RG flow directions. The filled square represents the unstable fixed point, and the filled circle is a stable fixed point. The red square is the speculated, presumably globally stable fixed point. The red broken line is the separatrix, drawn schematically as an extension of (IV.23), which we do not expect to meet any of the axes at any finite distance from the origin.
in the flow diagram. The flows along both the \(\psi\) and \(\Phi\)-directions are towards the origin.
### Case III: Renormalisation group analysis
We now consider the case with \(\lambda_{1}=0\). Thus, not only is \(v\) autonomous, it follows a linear equation, and its scaling exponents are of course exactly known: we have \(\chi_{v}=-1/2,\,z_{v}=2\). In this case, the sign of \(\lambda_{2}\) has no significance. We first define a new dimensionless coupling constant \(\tilde{g}=\lambda_{2}^{2}D_{v}/\nu^{3}\), which plays the role of \(g\) here. The RG flow equations now read
\[\frac{d\nu}{dl} = \nu\left[z-2\right],\] (IV.26) \[\frac{d\mu}{dl} = \mu\left[z-2+\frac{\tilde{g}}{2(1+P)P}+\frac{\tilde{g}(1-P)}{(1+P)^{2}P}\right],\] (IV.27) \[\frac{dD_{v}}{dl} = D_{v}\left[z-2\chi_{v}-3\right],\] (IV.28) \[\frac{dD_{b}}{dl} = D_{b}\left[z-2\chi_{b}-3+\frac{\tilde{g}}{P(1+P)}-\frac{4\Phi\tilde{g}}{(1+P)^{3}}\right],\] (IV.29) \[\frac{d\lambda_{2}}{dl} = \lambda_{2}\left[\chi_{v}+z-1-\frac{\tilde{g}}{(1+P)^{2}}\right].\] (IV.30)
Since \(\lambda_{1}=0\), there are no diagrammatic corrections to \(D_{\times}\). Hence, the flow of \(D_{\times}\) follows the same equation (IV.15). We have noted above that \(z_{v}=2\) gives the dynamic exponent of \(v\). Does \(z_{b}=2\) for \(b\) as well? If so, then \(P^{*}\) must be finite at the RG fixed point. The flow of \(P\) can be calculated by using (IV.26) and (IV.27) given above. We find
\[\frac{dP}{dl}=P\tilde{g}\left[\frac{1}{2P(1+P)}+\frac{1-P}{P(1+P)^{2}}\right].\] (IV.31)
This has a stable fixed point at \(P^{*}=3\), meaning \(\mu^{*}=3\nu^{*}\). This further means that like \(v\), even \(b\) has a dynamic exponent \(z_{b}=2\). Furthermore, the flow equation for \(\tilde{g}\) reads
\[\frac{d\tilde{g}}{dl}=\tilde{g}\left[1-\frac{2\tilde{g}}{(1+P)^{2}}\right].\] (IV.32)
Therefore, at the RG fixed point \(\tilde{g}^{*}=8\), using \(P^{*}=3\). Then, proceeding as for Case I, we find
\[\frac{d\Phi}{dl}=\Phi\tilde{g}\left[-\frac{1}{P(1+P)}+\frac{4\Phi}{(1+P)^{3} }\right].\] (IV.33)
Flow equation (IV.33) gives the following fixed points, which interestingly are qualitatively similar to Case I:
(i) We have \(\Phi^{*}=0\), a stable fixed point. At this RG fixed point, noise crosscorrelations are irrelevant in the RG sense, and the long wavelength scaling properties of the model is statistically identical to its zero noise crosscorrelation version [13; 14; 15]. In particular, \(\chi_{b}=-1/6\) and \(z_{b}=2\).
(ii) Then there is a linearly unstable fixed point \(\Phi^{*}=4/3\equiv\Phi_{c2}\). Thus, if the initial value \(\Phi(l=0)<4/3\equiv\Phi_{c2}\), \(\Phi(l)\) flows to zero, rendering noise crosscorrelations irrelevant in the RG sense. At this unstable fixed point, \(\chi_{b}(\Phi_{c2})=-1/2<\chi_{b}(\Phi=0)\). On the other hand, if \(\Phi(l=0)>\Phi_{c2}\), \(\Phi(l)\) grows indefinitely as \(l\) grows. Again as in Case I, this indicates an instability, induced by noise crosscorrelations. In fact, proceeding as in Case I, we can show that \(\Phi(l)\) diverges as \(l\rightarrow\tilde{l}_{c}\equiv 2/\Phi(l=0)\), a non-universal value.
Similar to Case I, a separatrix can be obtained that passes through \((8,4/3)\) in the \(\tilde{g}-\Phi\) plane. Following the procedure outlined in Case I, we find
\[\frac{\delta\Phi}{\delta\tilde{g}}=0\] (IV.34)
as the equation for the separatrix near the fixed point \((8,4/3)\), where \(\delta\tilde{g}\) and \(\delta\Phi\) are small deviations of \(\tilde{g}\) and \(\Phi\) from their fixed point values.
(iii) Again as in Case I, given that \(\Phi(l)\) grows if \(\Phi(l=0)>\Phi_{c2}\), another (presumably stable) fixed point \(\Phi^{*}>4/3\) should exist, whose actual value cannot be obtained in the present one-loop perturbation theory. Based on our one-loop theory, no inference can be drawn about the scaling properties of this "strong coupling" state. However, given that if one has \(P=3\) microscopically, it remains so under mode eliminations, we still expect \(z_{v}=z_{b}=2\). By using arguments similar to those for Case I, we expect \(\chi_{b}<\chi_{b}(\Phi=0)\) at this fixed point, and again an analogous hierarchy \(\chi_{b}(\Phi>\Phi_{c2})<\chi_{b}(\Phi=\Phi_{c2})<\chi_{b}(\Phi=0)\). Numerical approaches should provide qualitative results and additional physical insight about the strong coupling phase.
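The fixed point values quoted above follow directly from (IV.31)-(IV.33); the sympy sketch below (ours, not part of the original text) solves for \(P^{*}\), \(\tilde{g}^{*}\) and \(\Phi_{c2}\).

```python
# Sketch (our illustration): fixed points of the Case III flow equations.
import sympy as sp

P, gt, Phi = sp.symbols('P g_tilde Phi', positive=True)

dP_bracket   = sp.Rational(1, 2)/(P*(1 + P)) + (1 - P)/(P*(1 + P)**2)        # Eq. (IV.31)
P_star       = sp.solve(sp.Eq(dP_bracket, 0), P)[0]                          # -> 3
g_star       = sp.solve(sp.Eq(1 - 2*gt/(1 + P_star)**2, 0), gt)[0]           # -> 8, Eq. (IV.32)
dPhi_bracket = -1/(P*(1 + P)) + 4*Phi/(1 + P)**3                             # Eq. (IV.33)
Phi_c2       = sp.solve(sp.Eq(dPhi_bracket.subs(P, P_star), 0), Phi)[0]      # -> 4/3

print(P_star, g_star, Phi_c2)
```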
The different scaling exponents obtained in Case III are presented in a tabular form in Table 2 below.
Figure 3: Schematic RG flow diagram in the \(\psi-\Phi\)-plane for Case II. Arrows show the RG flow directions. The filled circle represents the globally stable fixed point. Here, noise cross-correlations are generally irrelevant (see text).
A schematically drawn RG flow diagram in the \(\tilde{g}-\Phi\)-plane is shown in Fig. 4. There is a stable fixed point at \(\tilde{g}=8\), \(\Phi=0\) and an unstable fixed point at \(\tilde{g}=8\), \(\Phi=4/3\). Further, it is clear from (IV.33) that the \(\Phi\)-axis (i.e., \(\tilde{g}=0\)) is a _marginal_ direction. Naturally, the origin, another fixed point, is unstable along the \(\tilde{g}\) direction, but marginal along the \(\Phi\)-direction.
Notice that in Case III, similar to Case I, the divergence of \(\Phi(l)\) for a sufficiently large initial value \(\Phi(l=0)\) is reminiscent of the divergence of the dimensionless coupling constant in the higher dimensional (\(d>2\)) KPZ equation in the long wavelength limit, resulting in a perturbatively inaccessible strong coupling rough phase. In analogy with the rough phase of the \(d>2\) KPZ equation, we are tempted to interpret the perturbatively inaccessible steady states with a large but presumably finite \(\Phi\) as a type of strong coupling phase.
Similar to Case I, one may use Occam's razor type arguments to draw the global RG flow lines, which match the flow diagram in Fig. 4. Due to the obvious qualitative similarity between Case I and Case III, such a global RG flow diagram in the \(\tilde{g}-\Phi\) plane in Case III should have the same topology as the corresponding diagram for Case I in its \(\psi-\Phi\) plane, as shown in Fig. 2.
## V Summary and outlook
In this work, we have studied the effects of noise cross-correlations on the steady states of a 1D coupled driven model. Specifically, one of the dynamical variables \(v\) follows the well-known Burgers equation, and evolves autonomously, being independent of the second dynamical field \(b\). The second dynamical field \(b\) is passively advected by the "Burgers velocity" \(v\), and follows an equation that closely resembles the well-known passive scalar model [24]. We have analysed the long wavelength properties of this model in the presence of finite noise crosscorrelations, whose effects depend upon the precise nature of the model. We consider three different cases, delineated by the nonlinear coupling constants. For instance, in Case I, with Galilean invariance appearing as an emergent symmetry, where the advective coupling constant \(\lambda_{2}\) in the \(b\)-equation has the same sign as the advective nonlinearity \(\lambda_{1}\) in the Burgers equation for \(v\), a sufficiently strong noise crosscorrelation amplitude, above a finite threshold, can destabilise the system, whereas microscopic values weaker than the threshold render noise crosscorrelations irrelevant in the RG sense. In the latter case, the model is identical to the one without noise crosscorrelations in the long wavelength limit. In contrast, in Case II, where \(\lambda_{2}\) and \(\lambda_{1}\) have opposite signs, noise crosscorrelations are _generically irrelevant_ in the RG sense. We have also considered yet another case, denoted Case III here, where \(\lambda_{1}=0\), making \(v\) follow the linear diffusion equation. In this case, similar to Case I, noise crosscorrelations with amplitudes greater than a threshold lead to instabilities, whereas below the threshold they are irrelevant in the RG sense. In the unstable cases, the eventual steady states cannot be obtained from our calculations. Numerical simulations of equivalent lattice models, or direct numerical solutions of the model equations, can verify our perturbative results, and also shed light on the instabilities and the resulting unknown steady states. The existence of unstable fixed points in Case I and Case III, and the associated perturbatively inaccessible putative strong coupling phases, have a strong resemblance to the well-known roughening transition in the KPZ equation at \(d>2\). Quite interestingly, the scaling exponents of \(b\) at the unstable fixed point in both Case I and Case III are less than their values at zero noise crosscorrelations. This suggests that the field \(b\) actually _fluctuates less_ at the unstable critical point. The RG analysis fails to capture the strong coupling phases in Case I and Case III. Mode coupling methods may be useful in extracting the scaling exponents in these strong coupling phases [15]. It may however be noted that in Case III there are no parameter regimes
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline \multicolumn{3}{|c|}{Case III fixed points and scaling exponents (\(\tilde{g}=8\))} \\ \hline \hline \(\Phi^{*}=0\) & \(\Phi^{*}=4/3\) & \(\Phi\to\infty\) (strong coupling) \\ Linearly stable & Linearly unstable & Presumably stable, perturbatively inaccessible \\ \(\chi_{b}=-1/6\), \(z_{b}=2\) & \(\chi_{b}=-1/2\), \(z_{b}=2\) & \(z_{b}=2\) if \(P=3\) microscopically; \(\chi_{b}\) not known \\ \hline \end{tabular}
\end{table}
Table 2: Fixed points and the associated scaling exponents (with \(\tilde{g}=8\)) in Case III (see text)
Figure 4: Schematic RG flow diagram in the \(\tilde{g}-\Phi\)-plane for Case III. Arrows show the RG flow directions. The filled square represents the unstable fixed point, and the filled circle represents a stable fixed point. The red broken line is the separatrix (IV.34).
where mode coupling theories may be straightforwardly applied to the strong coupling regime. This is because in Case III vertex corrections play a significant role for any nonzero \(\lambda_{2}\). This is in contrast to Case I, where if the model is Galilean invariant (i.e., \(\lambda_{1}=\lambda_{2}\)) microscopically, there are no relevant vertex corrections. On the whole, we thus see that the precise effects of the noise crosscorrelations depend quite sensitively on the details of the models under consideration.
In this work, we have confined ourselves to studying only conserved noises. Calculations similar to those here can be performed with noise variances that scale differently with \(k\), e.g., long-range noises. We speculate that similar effects will be visible even in that case.
We have assumed the noise crosscorrelations to be imaginary and odd in the Fourier wavevector \(k\). What would happen if we chose a different structure for the noise crosscorrelations, e.g., real and even in \(k\)? Straightforward perturbation theory generates a term of the form \(\partial_{xx}v\) in the fluctuation-corrected (II.2). This is not surprising, since a real and even noise crosscorrelation implies that \(v\) and \(b\) have the same properties under parity inversions; see, e.g., discussions in Ref. [11]. This then no longer rules out a diffusive \(v\) term in (II.2) [11]. Given that \(v\) is autonomous, such a term in (II.2) effectively acts like another additive noise for \(b\), whose statistics is given by the statistics of \(v\). It would be interesting to study how noise crosscorrelations in that case affect the known scaling and stability of the NESS without them.
We have used a simple, purpose-built, minimal coupled 1D model to study the role of noise crosscorrelations. However, the question of the effects of noise crosscorrelations should be important in a host of natural systems, going much beyond such simple driven 1D models. RG studies, for instance, on active XY models [25; 26] and passive scalar models [27] can give interesting insights into the precise role of noise crosscorrelations in the NESS of more complex models. We hope our work here will provide impetus for further studies on this topic.
_Acknowledgement:-_ S.M. thanks the SERB, DST (India) for partial financial support through the TARE scheme [file no.: TAR/2021/000170] (2022).
## Appendix A Action functional
We give the action functional \(\mathcal{S}\) corresponding to Eqs. (II.1) and (II.2). It is defined via the generating functional \(\mathcal{Z}\) given by
\[\mathcal{Z}=\int\mathcal{D}v\mathcal{D}b\mathcal{D}\tilde{v}\mathcal{D}\tilde {b}\exp(-\mathcal{S}), \tag{23}\]
where \(\mathcal{S}\) is given by
\[\mathcal{S} = -\int\frac{dk}{2\pi}\frac{d\omega}{2\pi}\left[D_{v}k^{2}|\tilde{ v}(k,\omega)|^{2}+D_{b}k^{2}|\tilde{b}(k,\omega)|^{2}+iD_{\times}k|k|\tilde{v}(k, \omega)\tilde{b}(-k,-\omega)\right] \tag{24}\] \[- \int dxdt\left[\tilde{v}\left(\partial_{t}v-\frac{\lambda_{1}}{ 2}\partial_{x}v^{2}-\nu\partial_{xx}v\right)+\tilde{b}\bigg{(}\partial_{t}b- \lambda_{2}\partial_{x}(vb)-\mu\partial_{xx}b\bigg{)}\right].\]
## Appendix B Details of the RG analysis
In this Section, we discuss some details of the momentum shell RG procedure applied on our model.
### Feynman diagrams
In this Section, we give the one-loop Feynman diagrams for the model parameters in (II.2). The corresponding one-loop diagrams are standard; see, e.g., Refs. [8].
Figure 5: One-loop Feynman diagram that corrects \(\mu\).
### Discrete recursion relations
\[D_{v}^{<} = D_{v}+\frac{\lambda_{1}^{2}D_{v}^{2}}{4\nu^{3}}\frac{\delta l}{ \Lambda\pi}, \tag{38}\] \[D_{b}^{<} = D_{b}+\left[\frac{\lambda_{2}^{2}D_{v}D_{b}}{\nu\mu(\nu+\mu)}- \frac{4\lambda_{2}^{2}D_{\times}^{2}}{(\nu+\mu)^{3}}\right]\frac{\delta l}{ \Lambda\pi},\] (39) \[D_{\times}^{<} = D_{\times},\] (40) \[\nu^{<} = \nu+\frac{\lambda_{1}^{2}D_{v}}{4\nu^{2}}\frac{\delta l}{\Lambda \pi},\] (41) \[\mu^{<} = \mu+\left[\frac{\lambda_{2}^{2}D_{v}}{2\nu(\nu+\mu)}+\frac{ \lambda_{2}^{2}D_{v}(\nu-\mu)}{\nu(\nu+\mu)^{2}}\right]\frac{\delta l}{\Lambda \pi},\] (42) \[\lambda_{1}^{<} = \lambda_{1},\] (43) \[\lambda_{2}^{<} = \lambda_{2}-\left[\frac{\lambda_{2}^{3}D_{v}}{\nu(\nu+\mu)^{2}}- \frac{\lambda_{2}^{2}\lambda_{1}(3\nu+\mu)D_{v}}{2\nu^{2}(\nu+\mu)^{2}}\right.\] (44) \[+ \left.\frac{\lambda_{2}^{2}\lambda_{1}D_{v}}{2\nu^{2}(\nu+\mu)} \right]\frac{\delta l}{\Lambda\pi}.\]
|
2302.10361
|
Ambipolar Heating of Magnetars
|
Magnetars, neutron stars thought to be with ultra-strong magnetic fields of
$10^{14 - 15}$ G, are observed to be much hotter than ordinary pulsars with
$\sim 10^{12}$ G, and additional heating sources are required. One possibility
is heating by the ambipolar diffusion in the stellar core. This scenario is
examined by calculating the models using the relativistic thermal evolutionary
code without making the isothermal approximation. The results show that this
scenario can be consistent with most of the observed magnetar temperature data.
|
Sachiko Tsuruta, Madeline J. Kelly, Ken'ichi Nomoto, Kanji Mori, Marcus Teter, Andrew C. Liebmann
|
2023-02-20T23:35:36Z
|
http://arxiv.org/abs/2302.10361v1
|
# Ambipolar Heating of Magnetars
###### Abstract
Magnetars, neutron stars thought to be with ultra-strong magnetic fields of \(10^{14-15}\) G, are observed to be much hotter than ordinary pulsars with \(\sim 10^{12}\) G, and additional heating sources are required. One possibility is heating by the ambipolar diffusion in the stellar core. This scenario is examined by calculating the models using the relativistic thermal evolutionary code without making the isothermal approximation. The results show that this scenario can be consistent with most of the observed magnetar temperature data.
Dense matter -- stars: magnetar -- X-rays: stars
## 1 Introduction
The soft gamma-ray repeaters (SGR) and the anomalous X-ray pulsars (AXP) are now considered to be magnetars, the same population of the ultra-strongly magnetized neutron stars with magnetic fields of the order of \(10^{14}\) - \(10^{15}\) G on the surface (e.g., Mereghetti, 2008; Thompson & Duncan, 2001; Heyl & Hernquist, 1998; Potekhin et al., 2015). Activities in these stars are powered by the dissipation of strong magnetic energy (e.g., Thompson & Duncan, 2001). Magnetars generally undergo long quiescent periods with persistent X-ray emission between a shorter recurrent phase of gamma-ray bursts (e.g., Mereghetti, 2008).
During the quiescent phase the star is in a nearly steady equilibrium state. The surface temperature of many magnetars during this phase has been measured. Figure 1 shows the measured surface luminosity (and hence surface temperature) of magnetars (taken from Vigano et al., 2013), which is compared with theoretical thermal evolution curves for ordinary neutron stars with magnetic fields of \(10^{12}\) G (Tsuruta et al., 2009).
In this figure, two upper thick solid curves are for \(1.4M_{\odot}\) neutron stars, with the lower curve for cooling only while the upper one includes the maximum vortex creep heating. Two curves between these thick solid curves show the vortex creep heating with intermediate strength. The hot dashed curve is for stars with the crusts contaminated by light elements. The lower three curves represent stars with \(1.5M_{\odot}\), \(1.6M_{\odot}\) and
\(1.8M_{\odot}\), respectively, in the order of decreasing luminosity. When the stellar mass reaches \(M_{tr}=1.45M_{\odot}\), the transition from neutron matter to hyperon-mixed matter takes place. Therefore, these are hyperon stars with the non-standard fast cooling. For the intermediate case of the \(1.5M_{\odot}\) and \(1.6M_{\odot}\) stars, superfluid suppression is effective. However, for the heaviest \(1.8M_{\odot}\) star, the central density is so high that the corresponding critical temperature for superfluidity becomes so low that the superfluid suppression disappears.
It is clear that most magnetars are hotter than ordinary neutron stars. The surface temperature becomes higher with stronger magnetic fields, but with cooling alone the temperature cannot become as high as the observed magnetar temperatures even when the surface magnetic fields are increased to as high as \(10^{15}\) G (assuming the conventional magnetic field structure, which becomes dipolar globally) (e.g., Heyl & Hernquist, 1998). With a certain special magnetic field configuration where ultra-strong magnetic fields of as high as \(\sim 10^{16}\) G are deposited in the equatorial plane near the inner edge of the inner crusts, the star was shown to become as hot as the observed magnetars (Vigano et al., 2013; Potekhin et al., 2015). However, it has since been pointed out that this special magnetic structure is unlikely to last long due to instability (Beloborodov & Li, 2016, hereafter BL16). Therefore, it is important to explore heating of magnetars with a more conventional magnetic field structure.
In the following we only consider such a conventional magnetosphere case. Several possible heating scenarios have been proposed for magnetars (e.g., BL16; Thompson & Duncan, 2001). The star can be heated in the interior, either within the stellar core or in the crustal layers. It can also be heated from the outside, from the magnetosphere (see Thompson & Duncan, 2001; BL16).
Kaminker et al. (2006) considered the crustal heating of magnetars by using their thermal evolution code and explored the effect of the location of the heat source in the crusts and heating rate to the stellar surface temperature. It was concluded that a heating rate of \(\sim 3\times 10^{19}\) ergs cm\({}^{-3}\) s\({}^{-1}\) is required at depths of 300 m or less, to sustain the surface radiation luminosity of \(10^{35}\) ergs s\({}^{-1}\) required for magnetars. These authors, however, did not consider the physical mechanisms for the heating, although some comments were made.
Some other authors considered various heating mechanisms for magnetars (e.g., Thompson & Duncan, 2001, BL16). Recently BL16 showed that a variety of heating mechanisms possible in the stellar crusts, such as the Hall effect, Ohmic dissipation, etc., will fail to increase the surface temperature to as high as the observed magnetar data.
Recently, as one of the possible mechanisms for heating magnetars, BL16 proposed the ambipolar process, which takes place in the magnetar's central core. In this process magnetic energy is dissipated by the ambipolar drift, which is the motion of the electron-proton plasma through the approximately static neutron fluid in the stellar core. The drift is driven by the Lorentz force and is opposed by proton-neutron friction and pressure gradients. These authors, however, adopted a simple analytic approach with a Newtonian model for a constant-temperature, constant-density core using the isothermal approximation.
Under the isothermal approximation, the core from the center out to a certain density \(\rho_{\rm b}\) is isothermal, where the timescale of thermal conduction is assumed to be negligible (Tsuruta, 1964, 1979). BL16 adopted \(\rho_{\rm b}=10^{9}\) g cm\({}^{-3}\). Then the cooling and heating effects of each layer are instantaneously transported throughout the core. The thin outer envelope at \(\rho<\rho_{\rm b}\) has a spatially constant luminosity. With this isothermal approximation, the surface luminosity follows instantaneously the change in the core temperature. However, in the early stage of the thermal evolution of the neutron star, the neutrino emission in the core is much faster than the thermal conduction, so that the surface luminosity does not necessarily keep pace with the thermal evolution of the core (Nomoto & Tsuruta, 1981, 1987).
We will, therefore, investigate this internal heating due to the ambipolar process, by fully taking into account the finite timescale of heat conduction in the core. We use the magnetar evolutionary simulation code which fully includes general relativity (Thorne, 1977; Nomoto & Tsuruta, 1981, 1987) and realistic stellar physics with relevant equation of state (EOS).
In Section 2 we review the ambipolar diffusion process and then show how that will heat the stellar interior by the magnetic energy dissipated by this process. Section 3 introduces our physical model and summarizes our method and approach. Section 4 presents the results. The discussion and concluding remarks are given in sections 5 and 6.
## 2 Magnetar Heating by Ambipolar Diffusion Process
The main process capable of dissipating magnetic energy in magnetar's core is diffusion
Figure 1: Theoretical thermal evolution curves for various ordinary pulsars are compared with the observed surface photon luminosity (hence surface temperature) data of various kinds of neutron stars taken from Vigano et al. (2013). The observed data points are grouped into five categories. Red are Magnetars (Mag), orange are High-B pulsars (HB), green are Rotation-Powered Pulsars (RPP), blue are Central Compact Objects (CCO) and purple are X-Ray Isolated Neutron Stars (XINS). The bars and crosses are detections. Arrows pointing down represent upper limits on measured luminosity, arrows pointing to the left represent upper limits on age. See Vigano et al. (2013) for more complete details.
through ambipolar drift (Goldreich & Reisenegger, 1992; Thompson & Duncan, 1996). Ambipolar drift is the motion of the electron-proton plasma through the (approximately static) neutron fluid. The drift is driven by the Lorentz force \(J\times B/c=(\nabla\times B)\times B/4\pi\) and tends to relieve the magnetic stresses that drive it. The drift is opposed by (i) friction against the neutron fluid and (ii) pressure perturbations it induces. The friction is due to nuclear collisions between neutrons and protons. (Electron-neutron collisions are negligible.)
The rate of proton-neutron (p-n) collisions per proton, \(\tau_{\rm pn}^{-1}\), is (BL16)
\[\tau_{\rm pn}^{-1}\approx 5\times 10^{18}T_{9}^{2}(\rho/\rho_{\rm nuc})^{-1/3}Q _{\rm pn}~{}{\rm s}^{-1}, \tag{1}\]
where \(T_{9}\) is the temperature in the unit of \(10^{9}\) K, \(\rho\) is the density, and \(\rho_{\rm nuc}\approx 2.8\times 10^{14}\) g cm\({}^{-3}\) is the nuclear density. \(Q_{\rm pn}\) describes suppression of the rate of collisions among protons and neutrons. It refers to proton (p) superconductivity and neutron (n) superfluidity. If there exist no proton superconductivity and no neutron superfluidity \(Q_{\rm pn}=1\). If they are present, \(Q_{\rm pn}<1\) (see sections 5.2 and 5.3 for the details.).
Pressure perturbations are induced if \(\nabla\cdot(n_{e}{\bf v})\neq 0\), where \(n_{e}=n_{\rm p}\) are the electron (e) and proton (p) number densities and \({\bf v}\) is the proton drift velocity. This compressive drift generates a change in \(n_{e}\), which changes the electron and proton pressures; these are related to the electron and proton chemical potentials \(\mu_{e}\) and \(\mu_{\rm p}\). The resultant pressure gradient is given as \(-n_{e}\nabla(\Delta\mu)\), where
\[\Delta\mu=\mu_{e}+\mu_{\rm p}-\mu_{\rm n}. \tag{2}\]
\(\mu_{\rm n}\), \(\mu_{\rm p}\) and \(\mu_{e}\) are, respectively, the neutron, proton and electron chemical potentials. Equation (2) describes a local deviation from chemical \(\beta\)-equilibrium, \({\rm e+p}\longleftrightarrow{\rm n}\). The chemical potentials include the rest mass of the species.
The force balance governing the ambipolar drift is then (Goldreich & Reisenegger, 1992)
\[(\nabla\times B)\times B/4\pi=n_{e}\nabla(\Delta\mu)+n_{e}m_{\rm p}^{*}{\bf v }/\tau_{\rm pn}, \tag{3}\]
where \(B\) is the core magnetic field and \(m_{\rm p}^{*}\) is proton effective mass. The reaction rate is written as \(dn_{e}/dt=-|\lambda\Delta\mu|\), where \(\lambda\) is related to the compressibility of the plasma and given by
\[\begin{array}{c}\lambda\approx 5\times 10^{33}T_{9}^{6}(\rho/\rho_{\rm nuc}) ^{2/3}HQ_{\lambda}\\ {\rm ergs}^{-1}~{}{\rm cm}^{-3}~{}{\rm s}^{-1}.\end{array} \tag{4}\]
\(Q_{\lambda}\) describes the suppression of \(\lambda\) due to neutron superfluidity, and \(H\) refers to the enhancement due to the deviation from \(\beta\) equilibrium.
Two regimes are possible: a friction dominated regime where \(l\gg a\) and a pressure pillow regime with \(l\ll a\), where \(l\) is a characteristic scale of the field variation \(\Delta B\), and \(a\) is a characteristic length defined and given by Goldreich & Reisenegger (1992) as
\[a=(\tau_{\rm pn}n_{e}/\lambda m_{\rm p}^{\star})^{1/2}. \tag{5}\]
Further details are found in BL16.
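For orientation, the scales set by Equations (1), (4) and (5) can be evaluated numerically. The sketch below is our own rough, order-of-magnitude illustration; the chosen temperature, density ratio, \(n_{e}\), \(m_{\rm p}^{*}\) and the settings \(Q_{\rm pn}=Q_{\lambda}=H=1\) are representative assumptions for illustration only, not values taken from BL16 or from the models of this paper.

```python
# Rough order-of-magnitude sketch (ours): tau_pn (Eq. 1), lambda (Eq. 4) and the
# crossover length a (Eq. 5) for assumed, representative core conditions.
import math

T9      = 1.0            # temperature in units of 10^9 K (assumed)
rho_rat = 1.0            # rho / rho_nuc (assumed)
Q_pn = Q_lam = H = 1.0   # no superfluid suppression or enhancement (assumed)
n_e     = 8.0e36         # electron (= proton) number density [cm^-3] (assumed)
m_p_eff = 1.3e-24        # proton effective mass [g] (assumed, ~0.8 m_p)

tau_pn = 1.0/(5.0e18*T9**2*rho_rat**(-1.0/3.0)*Q_pn)        # [s], inverse of Eq. (1)
lam    = 5.0e33*T9**6*rho_rat**(2.0/3.0)*H*Q_lam            # [erg^-1 cm^-3 s^-1], Eq. (4)
a      = math.sqrt(tau_pn*n_e/(lam*m_p_eff))                # [cm], Eq. (5)

print(f"tau_pn ~ {tau_pn:.1e} s,  lambda ~ {lam:.1e} erg^-1 cm^-3 s^-1,  a ~ {a:.1e} cm")
```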
### Magnetar Heating by Ambipolar Diffusion
To calculate the ambipolar heating, we used (BL16)
\[dq_{\rm h}/dt\approx-B_{1}^{2}b(dl_{1}/dt)/12\pi^{2}\] (6a) where \[dl_{1}/dt\approx-\tau_{\rm pn}B_{1}^{2}/(2\pi\rho_{\rm p}l_{1}) \quad{\rm for}~{}l_{1}\geq l_{\star}, \tag{6b}\] \[dl_{1}/dt\approx-\lambda B_{1}^{2}l_{1}/(4\pi n_{e}^{2}) \quad{\rm for}~{}l_{1}\leq l_{\star}. \tag{6c}\]
Here \(dq_{\rm h}/dt\) is the ambipolar heating rate and \(l_{1}\) is the characteristic size of the ambipolar heating region; \(l_{\star}\) is the crossover value of \(l_{1}\), reached when \(l_{1}=a\sqrt{2}\). \(B_{1}=2B_{0}/\pi\), where \(B_{0}\) is the peak magnetic field in the core. Initially \(B=B_{0}\sin bx\), and \(l_{1}\) and \(b\) are related by \(l_{1}=2/(\pi b)\).
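To see how these pieces combine in the friction-dominated regime (\(l_{1}\geq l_{\star}\)), the sketch below is again our own rough illustration of (6a) and (6b), with assumed values of \(B_{0}\), \(l_{1}\) and the core quantities from the previous sketch; it is only meant to indicate orders of magnitude and does not reproduce the simulations of this paper.

```python
# Rough sketch (ours): ambipolar heating rate from Eqs. (6a) and (6b) in the
# friction-dominated regime l_1 >= l_star, for assumed illustrative values.
import math

B0      = 1.0e16         # peak core field [G] (assumed)
l1      = 1.0e5          # characteristic field-variation scale [cm] (assumed)
tau_pn  = 2.0e-19        # p-n collision time [s] (from the previous sketch)
n_e     = 8.0e36         # proton (= electron) number density [cm^-3] (assumed)
m_p_eff = 1.3e-24        # proton effective mass [g] (assumed)

B1    = 2.0*B0/math.pi                    # B_1 = 2 B_0 / pi
b     = 2.0/(math.pi*l1)                  # wavenumber, from l_1 = 2/(pi b)
rho_p = n_e*m_p_eff                       # proton mass density [g cm^-3]

dl1_dt = -tau_pn*B1**2/(2.0*math.pi*rho_p*l1)       # Eq. (6b)
dq_dt  = -B1**2*b*dl1_dt/(12.0*math.pi**2)          # Eq. (6a)

print(f"dq_h/dt ~ {dq_dt:.1e} erg cm^-3 s^-1")
```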
Sec. 3.4 and the Appendix of BL16 give the details of their 1D (one-dimensional) model with the initial \(B=B_{0}\sin bx\). Our simulation code is also 1D in the r direction. This (1D in the r direction) approach is essentially the one adopted in 'exact' thermal evolution codes, since the variables and parameters change mostly in the r direction, not in the angular directions. If the magnetosphere is radial, the 1D flux in the r direction is simply multiplied by the whole surface area to obtain the total photon luminosity. In the current paper the dipole magnetosphere is adopted. Then, by integrating the 1D flux in each direction, including the angular dependence of the dipolar field geometry, it was found that the net effect of the geometry is that the luminosity of the radial case computed with the polar-direction flux is reduced by about 1/3. That is what our code does in this paper to introduce the effect of the magnetosphere geometry. For the 1D flux we adopted the BL16 model explained in detail in that paper.
## 3 Magnetar Thermal Evolution Model
In our magnetar model, heating by the ambipolar diffusion under ultra-strong magnetic fields takes place in the central stellar core, while such ultra-strong magnetic fields also seriously alter the structure and property of the surrounding crustal layers. By taking into account these new features, we constructed the thermal evolutionary simulation code for magnetars, revising the latest version of our exact evolutionary code for ordinary neutron stars.
Our neutron star evolutionary code has been developed over the years - for the details, see Nomoto & Tsuruta (1981, 1987); Tsuruta (1986, 1998, 2009); Tsuruta et al. (2009). The set of general relativistic basic stellar structure evolution equations developed by Thorne (1977) is solved simultaneously from the stellar center to the surface without making isothermal approximations. The evolutionary simulation code was originally developed by Nomoto & Tsuruta (1981, 1987) and has been continuously updated by adopting the most up-to-date microphysical input. See, e.g., Tsuruta (1998), Tsuruta et al. (2009), and Tsuruta (2018) for the details.
In the magnetar thermal evolutionary code constructed from the latest ordinary neutron star evolutionary code, the neutrino emissivity consists of all possible standard mechanisms, including the modified Urca, plasmon, pair neutrino, photoneutrino, and bremsstrahlung processes. The neutron superfluidity model with the critical temperature of log(\(T_{\rm crit}\)(K)) = 9.45 (e.g., Tsuruta et al., 2009) is adopted. For the Cooper pair breaking and formation contribution, we found that many earlier publications, such as Yakovlev et al. (1999), give only incomplete treatments; we therefore adopted the more recent, updated version by Kolomeitsev & Voskresensky (2008). In addition, the ambipolar heating of Equation (6a) is adopted: Equation (6b) was used for the hotter, earlier period, while Equation (6c) was adopted during the later, cooler period.
This magnetar evolutionary simulation code was used for the evolution of the core from the center to the outer boundary chosen at \(M_{r}=M_{\rm core}\) where the matter density \(\rho\) is as low as \(\sim 10^{9}\) g cm\({}^{-3}\). (Here \(M_{r}\) is the baryon mass interior to the radius \(r\).) With this code, we obtain the evolutionary change in the core temperature \(T_{\rm core}\) at the outer boundary of \(M_{r}=M_{\rm core}\).
For the equation of state (EOS), the maximum mass of the neutron star has been found to be at least \(2M_{\odot}\)(Demorest et al., 2010). Takatsuka, Nishizaki, & Tamagaki (2008) constructed an advanced EOS model, which we refer to as HP8u, with the maximum mass going beyond \(2M_{\odot}\). This model is based on the universal many body interactions among both nucleons and hyperons (see Takatsuka, Nishizaki, & Tamagaki, 2008; Tsuruta, 2018 for the details). We used the HP8u EOS in our current work.
For this EOS, the core of lower mass, hotter stars consists mainly of neutrons and protons, while the major composition transforms to hyperons in heavier, cooler stars. The mass of the star at this transformation point is \(1.45M_{\odot}\). Since magnetars should be hot, in this paper we adopt hotter, less massive stars. Using the medium EOS HP8u (see, e.g., Tsuruta, 2018 for the details of this EOS model), we constructed a neutron star model with \(M=1.4M_{\odot}\), where \(M\) is the total baryon mass of the neutron star (see Section 5.4 for the details). For this star, the central density is 1.0 \(\times 10^{15}\) g cm\({}^{-3}\), and the radius is \(R=11.7\) km. We adopt the core mass, \(M_{\rm core}\), for this model at \(1-M_{\rm core}/M=4.7\times 10^{-8}\), where \(\rho_{b}=2.0\times 10^{9}\) g cm\({}^{-3}\).
For the envelope at \(M_{\rm core}\leq M_{r}\leq M\), a spatially constant luminosity is a good approximation because of the low densities. For the magnetar envelope, the ultra-strong magnetic fields significantly increase the thermal conductivity in the crustal layers, which significantly reduces the difference between the core temperature and the surface temperature (Potekhin et al., 2015). BL16 calculated such magnetar envelope models and showed the results in their Figure 1. We approximated their relation between the core temperature, \(T_{\rm core}\), and the surface photon luminosity, \(L_{\gamma}^{\infty}\), and obtained the following equations for four combinations of \(B\) (\(3\times 10^{13}\) and \(10^{15}\) G) and chemical composition (heavy-element dominated, such as Fe, and light-element contaminated).
For the \(B=10^{15}\) G and Fe case:
\[\begin{split}\log_{10}[L_{\gamma}^{\infty}(\rm{ergs\ s^{-1}})]=\\ 1.96\times\log_{10}[T_{\rm core}(K)]+17.446\end{split} \tag{7a}\]
For the \(B=3\times 10^{13}\) G and Fe case:
\[\begin{split}\log_{10}[L_{\gamma}^{\infty}(\rm{ergs\ s^{-1}})]=\\ 2.03\times\log_{10}[T_{\rm core}(K)]+16.583\end{split} \tag{7b}\]
For the \(B=10^{15}\) G and light element case:
\[\begin{split}\log_{10}[L_{\gamma}^{\infty}(\rm{ergs\ s^{-1}})]=\\ 1.50\times\log_{10}[T_{\rm core}(K)]+21.869\end{split} \tag{7c}\]
For the \(B=3\times 10^{13}\) G and light element case:
\[\begin{split}\log_{10}[L_{\gamma}^{\infty}(\rm{ergs\ s^{-1}})]= \\ 1.50\times\log_{10}[T_{\rm core}(K)]+21.673\end{split} \tag{7d}\]
These equations are good approximations to Figure 1 of BL16.
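The four fits of Equations (7a)-(7d) are straightforward to encode; the sketch below is a direct transcription of the coefficients above.

```python
import math

# (slope, intercept) of log10 L_gamma^inf = slope * log10 T_core + intercept,
# transcribed from Eqs. (7a)-(7d).
ENVELOPE_FITS = {
    ("1e15", "Fe"):    (1.96, 17.446),   # Eq. (7a)
    ("3e13", "Fe"):    (2.03, 16.583),   # Eq. (7b)
    ("1e15", "light"): (1.50, 21.869),   # Eq. (7c)
    ("3e13", "light"): (1.50, 21.673),   # Eq. (7d)
}

def surface_luminosity(T_core, field="1e15", composition="Fe"):
    """Surface photon luminosity L_gamma^inf [erg s^-1] for a core temperature T_core [K]."""
    slope, intercept = ENVELOPE_FITS[(field, composition)]
    return 10.0**(slope * math.log10(T_core) + intercept)

# Example: a 3e8 K core under a 1e15 G, light-element envelope.
print(f"{surface_luminosity(3e8, '1e15', 'light'):.2e} erg/s")
```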
## 4 Results
Using the magnetar thermal evolutionary code described in the previous section, the thermal evolution of six representative cases is calculated. The properties of these cases are shown in Table 1. \(B\) is the core magnetic field, and \(b\) is a parameter defined in Section 2. As a representative case we chose \(B\) to be \(\sim 10^{16}\) G, because earlier work (e.g., Heyl & Hernquist, 1998) shows that for magnetars the core magnetic field will be about 10 times the surface magnetic field, which is estimated to be about \(10^{15}\) G. The first three are the cases adopted by BL16, while the last three are additional choices with different relevant combinations of \(B\) and \(b\) within the acceptable range.
The results for the six representative cases (see Table 1) are shown in Figure 2, where the core temperature is plotted against age. During the earliest stages the core temperature is too high (\(T_{\rm core}\gtrsim 10^{9}\) K) for the heating to compete effectively with the neutrino cooling, and the star cools essentially through the escaping neutrinos. However, the heating becomes sufficient to balance the cooling as the star cools down to around that temperature. Thereafter the curve follows a plateau region while cooling is balanced by heating. After the evolution reaches the characteristic time for magnetic energy dissipation by this process (BL16), the curve goes down again, approaching the cooling curve.
The ultra-strong fields of magnetars also affect strongly the crustal layers from the radial position at \(M_{\rm core}\) to the stellar surface, which alters significantly the relation between the core temperature and surface temperature. It depends on various microphysics of the crustal layers.
The most important is the composition. For instance, under the ultra-strong magnetar fields thermal conductivity is much higher for light elements such as Hydrogen, as compared with heavy elements (Potekhin et al., 2007). The dominant heavy element is Fe. But since magnetars are still relatively young, it is quite reasonable that the crusts are still contaminated by light elements in some cases. Equations 7a to 7d show the surface radiation as a function of core temperature for both cases, with two representative surface magnetic field strengths of \(10^{15}\) G and \(3\times 10^{13}\) G.
For the six representative cases in Table 1, thermal evolution (cooling and heating) curves are shown as the surface photon luminosity vs. age relation for the Fe envelope case in Figure 3, while the case for the light-element-contaminated envelope is shown in Figure 4. The surface magnetic field used is \(10^{15}\) G. For comparison, the observed surface luminosity data of various magnetars with different ages, taken from Vigano et al. (2013), are also shown.

\begin{table}
\begin{tabular}{l c c} Model Name & \(B\) (\(\times 10^{16}\) G) & \(b\) \\ \hline \hline BL16 (1) & 1 & \(\pi\times 10^{-5}\) \\ BL16 (2) & 1.5 & \(\pi\times 10^{-5}\) \\ BL16 (3) & 1.5 & \(10^{-6}\) \\ \hline Case 4 & 0.7 & \(\pi\times 10^{-5}\) \\ Case 5 & 0.8 & \(10^{-6}\) \\ Case 6 & 0.8 & \(\pi\times 10^{-5}\) \\ \hline \end{tabular}
\end{table}
Table 1: Properties of the six representative cases used in Figure 2. The core magnetic field strengths \(B\) and the parameter \(b\), as defined in Section 2, are shown for each case.

Figure 2: The core temperature vs. time from our simulations, for the six representative cases in Table 1. The first three cases are the same as in BL16, to facilitate comparison with the BL16 curves; the last three are our own choices. See Table 1 for the details of the cases.
First consider the evolution of the central temperature. Our core temperature vs. age relation is shown in Figure 2 for the six representative cases in Table 1. A similar core temperature evolution for the isothermal model was obtained by BL16 for the top three cases of our Table 1, and is shown in their Figure 3. Comparing Figure 3 of BL16 and our Figure 2, qualitatively it may appear that the effect of the ambipolar heating on the core temperature evolution is similar between their isothermal model and our model. However, some important differences are noted.
For instance, in our case the curves are generally smoother, especially during the decaying stages. In our model the transition from the earlier to the later period is smooth, not abrupt. More importantly, the plateau and decaying phases in our case are more gradual and longer (compare their Figure 3 and our Figure 2). That makes the temperature _higher_ in our model during the decaying phase. _That is important because that is the critical location of most observed temperature data_.
Now we consider the thermal evolution behavior expressed as the evolution of the surface photon luminosity (and hence the surface temperature) vs. age. We calculated the surface temperature evolution for \(B=10^{15}\) G, and the results are shown in Figures 3 and 4. Figure 4 shows that in the light element case _the surface temperature becomes significantly higher_ (compare Figure 3 and Figure 4). Note that the Fe envelope case (Figure 3), where the temperature is lower, still agrees with the cooler observed data, but not with some of the hotter ones. On the other hand, in Figure 4 with the light element case, the higher surface temperature allows the theoretical curves to cover more of the higher-temperature observed data. In this figure, to avoid overcrowding, only the \(1\sigma\) errors are shown. When \(3\sigma\) errors are included, the data points of only a few of the hottest sources are still off the curves.
The major reasons are summarized here. First, because magnetars are relatively young, the star still cools mainly through neutrinos escaping from the core, and hence the evolution is determined by the core temperature. Second, for the same core temperature, the surface temperature is higher in the light element case than in the case with only heavy elements, owing to the enhanced crustal thermal conductivity of light elements under the ultra-strong fields of magnetars.
In conclusion, combining Figure 3 for the Fe crust case and Figure 4 for the light-element-contaminated case, the observed magnetar temperature data are mostly consistent with the theoretical predictions from the ambipolar heating.
## 5 Discussion
### Comparison with Earlier Work
BL16 did not convert their core temperature evolution results (shown in their Figure 3) to the surface temperature evolution. Instead, these authors converted the observed surface temperature data to the corresponding core temperature data in their Figure 3, shown as a green box. By comparing that with their core temperature evolution curves, it is noted that the observed temperature is too high for their ambipolar heating scenario, because the green box lies above their ambipolar heating curves. The reason is that when these authors converted the observed surface temperature data to the core temperature data, they did not include the light-element-contaminated crust case. If they had done so, the lower end of their green box would have gone down considerably, getting closer to their heating curves. Their abstract does note that if the light element case is included, the observed data are consistent with the ambipolar heating. By examining BL16's Figure 1, where the surface radiation vs. core temperature relations are shown, it is clear that for the same surface luminosity (and hence surface temperature), the corresponding core temperature should be significantly _lower_ for the light element case (the red curves) than for the Fe cases (the blue curves) in their Figure 1. That means the bottom of the green box in their Figure 3 should be much lower when the light element case is included.
Another significant difference is that BL16 adopted the modified Urca (their Murca) alone, while our neutrino emission includes the effects of neutron superfluidity. The Cooper pair breaking and formation contribution, which we included, increases the neutrino emission immediately below the critical temperature \(T_{\rm crit}=10^{9.45}\) K. However, this Cooper pair emission decreases rapidly to below the superfluid-suppressed case, which is _below_ the Murca-alone case. By the time the core temperature reaches the typical temperatures of magnetars, around \(3\times 10^{8}\) K, the neutrino emission is significantly _below_ the Murca-alone case. See, e.g., Figure 2 of BL16, where a similar neutron superfluid model (red curves) is shown. Due to our overall reduced neutrino emission at around the age of magnetars, our heating lasts longer.
Another reason for the difference between BL16 and our results is that, when we adopt our more realistic numerical evolutionary simulation code, the decaying phase of the heating curve is more gradual, not abrupt as in the BL16 case (compare their Figure 3 and our Figure 2). That also keeps the surface temperature of our model higher for longer periods during the decaying phase, where most of the magnetar observational data are located.

Figure 3: Thermal evolution of magnetars with the ambipolar heating is compared with magnetar observation data. The six cases in Table 1, for models with the major crustal composition of Fe, are shown as the surface photon luminosity vs. age. The bars and crosses are detections; the horizontal arrows are upper limits to the age. The theoretical curves are consistent with the cooler magnetar data.
### Effects of Neutron Superfluidity
\(Q_{\rm pn}\) describes the suppression of the rate of collisions between protons and neutrons. We set \(Q_{\rm pn}=1\) for the effect of neutron superfluidity on the ambipolar heating, since numerical information is not available. However, we investigated its qualitative net effect on our results and reached the following conclusion. The heating rate \(dq_{\rm h}/dt\) in Eq. (6a) is proportional to \(dl_{1}/dt\), which goes as \(\tau_{\rm pn}\) (Eq. (6b)), and \(\tau_{\rm pn}\) goes as \(1/Q_{\rm pn}\) (Eq. (1)). With superfluidity, \(Q_{\rm pn}<1\), so \(\tau_{\rm pn}\) is larger, and hence the ambipolar heating \(dq_{\rm h}/dt\) is higher. That means the ambipolar heating will increase with neutron superfluidity. (Note that BL16 also reached the same conclusion.) Therefore, if neutron superfluidity is included in our heating calculations, the heating will increase and last longer, and our final conclusion on the validity of the ambipolar heating will become even stronger.
### Effects of Proton Superconductivity
The effect of superconductivity should be considered if the field \(B<H_{c2}\) (Sinha & Sedrakian 2015, hereafter Sinha15). For our neutron star model the central density is \(\rho_{c}=10^{15}\) g cm\({}^{-3}\). Table 1 in Sinha15 shows that for that density \(H_{c2}<0.03\times 10^{16}\) G (\(\sim 3\times 10^{14}\) G). Since our core field \(B\sim 10^{16}\,{\rm G}\gg H_{c2}\) in the central core, the effect of superconductivity should be negligible. Note that \(H_{c2}\) depends on density. BL16 also note that superconductivity is negligible for magnetars. (Near the core boundary the density drops to near the nuclear density, where \(H_{c2}>10^{16}\) G (see Sinha15, Table 1); however, the neutron star core has nearly constant density, so the volume where the density drops toward the nuclear density near the core boundary is negligible compared with the whole core volume. Therefore the contribution from near the edge of the core is negligible.)

Figure 4: The same as Figure 3, except that the crustal layers of the star are now contaminated by light elements. The theoretical curves are now consistent with many of the hotter magnetar data. In this figure, to avoid overcrowding, only the \(1\sigma\) errors are shown. When \(3\sigma\) errors are included, the data points of only a few of the hottest sources are still off the curves.
### Effects of Stellar Model
When the stellar mass exceeds \(1.45M_{\odot}\), our EOS model includes hyperons. The purpose of our current paper is to test whether a magnetar model CAN get hot enough, with the ambipolar heating, to be consistent with the observed magnetar data. We showed that, for the low-mass \(1.4M_{\odot}\) star, the answer is yes. With the EOS we adopted (which is perfectly relevant), this star happens to still be a neutron star. Hyperon-mixed stars are heavier and cooler, and with our EOS model it is harder to heat such more massive, cooler hyperon-mixed stars up to the level of many of the observed hotter magnetar data. We showed that at least for less massive stars (which in our model happen to contain no hyperons), this high heating is possible. In some other relevant EOS models, where the transformation to hyperons takes place at lower density and mass (e.g., Raduta, Sedrakian, & Weber, 2018), low-mass stars may already contain hyperons; for these stars the same conclusion applies. Whether hyperons are present or not is not the issue in our current particular problem. The point is that, at least for lower-mass, hotter stars, the ambipolar heating is consistent with most of the observed magnetar data.
## 6 Conclusions
We investigated the possibility of ambipolar heating in the stellar core as the source of magnetars' high temperatures. Our results are as follows:
(i) If both the heavy element crust case and the light-element-contaminated crust case are considered, the ambipolar heating can be consistent with most of the observed magnetar temperatures.
(ii) If neutrons are in the superfluid state, the ambipolar heating increases, and the heating phase can last longer, consistent with the observed magnetar temperatures (see Section 5.2).
(iii) The adoption of the relativistic thermal evolution simulation code with a realistic magnetar model makes the evolution smoother and more gradual. That helps maintain higher temperatures during the decaying phase.
By adopting an isothermal, non-relativistic model with an analytic approach, BL16 predicted that, if the stellar crust is contaminated by light elements and the core magnetic field can be as strong as \(10^{16}\) G, the ambipolar heating may heat the magnetars to the observed temperatures. At the same time, these authors pointed out that such a high temperature phase may not last long enough.
Our results mostly confirm their estimates. However, we point out that the high-temperature phase may last long enough, especially if the neutrons in the interior are in a superfluid state, which is quite possible. It is also quite possible that the magnetic field in the interior is as high as \(10^{16}\) G (e.g., see Potekhin et al., 2015; Heyl & Hernquist, 1998), and it is quite reasonable that magnetars still contain light elements due to their relatively young age.
Our conclusion is that the ambipolar heating can be consistent with the observed high temperature of magnetars.
## Acknowledgements
We thank the referee for constructive suggestions and comments. This work has been supported by the Kavli IPMU and the World Premier International Research Center Initiative (WPI), MEXT, Japan, and the Japan Society for the Promotion of Science (JSPS) KAKENHI grants JP17K05382, JP20K04024, JP21H04499, and JP21K20369.
|
2308.05051
|
PAT: Position-Aware Transformer for Dense Multi-Label Action Detection
|
We present PAT, a transformer-based network that learns complex temporal
co-occurrence action dependencies in a video by exploiting multi-scale temporal
features. In existing methods, the self-attention mechanism in transformers
loses the temporal positional information, which is essential for robust action
detection. To address this issue, we (i) embed relative positional encoding in
the self-attention mechanism and (ii) exploit multi-scale temporal
relationships by designing a novel non hierarchical network, in contrast to the
recent transformer-based approaches that use a hierarchical structure. We argue
that joining the self-attention mechanism with multiple sub-sampling processes
in the hierarchical approaches results in increased loss of positional
information. We evaluate the performance of our proposed approach on two
challenging dense multi-label benchmark datasets, and show that PAT improves
the current state-of-the-art result by 1.1% and 0.6% mAP on the Charades and
MultiTHUMOS datasets, respectively, thereby achieving the new state-of-the-art
mAP at 26.5% and 44.6%, respectively. We also perform extensive ablation
studies to examine the impact of the different components of our proposed
network.
|
Faegheh Sardari, Armin Mustafa, Philip J. B. Jackson, Adrian Hilton
|
2023-08-09T16:29:31Z
|
http://arxiv.org/abs/2308.05051v1
|
# PAT: Position-Aware Transformer for Dense Multi-Label Action Detection
###### Abstract
We present PAT, a transformer-based network that learns complex temporal co-occurrence action dependencies in a video by exploiting multi-scale temporal features. In existing methods, the self-attention mechanism in transformers loses the temporal positional information, which is essential for robust action detection. To address this issue, we (i) embed relative positional encoding in the self-attention mechanism and (ii) exploit multi-scale temporal relationships by designing a novel non-hierarchical network, in contrast to the recent transformer-based approaches that use a hierarchical structure. We argue that joining the self-attention mechanism with multiple sub-sampling processes in the hierarchical approaches results in increased loss of positional information. We evaluate the performance of our proposed approach on two challenging dense multi-label benchmark datasets, and show that PAT improves the current state-of-the-art result by \(1.1\%\) and \(0.6\%\) mAP on the Charades and MultiTHUMOS datasets, respectively, thereby achieving the new state-of-the-art mAP at \(26.5\%\) and \(44.6\%\), respectively. We also perform extensive ablation studies to examine the impact of the different components of our proposed network.
## 1 Introduction
Action or event detection aims to determine the boundaries of different actions/events occurring in an untrimmed video, and plays a crucial role in various important computer vision applications, such as video summarization, highlighting, and captioning. Despite recent advances in different areas of video understanding, dense multi-label action detection is still an unsolved problem and is considered one of the most challenging video analysis tasks, since the videos are untrimmed and include several actions with different time durations that can overlap (see Fig. 1). To carry out this task, we need to learn complex short- and long-term temporal relationships amongst different actions in a video, which is a challenging problem [7, 15].
Figure 1: A sample video and its corresponding action annotations from the Charades dataset [31] where the video includes several action types with different time spans, from short to long, and in each time step, multiple actions can occur at the same time.

Most previous dense multi-label action detection approaches capture temporal dependencies through temporal convolutional networks [26, 12, 15]. However, with the success of transformer networks over convolutional networks for modeling complex and sequential relationships [35, 9, 10, 24, 36], a few recent methods, such as [33, 5, 7], leverage the self-attention mechanism and propose transformer-based approaches that achieve state-of-the-art performance. The authors in [33, 5] design their networks by explicitly modeling temporal cross-class relations. In [33], there are two transformer modules: one investigates action relationships for each temporal moment, and the other learns temporal dependencies for each action type. However, these approaches are not computationally efficient and their complexity grows with the number of action classes. To overcome this, Dai et al. [7] design a hierarchical network that learns temporal-action dependencies from multi-scale temporal features. Their network contains several transformer layers such that the output of each layer is down-sampled and fed as input to its subsequent layer. As stated in [30, 18, 11], the self-attention mechanism in the transformer is order-invariant and loses positional information, and when the self-attention is embedded in a hierarchical structure, the issue becomes worse, as using multiple down-sampling processes results in increased loss of positional information, especially in the top layers. In this paper, we tackle these issues by introducing PAT, a position-aware transformer network for dense action detection. PAT consists of three main modules: fine detection, coarse detection, and classification. The fine detection module learns fine-grained action dependencies from the full temporal resolution of the video sequence for the coarse detection and classification modules. The coarse detection module captures various ranges of coarse action dependencies from the fine-grained features using a non-hierarchical structure, which preserves the positional information. To further leverage the positional information, PAT incorporates a learnable relative positional encoding [29] in the transformer layers of both fine and coarse detection modules. Finally, the classification module estimates the probabilities of different action classes for every timestamp in the input video using both fine and coarse-grained action dependencies. Our key contributions can be summarized as follows:
* For the first time, we introduce the idea of leveraging positional information in transformers for action detection
* We design a novel non-hierarchical transformer-based network that preserves positional information when learning multi-scale temporal action dependencies
* We evaluate the proposed method's performance on two challenging benchmark dense action detection datasets where we outperform the current state-of-the-art result by \(1.1\%\) and \(0.6\%\) per-frame mean average precision (mAP) on Charades and MultiTHUMOS respectively, thereby achieving the new state-of-the-art mAP at \(26.5\%\) and \(44.6\%\), respectively
* We perform extensive ablation studies to evaluate our network design
## 2 Related Works
Although action detection [4, 22, 20, 22, 25, 40, 34, 38, 3] has been studied significantly in computer vision, few works [27, 8, 26, 6, 15] have explored it in a dense multi-labelled setup where instances of different actions or events can overlap in different parts of a video. In this section, we review the action detection approaches by focusing on a dense-labelled setting.
To detect the boundaries of different actions, the authors in [4, 21, 23, 19] propose anchor-based methods where they first generate several proposals for each frame of the video by using multi-scale anchor boxes, and then refine them to obtain the final action boundaries. However, these approaches are not usually applied in a dense multi-label scenario, as they need a large number of anchors to model the dense action distributions effectively [7]. To overcome this, some works, such as [27, 8, 26, 6, 15], design anchor-free approaches for dense action detection. Piergiovanni and Ryoo [27] propose a network that represents an untrimmed video as multi-activity events. They design multiple temporal Gaussian filters which are applied separately on the video frame features, while a soft-attention mechanism is employed to combine the output of the filters to generate a global representation. Later in [26], they improve their work by proposing a temporal convolutional network using Gaussian filters as kernels to perform the temporal representation in a more efficient and effective way. Although they design networks to address complex multi-label action detection, the proposed models are not able to encode long-term dependencies and mostly focus on local relationships, while our proposed network is able to capture different ranges of temporal features from short to long. Khatapitiya and Ryoo [15] propose a two-stream network to capture long-term information, such that one of the streams learns the most informative frames of a long video through a dynamic sub-sampling with a ratio of 4, and the other one learns the fine-grained contexts of the video from the full resolution. Although their results are promising, their approach cannot be adapted easily to use more temporal resolutions, as it requires a dedicated Convolutional Neural Network (CNN), X3D [12], for each resolution, whereas in our proposed method a different resolution can be processed easily by adding an extra branch containing a few transformer blocks in the coarse detection module.
**Transformer-based Approaches -** With the success of transformer networks in modeling complex relationships and capturing short- and long-term dependencies [35, 9, 10, 24, 36], some works, such as [6, 33, 7], develop transformer-based approaches for the dense action detection task. Tirupattur et al. [33] design a model with two transformer branches: one branch applies self-attention across all action classes for each time step to learn the relationships amongst actions, and the other branch uses self-attention across time frames to model the temporal dependencies; the outputs of the two branches are combined for action classification. Although this method outperforms state-of-the-art results, its computational complexity increases with the number of action classes. Similar to [15], which benefits from different temporal resolutions, Dai et al. [7] extract multi-scale features. They design a transformer-based hierarchical structure and provide multi-resolution temporal features through several sub-sampling processes. However, as the self-attention mechanism does not preserve the temporal position information [30, 18, 11], joining it with multiple sub-sampling processes makes the network lose more positional information, while preserving this information is essential for action detection. In contrast, our position-aware transformer network PAT has been designed to retain such temporal cues.
## 3 Position-Aware Transformer (PAT)
**Problem Definition -** Our aim is to detect different actions/events in a densely-labelled untrimmed video. We define the action detection problem under this setting following [15, 33, 7]. For an untrimmed video sequence of length \(T\), each timestamp \(t\) has a ground truth action label \(G_{t}=\{g_{t,c}\in\{0,1\}\}_{c=1}^{C}\), where \(C\) is the maximum number of action classes in the dataset, and the network needs to estimate action class probabilities \(Y_{t}=\{y_{t,c}\in[0,1]\}_{c=1}^{C}\) for each timestamp.
### Proposed Network
Our proposed method PAT is a transformer-based network designed to exploit different granularities of complex temporal dependencies for action detection. The PAT network includes a video encoder E that encodes an input video sequence into a sequence of input tokens, and three main components: fine detection module FDM, coarse detection module CDM, and classification module CLASM arranged as shown in Fig. 2. FDM processes an input sequence in its original temporal resolution to obtain a fine-grained action representation for both CDM and CLASM modules. The CDM module learns different ranges of temporal action dependencies amongst the fine-grained features through extracting and combining multi-scale temporal features. CLASM estimates class probabilities from the output of both FDM and CDM modules.
**Video Encoder (E) -** To process an input video, PAT needs to convert it into a sequence of tokens. To perform this, similar to the previous action detection approaches [33, 7, 40], we first divide the L-frame input video \(V\in\mathrm{I\!R}^{\mathrm{L\times Ch\times W\times H}}\) into T non-overlapping segments \(S=\{S_{t}\}_{t=1}^{T}\), where \(S_{t}\in\mathrm{I\!R}^{\mathrm{Z\times Ch\times W\times H}}\), \(Z=L/T\), and \(Ch\), \(W\), and \(H\) denote the number of channels, width, and height of each video frame respectively. Then, the video encoder E, a pre-trained convolutional network, is employed on each segment to generate its corresponding token \(I_{t}=E(S_{t})\), where \(I_{t}\in\mathrm{I\!R}^{\mathrm{D}}\).
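A minimal PyTorch-style sketch of this tokenization step is given below. The segmentation into \(T\) non-overlapping chunks of \(Z=L/T\) frames and the frozen encoder follow the description above, but the `backbone` used here is a stand-in callable (an assumption), not the actual pre-trained I3D weights used in the paper.

```python
import torch

def video_to_tokens(video, backbone, num_tokens):
    """Split an (L, Ch, W, H) video into T non-overlapping segments and
    encode each segment into a feature token with a frozen backbone."""
    L = video.shape[0]
    Z = L // num_tokens                                          # frames per segment, Z = L / T
    segments = video[: num_tokens * Z].reshape(num_tokens, Z, *video.shape[1:])
    with torch.no_grad():                                        # encoder parameters are frozen
        tokens = torch.stack([backbone(seg) for seg in segments])
    return tokens                                                # (T, D) with a real D-dim encoder

# Toy usage: a small random clip and a stand-in encoder (mean-pooling instead of I3D).
video = torch.randn(512, 3, 32, 32)
toy_encoder = lambda seg: seg.mean(dim=(0, 2, 3))
tokens = video_to_tokens(video, toy_encoder, num_tokens=64)      # (64, 3) here; (T, 1024) with I3D
```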
**Relative Positional Transformer (RPT) Block -** To design FDM, and CDM, we employ our proposed transformer block RPT (see Fig. 3). The RPT block comprises a transformer layer with relative positional embedding followed by a local relational LR component containing two linear layers and one 1D temporal convolutional layer as in [7] to enhance the output of the transformer layer.
As already pointed out in Section 1, the transformer self-attention mechanism loses the order of the temporal information, although preserving this information is essential for action detection, where we need to localise events precisely in a video sequence. To solve this issue, Vaswani et al. [35] propose to add the absolute positional embedding to the input tokens. However, in our experiments, we observed that using the absolute positional embedding decreases the method's performance significantly (see Section 4.1). This has also been observed in [7, 40]. The decrease in performance may be attributed to breaking the translation-invariant property of the method. In action detection, we expect the proposed method to be translation-invariant, i.e., the network learns the same representation for the same video frames in two temporally shifted videos, regardless of how much they are shifted, while the absolute encoding can break this property as it adds different positional encodings to the same frames in the shifted video inputs. To overcome this, we propose to use relative positional encoding [29] in the transformer layers of our RPT block. The relative positional encoding employs a relative pairwise distance between every two tokens and is translation-invariant. In addition, as the embedding is performed in each transformer layer and is passed into the subsequent layer, the positional information can flow to the classification module where the final estimations are provided.
We briefly formulate the transformer layer in the RPT block. In the H-head self-attention layer of RPT, for each head \(h\in\{1,2,...,H\}\), the input sequence \(X\in\mathrm{I\!R}^{\mathrm{N\times D^{\circ}}}\) is first transformed into query \(Q_{h}\), key \(K_{h}\), and value \(V_{h}\) through linear operations
\[Q_{h}=XW_{h}^{q},\,\,\,K_{h}=XW_{h}^{k},\,\,\,V_{h}=XW_{h}^{v}, \tag{1}\]
where \(Q_{h}\), \(K_{h}\), \(V_{h}\in\mathrm{I\!R}^{\mathrm{N\times D_{h}}}\), \(W_{h}^{q},\,\,W_{h}^{k},\,\,W_{h}^{v}\in\mathrm{I\!R}^{\mathrm{D^{\circ}\times D_{h}}}\) denote the weights of the linear operations, and \(D_{h}=\frac{D^{\circ}}{H}\). Then, the self-attention with relative positional embedding is computed for each head as
\[A_{h}=softmax(\frac{Q_{h}K_{h}^{T}+P_{h}^{\triangleright}}{\sqrt{D_{h}}})V_{h}, \tag{2}\]
\[P_{h}^{\triangleright}(n,m)=\sum_{d=1}^{D_{h}}Q_{h}(n,d)\Omega_{d}(n-m), \tag{3}\]
where \(P_{h}^{\triangleright}\in\mathrm{I\!R}^{\mathrm{N\times N}}\), \(n,m\in\{1,2,...,N\}\), and \(\Omega_{d}\) operates as \(D_{h}\) different embeddings for time intervals based on the queries [29]. To compute \(P_{h}^{\triangleright}\), we use the memory-efficient method proposed by Huang et al. [13].
Finally, the self-attention outputs of all heads are concatenated and fed into a linear layer to produce the output sequence \(O\)
\[A=concat(A_{1},A_{2},...,A_{H}), \tag{4}\]
\[O=AW^{o}+X, \tag{5}\]
where \(A\in\mathrm{I\!R}^{\mathrm{N\times D^{\diamond}}}\) and \(W^{o}\in\mathrm{I\!R}^{\mathrm{D^{\diamond}\times D^{\diamond}}}\).
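A compact PyTorch sketch of Equations (1)-(5) is given below. It implements the relative bias \(P_{h}^{\triangleright}\) directly from a learnable embedding table indexed by the offset \(n-m\), rather than via the memory-efficient skewing trick of Huang et al. [13], so it should be read as an illustration of the formulas under that simplifying assumption, not as the exact implementation used in PAT.

```python
import torch
import torch.nn as nn

class RelPosSelfAttention(nn.Module):
    """Multi-head self-attention with relative positional bias, following Eqs. (1)-(5)."""
    def __init__(self, dim, heads, max_len):
        super().__init__()
        assert dim % heads == 0
        self.h, self.dh, self.max_len = heads, dim // heads, max_len
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.out = nn.Linear(dim, dim)
        # One learnable embedding Omega_d per relative offset n - m and per head dimension.
        self.rel = nn.Parameter(0.02 * torch.randn(2 * max_len - 1, heads, self.dh))

    def forward(self, x):                                    # x: (N, dim), N <= max_len
        N = x.shape[0]
        q = self.q(x).view(N, self.h, self.dh)
        k = self.k(x).view(N, self.h, self.dh)
        v = self.v(x).view(N, self.h, self.dh)
        content = torch.einsum("nhd,mhd->hnm", q, k)         # Q K^T term of Eq. (2)
        offsets = torch.arange(N)[:, None] - torch.arange(N)[None, :] + self.max_len - 1
        omega = self.rel[offsets]                            # (N, N, H, Dh)
        rel_bias = torch.einsum("nhd,nmhd->hnm", q, omega)   # P(n, m) of Eq. (3)
        attn = torch.softmax((content + rel_bias) / self.dh ** 0.5, dim=-1)
        out = torch.einsum("hnm,mhd->nhd", attn, v).reshape(N, -1)  # Eq. (4): concatenate heads
        return self.out(out) + x                             # Eq. (5): output projection + residual

# Example: y = RelPosSelfAttention(dim=512, heads=8, max_len=256)(torch.randn(256, 512))
```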
**Fine Detection Module (FDM) -** The FDM module aims to obtain a fine-grained temporal action dependency representation of the video from the input video sequence for the CDM and CLASM modules. FDM includes a 1D temporal convolutional layer followed by \(B\) RPT blocks. The convolution layer \(\Lambda^{\bullet}\) has a kernel size of three and a stride of one to map all the input tokens \(I\in\mathrm{I\!R}^{\mathrm{T\times D}}\) into a lower dimension \(D^{*}\), and then the RPT blocks are applied to learn the fine-grained dependencies \(I^{\odot}\).
\[I^{\odot}=RPT_{1:B}^{FDM}(\Lambda^{\bullet}(I)), \tag{6}\]
where \(I^{\odot}\in\mathrm{I\!R}^{\mathrm{T\times D^{\star}}}\) and \(D^{*}<D\).
**Coarse Detection Module (CDM) -** In the CDM module, we aim to learn a coarse temporal action dependency representation of the video. To achieve this, one solution is to extract and combine multi-scale temporal features through a hierarchical structure, such as the proposed method in [7, 40] (see Fig. 4a). However, as we already explained in Section 1, using multiple sub-sampling processes in the hierarchical structure results in losing more positional information in the top layers of the network. Our CDM module has been designed to overcome this issue by extracting different scales of features from the same full-scale fine-grained information and through only one sub-sampling process (see Fig. 4b). In Section 4.1, we show that our novel non-hierarchical design to extract multi-scale features significantly outperforms a hierarchical structure.
The CDM module has \(F\) granularity branches such that each branch learns a different scale of temporal features. In the \(i^{th}\) branch, first a 1D temporal convolutional layer \(\Lambda_{i}^{\diamond}\) with a kernel size of three and a stride of \(2^{i}\) is applied on the fine-grained inputs received from the preceding module FDM as
\[I^{g_{i}}=\Lambda_{i}^{\diamond}(I^{\odot}), \tag{7}\]
where \(I^{g_{i}}\in\mathrm{I\!R}^{\mathrm{T^{i}\times D^{\star}}}\), \(i\in\{1,2,...,F\}\), and \(T^{i}=\frac{T}{2^{i}}\). Then, the down-sampled features are fed into \(B\) RPT transformer blocks to exploit the temporal dependencies amongst them
\[\bar{I}^{g_{i}}=RPT_{1:B}^{CDM_{i}}(I^{g_{i}}), \tag{8}\]
where \(\bar{I}^{g_{i}}\in\mathrm{I\!R}^{\mathrm{T^{i}\times D^{\star}}}\). Note that, to extract all the scales of features, the striding process (sub-sampling) is used only once, and since the relative positional information has already been embedded in the fine-grained features, the sub-sampled features keep the temporal positional cues after the striding process.

Figure 3: Architecture of the proposed RPT block. For brevity, the computation of the heads is not shown separately.

Figure 2: The overall schema of the proposed network PAT including (i) video encoder E, (ii) fine detection module FDM, (iii) coarse detection module CDM, and (iv) classification module CLASM.
In the CLASM module, action class probabilities are estimated for each input token generated by the video encoder E. Therefore, the CDM module needs to provide a coarse dependency representation at the original temporal length. To do this, we up-sample and combine the different scales of coarse features to provide a final coarse representation \(I^{\otimes}\) as
\[\hat{I}^{g_{i}}=UpSample(\bar{I}^{g_{i}}), \tag{9}\]
\[I^{\otimes}=\sum_{i=1}^{F}\hat{I}^{g_{i}}, \tag{10}\]
where \(\hat{I}^{g_{i}},I^{\otimes}\in\mathrm{I\!R}^{\mathrm{T\times D^{*}}}\) and linear interpolation is employed for up-sampling.
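The non-hierarchical multi-scale processing of Equations (7)-(10) can be sketched as follows. For brevity, standard transformer encoder layers stand in for the RPT blocks (a simplification, since the full RPT block also uses relative positional encoding and a local relational component); the branch count \(F=3\) and feature width \(D^{*}=512\) are taken from the implementation details given later.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseDetectionModule(nn.Module):
    """Non-hierarchical multi-scale branches of Eqs. (7)-(10); illustrative sketch only."""
    def __init__(self, dim=512, branches=3, blocks=3, heads=8):
        super().__init__()
        # Eq. (7): one stride-2^i temporal convolution per granularity branch.
        self.down = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=3, stride=2**i, padding=1)
            for i in range(1, branches + 1))
        # Plain transformer encoders stand in for the B RPT blocks of each branch.
        self.blocks = nn.ModuleList(
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, heads, batch_first=True), blocks)
            for _ in range(branches))

    def forward(self, fine):                       # fine: (B, T, dim) from the FDM
        T = fine.shape[1]
        coarse = 0.0
        for down, blk in zip(self.down, self.blocks):
            x = down(fine.transpose(1, 2))         # Eq. (7): sub-sample to T / 2^i steps
            x = blk(x.transpose(1, 2))             # Eq. (8): temporal dependencies per branch
            x = F.interpolate(x.transpose(1, 2), size=T, mode="linear",
                              align_corners=False) # Eq. (9): linear up-sampling back to T
            coarse = coarse + x.transpose(1, 2)    # Eq. (10): sum over branches
        return coarse                              # (B, T, dim)

# Example: out = CoarseDetectionModule()(torch.randn(2, 256, 512))  # -> (2, 256, 512)
```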
**Classification Module (CLASM) -** This module obtains the action class probabilities for action detection from the fine and coarse contexts. To do this, two convolution blocks \(CLAS^{\odot}\) and \(CLAS^{\otimes}\), each consisting of two 1D convolution filters with kernel size one and stride one, are applied on the fine and coarse features separately to predict \(C\) action class probabilities for each temporal moment
\[Y^{\phi}=Sig(CLAS^{\phi}(I^{\phi})), \tag{11}\]
where \(Y^{\phi}\in\mathrm{I\!R}^{\mathrm{T\times C}}\), \(\phi\in\{\odot,\otimes\}\), and \(Sig\) refers to the sigmoid activation function. Then, at inference, the final estimation is computed by combining them as
\[\hat{Y}=\sum_{\phi}\alpha_{\phi}Y^{\phi}, \tag{12}\]
where \(\hat{Y}\in\mathrm{I\!R}^{\mathrm{T\times C}}\) and \(\alpha_{\odot}+\alpha_{\otimes}=1\).
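The classification heads of Equations (11)-(12) reduce to per-timestep 1D convolutions with sigmoid outputs; a minimal sketch is given below. The ReLU between the two convolutions of each head and the default mixing weights are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ClassificationModule(nn.Module):
    """Fine/coarse classification heads of Eqs. (11)-(12); illustrative sketch only."""
    def __init__(self, dim=512, num_classes=157):
        super().__init__()
        make_head = lambda: nn.Sequential(nn.Conv1d(dim, dim, 1), nn.ReLU(),
                                          nn.Conv1d(dim, num_classes, 1))
        self.fine_head, self.coarse_head = make_head(), make_head()

    def forward(self, fine, coarse, alpha_fine=0.7, alpha_coarse=0.3):
        # Inputs are (B, T, dim); the 1D convolutions operate on (B, dim, T).
        y_fine = torch.sigmoid(self.fine_head(fine.transpose(1, 2)))        # Eq. (11), fine stream
        y_coarse = torch.sigmoid(self.coarse_head(coarse.transpose(1, 2)))  # Eq. (11), coarse stream
        y = alpha_fine * y_fine + alpha_coarse * y_coarse                   # Eq. (12), at inference
        return y.transpose(1, 2)                                            # (B, T, C)
```

During training, both \(Y^{\odot}\) and \(Y^{\otimes}\) are supervised separately, as in Eq. (13) below; the weighted combination is only needed at inference.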
### Network Optimization
To optimize action detection models, binary cross entropy (BCE) is usually used, as in [33, 7, 40, 34]. However, in the multi-label setting, the number of negative labels is typically much larger than the number of positive ones. This imbalance between positive and negative labels can result in poor performance on the action detection task if we employ BCE for training, since BCE does not provide any control over the contribution of positive and negative samples. To overcome this, we propose to adapt the Asymmetric loss \(\mathcal{L}_{asl}\) [28] for multi-label action detection. Therefore, the total loss \(\mathcal{L}_{total}\) is computed as
\[\mathcal{L}_{total}=\frac{1}{T}\sum_{\phi}\sum_{t=1}^{T}\sum_{c=1}^{C}\alpha_{ \phi}\mathcal{L}_{asl}(g_{t,c},y_{t,c}^{\phi}), \tag{13}\]
\[\mathcal{L}_{asl}(g_{t,c},y_{t,c}^{\phi})=-g_{t,c}\mathcal{L}_{+}-(1-g_{t,c}) \mathcal{L}_{-}, \tag{14}\]
\[\mathcal{L}_{+}=(1-y_{t,c}^{\phi})^{\gamma_{+}}log(y_{t,c}^{\phi}), \tag{15}\]
\[\mathcal{L}_{-}=(\hat{y}_{t,c}^{\phi})^{\gamma_{-}}log(1-\hat{y}_{t,c}^{\phi}), \tag{16}\]
\[\hat{y}_{t,c}^{\phi}=max(y_{t,c}^{\phi}-\delta,0), \tag{17}\]
where \(g_{t,c}\) indicates the ground truth label of action class \(c\) in temporal step \(t\), and \(y_{t,c}^{\phi}\) is its corresponding class probability estimated by Eq. 11. \(\gamma_{+}\) and \(\gamma_{-}\) are focusing parameters for positive and negative labels respectively and if we choose \(\gamma_{+}<\gamma_{-}\), we are able to increase the contribution of positive samples. Furthermore, Eq. 17 applies another asymmetric mechanism by discarding the very easy negative samples through setting the threshold parameter \(\delta\). In Section 4.1, we show that optimizing the proposed network through Asymmetric loss instead of BCE improves the method's performance.
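A minimal PyTorch sketch of the adapted Asymmetric loss of Equations (14)-(17) is given below. The small epsilon guarding the logarithms is a numerical-stability assumption not stated in the text, and the \(\alpha_{\phi}\) weighting over the fine and coarse streams in Equation (13) is applied outside this function.

```python
import torch

def asymmetric_loss(y_prob, target, gamma_pos=1.0, gamma_neg=3.0, delta=0.1, eps=1e-8):
    """Eqs. (14)-(17): y_prob and target are (..., T, C) tensors of probabilities and 0/1 labels."""
    y_shift = (y_prob - delta).clamp(min=0.0)                                    # Eq. (17)
    loss_pos = (1.0 - y_prob) ** gamma_pos * torch.log(y_prob.clamp(min=eps))    # Eq. (15)
    loss_neg = y_shift ** gamma_neg * torch.log((1.0 - y_shift).clamp(min=eps))  # Eq. (16)
    loss = -(target * loss_pos + (1.0 - target) * loss_neg)                      # Eq. (14)
    return loss.sum(dim=-1).mean()     # sum over classes, average over time (and batch)
```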
## 4 Experimental Results
**Datasets -** There are several benchmark datasets for action detection, but only a few of them provide dense multi-label annotations. For instance, videos in ActivityNet [1] have only one action type per timestamp. We present the results of PAT on two challenging dense multi-label benchmark datasets, Charades [31] and MultiTHUMOS [39].
Charades [31] is a large dataset including \(9,848\) videos of daily activities of 267 persons. It contains \(66,500\) temporal interval annotations for 157 action classes, with a high overlap amongst the action instances of different action categories. To evaluate our method on Charades, we follow previous methods [15, 33, 7] and use the same training and testing sets as in [31].

Figure 4: The proposed hierarchical structure in [7, 40] vs. our proposed non-hierarchical design in fine and coarse detection modules to extract multi-scale features for action detection.
MultiTHUMOS contains the same set of 413 videos as the THUMOS'14 dataset [14]. However, MultiTHUMOS is more challenging than THUMOS'14 since (i) the annotations have been extended from 20 action classes to 65, and (ii) in contrast to the sparse-label frame-level annotations in THUMOS'14, MultiTHUMOS has dense multi-label action annotations. To obtain the results on this dataset, we use the same standard training and testing splits applied by previous methods [33, 7]. Following state-of-the-art methods [27, 26, 15, 33, 7], we evaluate our method on these datasets with the standard per-frame mAP metric.
**Implementation Details -** Similar to the proposed method in [7], during both training and inference, PAT uses a fixed number of \(T=256\) input tokens. For training, we randomly sample a clip containing \(T\) consecutive tokens from a video sequence. At inference, we follow previous work [33, 15] and make the predictions for a full video sequence. Each input token is provided by applying the video encoder E on an 8-frame segment to extract a feature vector with dimension \(D=1024\). The video encoder E is implemented using a pre-trained I3D [2]\({}^{1}\), whose fully connected layers are replaced with a global average pooling layer and whose parameters are frozen. In the convolutional layer of FDM, the input features are mapped into \(D^{*}=512\) dimensional feature vectors. Note that the feature dimension \(D^{*}=512\) is fixed for the rest of the network. FDM and each granularity branch of CDM have \(B=3\) RPT blocks with \(H=8\) multi-head attention heads, and the number of granularity branches in CDM is set to \(F=3\), as we found that with these parameters PAT obtains the best performance. The contributing factors for fine-grained (\(\alpha_{\odot}\)) and coarse-grained (\(\alpha_{\otimes}\)) features in the CLASM module are set empirically to \(\{\alpha_{\odot}=0.1,\alpha_{\otimes}=0.9\}\) and \(\{\alpha_{\odot}=0.7,\alpha_{\otimes}=0.3\}\) for Charades and MultiTHUMOS respectively. In the Asymmetric loss, we use factors of \(\gamma_{+}=1\) and \(\gamma_{-}=3\) for the impact of positive and negative samples respectively, and a threshold parameter \(\delta=0.1\), which are determined through trial and error.
Footnote 1: The video encoder E is pre-trained on Kinetics-400 [16] and on the training set of Charades, for MultiTHUMOS and Charades respectively.
Our experiments were performed under Pytorch on an NVIDIA GeForce RTX 3090 GPU, and we trained our model using the Adam optimiser [17] with an initial learning rate of 0.0001 and batch size 3 for 25 and 300 epochs for Charades and MultiTHUMOS datasets respectively. The learning rate was decreased by a factor of 10 every 7 and 130 epochs for Charades and MultiTHUMOS respectively. Note, using different training settings for Charades and MultiTHUMOS is due to their different size.
### Ablation Studies
In this section, we examine our design decisions for the proposed network and learning paradigm.
**Effect of FDM and CDM Modules -** Here, we aim to evaluate the impact of the fine and coarse detection modules (FDM and CDM) in the final results of PAT. Table 1 shows per-frame mAP on the Charades and MultiTHUMOS datasets as we remove each or both of FDM and CDM modules. To obtain the results of the network when both modules are dropped, we use directly the sequence of input tokens generated by the video encoder (I3D) for action detection. Table 1 shows that using only input tokens generated by I3D network is not enough for effective action detection and employing fine and coarse-grained temporal features obtained by FDM and CDM improves the performance by \(9.7\%\) and \(7.9\%\) per-frame mAP on Charades and MultiTHUMOS respectively. It also shows that both FDM and CDM modules have an important contribution to the final results as by removing FDM and CDM, our results deteriorate by \(2.4\%\) and \(3.4\%\) per-frame mAP on average on both datasets respectively. Table 1 also shows that for different datasets that have different action types, the contribution of fine and coarse features might be different which is the reason we use the contribution factors \(\{\alpha_{\odot},\alpha_{\oplus}\}\) to combine the prediction results of FDM and CDM in the CLASM module at the inference.
**Effect of Structure Design to Extract Multi-Scale Features -** In this section, we examine the design of PAT with two other variants to capture fine-grained and coarse-grained features. In the first variant PAT-\(v_{1}\), the CDM module uses the hierarchical structure to extract the multi-scale features while the rest of its architecture is the same as PAT. In the second variant PAT-\(v_{2}\), the CDM module has a non-hierarchical structure, the same as PAT, but the FDM module and all granularity branches in CDM learn their features from input tokens.
Table 2 shows that when CDM applies a hierarchical structure to learn the coarse-grained features (PAT-\(v_{1}\)), the method's performance drops \(1.4\%\) and \(0.6\%\) on Charades and MultiTHUMOS respectively. This proves the contribution of our novel non-hierarchical transformer-based design, which preserves positional information when exploiting the multi-scale features. Furthermore, in the non-hierarchical CDM cases (PAT and PAT-\(v_{2}\)), if the CDM module extracts the multi-scale features from the fine-grained context instead of the input tokens, as in PAT, we achieve the best performance at \(26.5\%\) and \(44.6\%\) per-frame mAP on Charades and MultiTHUMOS respectively.

\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Module} & \multicolumn{2}{c}{mAP(\%)} \\ \cline{2-3} & Charades & MultiTHUMOS \\ \hline CLASM & 16.8 & 36.7 \\ FDM, CLASM & 23.8 & 40.5 \\ CDM, CLASM & 26.2 & 40.1 \\ \hline FDM, CDM, CLASM & **26.5** & **44.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation studies on FDM and CDM modules of PAT on the Charades and MultiTHUMOS datasets using RGB videos in terms of the per-frame mAP metric.
**Impact of Relative Positional Encoding -** Table 3 shows the performance of PAT when different positional encodings are applied. It can be observed that employing the relative positional encoding [29, 13] embedded in the RPT block improves the method's performance by \(0.3\%\) per-frame mAP on both datasets, while adding absolute positional encoding [35] into the input tokens deteriorates the method's performance significantly.
**Impact of Loss Function -** Here, we examine the effect of BCE and Asymmetric [28] losses for training. As shown in Table 4, applying the Asymmetric loss [28] to optimize PAT improves the performance by \(0.5\%\) and \(0.2\%\) per-frame mAP on Charades and MultiTHUMOS respectively.
**Discussion and Analysis -** The ablation studies show that leveraging positional information in the transformer layers has an important contribution to the final results of the network: extracting the multi-scale temporal features through our proposed non-hierarchical design in CDM outperforms a hierarchical structure by \(1.0\%\) mAP on average on both datasets (PAT vs PAT-v1), and embedding the relative positional encoding in the RPT block improves the performance by \(0.3\%\) mAP on both datasets. Our further ablations also reveal the effect of the Asymmetric loss in the optimization of PAT, where it increases the performance by \(0.3\%\) mAP on average on both datasets.
### State-of-the-Art Comparison
In this section, we compare the performance of the proposed method with the state-of-the-art action detection approaches including both transformer-based methods and the methods that do not use self-attention. Both quantitative and qualitative results are obtained for this section.
Table 5 provides comparative results on the benchmark datasets Charades and MultiTHUMOS based on the standard per-frame mAP metric. Table 5 shows that our proposed method outperforms the current state-of-the-art result by \(1.1\%\) and \(0.6\%\) on Charades and MultiTHUMOS respectively and achieves a new state-of-the-art per-frame mAP results at \(26.5\%\) and \(44.6\%\) on Charades and MultiTHUMOS respectively.
We also evaluate the performance of our proposed method by action-conditional metrics including Action-Conditional Precision \(P_{AC}\), Action-Conditional Recall \(R_{AC}\), Action-Conditional F1-Score \(F1_{AC}\), and Action-Conditional Mean Average Precision \(mAP_{AC}\), as introduced in [33]. The aim of these metrics is to measure the ability of the network to learn both co-occurrence and temporal dependencies of different action classes. The metrics are measured throughout a temporal window with a size of \(\tau\). As shown by the results on Charades in Table 6, the proposed method PAT achieves state-of-the-art results on all action-conditional metrics, specifically, it improves the state-of-the-art results significantly on \(R_{AC}\) and \(F1_{AC}\) by \(10.6\%\) and \(7.7\%\), \(10.8\%\) and \(7.5\%\), and \(10.8\%\) and \(7.3\%\) where \(\tau\) is 0, 20, and 40 respectively.
Fig. 5 displays qualitative results of PAT on a test video sample of Charades and compares them with the outputs of MS-TCT [7]. Amongst the state-of-the-art methods, we applied MS-TCT [7] and MLAD [33] to the video sample, since their code is available, usable and compatible with our hardware. However, as MLAD could not predict any of the actions, we report only the results of MS-TCT. The results in Fig. 5 show that our proposed method's action predictions have a better overlap with the ground-truth labels, and that our method detected more action instances in the video than MS-TCT.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Loss} & \multicolumn{2}{c}{mAP(\%)} \\ \cline{2-3} & Charades & MultiTHUMOS \\ \hline BCE & 26.0 & 44.4 \\ \hline Asymmetric [28] & **26.5** & **44.6** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation studies on the loss function applied for training PAT on the Charades and MultiTHUMOS datasets using RGB videos in terms of per-frame mAP metric.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Design} & \multicolumn{2}{c}{mAP(\%)} \\ \cline{2-3} & Charades & MultiTHUMOS \\ \hline PAT-\(v_{1}\) (Hierarchical) & 25.1 & 44.0 \\ PAT-\(v_{2}\) & 26.1 & 44.2 \\ \hline PAT & **26.5** & **44.6** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation studies on structure design of the proposed method on the Charades and MultiTHUMOS datasets using RGB videos in terms of per-frame mAP metric.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Positional Encoding} & \multicolumn{2}{c}{mAP(\%)} \\ \cline{2-3} & Charades & MultiTHUMOS \\ \hline No encoding & 26.2 & 44.3 \\ Absolute & 25.3 & 43.5 \\ \hline Relative & **26.5** & **44.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation studies on positional encoding used in PAT on the Charades and MultiTHUMOS dataset using RGB videos in terms of per-frame mAP metric.
cept "_Taking a bag_" while MS-TCT could not detect "_Taking a picture_", "_Taking a bag_", and "_Walking_".
## 5 Conclusion
In this work, we introduced a novel transformer-based network PAT that exploits different ranges of temporal dependencies for action detection. The proposed method has been designed to benefit from preserving temporal positional information in learning multi-granularity features by (i) embedding the relative positional encoding in its transformer layers and (ii) a non-hierarchical design. We evaluated PAT on two densely-labelled challenging benchmark action detection datasets, on which we achieved new state-of-the-art results, and our ablation studies demonstrated the effectiveness of different components of our proposed network. For future work, we will investigate adapting our network to learn spatial and temporal dependencies from raw pixels and also use audio information to improve the performance of action detection.
## Acknowledgement
This research is supported by UKRI EPSRC Platform Grant EP/P022529/1, and EPSRC BBC Prosperity Partnership AI4ME: Future Personalised Object-Based Media Experiences Delivered at Scale Anywhere EP/V038087/1.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{\(\tau=0\)} & \multicolumn{4}{c}{\(\tau=20\)} & \multicolumn{4}{c}{\(\tau=40\)} \\ \cline{2-13} & \(P_{AC}\) & \(R_{AC}\) & \(F1_{AC}\) & \(mAP_{AC}\) & \(P_{AC}\) & \(R_{AC}\) & \(F1_{AC}\) & \(mAP_{AC}\) & \(P_{AC}\) & \(R_{AC}\) & \(F1_{AC}\) & \(mAP_{AC}\) \\ \hline I3D[2]* & 14.3 & 1.3 & 2.1 & 15.2 & 12.7 & 1.9 & 2.9 & 21.4 & 14.9 & 2.0 & 3.1 & 20.3 \\ CF [33]* & 10.3 & 1.0 & 1.6 & 15.8 & 9.0 & 1.5 & 2.2 & 22.2 & 10.7 & 1.6 & 2.4 & 21.0 \\ MLAD [33]✓ & 19.3 & 7.2 & 8.9 & 28.9 & 18.9 & 8.9 & 10.5 & 35.7 & 19.6 & 9.0 & 10.8 & 34.8 \\ MS-TCT [7]✓ & 26.3 & 15.5 & 19.5 & 30.7 & 27.6 & 18.4 & 22.1 & 37.6 & 27.9 & 18.3 & 22.1 & 36.4 \\ \hline PAT✓ & **28.3** & **26.1** & **27.2** & **32.0** & **30.0** & **29.2** & **29.6** & **37.8** & **30.0** & **29.1** & **29.4** & **36.7** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Action detection results on Charades dataset based on the action-conditional metrics [33], \(P_{AC}\), \(R_{AC}\), \(F1_{AC}\), and \(mAP_{AC}\). \(\tau\) refers the temporal window size. The same as [7, 33], both RGB and optical flow are used for obtaining the results. The ✓ symbol highlights the transformer-based approaches, and \(*\) indicates the results are taken from [33].
Figure 5: Visualization of action predictions by our proposed method PAT and MS-TCT [7] on a test video sample of Charades including 7 different action types.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Venue} & \multirow{2}{*}{GFLOPs} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{mAP(\%)} \\ \cline{5-6} & & & & Charades & MultiTHUMOS \\ \hline R-C3D [37] & ICCV 2017 & - & C3D & 12.7 & - \\ SuperEvent [27] & CVPR 2018 & 0.8 & I3D & 18.6 & 36.4 \\ TGM [26] & ICML 2019 & 1.2 & I3D & 20.6 & 37.2 \\ PDAN [6]✓ * & WACV 2021 & 3.2 & I3D & 23.7 & 40.2 \\ CoarseFine [15] & CVPR 2021 & - & X3D & 25.1 & - \\ MLAD [33]✓ & CVPR 2021 & 44.8 & I3D & 18.4 & 42.2 \\ CTRN [5]✓ & BMVC 2021 & - & I3D & 25.3 & 44.0 \\ PointTAD [32] & NeurIPS 2022 & - & I3D & 21.0 & 39.8 \\ MS-TCT [7]✓ & CVPR 2022 & 6.6 & I3D & 25.4 & 43.1 \\ \hline PAT✓ & & 8.5 & I3D & **26.5** & **44.6** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Action detection results on Charades and MultiTHUMOS datasets using RGB videos in terms of per-frame mAP. The ✓ symbol highlights the transformer-based approaches, and \(*\) indicates the results are taken from [7].
|
2303.07534
|
Stochastic nutrient-plankton models
|
We analyze plankton-nutrient food chain models composed of phytoplankton,
herbivorous zooplankton and a limiting nutrient. These models have played a key
role in understanding the dynamics of plankton in the oceanic layer. Given the
strong environmental and seasonal fluctuations that are present in the oceanic
layer, we propose a stochastic model for which we are able to fully classify
the longterm behavior of the dynamics. In order to achieve this we had to
develop new analytical techniques, as the system does not satisfy the regular
dissipativity conditions and the analysis is more subtle than in other
population dynamics models.
|
Alexandru Hening, Nguyen Trong Hieu, Dang Hai Nguyen, Nhu Ngoc Nguyen
|
2023-03-13T23:40:07Z
|
http://arxiv.org/abs/2303.07534v1
|
# Stochastic nutrient-plankton models
###### Abstract.
We analyze plankton-nutrient food chain models composed of phytoplankton, herbivorous zooplankton and a limiting nutrient. These models have played a key role in understanding the dynamics of plankton in the oceanic layer. Given the strong environmental and seasonal fluctuations that are present in the oceanic layer, we propose a stochastic model for which we are able to fully classify the longterm behavior of the dynamics. In order to achieve this we had to develop new analytical techniques, as the system does not satisfy the regular dissipativity conditions and the analysis is more subtle than in other population dynamics models.
**Keywords.** nutrient-plankton model, switching diffusion, ergodicity, invariant measure
## 1. Introduction
The oceans of the world are populated by small, free floating or weakly swimming, organisms called plankton. More specifically, plankton can be divided into phytoplankton, which are plants, and zooplankton, which are animals that consume the phytoplankton. These tiny organisms have a significant impact on the various food chains present in the oceans as they form the bottom of the food chains. In addition, they also seem to play a role in the Earth's carbon cycle. Because it is hard to empirically measure the amount of plankton, it is important to build simple mathematical models that will allow us to better understand the dynamics of plankton.
The analysis of mathematical models for plankton dynamics can be traced to Hallam [14, 15, 16], who obtained stability and persistence results for nutrient controlled plankton models. Since then, people have studied the dynamics of models that include phytoplankton, zooplankton and a nutrient that is consumed by the phytoplankton. This nutrient can be regenerated due to the bacterial decomposition of dead phytoplankton and zooplankton. In this paper we assume that the nutrient recycling is instantaneous, and therefore neglect the time required to regenerate the nutrient from dead plankton.
In our model, which first appeared in [17] and was generalized in [18], the limiting nutrient has a constant input concentration \(N^{0}\) while the nutrient, phytoplankton and zooplankton have constant washout rates \(D,D_{1}\) and \(D_{2}\). It is important to include the washout rates because they describe the removal due to washout, sinking, or harvesting of biotic mass from the ecosystem.
The models we study can be seen as describing the dynamics of the zooplankton-phytoplankton-nutrient trio within lakes or oceans. Since water masses have nutrient residence times of years [19], one must consider the regeneration of nutrient due to bacterial decomposition of dead plankton. We assume that the zooplankton only feeds on phytoplankton and that parts of the dead phytoplankton and zooplankton are instantaneously recycled into the nutrient. We are then able to find two thresholds, which depend on the model parameters, that completely characterize the persistence or extinction of the two types of plankton.
Natural ecosystems will be influenced by random environmental fluctuations. These fluctuations will have a significant impact on the dynamics of the various species. As a result, in order to have a realistic model of the species dynamics in an ecosystem it is key to include environmental fluctuations in the mathematical framework. It is well known that environmental fluctuations can have a significant impact on the long term behavior: in certain cases coexistence can be reversed into extinction while in others extinction becomes coexistence [1, 17, 18]. A successful
uptake rate of zooplankton, \(\alpha_{4}\) is the nutrient recycling rate from dead phytoplankton, and \(\alpha_{5}\) is the nutrient recycling rate from dead zooplankton. This model has been studied in [10] where the author found sufficient conditions for extinction and persistence. It is natural to generalize the functional responses from (1.1) so that nonlinear interactions can be captured. One way is by looking at the dynamics of the type
\[\begin{split}\frac{dX}{dt}(t)&=\Lambda-F_{1}(X(t),Y (t))X(t)Y(t)-\alpha_{1}X(t)+\alpha_{4}Y(t)+\alpha_{5}Z(t)\\ \frac{dY}{dt}(t)&=F_{1}(X(t),Y(t))X(t)Y(t)-F_{2}(Y(t ),Z(t))Y(t)Z(t)-\alpha_{2}Y(t)\\ \frac{dZ}{dt}(t)&=F_{2}(Y(t),Z(t))Y(t)Z(t)-\alpha_{3} Z(t)\end{split} \tag{1.2}\]
where \(F_{1}(x,y),F_{2}(y,z)\) can now be non-constant functions. The last step is introducing the environmental white-noise fluctuations, which turns the system of ODE (1.2) into the system of SDE
\[\begin{split} dX(t)&=[\Lambda-F_{1}(X(t),Y(t))X(t)Y (t)-\alpha_{1}X(t)+\alpha_{4}Y(t)+\alpha_{5}Z(t)]dt+\sigma_{1}X(t)dW_{1}(t)\\ dY(t)&=[F_{1}(X(t),Y(t))X(t)Y(t)-F_{2}(Y(t),Z(t))Y( t)Z(t)-\alpha_{2}Y(t)]dt+\sigma_{2}Y(t)dW_{2}(t)\\ dZ(t)&=[F_{2}(Y(t),Z(t))Y(t)Z(t)-\alpha_{3}Z(t)]dt+ \sigma_{3}Z(t)dW_{3}(t)\end{split} \tag{1.3}\]
where \((W_{1}(t),W_{2}(t),W_{3}(t))\) is a standard Brownian motion on \(\mathbb{R}^{3}\) on a complete probability space \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})\) with a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions. Throughout this paper, \(\mathbb{R}^{3}_{+}=\{(x,y,z)\in\mathbb{R}^{3}:x,y,z\geq 0\}\), \(\mathbb{R}^{3,\circ}_{+}=\{(x,y,z)\in\mathbb{R}^{3}:x,y,z>0\}\). Let \(\mathbf{S}(t):=(X(t),Y(t),Z(t))\) and let \(\mathbf{s}=(x,y,z)\in\mathbb{R}^{3}_{+}\) denote the initial conditions, that is \(\mathbf{S}(0):=(X(0),Y(0),Z(0))=\mathbf{s}\). We denote by \(\mathcal{L}\) the generator of the diffusion process \(\mathbf{S}\) from (1.3). We will also use \(\mathbb{P}_{\mathbf{s}}\), \(\mathbb{E}_{\mathbf{s}}\) to indicate the initial value of the solutions.
The following assumption is held throughout the paper.
**Assumption 1.1**.: _The following conditions hold._
1. \(\alpha_{4}<\alpha_{2}\) _and_ \(\alpha_{5}<\alpha_{3}\)_._
2. \(F_{1}(\cdot)\) _and_ \(F_{2}(\cdot)\) _are functions on_ \(\mathbb{R}^{2}_{+}\) _bounded by_ \(L>0\)_. Suppose_ \(F_{1}(u,v)u,F_{2}(u,v)u\) _are Lipschitz functions whose Lipschitz coefficients are bounded by_ \(L.\)__
3. \(F_{1}(u,0)u\) _is nondecreasing._
_Remark 1.1_.: Assumption 1.1 (1) is natural since \(\alpha_{4}\) (resp. \(\alpha_{5}\)) is the nutrient recycling rate from dead phytoplankton (resp. zooplankton) that must be always less than \(\alpha_{2}\) (resp. \(\alpha_{3}\)), the death rate and the washout rate of the phytoplankton (resp. zooplankton). Assumption 1.1 (2) and (3) are mild and satisfied by almost all the models from the literature.
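To make the setup concrete, the sketch below integrates (1.3) with a simple Euler–Maruyama scheme. It is only an illustration: the functional responses \(F_{1},F_{2}\) and every parameter value are assumptions chosen to satisfy Assumption 1.1, not values taken from this paper, and the clipping at zero is a numerical safeguard rather than part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) parameters; alpha_4 < alpha_2 and alpha_5 < alpha_3 as in Assumption 1.1.
Lam, a1, a2, a3, a4, a5 = 1.0, 0.5, 0.4, 0.3, 0.1, 0.05
s1, s2, s3 = 0.1, 0.1, 0.1
F1 = lambda x, y: 0.6 / (1.0 + x)   # bounded, and F1(x, y) * x is nondecreasing in x
F2 = lambda y, z: 0.5 / (1.0 + y)   # bounded

def euler_maruyama(x0, y0, z0, T=200.0, dt=1e-3):
    """Euler-Maruyama discretization of the SDE system (1.3)."""
    n = int(T / dt)
    X, Y, Z = x0, y0, z0
    path = np.empty((n, 3))
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=3)
        dX = (Lam - F1(X, Y)*X*Y - a1*X + a4*Y + a5*Z)*dt + s1*X*dW[0]
        dY = (F1(X, Y)*X*Y - F2(Y, Z)*Y*Z - a2*Y)*dt + s2*Y*dW[1]
        dZ = (F2(Y, Z)*Y*Z - a3*Z)*dt + s3*Z*dW[2]
        # Clip at zero: a numerical safeguard; the exact solution stays nonnegative.
        X, Y, Z = max(X + dX, 0.0), max(Y + dY, 0.0), max(Z + dZ, 0.0)
        path[k] = X, Y, Z
    return path

path = euler_maruyama(1.0, 0.5, 0.5)
print(path[-1])   # (X, Y, Z) at time T
```

Trajectories produced this way are reused in the numerical illustrations further below.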
The first theorem tells us that we can bound the moments of the process, and that the process stays in compact sets with large probability.
**Theorem 1.1**.: _For any initial value \(\mathbf{s}=(x,y,z)\in\mathbb{R}^{3}_{+}\), there exists a unique global solution \(\mathbf{S}(t)\) to (1.3) such that \(\mathbb{P}_{\mathbf{s}}\{\mathbf{S}(t)\in\mathbb{R}^{3}_{+},\ \forall t\geq 0\}=1.\) Moreover, \(X(t)>0\) for all \(t>0\) with probability 1 and if \(Y(0)=0\) (resp. \(Z(0)=0\)) then \(Y(t)=0\) (resp. \(Z(t)=0\)) for all \(t\geq 0\) with probability 1. We also have that \(\mathbf{S}(t)\) is a Markov-Feller process on \(\mathbb{R}^{3}\). Furthermore, there are \(q_{0}>1\), \(\alpha_{0}>0\) such that for any \(q\in[1,q_{0}]\),_
\[\mathbb{E}_{\mathbf{s}}(1+X(t)+Y(t)+Z(t))^{q}\leq(1+x+y+z)^{q}e^{-\alpha_{0}t }+C_{q_{0}},\ \forall\mathbf{s}\in\mathbb{R}^{3}_{+}. \tag{1.4}\]
_In addition, there exists \(\overline{K}>0\) such that_
\[\mathbb{E}_{\mathbf{s}}(1+X(t)+Y(t)+Z(t))^{2}\leq e^{\overline{K}t}(1+x+y+z)^ {2},\ \forall\mathbf{s}\in\mathbb{R}^{3}_{+}. \tag{1.5}\]
_Finally, for any \(\varepsilon>0,H>0,T>0\), there exists \(\widetilde{K}(\varepsilon,H,T)>0\) such that_
\[\mathbb{P}_{\mathbf{s}}\left\{X(t)+Y(t)+Z(t)\leq\widetilde{K}(\varepsilon,H, T),\ \forall 0\leq t\leq T\right\}\geq 1-\varepsilon\ \text{given}\ |\mathbf{s}|\leq H. \tag{1.6}\]
Let \(\widehat{X}\) be the solution of the following equation
\[d\widehat{X}(t)=[\Lambda-\alpha_{1}\widehat{X}(t)]dt+\sigma_{1}\widehat{X}(t)dW_{ 1}(t). \tag{1.7}\]
One can show that this one-dimensional SDE has a unique invariant measure \(\mu_{1}\) on \([0,\infty)\), which is an inverse Gamma distribution (see Lemma 2.2). Define
\[\lambda_{1}:=\int_{[0,\infty)}F_{1}(u,0)u\mu_{1}(du)-\alpha_{2}-\frac{\sigma_{ 2}^{2}}{2}.\]
_Remark 1.2_.: We note that when \(F_{1}(\cdot,\cdot)=a\) is a constant function, and we are in the simplified setting where the deterministic part is the one from (1.1), we get
\[\lambda_{1}=a\frac{\Lambda}{\alpha_{1}}-\alpha_{2}-\frac{\sigma_{2}^{2}}{2}.\]
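As a hedged numerical illustration (all values assumed and purely illustrative), take \(\Lambda=1\), \(\alpha_{1}=0.5\), \(a=0.3\), \(\alpha_{2}=0.4\) and \(\sigma_{2}=0.2\); then

\[\lambda_{1}=0.3\cdot\frac{1}{0.5}-0.4-\frac{(0.2)^{2}}{2}=0.6-0.4-0.02=0.18>0,\]

whereas increasing the noise intensity to \(\sigma_{2}=0.7\) gives \(\lambda_{1}=0.6-0.4-0.245=-0.045<0\). A change of sign can thus be driven by the noise alone; the dynamical consequences of the sign of \(\lambda_{1}\) are the content of the results below.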
The next result tells us that if \(\lambda_{1}\) is negative then both the phytoplankton and the zooplankton go extinct with probability 1. Furthermore, it also gives the exact exponential rates of extinction.
**Theorem 1.2**.: _If \(\lambda_{1}<0\) then for any \(\mathbf{S}(0)=\mathbf{s}\in\mathbb{R}_{+}^{3,\circ}\) we have with probability 1 that_
\[\lim_{t\to\infty}\frac{\ln Y(t)}{t}=\lambda_{1}\text{ and }\lim_{t\to\infty} \frac{\ln Z(t)}{t}=-\alpha_{3}-\frac{\sigma_{3}^{2}}{2}. \tag{1.8}\]
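As a purely numerical companion to (1.8) (not part of the paper's argument), one can monitor \(\ln Y(t)/t\) and \(\ln Z(t)/t\) along a single long Euler–Maruyama trajectory. The sketch below uses the same assumed responses and parameters as before, except that \(\sigma_{2}\) is taken large so that \(\lambda_{1}\) is negative for these illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative parameters; sigma_2 is large so that lambda_1 < 0 for these choices.
Lam, a1, a2, a3, a4, a5 = 1.0, 0.5, 0.4, 0.3, 0.1, 0.05
s1, s2, s3 = 0.1, 0.9, 0.1
F1 = lambda x, y: 0.6 / (1.0 + x)
F2 = lambda y, z: 0.5 / (1.0 + y)

T, dt = 500.0, 1e-3
X, Y, Z = 1.0, 0.5, 0.5
for k in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=3)
    dX = (Lam - F1(X, Y)*X*Y - a1*X + a4*Y + a5*Z)*dt + s1*X*dW[0]
    dY = (F1(X, Y)*X*Y - F2(Y, Z)*Y*Z - a2*Y)*dt + s2*Y*dW[1]
    dZ = (F2(Y, Z)*Y*Z - a3*Z)*dt + s3*Z*dW[2]
    # Small positive floors avoid log(0) after a rare clipping event.
    X, Y, Z = max(X + dX, 0.0), max(Y + dY, 1e-300), max(Z + dZ, 1e-300)

print(np.log(Y) / T)                       # empirical rate, to be compared with lambda_1
print(np.log(Z) / T, -(a3 + 0.5 * s3**2))  # empirical rate vs. -(alpha_3 + sigma_3^2/2)
```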
A natural question is what happens when \(\lambda_{1}>0\). To address this, we consider the system in the absence of zooplankton, given by
\[d\overline{X}(t) =[\Lambda-F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t) \overline{Y}(t)-\alpha_{1}\overline{X}(t)+\alpha_{4}\overline{Y}(t)]dt+\sigma_{ 1}\overline{X}(t)dW_{1}(t)\] \[d\overline{Y}(t) =[F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline {Y}(t)-\alpha_{2}\overline{Y}(t)]dt+\sigma_{2}\overline{Y}(t)dW_{2}(t). \tag{1.9}\]
If \(\lambda_{1}>0\), the next proposition shows that the phytoplankton-nutrient system (1.9) is persistent and has a unique invariant probability measure. We will use subscripts in \(\mathbb{E}_{x,y}\) to indicate initial values of equation (1.9).
**Proposition 1.1**.: _Let \((\overline{X},\overline{Y})\) be the solution to (1.9). If \(\lambda_{1}>0\) then for sufficiently small \(\theta>0\) there exist constants \(K_{\theta},\gamma_{\theta}>0\) such that_
\[\mathbb{E}_{x,y}[(\overline{Y}(t))^{-\theta}]\leq K_{\theta}e^{-\gamma_{ \theta}t}y^{-\theta}+K_{\theta},\;\forall x\geq 0,y>0. \tag{1.10}\]
_As a result of the nondegeneracy of the diffusion process \((\overline{X}(t),\overline{Y}(t))\), there exists a unique invariant measure \(\mu_{12}\) of \((\overline{X}(t),\overline{Y}(t))\) on \(\mathbb{R}_{+}^{2,\circ}\)._
Therefore, if \(\lambda_{1}>0\), we can define the invasion rate of the zooplankton into \(\mu_{12}\) via
\[\lambda_{2}:=\int_{\mathbb{R}_{12+}^{\circ}}F_{2}(u,v)u\mu_{12}(dudv)-\alpha_ {3}-\frac{\sigma_{3}^{2}}{2}.\]
The normalized random occupation measure is given by
\[\widetilde{\Pi}_{t}^{\mathbf{s}}(\cdot):=\frac{1}{t}\int_{0}^{t}\mathbf{1}_{ \{\mathbf{S}(u)\in\cdot\}}\,du,\]
where the superscript \(\mathbf{s}\) indicates the corresponding initial condition. Finally, we are able to show that if \(\lambda_{1}>0\) then the sign of \(\lambda_{2}\) determines the extinction/persistence of the zooplankton.
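For intuition, the occupation measure can be approximated from any discretely sampled trajectory (for instance one produced by the Euler–Maruyama sketch above) by weighting a histogram of visited states with the time step; the grid resolution below is an arbitrary choice.

```python
import numpy as np

def occupation_measure(path, dt, bins=30):
    """Empirical approximation of (1/t) * int_0^t 1_{S(u) in .} du from a
    sampled path of shape (n, 3); the returned weights sum to 1."""
    t = len(path) * dt
    hist, edges = np.histogramdd(path, bins=bins)
    return hist * dt / t, edges
```

In the regimes of Theorems 1.2 and 1.3 these weights should pile up near the boundary \(\{z=0\}\) (or \(\{y=z=0\}\)), while in the regime of Theorem 1.4 they should spread over a genuinely three-dimensional region.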
**Theorem 1.3**.: _If \(\lambda_{1}>0\) and \(\lambda_{2}<0\) then, for any \(\mathbf{s}\in\mathbb{R}_{+}^{3,\circ}\) with probability one_
\[\lim_{t\to\infty}\frac{\ln Z(t)}{t}=\lambda_{2} \tag{1.11}\]
_and with probability one the family of normalized random occupation measures \((\widetilde{\Pi}_{t}^{\mathbf{s}})_{t>0}\) converges weakly to \(\mu_{12}\)._
**Theorem 1.4**.: _If \(\lambda_{1}>0\) and \(\lambda_{2}>0\) then there exists a unique invariant measure \(\mu^{\circ}\) on \(\mathbb{R}^{3,\circ}_{+}\). Furthermore, for any \(\mathbf{s}\in\mathbb{R}^{3,\circ}_{+}\)_
\[\lim_{t\to\infty}t^{\widetilde{q}-1}\|P_{t}(\mathbf{s},\cdot)-\mu^{\circ}( \cdot)\|_{TV}=0,\;\forall 1\leq\widetilde{q}<q_{0}, \tag{1.12}\]
_where \(\|\cdot\|_{TV}\) is the total variation norm and \(P_{t}(\mathbf{s},\cdot)=\mathbb{P}_{\mathbf{s}}(\mathbf{S}(t)\in\cdot)\) is the transition probability of the process \(\mathbf{S}(t)\)._
The complete characterization of the underlying system is summarized in the following table.
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline \(\lambda_{1}<0\) (Theorem 1.2) & The phytoplankton \(Y(t)\) and the zooplankton \(Z(t)\) go extinct exponentially fast with probability \(1\); the nutrient \(X(t)\) converges weakly to the solution \(\widehat{X}(t)\) of (1.7). \\ \hline \(\lambda_{1}>0\), \(\lambda_{2}<0\) (Theorem 1.3) & The zooplankton \(Z(t)\) goes extinct exponentially fast with probability \(1\); the nutrient-phytoplankton subsystem \((X(t),Y(t))\) converges weakly to the solution \((\overline{X}(t),\overline{Y}(t))\) of (1.9). \\ \hline \(\lambda_{1}>0\), \(\lambda_{2}>0\) (Theorem 1.4) & Coexistence: the process \((X(t),Y(t),Z(t))\) has a unique invariant measure \(\mu^{\circ}\) on \(\mathbb{R}^{3,\circ}_{+}\), and the transition probability converges to \(\mu^{\circ}\) with polynomial rate. \\ \hline \end{tabular}
### Sketch of proof, technical difficulties and novel approaches
General results for extinction and persistence of Kolmogorov SDE systems appear in [11]. However, those results cannot be applied to the nutrient-plankton model (1.3), because the dissipativity/boundedness condition [11, (1.2) in Assumption 1.1] is not satisfied for (1.3). That condition was used to prove that the process returns quickly to compact sets and that the random occupation measures are tight. Because of this we had to develop new methods in order to obtain sharp extinction and persistence results. We present the main ideas and difficulties of the proofs of Theorems 1.3 and 1.4 below.
The first ingredient in determining whether a species persists or goes extinct is looking at its long term growth rate at small densities. This is sometimes called the invasion rate. It turns out that these invasion rates can be computed as the external Lyapunov exponents, i.e. the log-growth rates averaged with respect to certain invariant measures which are supported on the boundary; see [11, 12] for an exposition of the concept of invasion rate. For our models, the key invasion rates are \(\lambda_{1}\) and \(\lambda_{2}\) and we can show that extinction/persistence of the phytoplankton and the zooplankton is determined by the signs of \(\lambda_{1}\) and \(\lambda_{2}\). Due to the lack of boundedness/dissipativity, we cannot obtain an exponential convergence rate in the case \(\lambda_{1}>0\) and \(\lambda_{2}>0\). Instead, we follow the techniques from [1] to obtain a polynomial rate of convergence.
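Since \(\lambda_{1}\) and \(\lambda_{2}\) are averages against boundary invariant measures, they can be approximated by long-time averages along simulated boundary dynamics. The sketch below does this under the same assumed responses as before, with \(\alpha_{2}\) lowered so that \(\lambda_{1}>0\) for these illustrative choices; the integrand for \(\lambda_{2}\) is read here as the zooplankton per-capita growth rate \(F_{2}(\overline{Y},0)\overline{Y}\) on the boundary, and the horizon and burn-in are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative parameters (alpha_2 lowered to 0.2 so that lambda_1 > 0 here).
Lam, a1, a2, a3, a4 = 1.0, 0.5, 0.2, 0.3, 0.1
s1, s2, s3 = 0.1, 0.1, 0.1
F1 = lambda x, y: 0.6 / (1.0 + x)
F2 = lambda y, z: 0.5 / (1.0 + y)

def lambda1_estimate(T=500.0, dt=1e-3, burn=0.2):
    """Time-average F1(Xhat, 0) * Xhat along the nutrient-only diffusion (1.7)."""
    n, n0, X, acc = int(T / dt), int(burn * T / dt), 1.0, 0.0
    for k in range(n):
        X = max(X + (Lam - a1 * X) * dt + s1 * X * rng.normal(0.0, np.sqrt(dt)), 0.0)
        if k >= n0:
            acc += F1(X, 0.0) * X * dt
    return acc / ((n - n0) * dt) - a2 - 0.5 * s2**2

def lambda2_estimate(T=500.0, dt=1e-3, burn=0.2):
    """Time-average the zooplankton growth rate along the boundary system (1.9)."""
    n, n0, X, Y, acc = int(T / dt), int(burn * T / dt), 1.0, 0.5, 0.0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        dX = (Lam - F1(X, Y)*X*Y - a1*X + a4*Y)*dt + s1*X*dW[0]
        dY = (F1(X, Y)*X*Y - a2*Y)*dt + s2*Y*dW[1]
        X, Y = max(X + dX, 0.0), max(Y + dY, 0.0)
        if k >= n0:
            acc += F2(Y, 0.0) * Y * dt
    return acc / ((n - n0) * dt) - a3 - 0.5 * s3**2

print(lambda1_estimate(), lambda2_estimate())
```

The signs of the two estimates then indicate, per the table above, which of the three long-run regimes the assumed parameters fall into.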
The hardest part is proving the extinction result (Theorem 1.3). Without the tightness of the family of random occupation measures \((\widetilde{\Pi}^{\mathbf{s}}_{t}(\cdot))_{t>0}\), the methods from [11] do not work. We develop a new coupling method to compare the solution near the boundary (when \(Z(t)\) is small) and the solution on the boundary (when \(Z(t)=0\)). While the comparison in a finite interval is standard, it is not sufficient to obtain the desired result which requires the two solutions to be close with a large probability in the infinite interval \([0,\infty)\). In order to overcome this obstacle, we construct a coupled system \((X(t),Y(t),\overline{X}(t),\overline{Y}(t),\overline{Z}(t))\) where \((X(t),Y(t))\) is the solution on the boundary (\(Z(t)=0\)) and \((\overline{X}(t),\overline{Y}(t),\overline{Z}(t))\) has initial value close enough to the initial value of \((X(t),Y(t),Z(t))\). The process \((\overline{X}(t),\overline{Y}(t),\overline{Z}(t))\) after a change of measure is the solution to (1.3) up to a "separating" time \(\tau\) and we show that the separating time is infinity with a large probability.
Standard coupling methods often define \(\tau\) as \(\tau:=\inf\{t\geq 0:\overline{Z}(t)\geq\delta\}\) for some small \(\delta\). However, this definition does not work on an infinite interval. Instead, we define the "separation" time \(\tau\) as the first time \(\overline{Z}(t)\) exceeds an exponentially decaying threshold, \(\tau:=\inf\{t\geq 0:\overline{Z}(t)\geq\delta e^{-\gamma_{0}t}\}\). With this definition, it becomes much more difficult to show that \(\tau=\infty\) with large probability. The idea to
tackle this difficulty is based on the strong correlation between \(|X(t)-\overline{X}(t)|+|Y(t)-\overline{Y}(t)|\) being small and \(\overline{Z}(t)\) decaying exponentially fast. If \(|X(t)-\overline{X}(t)|+|Y(t)-\overline{Y}(t)|\) is small for a long time then \(\overline{Z}(t)\) is still bounded by an exponential decay and when \(\overline{Z}(t)\) is bounded by an exponential decay, one can establish a good bound for \(|X(t)-\overline{X}(t)|+|Y(t)-\overline{Y}(t)|\) for the infinite interval \([0,\infty)\).
## 2. Proofs of Theorems 1.1 and 1.2
Proof of Theorem 1.1.: The existence and uniqueness of solutions can be proved similarly to [23, Appendix B]. The proof of the Markov-Feller property of \((\mathbf{S}(t))\) can be found in [23]. Therefore, the rest of the proof is devoted to (1.4), (1.5), and (1.6).
Denote \(\alpha_{0}:=\frac{1}{3}\min\{\alpha_{1},\alpha_{2}-\alpha_{4},\alpha_{3}- \alpha_{5}\}\), and let \(q_{0}\in(1,2)\) be such that \((q_{0}-1)(\sigma_{1}^{2}\vee\sigma_{2}^{2}\vee\sigma_{3}^{2})\leq\alpha_{0}\). Define \(U^{q}(\mathbf{s})=(1+x+y+z)^{q}\). For \(0<q\leq q_{0}\), we have
\[\begin{split}\mathcal{L}U^{q}(\mathbf{s})=&[U^{q}]_{x}(\mathbf{s})\big{(}\Lambda-F_{1}(x,y)xy-\alpha_{1}x+\alpha_{4}y+\alpha_{5}z\big{)}\\ &+[U^{q}]_{y}(\mathbf{s})\big{(}F_{1}(x,y)xy-F_{2}(y,z)yz-\alpha_{2}y\big{)}\\ &+[U^{q}]_{z}(\mathbf{s})\big{(}F_{2}(y,z)yz-\alpha_{3}z\big{)}\\ &+[U^{q}]_{xx}(\mathbf{s})\frac{\sigma_{1}^{2}x^{2}}{2}+[U^{q}]_{yy}(\mathbf{s})\frac{\sigma_{2}^{2}y^{2}}{2}+[U^{q}]_{zz}(\mathbf{s})\frac{\sigma_{3}^{2}z^{2}}{2}\\ \leq& q\big{(}\Lambda-\alpha_{1}x-(\alpha_{2}-\alpha_{4})y-(\alpha_{3}-\alpha_{5})z\big{)}(1+x+y+z)^{q-1}\\ &+\frac{q(q-1)}{2}(1+x+y+z)^{q-2}(\sigma_{1}^{2}x^{2}+\sigma_{2}^{2}y^{2}+\sigma_{3}^{2}z^{2})\\ \leq& q\left(\Lambda(1+x+y+z)^{q-1}-2\alpha_{0}(1+x+y+z)^{q}\right).\end{split} \tag{2.1}\]
Since for any \(0\leq q\leq q_{0}\),
\[C_{q}:=\sup_{\mathbf{s}=(x,y,z)\in\mathbb{R}_{+}^{3}}q\left(\Lambda(1+x+y+z)^{ q-1}-(2-q)\alpha_{0}(1+x+y+z)^{q}\right)<\infty,\]
we obtain that
\[\mathcal{L}U^{q}(\mathbf{s})\leq C_{q}-q\alpha_{0}U^{q}(\mathbf{s}),\,\forall \mathbf{s}\in\mathbb{R}_{+}^{3}. \tag{2.2}\]
Let \(\overline{\tau}_{n}=\inf\{t\geq 0:U(\mathbf{S}(t))\geq n\}\). Because of (2.2) and Ito's formula, we have
\[\begin{split}\mathbb{E}_{\mathbf{s}}e^{q\alpha_{0}(t\wedge\overline{\tau}_{n})}U^{q}(\mathbf{S}(t\wedge\overline{\tau}_{n}))\leq& U^{q}(\mathbf{s})+\mathbb{E}_{\mathbf{s}}\left(\int_{0}^{t\wedge\overline{\tau}_{n}}C_{q_{0}}e^{q\alpha_{0}s}ds\right)\\ \leq& U^{q}(\mathbf{s})+C_{q_{0}}\int_{0}^{t}e^{q\alpha_{0}s}ds\\ \leq& U^{q}(\mathbf{s})+\frac{C_{q_{0}}}{q\alpha_{0}}e^{q\alpha_{0}t}.\end{split} \tag{2.3}\]
Dividing both sides of (2.3) by \(e^{q\alpha_{0}t}\) and letting \(n\to\infty\), we obtain (1.4).
Similarly, using elementary estimates as in the derivation of (2.1), we have
\[[\mathcal{L}U^{2}](\mathbf{s})\leq\overline{K}\,U^{2},\forall\mathbf{s}\in \mathbb{R}_{+}^{3}. \tag{2.4}\]
Thus, from (2.4) and Dynkin's formula, we get
\[\mathbb{E}_{\mathbf{s}}e^{-\overline{K}t}U^{2}(\mathbf{S}(t\wedge\overline{ \tau}_{n}))\leq\mathbb{E}_{\mathbf{s}}e^{-\overline{K}(t\wedge\overline{\tau}_ {n})}U^{2}(\mathbf{S}(t\wedge\overline{\tau}_{n}))\leq U^{2}(\mathbf{s}). \tag{2.5}\]
Letting \(n\to\infty\), we can derive (1.5) from Lebesgue's dominated convergence theorem. (1.6) can also be obtained easily from (2.5).
The remainder of this section is devoted to the proof of Theorem 1.2. We start with the auxiliary Lemmas 2.1 and 2.2 and Proposition 2.1. The first lemma establishes estimates (in probability) for \(\ln Y(t)\) and \(\ln Z(t)\) on finite time intervals for initial conditions in a compact set. The second lemma states the ergodicity of the process on the boundary corresponding to \(y=z=0\). Proposition 2.1 shows that if \(\lambda_{1}<0\) and the solution starts with small \(Y(0)\) and \(Z(0)\), then \(Y(t)\) and \(Z(t)\) converge to \(0\) exponentially fast with large probability.
**Lemma 2.1**.: _For any \(\varepsilon>0\), \(H>0\) and \(T>0\), there exists \(K_{\varepsilon,H,T}>0\) such that_
\[\mathbb{P}_{\mathbf{s}}\left\{|\ln Z(t)-\ln z|\vee|\ln Y(t)-\ln y|\leq K_{ \varepsilon,H,T},\;\forall 0\leq t\leq T\right\}\geq 1-\varepsilon\text{ if }\mathbf{s}\in[0,H]\times(0,1)^{2}.\]
Proof.: In view of (1.6), there exists \(K_{1}:=K_{1}(\varepsilon,H,T)\) such that
\[\mathbb{P}_{\mathbf{s}}\left\{F_{1}(X(t),Y(t))X(t)+F_{2}(Y(t),Z(t))Y(t)\leq K _{1}\text{ for all }0\leq t\leq T\right\}\geq 1-\frac{\varepsilon}{2}\text{ given }\mathbf{s}\in[0,H]\times(0,1)^{2}. \tag{2.6}\]
Moreover, there is \(K_{2}:=K_{2}(\varepsilon,T)>0\) such that
\[\mathbb{P}\left\{|\sigma_{2}W_{2}(t)|+|\sigma_{3}W_{3}(t)|\leq K_{2}\text{ for all }0\leq t\leq T\right\}\geq 1-\frac{\varepsilon}{2}. \tag{2.7}\]
On the other hand, we deduce from Ito's formula that
\[\left\{\begin{array}{ll}\ln Y(t)-\ln y=&\int_{0}^{t}F_{1}(X(s),Y(s))X(s)ds- \int_{0}^{t}F_{2}(Y(s),Z(s))Z(s)ds-(\alpha_{2}+\frac{\sigma_{2}^{2}}{2})t+ \sigma_{2}W_{2}(t),\\ \ln Z(t)-\ln z=&\int_{0}^{t}F_{2}(Y(s),Z(s))Y(s)ds-(\alpha_{3}+\frac{\sigma_{ 3}^{2}}{2})t+\sigma_{3}W_{3}(t).\end{array}\right. \tag{2.8}\]
Applying (2.6) and (2.7) into (2.8) we can easily obtain the desired result.
**Lemma 2.2**.: _For any \(\theta>0\), any initial condition \(x\geq 0\), there exists a unique solution \(\widehat{X}_{x}^{\theta}(t)\) to_
\[d\widehat{X}^{\theta}(t)=[\Lambda+\theta-\alpha_{1}\widehat{X}^{\theta}(t)]dt +\sigma_{1}\widehat{X}^{\theta}(t)dW_{1}(t),\quad\widehat{X}^{\theta}(0)=x. \tag{2.9}\]
_The solution process \(\widehat{X}^{\theta}\) has a unique invariant probability measure \(\mu^{\theta}\) on \([0,\infty)\), which is an inverse Gamma distribution with density \(g_{\theta}(u)=\frac{\beta_{\theta}^{\alpha}}{\Gamma(\alpha)}u^{-\alpha-1}\exp \left(-\frac{\beta_{\theta}}{u}\right),u>0\), with \(\alpha=1+2\frac{\alpha_{1}}{\sigma_{1}^{2}}\) and \(\beta_{\theta}=\frac{2(\Lambda+\theta)}{\sigma_{1}^{2}}\). In particular \(\int_{[0,\infty)}u\mu^{\theta}(du)=\frac{\beta_{\theta}}{\alpha-1}=\frac{ \Lambda+\theta}{\alpha_{1}}\). Furthermore, \(r(u)=u^{q_{0}}\) is \(\mu^{\theta}\)-integrable._
_Note that \(\mu^{0}=\mu_{1}\) is the unique invariant probability measure of (1.7). Define_
\[\ell_{\theta}:=\int_{[0,\infty)}F_{1}(u,0)u\mu^{\theta}(du),\]
_then we have_
\[\lim_{\theta\to 0^{+}}\ell_{\theta}=\int_{[0,\infty)}F_{1}(u,0)u\mu_{1}(du)=\ell_{0 }=\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}. \tag{2.10}\]
Proof.: The proof is almost identical to that of [20, Lemma 4.1] and is therefore omitted.
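Because \(\mu^{\theta}\) is an explicit inverse Gamma law, \(\ell_{\theta}\) (and hence \(\lambda_{1}\) at \(\theta=0\)) can be evaluated numerically. The sketch below does this by Monte Carlo with SciPy's invgamma distribution; the response \(F_{1}\) and all parameter values are again illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

# Assumed illustrative values; alpha = 1 + 2*alpha_1/sigma_1^2, beta = 2*(Lambda + theta)/sigma_1^2.
Lam, a1, a2, s1, s2, theta = 1.0, 0.5, 0.4, 0.1, 0.1, 0.0
F1 = lambda x, y: 0.6 / (1.0 + x)           # same assumed response as in the earlier sketches

alpha = 1.0 + 2.0 * a1 / s1**2
beta = 2.0 * (Lam + theta) / s1**2
mu = stats.invgamma(alpha, scale=beta)      # the density g_theta of Lemma 2.2

# Sanity check of the stated mean: beta / (alpha - 1) = (Lambda + theta) / alpha_1.
assert np.isclose(mu.mean(), (Lam + theta) / a1)

u = mu.rvs(size=200_000, random_state=0)    # Monte Carlo draws from mu^theta
ell = np.mean(F1(u, 0.0) * u)               # estimate of int F1(u, 0) u mu^theta(du)
print(ell, ell - a2 - 0.5 * s2**2)          # ell_theta and the corresponding lambda_1 (theta = 0)
```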
**Proposition 2.1**.: _Suppose \(\lambda_{1}<0\). For any \(H>0\) and \(\varepsilon\in(0,1)\), there exists \(\delta=\delta(\varepsilon,H)>0\) such that_
\[\mathbb{P}_{\mathbf{s}}\left\{\lim_{t\to\infty}\frac{\ln Y(t)}{t}=\lambda_{1} \text{ and }\lim_{t\to\infty}\frac{\ln Z(t)}{t}=-\alpha_{3}-\frac{\sigma_{3}^{2}}{2} \right\}\geq 1-\varepsilon\text{ given }\mathbf{s}\in[0,H]\times(0,\delta)^{2}. \tag{2.11}\]
Proof.: Let \(\Delta_{0}=\frac{1}{3}\left(\left(\alpha_{3}+\frac{\sigma_{3}^{2}}{2}\right) \wedge|\lambda_{1}|\right)\). In view of Lemma 2.2 (and (2.10)), we can choose (and then fix) a \(\theta\in(0,\alpha_{4}\wedge\alpha_{5}\wedge\frac{\alpha_{4}\Delta_{0}}{L})\) such that
\[\ell_{\theta}\leq\lambda_{1}+\Delta_{0}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}.\]
Define \(\xi:=\inf\{t\geq 0:\alpha_{4}Y(t)+\alpha_{5}Z(t)\geq\theta\}\). By standard comparison theorems [11], given \(X(0)=x\in[0,H]\) we have \(X(t)\leq\widehat{X}_{H}^{\theta}(t)\) for all \(0\leq t\leq\xi\) with probability \(1\), where \(\widehat{X}_{H}^{\theta}\) denotes the solution of (2.9) started from \(H\). For \(t\leq\xi\), we have
\[\ln Y(t)-\ln y= \int_{0}^{t}(F_{1}(X(s),Y(s))X(s)-F_{2}(Y(s),Z(s))Z(s))ds-\left( \alpha_{2}+\frac{\sigma_{2}^{2}}{2}\right)t+\sigma_{2}W_{2}(t)\] \[\leq \int_{0}^{t}F_{1}(X(s),0)X(s)ds-\left(\alpha_{2}+\frac{\sigma_{2} ^{2}}{2}\right)t+\sigma_{2}W_{2}(t)\] \[+\int_{0}^{t}[F_{1}(X(s),Y(s))X(s)-F_{1}(X(s),0)X(s)]ds\] \[\leq \int_{0}^{t}F_{1}(X_{H}^{\theta}(t),0)X_{H}^{\theta}(t)ds-\left( \alpha_{2}+\frac{\sigma_{2}^{2}}{2}\right)t+\sigma_{2}W_{2}(t)+\frac{L\theta} {\alpha_{4}}t; \tag{2.12}\]
and
\[\ln Z(t)-\ln z= \int_{0}^{t}F_{2}(Y(s),Z(s))Y(s)ds-(\alpha_{3}+\frac{\sigma_{3}^{2}}{2})t+\sigma_{3}W_{3}(t)\] \[\leq L\int_{0}^{t}Y(s)ds-(\alpha_{3}+\frac{\sigma_{3}^{2}}{2})t+\sigma_{3}W_{3}(t)\] \[\leq \frac{L\theta}{\alpha_{4}}t-(\alpha_{3}+\frac{\sigma_{3}^{2}}{2})t+\sigma_{3}W_{3}(t)\] \[\leq -2\Delta_{0}t+\sigma_{3}W_{3}(t). \tag{2.13}\]
In view of the ergodicity of \(\widehat{X}^{\theta}\) and the law of large numbers for martingales, we can find \(T>0\) and a set \(\widetilde{\Omega}_{1}\subset\Omega\) such that \(\mathbb{P}\{\widetilde{\Omega}_{1}\}\geq 1-\frac{\varepsilon}{2}\) and for \(\omega\in\widetilde{\Omega}_{1}\), we have the following two estimates:
\[\int_{0}^{t}F_{1}(\widehat{X}_{H}^{\theta}(s),0)\widehat{X}_{H}^{\theta}(s)ds +\sigma_{2}W_{2}(t)\leq(\ell_{\theta}+\Delta_{0})t\leq\big{(}\lambda_{1}+ \alpha_{2}+\frac{\sigma_{2}^{2}}{2}+2\Delta_{0}\big{)}t,\;t\geq T, \tag{2.14}\]
and
\[\sigma_{3}W_{3}(t)\leq\Delta_{0}t,\;t\geq T. \tag{2.15}\]
In view of Lemma 2.1, for any \(\varepsilon>0\), \(H>0\), \(T>0\), we can choose \(\overline{K}=\overline{K}_{\varepsilon,H,T}\) such that \(\mathbb{P}_{\mathbf{s}}(\widetilde{\Omega}_{2})\geq 1-\varepsilon\) given \(\mathbf{s}\in[0,H]\times[0,1]^{2}\) where
\[\widetilde{\Omega}_{2}:=\left\{|\ln Z(t)-\ln z|\vee|\ln Y(t)-\ln y|\leq \overline{K},\;\forall t\in[0,T]\right\}.\]
Let \(\delta=\frac{\theta}{3(\alpha_{4}\vee\alpha_{5})}e^{-\overline{K}}.\) Then, for \(\omega\in\widetilde{\Omega}_{2}\) and \(y\lor z\leq\delta\), we have
\[Y(t)\leq ye^{\overline{K}}\leq\frac{\theta}{3(\alpha_{4}\vee\alpha_{5})}\text { and }Z(t)\leq ze^{\overline{K}}\leq\frac{\theta}{3(\alpha_{4}\vee\alpha_{5})}\text{ for any }t\leq T. \tag{2.16}\]
As a result, we must have \(\xi>T\) for \(\omega\in\widetilde{\Omega}_{2}\).
Now, considering \(y\lor z\leq\delta,x\leq H\) and \(\omega\in\widetilde{\Omega}_{1}\cap\widetilde{\Omega}_{2}\), we have from (2.12) and (2.14) that
\[\ln Y(t)\leq\ln y+(\lambda_{1}+2\Delta_{0})t\leq\ln y-\Delta_{0}t\leq\ln y\leq \ln\delta,\text{ for }T\leq t\leq\xi, \tag{2.17}\]
and from (2.13) and (2.15) that
\[\ln Z(t)\leq\ln z-\Delta_{0}t\leq\ln z\leq\ln\delta,\text{ for }T\leq t\leq\xi. \tag{2.18}\]
As a result of (2.16), (2.17), (2.18) and the definition of \(\delta\), we have \(\alpha_{4}Y(t)+\alpha_{5}Z(t)<\theta\) for any \(0\leq t\leq\xi\) and \(\omega\in\widetilde{\Omega}_{1}\cap\widetilde{\Omega}_{2}\). Therefore, we must have \(\xi=\infty\).
Given that \(\xi=\infty\) on \(\widetilde{\Omega}_{1}\cap\widetilde{\Omega}_{2}\), we can see from (2.17) and (2.18) that
\[\limsup_{t\to\infty}\frac{\ln Y(t)}{t}\leq\lambda_{1}+2\Delta_{0}\leq-\Delta_{0} <0\text{ and }\limsup_{t\to\infty}\frac{\ln Z(t)}{t}\leq-\Delta_{0}<0,\omega\in \widetilde{\Omega}_{1}\cap\widetilde{\Omega}_{2}.\]
These limits imply that there is no invariant measure on \(\mathbb{R}^{3,\circ}_{+}\). By a similar proof or a reference to [11, Theorem 2.2], there is no invariant measure on \(\mathbb{R}^{\circ}_{12+}\) either. As a result, \(\boldsymbol{\nu}_{1}:=\mu_{1}\times\boldsymbol{\delta}^{*}\times\boldsymbol{ \delta}^{*}\) is the unique invariant measure of \(\{\mathbf{S}(t)\}\), where \(\boldsymbol{\delta}^{*}\) is the Dirac measure with mass at \(0\).
On the other hand, with probability \(1\), any weak limit (if it exists) of \(\widetilde{\Pi}^{\mathbf{s}}_{t}\) (\(:=\frac{1}{t}\int_{0}^{t}\boldsymbol{1}_{\{\mathbf{S}(u)\in\cdot\}}\,du\)) as \(t\to\infty\) is an invariant measure of \(\{\mathbf{S}(t)\}\); see e.g. [1, Theorem 4.2]. For \(y\lor z\leq\delta,x\leq H\) and \(\omega\in\widetilde{\Omega}_{1}\cap\widetilde{\Omega}_{2}\), because \(\lim_{t\to\infty}(Y(t)+Z(t))=0\) and
\[\limsup_{t\to\infty}\frac{1}{t}\int_{0}^{t}X^{q_{0}}(s)ds\leq\lim_{t\to\infty} \frac{1}{t}\int_{0}^{t}(\widehat{X}^{\theta}_{H}(s))^{q_{0}}ds=\int_{[0, \infty)}u^{q_{0}}\mu^{\theta}(du)<\infty, \tag{2.19}\]
we get that \(\{\widetilde{\Pi}^{\mathbf{s}}_{t}\}\) is tight for \(\omega\in\widetilde{\Omega}_{1}\cap\widetilde{\Omega}_{2}\) and, subsequently, its limit must be \(\boldsymbol{\nu}_{1}\), the unique invariant probability measure on \(\mathbb{R}^{3}_{+}\). This weak convergence, together with the integrability (2.19), implies that
\[\lim_{t\to\infty}\left(\frac{1}{t}\int_{0}^{t}F_{1}(X(s),Y(s))X(s)ds-\left(\alpha_{2}+\frac{\sigma_{2}^{2}}{2}\right)\right)=\int_{[0,\infty)}F_{1}(u,0)u\,\mu_{1}(du)-\left(\alpha_{2}+\frac{\sigma_{2}^{2}}{2}\right)=\lambda_{1}<0. \tag{2.20}\]
Applying (2.20) into (2.12) we obtain that
\[\lim_{t\to\infty}\frac{\ln Y(t)}{t}=\lambda_{1}<0,\omega\in\widetilde{\Omega} _{1}\cap\widetilde{\Omega}_{2},\text{ for any }\mathbf{s}=(x,y,z)\text{ with }0\leq x\leq H,y+z\leq\delta. \tag{2.21}\]
Because \(Y(t)\) tends to \(0\) at the exponential rate \(\lambda_{1}\), we have from the first equality of (2.13) and the boundedness of \(F_{2}\) that
\[\lim_{t\to\infty}\frac{\ln Z(t)}{t}=\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}F_ {2}(Y(s),Z(s))Y(s)ds-(\alpha_{3}+\frac{\sigma_{3}^{2}}{2})+\lim_{t\to\infty} \frac{\sigma_{3}W_{3}(t)}{t}=-(\alpha_{3}+\frac{\sigma_{3}^{2}}{2}). \tag{2.22}\]
The proof is complete.
Now, we are ready to prove Theorem 1.2.
Proof of Theorem 1.2.: Note again that \(\boldsymbol{\nu}_{1}\) (\(:=\mu_{1}\times\boldsymbol{\delta}^{*}\times\boldsymbol{\delta}^{*}\)) is the unique invariant probability measure on the boundary and therefore, the only invariant probability measure in \(\mathbb{R}^{3}_{+}\) because \((X(t),Y(t),Z(t))\) has no invariant probability measure in \(\mathbb{R}^{3,\circ}_{+}\). Let \(H\) be sufficiently large such that \(\mu_{1}((0,H))>1-\varepsilon\) and then let \(\delta>0\) satisfy (2.11).
Thanks to Theorem 1.1, the family \(\left\{\check{\Pi}^{\mathbf{s}}_{t}(\cdot):=\frac{1}{t}\int_{0}^{t}\mathbb{P}_ {\mathbf{s}}\left\{(X(u),Y(u),Z(u))\in\cdot\right\}du,t\geq 0\right\}\) is tight in \(\mathbb{R}^{3}_{+}\). Since any weak-limit of \(\check{\Pi}^{\mathbf{s}}_{t}\) as \(t\to\infty\) must be an invariant probability measure of \(\{\mathbf{S}(t)\}\), (see e.g. [1, Theorem 9.9]), we have that \(\check{\Pi}^{\mathbf{s}}_{t}\) converges weakly to \(\boldsymbol{\nu}_{1}\) (\(=\mu_{1}\times\boldsymbol{\delta}^{*}\times\boldsymbol{\delta}^{*}\)) as \(t\to\infty\). Thus, there exists a \(\check{T}=\check{T}(\mathbf{s},\varepsilon)>0\) such that
\[\check{\Pi}^{\mathbf{s}}_{\check{T}}((0,H)\times(0,\delta)\times(0,\delta))>1-\varepsilon,\]
or equivalently,
\[\frac{1}{\check{T}}\int_{0}^{\check{T}}\mathbb{P}_{\mathbf{s}}\{(X(t),Y(t),Z(t) )\in(0,H)\times(0,\delta)\times(0,\delta)\}dt>1-\varepsilon.\]
As a result,
\[\mathbb{P}_{\mathbf{s}}\{\widehat{\tau}\leq\check{T}\}>1-\varepsilon,\]
where \(\widehat{\tau}=\inf\{t\geq 0:(X(t),Y(t),Z(t))\in(0,H)\times(0,\delta)\times(0, \delta)\}\). Using the strong Markov property and (2.11), we deduce that
\[\begin{split}\mathbb{P}_{\mathbf{s}}&\left\{\lim_{t \rightarrow\infty}\frac{\ln Y(t)}{t}=\lambda_{1}\text{ and }\lim_{t\rightarrow\infty}\frac{\ln Z(t)}{t}=-\alpha_{3}-\frac{\sigma_{3}^{ 2}}{2}\right\}\\ &\geq(1-\varepsilon)(1-\varepsilon)>1-2\varepsilon,\text{ for all }\mathbf{s}\in\mathbb{R}_{+}^{3,\circ}.\end{split} \tag{2.23}\]
Letting \(\varepsilon\to 0\) we obtain the desired result.
## 3. Proof of Theorem 1.3
We begin with a proof for Proposition 1.1.
Proof of Proposition 1.1.: Let
\[\Delta_{1}:=\frac{\lambda_{1}}{5}>0,\text{ and }n^{*}\text{ be the smallest integer satisfying }\Delta_{1}(n^{*}-1)\geq\alpha_{2}+\frac{\sigma_{2}^{2}}{2}. \tag{3.1}\]
Because \(\int_{\mathbb{R}_{+}}F_{1}(u,0)u\mu_{1}(du)=\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}\) and \(F_{1}(u,0)u\) is an increasing function, we have \(\lim_{u\rightarrow\infty}F_{1}(u,0)u>\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}\). Moreover, there exists \(M>\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}\) such that
\[\int_{\mathbb{R}_{+}}\widetilde{F}_{1,M}(u,0)\mu_{1}(du)\geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}-\frac{\Delta_{1}}{2}\text{ where }\widetilde{F}_{1,M}(u,v):=(F_{1}(u,v)u)\wedge M, \tag{3.2}\]
and \(H>0\) such that
\[F_{1}(u,0)u\geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}-\Delta_{1}\text { for any }u\geq H. \tag{3.3}\]
From (3.2) and the ergodicity of \(\widehat{X}\), we have
\[\lim_{t\rightarrow\infty}\mathbb{E}\frac{1}{t}\int_{0}^{t}\widetilde{F}_{1,M} (\widehat{X}_{0}(s),0)ds\geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}- \frac{\Delta_{1}}{2},\]
where \(\widehat{X}_{0}(s)\) is the solution to (1.7) with initial condition \(0\). As a result, there exists \(T>0\) such that
\[\mathbb{E}\frac{1}{t}\int_{0}^{t}\widetilde{F}_{1,M}(\widehat{X}_{0}(s),0)ds \geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}-\Delta_{1},\;\forall t \geq T.\]
Because of the uniqueness of the solution, \(\widehat{X}_{x}(s)\geq\widehat{X}_{0}(s),s\geq 0\) almost surely for any \(x\geq 0\), where \(\widehat{X}_{x}(s)\) is the solution to (1.7) with initial condition \(x\). Then, thanks to the monotone increasing property of \(\widetilde{F}_{1,M}(u,0)\) (inherited from that property of \(F_{1}(u,0)u\)), we have
\[\mathbb{E}\frac{1}{t}\int_{0}^{t}\widetilde{F}_{1,M}(\widehat{X}_{x}(s),0)ds \geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}-\Delta_{1},\;\forall t \geq T,x\geq 0.\]
Note that \((\widehat{X}(t),0)\) is the solution \((\overline{X},\overline{Y})\) to (1.9) with initial value \(\overline{Y}(0)=0\). Because of the Feller-Markov property of \((\overline{X},\overline{Y})\), there exists \(0<\delta_{0}<\frac{\Delta_{1}}{L}\) such that for any \((x,y)\in[0,H]\times(0,\delta_{0}]\), we have
\[\mathbb{E}_{x,y}\frac{1}{t}\int_{0}^{t}\widetilde{F}_{1,M}(\overline{X}(s), \overline{Y}(s))\overline{X}(s)ds\geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{ 2}}{2}-2\Delta_{1},\;\forall T\leq t\leq n^{*}T, \tag{3.4}\]
where the subscript in \(\mathbb{E}_{x,y}\) indicates the initial condition of \((\overline{X},\overline{Y})\).
Now, let
\[\phi_{x,y,t}(\theta):=\ln\mathbb{E}_{x,y}\exp\left\{-\theta\Big{(}\int_{0}^{t }\widetilde{F}_{1,M}(\overline{X}(s),\overline{Y}(s))\overline{X}(s)ds-\alpha _{2}t-\frac{\sigma_{2}^{2}}{2}t+\sigma_{2}W_{2}(t)\Big{)}\right\},\]
be the log-Laplace transform of the random variable
\[-\Big{(}\int_{0}^{t}\widetilde{F}_{1,M}(\overline{X}(s),\overline{Y}(s)) \overline{X}(s)ds-\alpha_{2}t-\frac{\sigma_{2}^{2}}{2}t+\sigma_{2}W_{2}(t)\Big{)}.\]
Because of the boundedness of \(\widetilde{F}_{1,M}\), by a property of the log-Laplace transform, see [11, Lemma 3.5], we have that \(\phi_{x,y,t}(\theta)\) is twice differentiable in \(\theta\) on \([0,\frac{1}{2})\), with
\[\frac{d\phi_{x,y,t}}{d\theta}(0)=\mathbb{E}_{x,y}\left\{-\Big{(}\int_{0}^{t} \widetilde{F}_{1,M}(\overline{X}(s),\overline{Y}(s))\overline{X}(s)ds- \alpha_{2}t-\frac{\sigma_{2}^{2}}{2}t+\sigma_{2}W_{2}(t)\Big{)}\right\}, \tag{3.5}\]
and
\[\sup_{0\leq\theta<\frac{1}{2},\,t\leq n^{*}T}\frac{d^{2}\phi_{x,y,t}}{d\theta^{2}}(\theta)\leq K_{\phi}, \tag{3.6}\]
for some constant \(K_{\phi}=K_{\phi}(M,n^{*}T)\). Because of (3.4) and (3.5), one has
\[\frac{d\phi_{x,y,t}}{d\theta}(0)\leq-\left(\lambda_{1}-2\Delta_{1}\right)t. \tag{3.7}\]
From (3.6) and (3.7), we can have a Taylor expansion as follows
\[\begin{split}\phi_{x,y,t}(\theta)\leq&\ \phi_{x,y,t}(0)+\theta\frac{d\phi_{x,y,t}}{d\theta}(0)+\theta^{2}\sup_{0\leq\vartheta<\frac{1}{2}}\frac{d^{2}\phi_{x,y,t}}{d\theta^{2}}(\vartheta)\\ \leq&\ 0-\theta\left(\lambda_{1}-2\Delta_{1}\right)t+\theta^{2}K_{\phi},\;\forall t\in[T,n^{*}T].\end{split} \tag{3.8}\]
Because \(\lambda_{1}-2\Delta_{1}\geq 3\Delta_{1}\), we can pick a \(\theta>0\) such that
\[-\theta\left(\lambda_{1}-2\Delta_{1}\right)T+\theta^{2}K_{\phi}\leq-2\Delta_{1}\theta T\text{ and }\theta<\frac{\Delta_{1}}{\sigma_{2}^{2}}. \tag{3.9}\]
With this chosen \(\theta\), we have from (3.8) that \(\phi_{x,y,t}(\theta)\leq-2\Delta_{1}\theta t,\;\forall t\in[T,n^{*}T]\). This implies
\[\begin{split}\frac{\mathbb{E}_{x,y}[(\overline{Y}(t))^{-\theta} ]}{y^{-\theta}}=&\mathbb{E}_{x,y}\exp\left\{-\theta\Big{(}\int_{ 0}^{t}F_{1}(\overline{X}(s),\overline{Y}(s))\overline{X}(s)ds-\alpha_{2}t- \frac{\sigma_{2}^{2}}{2}t+\sigma_{2}W_{2}(t)\Big{)}\right\}\\ \leq&\mathbb{E}_{x,y}\exp\left\{-\theta\Big{(}\int_{ 0}^{t}\widetilde{F}_{1,M}(\overline{X}(s),\overline{Y}(s))\overline{X}(s)ds- \alpha_{2}t-\frac{\sigma_{2}^{2}}{2}t+\sigma_{2}W_{2}(t)\Big{)}\right\}\\ =&\exp(\phi_{x,y,t}(\theta))\leq e^{-2\Delta_{1} \theta T},\;\forall t\in[T,n^{*}T].\end{split} \tag{3.10}\]
On the other hand, note that \(F_{1}(u,0)u\geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}-\Delta_{1},u\geq H\) and \(|F_{1}(u,v)u-F_{1}(u,0)u|\leq Lv\), imply
\[F_{1}(u,v)u\geq\lambda_{1}+\alpha_{2}+\frac{\sigma_{2}^{2}}{2}-2\Delta_{1}\text { if }u\geq H\text{ and }v\leq\frac{\Delta_{1}}{L}. \tag{3.11}\]
Because of (3.11), (3.1), and \(\theta<\frac{\Delta_{1}}{\sigma_{2}^{2}}\) (due to (3.9)), we have
\[\begin{split}d(\overline{Y}(t))^{-\theta}=&-\theta(\overline{Y}(t))^{-\theta}\left(F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)-\alpha_{2}-(\theta+1)\frac{\sigma_{2}^{2}}{2}\right)dt-\theta\sigma_{2}(\overline{Y}(t))^{-\theta}dW_{2}(t)\\ \leq&-2\theta\Delta_{1}(\overline{Y}(t))^{-\theta}dt-\theta\sigma_{2}(\overline{Y}(t))^{-\theta}dW_{2}(t)\text{ if }\overline{X}(t)\geq H,\ \overline{Y}(t)\leq\frac{\Delta_{1}}{L}.\end{split} \tag{3.12}\]
An application of Ito's formula shows that
\[de^{2\theta\Delta_{1}t}(\overline{Y}(t))^{-\theta}\leq\theta\sigma_{2}e^{2 \theta\Delta_{1}t}(\overline{Y}(t))^{-\theta}dW_{2}(t)\text{ if }\overline{X}(t)\geq H,\overline{Y}(t)\leq\frac{\Delta_{1}}{L}. \tag{3.13}\]
Let \(\eta:=(n^{*}T)\wedge\inf\{t\geq 0:\overline{X}(t)\leq H\text{ or }\overline{Y}(t)\geq\delta_{0}\}\). It is noted that \(\delta_{0}\) is chosen to be less than \(\frac{\Delta_{1}}{L}\). From (3.13) and an application of Dynkin's formula, we have
\[\mathbb{E}e^{2\theta\Delta_{1}(t\wedge\eta)}(\overline{Y}(t\wedge\eta))^{- \theta}\leq y^{-\theta},t\geq 0. \tag{3.14}\]
From the first line of (3.12) and the fact that \(F_{1}(u,v)u\geq 0\), we get
\[d(\overline{Y}(t))^{-\theta}\leq\theta\left(\alpha_{2}+(\theta+1)\frac{\sigma _{2}^{2}}{2}\right)dt-\theta\sigma_{2}(\overline{Y}(t))^{-\theta}dW_{2}(t).\]
Using arguments similar to those used to derive (1.5) from (2.4) in the proof of Theorem 1.1 (introducing appropriate stopping times so that \((\overline{Y}(t))^{-\theta}\) remains bounded by \(n\), and then letting \(n\to\infty\)), we obtain
\[\mathbb{E}_{x,y}(\overline{Y}(t))^{-\theta}\leq e^{\theta\left(\alpha_{2}+( \theta+1)\frac{\sigma_{2}^{2}}{2}\right)t}y^{-\theta},\ t\geq 0,x\geq 0,y>0. \tag{3.15}\]
We have the following three estimates using the strong Markov property of \((\overline{X}(t),\overline{Y}(t))\). Firstly we note that
\[\mathbb{E}_{x,y}\mathbf{1}_{\{(n^{*}-1)T\leq\eta\leq n^{*}T, \overline{Y}(\eta)\leq\delta_{0}\}}(\overline{Y}(n^{*}T))^{-\theta}\] \[\leq \mathbb{E}_{x,y}\mathbf{1}_{\{(n^{*}-1)T\leq\eta\leq n^{*}T, \overline{Y}(\eta)<\delta_{0}\}}\mathbb{E}_{X(\eta),\overline{Y}(\eta)}( \overline{Y}(n^{*}T-\eta))^{-\theta}\] \[\leq e^{\theta\left(\alpha_{2}+(1-\theta)\frac{\sigma_{2}^{2}}{2} \right)T}\mathbb{E}_{x,y}\mathbf{1}_{\{(n^{*}-1)T\leq\eta\leq n^{*}T,\overline {Y}(\eta)\leq\delta_{0}\}}(\overline{Y}(\eta))^{-\theta}\] \[\leq e^{\theta\left(\alpha_{2}+(1-\theta)\frac{\sigma_{2}^{2}}{2} \right)T}e^{-2\Delta_{1}\theta(n^{*}-1)T}\mathbb{E}_{x,y}\mathbf{1}_{\{\eta \leq T,\overline{Y}(\eta)\leq\delta_{0}\}}e^{2\Delta_{1}\theta\eta}(\overline {Y}(\eta))^{-\theta}\] \[\leq e^{-\Delta_{1}\theta T}y^{-\theta}\mathbb{E}_{x,y}\mathbf{1}_{\{( n^{*}-1)T\leq\eta\leq n^{*}T,\overline{Y}(\eta)<\delta_{0}\}}e^{2\Delta_{1} \theta\eta}(\overline{Y}(\eta))^{-\theta}, \tag{3.16}\]
where the last inequality is due to (3.1). Secondly, we get
\[\mathbb{E}_{x,y}\mathbf{1}_{\{\eta\leq(n^{*}-1)T,\overline{Y}( \eta)<\delta_{0}\}}(\overline{Y}(n^{*}T))^{-\theta}\] \[\leq \mathbb{E}_{x,y}\mathbf{1}_{\{\eta\leq(n^{*}-1)T,\overline{Y}( \eta)\leq\delta_{0}\}}\mathbb{E}_{X(\eta),\overline{Y}(\eta)}(\overline{Y}(n^{* }T-\eta))^{-\theta}\] \[\leq \mathbb{E}_{x,y}\mathbf{1}_{\{\eta\leq T,\overline{Y}(\eta)\leq \delta_{0}\}}(\overline{Y}(\eta))^{-\theta}\exp\{-2\Delta_{1}\theta(n^{*}T- \eta)\}\] \[\leq e^{-2\Delta_{1}\theta T}\mathbb{E}_{x,y}\mathbf{1}_{\{\eta\leq T,\overline{Y}(\eta)\leq\delta_{0}\}}(\overline{Y}(\eta))^{-\theta}\] \[\leq e^{-2\Delta_{1}\theta T}\mathbb{E}_{x,y}e^{2\Delta_{1}\theta \eta}\mathbf{1}_{\{\eta\leq T,\overline{Y}(\eta)<\delta_{0}\}}(\overline{Y}( \eta))^{-\theta}, \tag{3.17}\]
where in the third line we used (3.10). Finally,
\[\mathbb{E}_{x,y}\mathbf{1}_{\{\overline{Y}(\eta)\leq\delta_{0}\}} (\overline{Y}(n^{*}T))^{-\theta}\leq \mathbb{E}_{x,y}\mathbf{1}_{\{\overline{Y}(\eta)\leq\delta_{0}\} }\mathbb{E}_{X(\eta),\overline{Y}(\eta)}(\overline{Y}(n^{*}T-\eta))^{-\theta}\] \[\leq \mathbb{E}_{x,y}\delta_{0}^{\theta}e^{\theta\left(\alpha_{2}+( \theta+1)\frac{\sigma^{2}}{2}\right)(n^{*}T-\eta)}\] \[\leq \widehat{K}:=\delta_{0}^{\theta}e^{\theta\left(\alpha_{2}+(\theta +1)\frac{\sigma^{2}}{2}\right)n^{*}T}. \tag{3.18}\]
Adding (3.16), (3.17) and (3.18) side by side we have
\[\mathbb{E}_{x,y}(\overline{Y}(n^{*}T))^{-\theta}\leq e^{-\Delta_{1}\theta T} \mathbb{E}_{x,y}e^{2\theta\Delta_{1}\eta}(\overline{Y}(\eta))^{-\theta}+ \widehat{K}\leq e^{-\Delta_{1}\theta T}y^{-\theta}+\widehat{K}, \tag{3.19}\]
where the last inequality follows from (3.14).
By the Markov property, we can recursively apply (3.19) to show that
\[\mathbb{E}_{x,y}(\overline{Y}(kn^{*}T))^{-\theta}\leq\widehat{K}\sum_{i=1}^{k}\kappa^{i-1}+\kappa^{k}y^{-\theta}\leq\frac{\widehat{K}}{1-\kappa}+\kappa^{k}y^{-\theta},\text{ where }\kappa:=e^{-\Delta_{1}\theta T}<1.\]
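To make the recursion explicit, the first two applications of (3.19), combined through the Markov property at time \(n^{*}T\), read
\[\mathbb{E}_{x,y}(\overline{Y}(2n^{*}T))^{-\theta}\leq\kappa\,\mathbb{E}_{x,y}(\overline{Y}(n^{*}T))^{-\theta}+\widehat{K}\leq\kappa\big(\kappa y^{-\theta}+\widehat{K}\big)+\widehat{K}=\kappa^{2}y^{-\theta}+\widehat{K}(1+\kappa),\]
and iterating \(k\) times produces the geometric sum above.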
This and (3.15) imply that
\[\mathbb{E}_{x,y}(\overline{Y}(t))^{-\theta}\leq e^{\theta\left(\alpha_{2}+\frac{\sigma_{2}^{2}}{2}(\theta+1)\right)n^{*}T}\left(\frac{\widehat{K}}{1-\kappa}+\kappa^{k}y^{-\theta}\right)\ \forall t\in[kn^{*}T,(k+1)n^{*}T],\]
which is equivalent to (1.10).
The rest of this section is devoted to proving Theorem 1.3. Before constructing suitable coupling systems, we need the following bound for the growth rate of the solution on the boundary corresponding to \(z=0\).
**Lemma 3.1**.: _For any \(\varepsilon\in(0,1)\), \(\delta>0\), there exists \(M_{0}=M_{0}(\varepsilon,\delta,x,y)\) such that_
\[\mathbb{P}_{x,y}\left\{\overline{X}(t)+\overline{Y}(t)+(\overline{X}(t))^{-1} +(\overline{Y}(t))^{-1}\leq M_{0}e^{\delta t},\ \forall t\geq 0\right\}\geq 1-\varepsilon.\]
Proof.: Pick \(\theta>0\) satisfying (1.10) and let \(\overline{V}(x,y)=x+y+y^{-\theta}\). In view of (1.4) and (1.10), we have
\[\mathbb{E}_{x,y}\overline{V}(\overline{X}(t),\overline{Y}(t))\leq\overline{ C}_{x,y},\ \forall t\geq 0, \tag{3.20}\]
for some constant \(\overline{C}_{x,y}\) independent of \(t\). Ito's formula yields
\[\begin{split}d\overline{V}(\overline{X}(t),\overline{Y}(t))=&\left[\Lambda-\alpha_{1}\overline{X}(t)-(\alpha_{2}-\alpha_{4})\overline{Y}(t)-\theta(\overline{Y}(t))^{-\theta}\left(F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)-\alpha_{2}-(\theta+1)\frac{\sigma_{2}^{2}}{2}\right)\right]dt\\ &+\sigma_{1}\overline{X}(t)dW_{1}(t)+\sigma_{2}\overline{Y}(t)dW_{2}(t)-\theta\sigma_{2}(\overline{Y}(t))^{-\theta}dW_{2}(t)\\ \leq&\ A_{0}\overline{V}(\overline{X}(t),\overline{Y}(t))dt+\sigma_{1}\overline{X}(t)dW_{1}(t)+\sigma_{2}\overline{Y}(t)dW_{2}(t)-\theta\sigma_{2}(\overline{Y}(t))^{-\theta}dW_{2}(t)\end{split} \tag{3.21}\]
for some \(A_{0}>0\). For any \(c>0\), let \(\overline{\tau}_{c}:=\inf\{t\geq 0:\overline{V}(\overline{X}(t),\overline{Y}(t)) \geq c\}.\) Equation (3.21) together with an application of Dynkin's formula implies that
\[\mathbb{E}_{x,y}e^{-A_{0}(\overline{\tau}_{c}\wedge t)}\overline{V}(\overline{X}(\overline{\tau}_{c}\wedge t),\overline{Y}(\overline{\tau}_{c}\wedge t))\leq\overline{V}(x,y),\quad\forall t\geq 0.\]
As a result
\[\mathbb{E}_{x,y}\overline{V}(\overline{X}(\overline{\tau}_{c}\wedge t), \overline{Y}(\overline{\tau}_{c}\wedge t))\leq\overline{V}(x,y)e^{A_{0}t}, \quad\forall t\geq 0.\]
Therefore, for any \(c>0\), applying Markov's inequality we have
\[\mathbb{P}\left\{\sup_{t\in[0,1]}\overline{V}(\overline{X}(t),\overline{Y}(t) )\geq c\right\}\leq\frac{1}{c}\mathbb{E}_{x,y}\overline{V}(\overline{X}( \overline{\tau}_{c}\wedge 1),\overline{Y}(\overline{\tau}_{c}\wedge 1))\leq \frac{e^{A_{0}}}{c}\overline{V}(x,y). \tag{3.22}\]
For \(\varepsilon>0\), \(\delta>0\), pick \(M_{0}\) sufficiently large such that \(\frac{e^{A_{0}}\overline{C}_{x,y}}{M_{0}}\sum_{n=1}^{\infty}e^{-\delta\theta n }<\varepsilon\). By the Markov property of \((\overline{X},\overline{Y})\), (3.20), and (3.22), we have
\[\mathbb{P}\left\{\sup_{t\in[n,n+1]}\overline{V}(\overline{X}(t),\overline{Y}(t))>M_{0}e^{\delta\theta n}\right\}\leq\frac{e^{A_{0}}}{M_{0}e^{\delta\theta n}}\mathbb{E}_{x,y}\overline{V}(\overline{X}(n),\overline{Y}(n))\leq\frac{e^{A_{0}}\overline{C}_{x,y}}{M_{0}e^{\delta\theta n}};\]
which leads to
\[\mathbb{P}\left\{\sup_{t\in[n,n+1]}\overline{V}(\overline{X}(t),\overline{Y}(t))\leq M_{0}e^{\delta\theta n},\ \text{for all}\ n\in\mathbb{Z}_{+}\right\}>1-\sum_{n=1}^{\infty}\frac{e^{A_{0}}\overline{C}_{x,y}}{M_{0}e^{\delta\theta n}}. \tag{3.23}\]
From (3.23) and the definition of \(M_{0}\), we obtain the desired result.
Next, we need to bound \(X^{-1}(t)\). Using the variation of constants formula (see [10, Chapter 3]), we can write \(\overline{X}(t)\) in the form
\[\overline{X}(t)=\Phi^{-1}(t)\left[x+\int_{0}^{t}\Phi(s)\left(\Lambda-F_{1}(\overline{X}(s),\overline{Y}(s))\overline{X}(s)\overline{Y}(s)+\alpha_{4}\overline{Y}(s)\right)ds\right], \tag{3.24}\]
where
\[\Phi(t):=\exp\left\{\left(\alpha_{1}+\frac{\sigma_{1}^{2}}{2}\right)t-\sigma_{1}W_{ 1}(t)\right\}.\]
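As a quick check of (3.24), Ito's formula gives \(d\Phi(t)=\Phi(t)\left[(\alpha_{1}+\sigma_{1}^{2})dt-\sigma_{1}dW_{1}(t)\right]\), and in the product rule the stochastic integrals and the Ito correction cancel, so that
\[d\big(\Phi(t)\overline{X}(t)\big)=\Phi(t)\left(\Lambda-F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline{Y}(t)+\alpha_{4}\overline{Y}(t)\right)dt;\]
integrating and multiplying by \(\Phi^{-1}(t)\) recovers (3.24).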
In view of (3.23), for any \(\varepsilon>0\), there exists \(M_{2}=M_{2}(\varepsilon,\delta,x,y)>0\) such that
\[\mathbb{P}_{x,y}\left\{\overline{Y}(t)\leq M_{2}e^{\delta\theta t},\,\forall\, t\geq 0\right\}\geq 1-\frac{\varepsilon}{2}. \tag{3.25}\]
It is easily seen that there is \(M_{3}=M_{3}(\varepsilon,\delta)>0\) such that
\[\mathbb{P}\left\{\sigma_{1}|W_{1}(t)|\leq M_{3}e^{\delta\theta t},\,\,\forall t \geq 0\right\}\geq 1-\frac{\varepsilon}{2}.\]
On the other hand, given that \(\sigma_{1}|W_{1}(t)|\leq M_{3}e^{\delta\theta t}\) and \(\Lambda-F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline{Y}(t)+\alpha_{4}\overline{Y}(t)\leq\Lambda+\alpha_{4}\overline{Y}(t)\leq\alpha_{4}M_{2}e^{\delta\theta t}+\Lambda\), one can see from (3.24) and (3.25) that
\[\overline{X}(t)\geq\frac{e^{-2\delta\theta t}}{M_{4}}\geq\frac{e^{-2\delta t} }{M_{4}}\text{ for some constant }M_{4}\text{ depending on }M_{2},M_{3}.\]
Combining this with (3.23) concludes the proof (after re-assigning \(\delta:=\delta\theta\)).
Since \(F_{1}(u,v)u\) and \(F_{2}(v,w)v\) are Lipschitz and \(F_{1}\) and \(F_{2}\) are bounded, there exists \(c_{0}>0\) such that
\[\begin{split}&(u_{1}-u_{2})\big[(\Lambda-F_{1}(u_{1},v_{1})u_{1}v_{1}-\alpha_{1}u_{1}+\alpha_{4}v_{1})-(\Lambda-F_{1}(u_{2},v_{2})u_{2}v_{2}-\alpha_{1}u_{2}+\alpha_{4}v_{2})\big]\\ &\quad+(v_{1}-v_{2})\big[(F_{1}(u_{1},v_{1})u_{1}v_{1}-\alpha_{2}v_{1})-(F_{1}(u_{2},v_{2})u_{2}v_{2}-F_{2}(v_{2},w)v_{2}w-\alpha_{2}v_{2})\big]\\ &\quad+\sigma_{1}^{2}(u_{1}-u_{2})^{2}+\sigma_{2}^{2}(v_{1}-v_{2})^{2}\\ \leq&\ \frac{1}{2}\left(c_{0}(1+u_{1}+v_{1}+u_{2}+v_{2})^{2}[(u_{1}-u_{2})^{2}+(v_{1}-v_{2})^{2}]+c_{0}w^{2}\right),\quad\forall u_{1},u_{2},v_{1},v_{2},w\geq 0.\end{split} \tag{3.26}\]
Let
\[\gamma_{0}:=-\frac{\lambda_{2}}{3}>0,\text{ and }\widetilde{N}>\gamma_{0}+( \sigma_{1}^{2}\vee\sigma_{2}^{2})+c_{0}, \tag{3.27}\]
and consider the coupling system:
\[\left\{\begin{array}{ll}d\overline{X}(t)=&[\Lambda-F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline{Y}(t)-\alpha_{1}\overline{X}(t)+\alpha_{4}\overline{Y}(t)]dt+\sigma_{1}\overline{X}(t)dW_{1}(t)\\ d\overline{Y}(t)=&[F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline{Y}(t)-\alpha_{2}\overline{Y}(t)]dt+\sigma_{2}\overline{Y}(t)dW_{2}(t)\\ d\widetilde{X}(t)=&[\Lambda-F_{1}(\widetilde{X}(t),\widetilde{Y}(t))\widetilde{X}(t)\widetilde{Y}(t)-\alpha_{1}\widetilde{X}(t)+\alpha_{4}\widetilde{Y}(t)+\alpha_{5}\widetilde{Z}(t)]dt+\sigma_{1}\widetilde{X}(t)dW_{1}(t)\\ &-\widetilde{N}(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y}(t))^{2}(\widetilde{X}(t)-\overline{X}(t))dt\\ d\widetilde{Y}(t)=&[F_{1}(\widetilde{X}(t),\widetilde{Y}(t))\widetilde{X}(t)\widetilde{Y}(t)-F_{2}(\widetilde{Y}(t),\widetilde{Z}(t))\widetilde{Y}(t)\widetilde{Z}(t)-\alpha_{2}\widetilde{Y}(t)]dt+\sigma_{2}\widetilde{Y}(t)dW_{2}(t)\\ &-\widetilde{N}(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y}(t)+\widetilde{Y}(t))^{2}(\widetilde{Y}(t)-\overline{Y}(t))dt\\ d\widetilde{Z}(t)=&[F_{2}(\widetilde{Y}(t),\widetilde{Z}(t))\widetilde{Y}(t)\widetilde{Z}(t)-\alpha_{3}\widetilde{Z}(t)]dt+\sigma_{3}\widetilde{Z}(t)dW_{3}(t).\end{array}\right. \tag{3.28}\]
_Remark 3.1_.: Because the methods in existing works (such as those of [11]) do not apply here, this coupled system is introduced to compare the solution near the boundary (when \(Z(t)\) is small) with the solution on the boundary (when \(Z(t)=0\)). Based on (3.26), the terms \(-\widetilde{N}(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y}(t))^{2}(\widetilde{X}(t)-\overline{X}(t))\) and \(-\widetilde{N}(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y}(t)+\widetilde{Y}(t))^{2}(\widetilde{Y}(t)-\overline{Y}(t))\) in the equations for \(d\widetilde{X}(t)\) and \(d\widetilde{Y}(t)\) in (3.28), respectively, are needed to ensure that \((\widetilde{X}(t),\widetilde{Y}(t))\) approaches \((\overline{X}(t),\overline{Y}(t))\) with large probability. We note that although such a comparison on a finite time interval is standard, it cannot by itself yield the desired result, which requires the two solutions to remain close with large probability on the infinite interval \([0,\infty)\).
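To make the coupling mechanism concrete, the following is a minimal Euler-Maruyama sketch of (3.28). It is purely illustrative: the response functions `F1`, `F2`, all parameter values, the step size, and the initial conditions are hypothetical placeholders (none are prescribed by the paper), and the positivity clipping is only a crude numerical guard. The point is that the penalisation terms drive \((\widetilde{X},\widetilde{Y})\) toward \((\overline{X},\overline{Y})\) while \(\widetilde{Z}\) evolves without penalisation.

```python
import numpy as np

# Minimal Euler-Maruyama sketch of the coupled system (3.28).
# F1, F2 and every numerical value are hypothetical placeholders.
rng = np.random.default_rng(0)

F1 = lambda x, y: 0.8 / (1.0 + x + y)   # bounded, Lipschitz placeholder
F2 = lambda y, z: 0.6 / (1.0 + y + z)   # bounded, Lipschitz placeholder

Lam, a1, a2, a3, a4, a5 = 1.0, 0.5, 0.4, 0.3, 0.1, 0.1
s1, s2, s3 = 0.2, 0.2, 0.2
N_pen = 10.0                      # penalisation constant (N tilde)
dt, n_steps = 1e-3, 20_000

Xb, Yb = 1.0, 0.5                 # boundary system (X-bar, Y-bar)
Xt, Yt, Zt = 1.2, 0.7, 0.05       # coupled system (X-tilde, Y-tilde, Z-tilde)

for _ in range(n_steps):
    dW1, dW2, dW3 = rng.normal(0.0, np.sqrt(dt), 3)   # shared Brownian increments
    pen_x = N_pen * (1.0 + Xb + Xt + Yb) ** 2
    pen_y = N_pen * (1.0 + Xb + Xt + Yb + Yt) ** 2

    dXb = (Lam - F1(Xb, Yb) * Xb * Yb - a1 * Xb + a4 * Yb) * dt + s1 * Xb * dW1
    dYb = (F1(Xb, Yb) * Xb * Yb - a2 * Yb) * dt + s2 * Yb * dW2

    # penalisation pulls (X-tilde, Y-tilde) toward (X-bar, Y-bar)
    dXt = ((Lam - F1(Xt, Yt) * Xt * Yt - a1 * Xt + a4 * Yt + a5 * Zt)
           - pen_x * (Xt - Xb)) * dt + s1 * Xt * dW1
    dYt = ((F1(Xt, Yt) * Xt * Yt - F2(Yt, Zt) * Yt * Zt - a2 * Yt)
           - pen_y * (Yt - Yb)) * dt + s2 * Yt * dW2
    dZt = (F2(Yt, Zt) * Yt * Zt - a3 * Zt) * dt + s3 * Zt * dW3

    Xb, Yb = max(Xb + dXb, 1e-8), max(Yb + dYb, 1e-8)   # crude positivity guard
    Xt, Yt = max(Xt + dXt, 1e-8), max(Yt + dYt, 1e-8)
    Zt = max(Zt + dZt, 1e-8)

print("|Xb-Xt| =", abs(Xb - Xt), " |Yb-Yt| =", abs(Yb - Yt), " Zt =", Zt)
```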
The next proposition will quantify how close \((\overline{X}(t),\overline{Y}(t))\) and \((\widetilde{X}(t),\widetilde{Y}(t))\) are when \(\overline{Z}(t)\) is small.
**Proposition 3.1**.: _For \(\delta>0\), let_
\[\widetilde{\tau}_{\delta}:=\inf\left\{t\geq 0:\widetilde{Z}(t)\geq\delta e^{- \gamma_{0}t}\right\}.\]
_There is a constant \(\widetilde{C}\) independent of \(|\overline{x}-\widetilde{x}|\), \(|\overline{y}-\widetilde{y}|\) and \(\delta\) such that_
\[\mathbb{E}\sup_{0\leq t\leq\widetilde{\tau}_{\delta}}e^{2\gamma_{0}t}[( \overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2} ]\leq\widetilde{C}((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{ y})^{2}+\delta^{2}). \tag{3.29}\]
_Moreover, there are \(\widetilde{M}_{\varepsilon,x,y},\widetilde{m}_{\varepsilon,x,y}>0\) (depending only on \(\varepsilon,x,y\)) such that_
\[\mathbb{P}_{x,y,\overline{x}}\left\{\int_{0}^{\widetilde{\tau}_{\delta}}(|v_ {1}(t)|^{2}+|v_{2}(t)|^{2})dt\geq\widetilde{M}_{\varepsilon,x,y}((\overline{x }-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2})\right\}\leq\varepsilon, \tag{3.30}\]
_as long as \((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2} \leq\widetilde{m}_{\varepsilon,x,y}\), where_
\[v_{1}(t)=\tfrac{\widetilde{N}(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y} (t))^{2}(\overline{X}(t)-\widetilde{X}(t))}{\sigma_{1}\widetilde{X}(t)},\]
_and_
\[v_{2}(t)=\tfrac{\widetilde{N}(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y}(t)+\widetilde{Y}(t))^{2}(\overline{Y}(t)-\widetilde{Y}(t))}{\sigma_{2}\widetilde{Y}(t)}.\]
Proof.: Applying Ito's formula to (3.28) and using (3.26), we have
\[\begin{split}& d[(\overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]\\ \leq&\left(-(2\widetilde{N}-c_{0})(1+\overline{X}(t)+\widetilde{X}(t)+\overline{Y}(t)+\widetilde{Y}(t))^{2}((\overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2})\right)dt\\ &+c_{0}|\widetilde{Z}(t)|^{2}dt+2\sigma_{1}(\overline{X}(t)-\widetilde{X}(t))^{2}dW_{1}(t)+2\sigma_{2}(\overline{Y}(t)-\widetilde{Y}(t))^{2}dW_{2}(t).\end{split} \tag{3.31}\]
Here and thereafter, \(C\) is a generic constant, whose value can be different in different lines, but which is independent of \(|\overline{x}-\widetilde{x}|,|\overline{y}-\widetilde{y}|\) and \(\delta\). By Ito's formula and Cauchy's inequality we have from (3.31) that
\[\begin{split}& de^{4\gamma_{0}t}[(\overline{X}(t)-\widetilde{X}(t) )^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]^{2}\\ \leq&-\left(4\widetilde{N}-4\gamma_{0}-4(\sigma_{1} ^{2}\vee\sigma_{2}^{2})-4c_{0}\right)e^{4\gamma_{0}t}[(\overline{X}(t)- \widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]+2c_{0}e^{4\gamma _{0}t}|\widetilde{Z}(t)|^{4}dt\\ &+2e^{4\gamma_{0}t}[(\overline{X}(t)-\widetilde{X}(t))^{2}+( \overline{Y}(t)-\widetilde{Y}(t))^{2}]^{2}\left(\sigma_{1}(\overline{X}(t)- \widetilde{X}(t))^{2}dW_{1}(t)+\sigma_{2}(\overline{Y}(t)-\widetilde{Y}(t))^ {2}dW_{2}(t)\right).\end{split}\]
Then, by introducing suitable stopping times and passing to the limit, as was done in the process of getting (1.5) from (2.4) in the proof of Theorem 1.1, one can obtain
\[\begin{split}&\left(4\widetilde{N}-4\gamma_{0}-4(\sigma_{1}^{2} \vee\sigma_{2}^{2})-4c_{0}\right)\mathbb{E}\int_{0}^{t\wedge\widetilde{\tau}_{ \delta}}e^{4\gamma_{0}s}[(\overline{X}(s)-\widetilde{X}(s))^{2}+(\overline{Y}(s )-\widetilde{Y}(s))^{2}]^{2}\\ \leq& 2\mathbb{E}\int_{0}^{t\wedge\widetilde{\tau}_{ \delta}}c_{0}e^{4\gamma_{0}s}|\widetilde{Z}(s)|^{4}ds+((\overline{x}- \widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2})^{2}.\end{split}\]
This leads to
\[\mathbb{E}\int_{0}^{t\wedge\widetilde{\tau}_{\delta}}e^{4\gamma_{0}s}[( \overline{X}(s)-\widetilde{X}(s))^{2}+(\overline{Y}(s)-\widetilde{Y}(s))^{2}] ^{2}\leq C((\overline{x}-\widetilde{x})^{4}+(\overline{y}-\widetilde{y})^{4}+ \delta^{4}). \tag{3.32}\]
Moreover, we have from (3.31) and Ito's formula that
\[\begin{split}& de^{2\gamma_{0}t}[(\overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]\\ &\leq-(2\widetilde{N}-2\gamma_{0}-c_{0})e^{2\gamma_{0}t}[(\overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]dt+c_{0}e^{2\gamma_{0}t}|\widetilde{Z}(t)|^{2}dt\\ &\qquad+e^{2\gamma_{0}t}\left(\sigma_{1}(\overline{X}(t)-\widetilde{X}(t))^{2}dW_{1}(t)+\sigma_{2}(\overline{Y}(t)-\widetilde{Y}(t))^{2}dW_{2}(t)\right).\end{split}\]
From this, we get that
\[\begin{split}&\mathbb{E}\sup_{t\leq T\wedge\widetilde{\tau}_{\delta}}e ^{2\gamma_{0}t}[(\overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)- \widetilde{Y}(t))^{2}]\\ \leq&\mathbb{E}\int_{0}^{T\wedge\widetilde{\tau}_{ \delta}}c_{0}e^{2\gamma_{0}s}|\widetilde{Z}(s))|^{2}ds\\ &+\mathbb{E}\sup_{t\leq T\wedge\widetilde{\tau}_{\delta}}\int_{0} ^{t}\left\{e^{2\gamma_{0}s}\left(\sigma_{1}(\overline{X}(s)-\widetilde{X}(s)) ^{2}dW_{1}(s)+\sigma_{2}(\overline{Y}(s)-\widetilde{Y}(s))^{2}dW_{2}(s)\right) \right\}.\end{split} \tag{3.33}\]
In view of the Burkholder-Davis-Gundy inequality, we have
\[\begin{split}&\mathbb{E}\sup_{t\leq T\wedge\widetilde{\tau}_{ \delta}}\int_{0}^{t}\left\{e^{2\gamma_{0}s}\left(\sigma_{1}(\overline{X}(s)- \widetilde{X}(s))^{2}dW_{1}(s)+\sigma_{2}(\overline{Y}(s)-\widetilde{Y}(s))^{ 2}dW_{2}(s)\right)\right\}\\ \leq& C\left[\mathbb{E}\int_{0}^{t\wedge\widetilde{ \tau}_{\delta}}e^{4\gamma_{0}s}[(\overline{X}(s)-\widetilde{X}(s))^{2}+( \overline{Y}(s)-\widetilde{Y}(s))^{2}]^{2}\right]^{\frac{1}{2}}\\ \leq& C((\overline{x}-\widetilde{x})^{2}+(\overline{ y}-\widetilde{y})^{2}+\delta^{2}),\end{split} \tag{3.34}\]
where in the last line we used (3.32). In addition, since \(|\widetilde{Z}(s)|\leq\delta e^{-\gamma_{0}s}\) for any \(s\leq\widetilde{\tau}_{\delta}\), it can be seen that
\[\mathbb{E}\int_{0}^{T\wedge\widetilde{\tau}_{\delta}}c_{0}e^{2\gamma_{0}s}| \widetilde{Z}(s))|^{2}ds\leq C\delta^{2}. \tag{3.35}\]
Using (3.34) and (3.35) in (3.33), we obtain (3.29).
We next prove (3.30). In view of Lemma 3.1 (applied with \(\delta=\gamma_{0}/4\)), there is \(M_{\varepsilon,x,y}\) such that
\[\mathbb{P}_{x,y}\bigg{(}\widetilde{\Omega}_{3}:=\left\{1+\overline{X}(t)+\overline{X}(t)^{-1}+\overline{Y}(t)+\overline{Y}^{-1}(t)\leq M_{\varepsilon,x,y}e^{\gamma_{0}t/4},\;\forall t\geq 0\right\}\bigg{)}\geq 1-\frac{\varepsilon}{2}. \tag{3.36}\]
By virtue of (3.29), there is \(\widetilde{C}_{0}\) independent of \((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2}\) such that
\[\begin{split}\mathbb{P}_{x,y,\widetilde{\mathbf{x}}}& \left(\widetilde{\Omega}_{4}:=\left\{e^{2\gamma_{0}t}[(\overline{X}(t)- \widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]\leq\frac{ \widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y} )^{2}+\delta^{2})}{\varepsilon},\;\forall 0\leq t\leq\widetilde{\tau}_{\delta} \right\}\right)\\ &\geq 1-\frac{\varepsilon}{2}.\end{split} \tag{3.37}\]
For \(t\leq\widetilde{\tau}_{\delta}\), if \(\overline{X}(t)\geq M_{\varepsilon,x,y}^{-1}e^{-\gamma_{0}t/4}\) and \((\overline{X}(t)-\widetilde{X}(t))\leq\frac{1}{2}M_{\varepsilon,x,y}^{-1}e^{-\gamma_{0}t/4}\), then we have
\[\frac{1}{\widetilde{X}(t)}\leq\frac{1}{\overline{X}(t)+(\overline{X}(t)- \widetilde{X}(t))}\leq\frac{1}{M_{\varepsilon,x,y}{}^{-1}e^{-\gamma_{0}t/4}+( \overline{X}(t)-\widetilde{X}(t))}\leq 2M_{\varepsilon,x,y}e^{\gamma_{0}t/4}. \tag{3.38}\]
Likewise,
\[\frac{1}{\widetilde{Y}(t)}\leq 2M_{\varepsilon,x,y}e^{\gamma_{0}t/4}\text{ provided that }\overline{Y}(t)\geq M_{\varepsilon,x,y}^{-1}e^{-\gamma_{0}t/4}\text{ and }(\overline{Y}(t)-\widetilde{Y}(t))\leq\frac{1}{2}M_{\varepsilon,x,y}^{-1}e^{-\gamma_{0}t/4}. \tag{3.39}\]
Observe that if \((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2}\leq\frac{\varepsilon}{4\widetilde{C}_{0}M_{\varepsilon,x,y}^{2}}\) then for all \(\omega\in\widetilde{\Omega}_{4}\),
\[(\overline{X}(t)-\widetilde{X}(t))\vee(\overline{Y}(t)-\widetilde{Y}(t))\leq\left(\frac{\widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2})}{\varepsilon}\right)^{\frac{1}{2}}e^{-\gamma_{0}t}\leq\frac{1}{2}M_{\varepsilon,x,y}^{-1}e^{-\gamma_{0}t/4},\quad 0\leq t\leq\widetilde{\tau}_{\delta}.\]
This together with (3.38) and (3.39) implies that for all \(\omega\in\widetilde{\Omega}_{3}\cap\widetilde{\Omega}_{4}\),
\[\frac{1}{\widetilde{X}(t)}\vee\frac{1}{\widetilde{Y}(t)}\leq 2M_{\varepsilon,x,y}e^{\gamma_{0}t/4}\text{ provided that }(\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2}\leq\frac{\varepsilon}{4\widetilde{C}_{0}M_{\varepsilon,x,y}^{2}}. \tag{3.40}\]
Note that
\[|v_{1}(t)|^{2}+|v_{2}(t)|^{2}\leq\frac{4\widetilde{N}^{2}}{\sigma_{1}^{2}\wedge\sigma_{2}^{2}}\left(\widetilde{X}^{-2}(t)\vee\widetilde{Y}^{-2}(t)\right)[3+\overline{X}(t)+\overline{Y}(t)]^{4}\left((\overline{X}(t)-\widetilde{X}(t))+(\overline{Y}(t)-\widetilde{Y}(t))\right)^{2}. \tag{3.41}\]
Combining (3.36), (3.37), (3.40), and (3.41), we have, when \((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2}\leq\frac{\varepsilon}{4\widetilde{C}_{0}M_{\varepsilon,x,y}^{2}}\), that
\[\mathbb{P}\left\{|v_{1}(t)|^{2}+|v_{2}(t)|^{2}\leq M_{\varepsilon,x,y}^{\prime }\frac{\widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}- \widetilde{y})^{2}+\delta^{2})}{\varepsilon}e^{-\gamma_{0}t/2}\text{ for all }0\leq t\leq \widetilde{\tau}_{\delta}\right\}\geq 1-\varepsilon,\]
for some \(M_{\varepsilon,x,y}^{\prime}\). This implies (3.30). The proof is complete.
In the next lemma, we will show that \(Z(t)\) converges to \(0\) (exponentially fast) whenever the solution starts in a neighborhood of the boundary corresponding to \(z=0\).
**Lemma 3.2**.: _For any \((x,y)\in\mathbb{R}_{+}^{2,\circ}\) and \(\varepsilon\in(0,1)\), there exists \(\varsigma=\varsigma(x,y,\varepsilon)\) such that_
\[\mathbb{P}_{\widetilde{\mathbf{s}}}\left\{\lim_{t\to\infty}\frac{\ln Z(t)}{t} =\lambda_{2}<0\right\}>1-\varepsilon,\]
_for all \(\widetilde{\mathbf{s}}=(\widetilde{x},\widetilde{y},\widetilde{z})\) satisfying \((\widetilde{x}-x)^{2}+(\widetilde{y}-y)^{2}+\widetilde{z}^{2}\leq\varsigma^{2}\)._
Proof.: First, we choose \(\delta=\delta(\varepsilon,x,y)>0\) such that
\[2\widetilde{M}_{\varepsilon,x,y}\delta^{2}\leq\varepsilon\quad\text{and}\quad 2\varepsilon^{2}\widetilde{M}_{\varepsilon,x,y}\delta\leq\varepsilon, \tag{3.42}\]
where \(\widetilde{M}_{\varepsilon,x,y}\) is determined as in (3.30). Define
\[\Omega_{1}:=\left\{e^{2\gamma_{0}t}[(\overline{X}(t)-\widetilde{X}(t))^{2}+(\overline{Y}(t)-\widetilde{Y}(t))^{2}]\leq\frac{\widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2})}{\varepsilon},\ \forall\,0\leq t\leq\widetilde{\tau}_{\delta}\right\}.\]
Because of the ergodicity, we have
\[\mathbb{P}_{x,y}\left\{\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}F_{2}(\overline{Y}(s),0)\overline{Y}(s)ds=\lambda_{2}+\alpha_{3}+\frac{\sigma_{3}^{2}}{2}\right\}=1.\]
Therefore, we can find \(T>0\) such that \(\mathbb{P}_{x,y}(\Omega_{2})>1-\varepsilon\) where
\[\Omega_{2}:=\left\{\frac{1}{t}\int_{0}^{t}F_{2}(\overline{Y}(s),0)\overline{Y}(s)ds-\alpha_{3}-\frac{\sigma_{3}^{2}}{2}\leq\lambda_{2}+\gamma_{0},\;\forall t\geq T\right\}.\]
In view of (1.6), we can find \(\widetilde{D}_{x,y,\varepsilon,T}>0\) such that \(\mathbb{P}_{x,y}(\Omega_{3})\geq 1-\varepsilon\) where
\[\Omega_{3}:=\left\{\int_{0}^{t}F_{2}(\overline{Y}(s),0)\overline{Y}(s)ds\leq\widetilde{D}_{x,y,\varepsilon,T},\;\forall t\leq T\right\}.\]
By the exponential martingale inequality, see e.g. [14], we have \(\mathbb{P}(\Omega_{4})\geq 1-\varepsilon\) where
\[\Omega_{4}:=\left\{\sigma_{3}W(t)\leq\frac{2}{\gamma_{0}}|\ln\varepsilon|+ \gamma_{0}t,\;\forall\,t\geq 0\right\}.\]
For \(0\leq t\leq T\wedge\widetilde{\tau}_{\delta}\), \(\omega\in\cap_{i=1}^{4}\Omega_{i}\), we have
\[\begin{split}\ln\widetilde{Z}(t)=&\ln\widetilde{z}+ \int_{0}^{t}F_{2}(\widetilde{Y}(s),\widetilde{Z}(s))\widetilde{Y}(s)ds-\left( \alpha_{3}-\frac{\sigma_{3}^{2}}{2}\right)t+\sigma_{3}W(t)\\ \leq&\ln\widetilde{z}+\int_{0}^{t}F_{2}(\overline{Y} (s),0)\overline{Y}(s)ds-\left(\alpha_{3}-\frac{\sigma_{3}^{2}}{2}\right)t+ \sigma_{3}W(t)+L\int_{0}^{t}|(\widetilde{Z}(t))^{2}+(\overline{Y}(s)- \widetilde{Y}(s))^{2}|^{\frac{1}{2}}ds\\ \leq&\ln\widetilde{z}+\frac{2}{\gamma_{0}}|\ln \varepsilon|+\widetilde{D}_{x,y,\varepsilon,T}+L\frac{\sigma^{2}}{4\gamma_{0}}+ L\int_{0}^{t}e^{-2\gamma_{0}s}\frac{\widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2} +(\overline{y}-\widetilde{y})^{2}+\delta^{2})}{\varepsilon}ds\\ \leq&\ln\widetilde{z}+\frac{2}{\gamma_{0}}|\ln \varepsilon|+\widetilde{D}_{x,y,\varepsilon,T}+L\frac{\sigma^{2}}{4\gamma_{0}} +\frac{L\widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}- \widetilde{y})^{2}+\delta^{2})}{2\varepsilon\gamma_{0}}.\end{split} \tag{3.43}\]
If \(\ln\widetilde{z}<\ln\varsigma:=\ln\delta-\left(\frac{2}{\gamma_{0}}|\ln \varepsilon|+\widetilde{D}_{x,y,\varepsilon,T}+L\frac{\sigma^{2}}{4\gamma_{0} }+\frac{2L\widetilde{C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}- \widetilde{y})^{2}+\delta^{2})}{\varepsilon\gamma_{0}}\right)\) then it is easily seen that \(\widetilde{\tau}_{\delta}\geq T\) for any \(\omega\in\cap_{i=1}^{4}\Omega_{i}\) because \(\ln\widetilde{Z}(t)\leq\ln\delta\) for any \(t\leq T\wedge\widetilde{\tau}_{\delta}\) and \(\omega\in\cap_{i=1}^{4}\Omega_{i}\).
For \(T\leq t\leq\widetilde{\tau}_{\delta}\) we have
\[\ln\widetilde{Z}(t)\leq\ln\widetilde{z}+(\lambda_{2}+4\gamma_{0})t+\frac{2}{ \gamma_{0}}|\ln\varepsilon|+L\frac{\sigma^{2}}{4\gamma_{0}}+\frac{L\widetilde {C}_{0}((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+ \delta^{2})}{2\varepsilon\gamma_{0}}<\ln\delta.\]
Thus, we must have \(\widetilde{\tau}_{\delta}=\infty\) for \(\omega\in\cap_{i=1}^{4}\Omega_{i}\) and that \(\limsup\frac{\ln\widetilde{Z}(t)}{t}\leq\lambda_{2}-4\gamma_{0}<0\) for \(\omega\in\cap_{i=1}^{4}\Omega_{i}\).
For the rest of this proof, we always assume that \((\widetilde{x}-x)^{2}+(\widetilde{y}-y)^{2}+\widetilde{z}^{2}\leq\varsigma^{2} <\widetilde{m}_{\varepsilon,x,y}^{2}\), where \(\widetilde{m}_{\varepsilon,x,y}\) is chosen as in (3.30). Consider the following coupled system:
\[\left\{\begin{array}{ll}d\overline{X}(t)=&[\Lambda-F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline{Y}(t)-\alpha_{1}\overline{X}(t)+\alpha_{4}\overline{Y}(t)]dt+\sigma_{1}\overline{X}(t)dW_{1}(t)\\ d\overline{Y}(t)=&[F_{1}(\overline{X}(t),\overline{Y}(t))\overline{X}(t)\overline{Y}(t)-\alpha_{2}\overline{Y}(t)]dt+\sigma_{2}\overline{Y}(t)dW_{2}(t)\\ d\widehat{X}(t)=&[\Lambda-F_{1}(\widehat{X}(t),\widehat{Y}(t))\widehat{X}(t)\widehat{Y}(t)-\alpha_{1}\widehat{X}(t)+\alpha_{4}\widehat{Y}(t)+\alpha_{5}\widehat{Z}(t)]dt+\sigma_{1}\widehat{X}(t)dW_{1}(t)\\ &-\widetilde{N}\mathbf{1}_{\{t<\widetilde{\tau}_{\delta}\}}(1+\overline{X}(t)+\widehat{X}(t)+\overline{Y}(t))^{2}(\widehat{X}(t)-\overline{X}(t))dt\\ d\widehat{Y}(t)=&[F_{1}(\widehat{X}(t),\widehat{Y}(t))\widehat{X}(t)\widehat{Y}(t)-F_{2}(\widehat{Y}(t),\widehat{Z}(t))\widehat{Y}(t)\widehat{Z}(t)-\alpha_{2}\widehat{Y}(t)]dt+\sigma_{2}\widehat{Y}(t)dW_{2}(t)\\ &-\widetilde{N}\mathbf{1}_{\{t<\widetilde{\tau}_{\delta}\}}(1+\overline{X}(t)+\widehat{X}(t)+\overline{Y}(t)+\widehat{Y}(t))^{2}(\widehat{Y}(t)-\overline{Y}(t))dt\\ d\widehat{Z}(t)=&[F_{2}(\widehat{Y}(t),\widehat{Z}(t))\widehat{Y}(t)\widehat{Z}(t)-\alpha_{3}\widehat{Z}(t)]dt+\sigma_{3}\widehat{Z}(t)dW_{3}(t).\end{array}\right. \tag{3.44}\]
Then, \((\widehat{X}(t),\widehat{Y}(t),\widehat{Z}(t))\equiv(\widetilde{X}(t), \widetilde{Y}(t),\widetilde{Z}(t))\) up to \(\widetilde{\tau}_{\delta}\). Moreover, let \(\mathbb{Q}_{x,y,\widetilde{\mathbf{s}}}\) be the measure defined by
\[\frac{d\mathbb{Q}_{x,y,\widetilde{\mathbf{s}}}}{d\mathbb{P}_{x,y,\widetilde{\mathbf{s}}}}=\exp\left\{-\int_{0}^{\widetilde{\tau}_{\delta}}[v_{1}(s)dW_{1}(s)+v_{2}(s)dW_{2}(s)]-\frac{1}{2}\int_{0}^{\widetilde{\tau}_{\delta}}[v_{1}^{2}(s)+v_{2}^{2}(s)]ds\right\}.\]
Then, \(\left(W_{1}(t)+\int_{0}^{t\wedge\widetilde{\tau}_{\delta}}v_{1}(s)ds,W_{2}(t) +\int_{0}^{t\wedge\widetilde{\tau}_{\delta}}v_{2}(s)ds\right)\) is a standard two-dimensional Brownian motion under \(\mathbb{Q}\). As a result, \((\widehat{X}(t),\widehat{Y}(t),\widehat{Z}(t))\) is the solution to (1.3) with initial condition \(\widetilde{\mathbf{s}}\) under \(\mathbb{Q}\).
Let
\[\Omega_{5}:=\left\{\int_{0}^{\widetilde{\tau}_{\delta}}(|v_{1}(t)|^{2}+|v_{2}(t)|^{2})dt\leq\widetilde{M}_{\varepsilon,x,y}((\overline{x}-\widetilde{x})^{2}+(\overline{y}-\widetilde{y})^{2}+\delta^{2})\right\},\]
and
\[\Omega_{6}:=\left\{\int_{0}^{t}(v_{1}(s)dW_{1}(s)+v_{2}(s)dW_{2}(s))\leq\frac{\varepsilon^{2}}{2\delta}\int_{0}^{t}(|v_{1}(s)|^{2}+|v_{2}(s)|^{2})ds+\varepsilon,\ \forall t\geq 0\right\}.\]
In view of the exponential martingale inequality (see e.g. [10]), if \(\delta\leq\varepsilon^{3}/(-\ln\varepsilon)\) we have
\[\mathbb{P}_{x,y,\widetilde{\mathbf{s}}}(\Omega_{6})\geq 1-e^{-\varepsilon^{3}/\delta}\geq 1-\varepsilon.\]
For \(\omega\in\Omega_{5}\cap\Omega_{6}\), we have
\[\begin{split}\frac{d\mathbb{Q}_{x,y,\widetilde{\mathbf{s}}}}{d\mathbb{P}_{x,y,\widetilde{\mathbf{s}}}}=&\exp\left\{-\int_{0}^{\widetilde{\tau}_{\delta}}[v_{1}(s)dW_{1}(s)+v_{2}(s)dW_{2}(s)]-\frac{1}{2}\int_{0}^{\widetilde{\tau}_{\delta}}[v_{1}^{2}(s)+v_{2}^{2}(s)]ds\right\}\\ \geq&\exp\left\{-\frac{\varepsilon^{2}}{2\delta}\int_{0}^{\widetilde{\tau}_{\delta}}(|v_{1}(s)|^{2}+|v_{2}(s)|^{2})ds-\varepsilon-\int_{0}^{\widetilde{\tau}_{\delta}}[v_{1}^{2}(s)+v_{2}^{2}(s)]ds\right\}\\ \geq&\ e^{-\varepsilon^{2}\widetilde{M}_{\varepsilon,x,y}\delta-\varepsilon-2\widetilde{M}_{\varepsilon,x,y}\delta^{2}}\geq e^{-3\varepsilon}\geq 1-4\varepsilon\quad\text{(due to (3.42))}.\end{split} \tag{3.45}\]
Since \((\widehat{X}(t),\widehat{Y}(t),\widehat{Z}(t))\) coincides with \((\widetilde{X}(t),\widetilde{Y}(t),\widetilde{Z}(t))\) up to \(\widetilde{\tau}_{\delta}\) and is, under \(\mathbb{Q}_{x,y,\widetilde{\mathbf{s}}}\), a solution of (1.3) started from \(\widetilde{\mathbf{s}}\), the lower bound (3.45) allows the estimates established on \(\cap_{i=1}^{4}\Omega_{i}\) to be transferred to the original process, which yields the conclusion of Lemma 3.2.
Proof of Theorem 1.3.: Thanks to Theorem 1.1, the family \(\{\check{\Pi}^{\mathbf{s}}_{t}:t\geq 1\}\) is tight in \(\mathbb{R}^{3}_{+}\), and any weak-limit of \(\check{\Pi}^{\mathbf{s}}_{t}\) as \(t\to\infty\) must be an invariant probability measure of \(\{\mathbf{S}(t)\}\); that is, the weak-limit has the form \(p\boldsymbol{\nu}_{1}+(1-p)\boldsymbol{\nu}_{12}\) for some \(p\in[0,1]\) (see e.g. [1, Theorem 9.9]). We show that \(p\) must be \(0\). Assume that \(\check{\Pi}^{\mathbf{s}}_{t_{k}}\) converges weakly to \(p\boldsymbol{\nu}_{1}+(1-p)\boldsymbol{\nu}_{12}\) as \(t_{k}\uparrow\infty\) for some subsequence \(\{t_{k}\}_{k=1}^{\infty}\). Then, we have
\[\lim_{k\to\infty} \int_{\mathbb{R}^{3}_{+}}\left(F_{1}(u,v)u-F_{2}(v,w)w-\alpha_{2} -\frac{\sigma_{2}^{2}}{2}\right)d\check{\Pi}^{\mathbf{s}}_{t_{k}}\] \[=\int_{\mathbb{R}^{3}_{+}}\left(F_{1}(u,v)u-F_{2}(v,w)w-\alpha_{2} -\frac{\sigma_{2}^{2}}{2}\right)(pd\boldsymbol{\nu}_{1}+(1-p)d\boldsymbol{\nu }_{12}).\]
Note that
\[\int_{\mathbb{R}^{3}_{+}}\left(F_{1}(u,v)u-F_{2}(v,w)w-\alpha_{2}-\frac{ \sigma_{2}^{2}}{2}\right)d\boldsymbol{\nu}_{1}=\lambda_{1},\]
and
\[\int_{\mathbb{R}^{3}_{+}}\left(F_{1}(u,v)u-F_{2}(v,w)w-\alpha_{2}-\frac{ \sigma_{2}^{2}}{2}\right)d\boldsymbol{\nu}_{12}=0,\]
which can be proved in the same manner as [11, Lemma 3.4]. As a result, we have
\[\lim_{k\to\infty}\frac{\mathbb{E}_{\mathbf{s}}\ln Y(t_{k})}{t_{k}}=\lim_{k \to\infty}\int_{\mathbb{R}^{3}_{+}}(F_{1}(u,v)u-F_{2}(v,w)w-\alpha_{2}-\frac{ \sigma_{2}^{2}}{2})d\check{\Pi}^{\mathbf{s}}_{t_{k}}=p\lambda_{1}.\]
If \(p>0\) then we end up with \(\lim_{k\to\infty}\mathbb{E}_{\mathbf{s}}\ln Y(t_{k})=\infty\), which contradicts (1.4). Thus, \(p\) must be \(0\). As a result, for \(\mathbf{s}\in\mathbb{R}^{3,\circ}_{+}\), \(\boldsymbol{\nu}_{12}\) is the unique weak-limit.
Let \(R_{\varepsilon}>0\) be such that \(\mu_{12}\big{(}[R_{\varepsilon}^{-1},R_{\varepsilon}]^{2}\big{)}>1-\varepsilon.\) By the Heine-Borel covering theorem, there exist \((x_{1},y_{1}),\cdots,(x_{l},y_{l})\) such that \([R_{\varepsilon}^{-1},R_{\varepsilon}]^{2}\) is covered by the union of disks centered at \((x_{k},y_{k})\) with radius \(\frac{1}{2}\varsigma_{x_{k},y_{k},\varepsilon}\), \(k=1,\cdots,l\), where \(\varsigma\) is determined as in Lemma 3.2. Then, for any \(\widetilde{\mathbf{s}}\in[R_{\varepsilon}^{-1},R_{\varepsilon}]^{2}\times(0,\frac{1}{2}\varsigma_{\min})\) with \(\varsigma_{\min}=\min_{k=1,\cdots,l}\{\varsigma_{x_{k},y_{k},\varepsilon}\}\), there exists \(k_{\widetilde{\mathbf{s}}}\in\{1,\cdots,l\}\) such that
\[(\widetilde{x}-x_{k_{\widetilde{\mathbf{s}}}})^{2}+(\widetilde{y}-y_{k_{ \widetilde{\mathbf{s}}}})^{2}+\widetilde{z}^{2}\leq\varsigma_{\min}^{2}.\]
Thus, we have
\[\mathbb{P}_{\widetilde{\mathbf{s}}}\left\{\lim_{t\to\infty}\frac{\ln Z(t)}{t}= \lambda_{2}<0\right\}>1-\varepsilon,\;\forall\widetilde{\mathbf{s}}\in[R_{ \varepsilon}^{-1},R_{\varepsilon}]^{2}\times(0,\varsigma_{\min}). \tag{3.49}\]
On the other hand, since \(\mu_{12}([R_{\varepsilon}^{-1},R_{\varepsilon}]^{2})>1-\varepsilon\), there exists a \(\check{T}=\check{T}(\mathbf{s},\varepsilon)>0\) such that
\[\check{\Pi}^{\check{T}}_{\mathbf{s}}([R_{\varepsilon}^{-1},R_{\varepsilon}]^{ 2}\times(0,\varsigma_{\min}))>1-2\varepsilon,\]
or equivalently,
\[\frac{1}{\check{T}}\int_{0}^{\check{T}}\mathbb{P}_{\mathbf{s}}\{\mathbf{S}(t) \in([R_{\varepsilon}^{-1},R_{\varepsilon}]^{2}\times(0,\varsigma_{\min}))\} dt>1-2\varepsilon.\]
As a result,
\[\mathbb{P}_{\mathbf{s}}\{\widehat{\tau}\leq\check{T}\}>1-2\varepsilon,\]
where \(\widehat{\tau}=\inf\{t\geq 0:\mathbf{S}(t)=(X(t),Y(t),Z(t))\in[R_{\varepsilon}^{-1},R_{\varepsilon}]^{2}\times(0,\varsigma_{\min})\}\). Therefore, using the strong Markov property and (3.49), we deduce that
\[\mathbb{P}_{\mathbf{s}}\left\{\lim_{t\to\infty}\frac{\ln Z(t)}{t}=\lambda_{2} \right\}\geq(1-\varepsilon)(1-2\varepsilon)\geq 1-3\varepsilon\text{ given }\mathbf{s}\in\mathbb{R}^{3,\circ}_{+}. \tag{3.50}\]
Letting \(\varepsilon\to 0\) we obtain the desired result.
## 4. Proof of Theorem 1.4
The proof of Theorem 1.4 will follow the idea from [1]. We will need the following estimates from [1, Lemma 4.6].
**Lemma 4.1**.: _Let \(1<p\leq 2\). There exists \(c_{p}>0\) such that for any \(a>0\) and \(x\in\mathbb{R}\) we have_
\[|a+x|^{p}\leq a^{p}+pa^{p-1}x+c_{p}|x|^{p}. \tag{4.1}\]
_Moreover, there exists \(d_{p,b}>0\) depending only on \(p,b>0\) such that if \(x+a\geq 0\) then_
\[(a+x)^{p}-b(a+x)^{p-1}\leq a^{p}+pa^{p-1}x-\frac{b}{2}a^{p-1}+d_{p,b}(|x|^{p}+1). \tag{4.2}\]
_It follows straightforwardly from (4.1) that for a random variable \(R\) and a constant \(c>0\), there exists \(\tilde{K}_{c}>0\) such that_
\[\mathbb{E}|R+c|^{p}\leq c^{p}+pc^{p-1}\mathbb{E}R+\tilde{K}_{c}\mathbb{E}|R|^{p}. \tag{4.3}\]
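For instance, when \(p=2\), inequality (4.1) holds with \(c_{2}=1\), since
\[|a+x|^{2}=a^{2}+2ax+|x|^{2}.\]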
In this section, let \(\gamma_{2}>0\), \(\gamma_{3}>0\) be such that
\[L(\gamma_{2}\vee\gamma_{3})\leq\frac{1}{2}\min\{\alpha_{1},\alpha_{2}-\alpha _{4},\alpha_{3}-\alpha_{5}\}\text{ and }\gamma_{2}\lambda_{1}-\gamma_{3}\left(\alpha_{3}+\frac{ \sigma_{3}^{2}}{2}\right)>0,\]
and set
\[\rho:=\frac{1}{2}\left[(\gamma_{3}\lambda_{2})\wedge\left(\gamma_{2}\lambda_{1}-\gamma_{3}\left(\alpha_{3}+\frac{\sigma_{3}^{2}}{2}\right)\right)\right]>0.\]
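Such \(\gamma_{2},\gamma_{3}\) exist whenever \(\lambda_{1}>0\) (as required for the second condition above to be satisfiable); for instance, taking \(\gamma_{3}=\gamma\) and \(\gamma_{2}=\gamma\Big(1+\frac{\alpha_{3}+\sigma_{3}^{2}/2}{\lambda_{1}}\Big)\) gives
\[\gamma_{2}\lambda_{1}-\gamma_{3}\left(\alpha_{3}+\frac{\sigma_{3}^{2}}{2}\right)=\gamma\lambda_{1}>0,\]
and the constraint \(L(\gamma_{2}\vee\gamma_{3})\leq\frac{1}{2}\min\{\alpha_{1},\alpha_{2}-\alpha_{4},\alpha_{3}-\alpha_{5}\}\) holds once \(\gamma>0\) is small enough.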
Pick \(c_{1}>0\) such that \(y+z-\gamma_{2}\ln y-\gamma_{3}\ln z+c_{1}\geq 0\) for any \((y,z)\in\mathbb{R}_{+}^{2,\circ}\) and consider
\[V(\mathbf{s})=x+y+z-\gamma_{2}\ln y-\gamma_{3}\ln z+c_{1}\geq 0,\mathbf{s}\in \mathbb{R}_{+}^{3,\circ}.\]
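One admissible choice is \(c_{1}=\max\{1,\,\gamma_{2}(\ln\gamma_{2}-1)+\gamma_{3}(\ln\gamma_{3}-1)\}\): since the map \(y\mapsto y-\gamma\ln y\) attains its minimum \(\gamma(1-\ln\gamma)\) at \(y=\gamma\),
\[y+z-\gamma_{2}\ln y-\gamma_{3}\ln z\geq\gamma_{2}(1-\ln\gamma_{2})+\gamma_{3}(1-\ln\gamma_{3})\geq-c_{1}\quad\text{for all }(y,z)\in\mathbb{R}^{2,\circ}_{+}.\]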
Then, because \(F_{1},F_{2}\) are bounded by \(L\) and \(L(\gamma_{2}\vee\gamma_{3})\leq\frac{1}{2}\min\{\alpha_{1},\alpha_{2}-\alpha_{4},\alpha_{3}-\alpha_{5}\}\), we have
\[\begin{split}\mathcal{L}V(\mathbf{s})=&\,(\Lambda-\alpha_{1}x+(\alpha_{4}-\alpha_{2})y+(\alpha_{5}-\alpha_{3})z)\\ &-\gamma_{2}\left(F_{1}(x,y)x-F_{2}(y,z)z-\alpha_{2}-\frac{\sigma_{2}^{2}}{2}\right)-\gamma_{3}\left(F_{2}(y,z)y-\alpha_{3}-\frac{\sigma_{3}^{2}}{2}\right)\\ \leq&\ A_{V}-\frac{1}{2}\min\{\alpha_{1},\alpha_{2}-\alpha_{4},\alpha_{3}-\alpha_{5}\}(x+y+z)\leq\mathbf{1}_{\{|\mathbf{s}|_{1}\leq M\}}A_{V}-\alpha_{m},\end{split} \tag{4.4}\]
for some positive constants \(A_{V},M\) and \(\alpha_{m}\).
Let \(q_{0}\) be as in Theorem 1.1 and \(A_{V}\), \(\alpha_{m}\), \(\rho\) as above. Let \(n^{\circ}>0\) be such that
\[(n^{\circ}-1)\alpha_{m}-2^{q_{0}-1}A_{V}\geq\frac{\rho}{2}. \tag{4.5}\]
The following lemma gives us estimates for \(\mathcal{L}V\) when the solution starts in a neighborhood of the boundary.
**Lemma 4.2**.: _There exist \(T^{\circ}>0,\delta>0\) such that_
\[\mathbb{E}_{\mathbf{s}}\int_{0}^{T}\mathcal{L}V(\mathbf{S}(s))ds\leq-\rho T,\]
_for any \(T\in[T^{\circ},n^{\circ}T^{\circ}]\), \(\mathbf{s}\in\mathbb{R}_{+}^{3,\circ}\), \(|\mathbf{s}|\leq M\), and \(\operatorname{dist}(\mathbf{s},\partial\mathbb{R}_{+}^{3})\leq\delta\)._
Proof.: On the boundary, there are only two invariant probability measures, \(\boldsymbol{\nu}_{1}:=\mu_{1}\times\boldsymbol{\delta}^{*}\times\boldsymbol{\delta}^{*}\) and \(\boldsymbol{\nu}_{12}:=\mu_{12}\times\boldsymbol{\delta}^{*}\). In view of Theorem 1.1 we can deduce the following claims:
* \((u+v+w)^{q_{0}}\) is integrable with respect to both \(\boldsymbol{\nu}_{1}\) and \(\boldsymbol{\nu}_{12}\), and \[\int_{\mathbb{R}_{+}^{3}}\left(\Lambda-\alpha_{1}u+(\alpha_{4}-\alpha_{2})v+(\alpha_{5}-\alpha_{3})w\right)d\boldsymbol{\nu}=0,\ \boldsymbol{\nu}\in\{\boldsymbol{\nu}_{1},\boldsymbol{\nu}_{12}\}, \tag{4.6}\] and \[\int_{\mathbb{R}_{+}^{3}}\left(F_{1}(u,v)u-\alpha_{2}-\frac{\sigma_{2}^{2}}{2}\right)d\boldsymbol{\nu}_{12}=0. \tag{4.7}\] (The proof is similar to that of [16, Lemma 3.4].)
* \(\{\tilde{\Pi}_{t}^{\mathbf{s}}:t\geq 1,|\mathbf{s}|\leq M\}\) is tight and all its weak-limits, as \(t\to\infty\), must be invariant measures of \((X(t),Y(t),Z(t))\). (See e.g. [1, Theorem 9.9].)
* For a sequence of bounded initial points \(\{\mathbf{s}_{k}\in\mathbb{R}_{+}^{3}\}\) and an increasing sequence \(T_{k}\to\infty\) as \(k\to\infty\), if \(\{\tilde{\Pi}_{T_{k}}^{\mathbf{s}_{k}}\}\) converges to \(\mu\) as \(T_{k}\) tends to \(\infty\), then \[\lim_{k\to\infty}\int_{\mathbb{R}_{+}^{3}}h(\mathbf{s})\tilde{\Pi}_{T_{k}}^{\mathbf{s}_{k}}(d\mathbf{s})=\int_{\mathbb{R}_{+}^{3}}h(\mathbf{s})\mu(d\mathbf{s})\] for any continuous function \(h(\mathbf{s})\) satisfying \(h(\mathbf{s})\leq C_{h}(1+x+y+z)^{q}\) for some \(C_{h}>0,0<q<q_{0}\). (See [16, Lemma 3.5] for a similar proof.)
Next, we get from (4.4), (4.6) and (4.7) that
\[\int_{\mathbb{R}_{+}^{3}}\mathcal{L}V(\mathbf{s})d\boldsymbol{\nu}_{12}=-\gamma_{3}\left(\int_{\mathbb{R}_{+}^{3}}F_{2}(v,w)v\,d\boldsymbol{\nu}_{12}-\alpha_{3}-\frac{\sigma_{3}^{2}}{2}\right)=-\gamma_{3}\lambda_{2}\leq-2\rho, \tag{4.8}\]
and that
\[\begin{split}\int_{\mathbb{R}_{+}^{3}}\mathcal{L}V(\mathbf{s})d\boldsymbol{\nu}_{1}=&-\gamma_{2}\left(\int_{\mathbb{R}_{+}^{3}}F_{1}(u,v)u\,d\boldsymbol{\nu}_{1}-\alpha_{2}-\frac{\sigma_{2}^{2}}{2}\right)-\gamma_{3}\left(\int_{\mathbb{R}_{+}^{3}}F_{2}(v,w)v\,d\boldsymbol{\nu}_{1}-\alpha_{3}-\frac{\sigma_{3}^{2}}{2}\right)\\ =&-\gamma_{2}\lambda_{1}+\gamma_{3}\left(\alpha_{3}+\frac{\sigma_{3}^{2}}{2}\right)\leq-2\rho. \end{split} \tag{4.9}\]
Now, we claim that there exists \(T^{\diamond}=T^{\diamond}(M)>0\) such that if \(\mathbf{s}\in\partial\mathbb{R}_{+}^{3}\) and \(|\mathbf{s}|\leq M\) then
\[\mathbb{E}_{\mathbf{s}}\int_{0}^{T}\mathcal{L}V(\mathbf{S}(s))ds=T\int_{\mathbb{R}_{+}^{3}}\mathcal{L}V(\mathbf{s})d\tilde{\Pi}_{T}^{\mathbf{s}}\leq-\frac{3}{2}\rho T,\quad\forall T\geq T^{\diamond}. \tag{4.10}\]
Indeed, assuming the contrary, there exist a sequence \(\{\mathbf{s}_{k}\}\subset\partial\mathbb{R}_{+}^{3}\) with \(|\mathbf{s}_{k}|\leq M\) and a sequence \(T_{k}\uparrow\infty\) such that
\[\mathbb{E}_{\mathbf{s}_{k}}\frac{1}{T_{k}}\int_{0}^{T_{k}}\mathcal{L}V(\mathbf{S}(s))ds>-\frac{3}{2}\rho.\]
Because of Claim (C2), there exist subsequences, which we still denote by \(\{\mathbf{s}_{k}\}\) and \(\{T_{k}\}\) for convenience, such that \(\tilde{\Pi}_{T_{k}}^{\mathbf{s}_{k}}\) converges to an invariant probability measure \(\boldsymbol{\nu}\) as \(k\to\infty\). Because \(\partial\mathbb{R}_{+}^{3}\) is an invariant set of the process \((X(t),Y(t),Z(t))\), \(\boldsymbol{\nu}\) must be a convex combination of \(\boldsymbol{\nu}_{1}\) and \(\boldsymbol{\nu}_{12}\). Thus, in view of (4.8) and (4.9), we have
\[\int_{\mathbb{R}_{+}^{3}}\mathcal{L}V(\mathbf{s})\boldsymbol{\nu}(d\mathbf{s})< -2\rho.\]
On the other hand, we have from Claim (C3) that
\[\int_{\mathbb{R}_{+}^{3}}\mathcal{L}V(\mathbf{s})\boldsymbol{\nu}(d\mathbf{s})=\lim_{k\to\infty}\int_{\mathbb{R}_{+}^{3}}\mathcal{L}V(\mathbf{s})\tilde{\Pi}_{T_{k}}^{\mathbf{s}_{k}}(d\mathbf{s})\geq-\frac{3}{2}\rho.\]
The contradiction shows the existence of \(T^{\diamond}\) satisfying (4.10).
Then, by the Feller-Markov property of the process \((X(t),Y(t),Z(t))\) and the uniform boundedness (1.4), we can show that there exists \(\delta>0\) such that
\[\mathbb{E}_{\mathbf{s}}\int_{0}^{T}\mathcal{L}V(\mathbf{S}(s))ds\leq-\rho T,\;\forall T\in[T^{\diamond},n^{\diamond}T^{\diamond}],\]
for any \(\mathbf{s}\in\mathbb{R}^{3,\circ}_{+}\) satisfying \(|\mathbf{s}|\leq M\) and \(\operatorname{dist}(\mathbf{s},\partial\mathbb{R}^{3}_{+})\leq\delta\).
Now, we are ready to establish a kind of drift condition that will help us establish the ergodicity of the underlying systems and obtain the rate of convergence.
**Proposition 4.1**.: _Let \(q\) be any number in the interval \((1,q_{0})\), and \(U(\mathbf{s})=1+|\mathbf{s}|_{1}:=1+x+y+z\) for \(\mathbf{s}=(x,y,z)\). There are \(\kappa^{\diamond}>0\) and \(C_{\diamond},C^{\diamond}>0\) such that_
\[\mathbb{E}_{\mathbf{s}}[C_{\diamond}U^{q}(\mathbf{S}(n^{\diamond}T^{\diamond}))+V^{q}(\mathbf{S}(n^{\diamond}T^{\diamond}))]\leq C_{\diamond}U^{q}(\mathbf{s})+V^{q}(\mathbf{s})-\kappa^{\diamond}[C_{\diamond}U^{q}(\mathbf{s})+V^{q}(\mathbf{s})]^{\frac{q-1}{q}}+C^{\diamond}.\]
Proof.: First we assume that \(1<q\leq 2\). In the sequel, \(C^{\diamond}\) is a generic constant depending on \(T^{\diamond},M,n^{\diamond}\) but independent of \(\mathbf{s}\in\mathbb{R}^{3,\circ}_{+}\); its value can differ from line to line. Suppose \(\mathbf{S}(0)=\mathbf{s}\). We have from Ito's formula that
\[V(\mathbf{S}(t))=V(\mathbf{s})+\int_{0}^{t}\mathcal{L}V(\mathbf{S}(s))ds+\widetilde{h}(t).\]
Here
\[\widetilde{h}(t):=\int_{0}^{t}(\sigma_{1}X(s)dW_{1}(s)+\sigma_{2}Y(s)dW_{2}(t )+\sigma_{3}Z(s)dW_{3}(s)-\gamma_{2}\sigma_{2}dW_{2}(s)-\gamma_{3}\sigma_{3} dW_{3}(s))\]
is a martingale with quadratic variation given by
\[\langle\widetilde{h}(t)\rangle=\int_{0}^{t}\left(\sigma_{1}^{2}X^{2}(s)+ \sigma_{2}^{2}(Y(s)-\gamma_{2})^{2}+\sigma_{3}^{2}(Z(s)-\gamma_{3})^{2}\right) ds\leq K\int_{0}^{t}U^{2}(\mathbf{S}(s))ds, \tag{4.11}\]
for some constant \(K=K(\sigma_{1},\sigma_{2},\sigma_{3},\gamma_{2},\gamma_{3})\).
Because \(\mathcal{L}V(\mathbf{s})\leq A_{V}\), we have
\[V(\mathbf{S}(T))=V(\mathbf{s})+\int_{0}^{T}\mathcal{L}V(\mathbf{S}(s))ds+\widetilde{h}(T)\leq V(\mathbf{s})+A_{V}T+\widetilde{h}(T).\]
Applying (4.3) yields
\[\mathbb{E}_{\mathbf{s}}[V(\mathbf{S}(T))]^{q}\leq V^{q}(\mathbf{s})+qA_{V}TV^{q-1}(\mathbf{s})+C^{\diamond}(1+|\mathbf{s}|_{ 1})^{q},\quad T\leq n^{\diamond}T^{\diamond}. \tag{4.12}\]
where \(|\mathbf{s}|_{1}=x+y+z\). On the other hand, since \(|\mathcal{L}V(\mathbf{s})|\leq K_{0}(|\mathbf{s}|_{1}+1),\forall\mathbf{s} \in\mathbb{R}^{3}_{+}\) for some constant \(K_{0}\), we deduce from Ito's isometry and Holder's inequality that
\[\mathbb{E}_{\mathbf{s}}\left|\int_{0}^{t}\mathcal{L}V(\mathbf{S}(s))ds\right|^{q}+\mathbb{E}_{\mathbf{s}}\left|\widetilde{h}(t)\right|^{q}\leq C^{\diamond}(|\mathbf{s}|_{1}+1)^{q},\quad\forall t\leq n^{\diamond}T^{\diamond},\ \mathbf{s}\in\mathbb{R}^{3,\circ}_{+}. \tag{4.13}\]
It follows from (4.13) and (4.3) that
\[\begin{split}\mathbb{E}_{\mathbf{s}}[V(\mathbf{S}(t))]^{q}\leq& V^{q}(\mathbf{s})+q\left[\mathbb{E}_{\mathbf{s}}\int_{0}^{t}\mathcal{L}V( \mathbf{S}(s))ds\right]V^{q-1}(\mathbf{s})+C^{\diamond}\mathbb{E}_{\mathbf{s}} \left|\int_{0}^{t}\mathcal{L}V(\mathbf{S}(s))ds+\widetilde{h}(t)\right|^{q}\\ \leq& V^{q}(\mathbf{s})+q\left[\mathbb{E}_{\mathbf{s}}\int_{0}^{t} \mathcal{L}V(\mathbf{S}(s))ds\right]V^{q-1}(\mathbf{s})+C^{\diamond}(1+| \mathbf{s}|_{1})^{q},\quad\forall t\leq n^{\diamond}T^{\diamond}.\end{split} \tag{4.14}\]
Thus, if \(|\mathbf{s}|_{1}\leq M\) and \(\operatorname{dist}(\mathbf{s},\partial\mathbb{R}^{3}_{+})\leq\delta\), we have \(\mathbb{E}_{\mathbf{s}}\int_{0}^{t}\mathcal{L}V(\mathbf{S}(s))ds\leq-\rho t\), \(t\in[T^{\diamond},n^{\diamond}T^{\diamond}]\). As a result,
\[\mathbb{E}_{\mathbf{s}}[V(\mathbf{S}(T))]^{q}\leq V^{q}(\mathbf{s})-q\rho TV^{q-1}(\mathbf{s})+C^{\diamond}(1+|\mathbf{s}|_{ 1})^{q},\quad T\in[T^{\diamond},n^{\diamond}T^{\diamond}],|\mathbf{s}|_{1}\leq M. \tag{4.15}\]
Noting that \(V(\mathbf{s})\) is bounded on the set \(\{\mathbf{s}\in\mathbb{R}^{3}_{+}:|\mathbf{s}|_{1}\leq M,\operatorname{dist}(\mathbf{s},\partial\mathbb{R}^{3}_{+})\geq\delta\}\), it follows from (4.15) and (4.12) that, for \(|\mathbf{s}|_{1}\leq M\),
\[\mathbb{E}_{\mathbf{s}}[V(\mathbf{S}(T))]^{q}\leq V^{q}(\mathbf{s})-q\rho TV^{q-1}(\mathbf{s})+C^{\diamond},\quad\forall T \in[T^{\diamond},n^{\diamond}T^{\diamond}]. \tag{4.16}\]
Define
\[\zeta=\inf\{t\geq 0:X(t)+Y(t)+Z(t)\leq M\}\wedge(n^{\diamond}T^{\diamond}).\]
From now on, we suppose that \(|\mathbf{s}|_{1}\leq M\). For \(t\leq\zeta\), we deduce from (4.4) that
\[V(\mathbf{S}(t))=V(\mathbf{s})+\int_{0}^{t}\mathcal{L}V(\mathbf{S}(s))ds+ \widetilde{h}(t)\leq V(\mathbf{s})-\alpha_{m}t+\widetilde{h}(t). \tag{4.17}\]
We have from (4.16), (4.17), (4.2), and the strong Markov property of \(X(t)\) that
\[\mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\leq T^{\diamond }(n^{\diamond}-1)\}}V^{q}(\mathbf{S}(n^{\diamond}T^{\diamond}))\right]\] \[\leq \mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\leq T^{\diamond }(n^{\diamond}-1)\}}\left[V^{q}(\mathbf{S}(\zeta))+C^{\diamond}\right]\right]- \mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\leq T^{\diamond}(n^{ \diamond}-1)\}}q\rho(n^{\diamond}T^{\diamond}-\zeta)V^{q-1}(\mathbf{S}(\zeta) )\right]\] \[\leq \mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\leq T^{\diamond }(n^{\diamond}-1)\}}(V(\mathbf{s})+\widetilde{h}(\zeta))^{q}+C^{\diamond} \right]-q\rho T^{\diamond}\mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta \leq T^{\diamond}(n^{\diamond}-1)\}}(V(\mathbf{s})+\widetilde{h}(\zeta))^{q-1 }\right]\] \[\leq \mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\leq T^{\diamond }(n^{\diamond}-1)\}}\left(V^{q}(\mathbf{s})-\frac{q\rho T^{\diamond}}{2}V^{q-1 }(\mathbf{s})+q\widetilde{h}(\zeta)V^{q-1}(\mathbf{s})+C^{\diamond}(| \widetilde{h}(\zeta)|^{q}+1)\right)\right]. \tag{4.18}\]
If \(T^{\diamond}(n^{\diamond}-1)\leq\zeta\leq T^{\diamond}n^{\diamond}\), we have
\[\begin{split}\mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\geq T^{\diamond}(n^{\diamond}-1)\}}V^{q}(\mathbf{S}(n^{\diamond}T^{\diamond}))\right]\leq&\ \mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\geq T^{\diamond}(n^{\diamond}-1)\}}V^{q}(\mathbf{S}(\zeta))+C^{\diamond}\right]\\&+qA_{V}\mathbb{E}_{\mathbf{s}}\left[\mathbf{1}_{\{\zeta\geq T^{\diamond}(n^{\diamond}-1)\}}(n^{\diamond}T^{\diamond}-\zeta)V^{q-1}(\mathbf{S}(\zeta))\right].\end{split}\]
Combining (4.20) and (4.21), we get that
\[\mathbb{E}_{\mathbf{s}}\left[V^{q}(\mathbf{S}(n^{\circ}T^{\circ}))+C_{\circ}U^{q }(\mathbf{S}(n^{\circ}T^{\circ}))\right]\leq V^{q}(\mathbf{s})+C_{\circ}U^{q}( \mathbf{s})-\kappa^{\circ}[V^{q}(\mathbf{s})+C_{\circ}U^{q}(\mathbf{s})]^{(q- 1)/q}+C^{\circ}, \tag{4.22}\]
for some \(\kappa^{\circ}>0,C^{\circ}>0\) and sufficiently large \(C_{\circ}\).
Proof of Theorem 1.4.: Having established Proposition 4.1, the proof of Theorem 1.4 is standard. Because of the nondegeneracy of the diffusion process and (4.22), we have from [13, Theorem 3.6] that
\[\lim_{k\to\infty}k^{q-1}\|P_{kn^{\circ}T^{\circ}}(\mathbf{s},\cdot)-\mu^{ \circ}(\cdot)\|_{TV}=0,\;1\leq q<q_{0} \tag{4.23}\]
where \(\mu^{\circ}\) is an invariant probability measure of the Markov chain \(\{\mathbf{S}(n^{\circ}T^{\circ})\}\), which is also an invariant probability measure of the Markov process \(\{\mathbf{S}(t),t\geq 0\}\) due to the uniqueness of invariant probability measures. Because \(\|P_{t}(\mathbf{s},\cdot)-\mu^{\circ}(\cdot)\|_{TV}\) is decreasing in \(t\), we can easily deduce (1.12) from (4.23). A similar argument can be found in [14, Proof of Theorem 1.1] or [19, Theorem 2.2]. The proof is complete.
**Acknowledgments:** The research has been done under the research project QG.22.10 "Asymptotic behaviour of mathematical models in ecology" of Vietnam National University, Hanoi for Nguyen Trong Hieu. A. Hening and D. Nguyen acknowledge support from the NSF through the grants DMS-2147903 and DMS-1853467 respectively. N. Nguyen acknowledges support from an AMS-Simons travel grant.